
Microservice Architectures: A Framework for Implementation and Deployment

Posted by Carl Pulley on Sat, Dec 20, 2014

Here we present a flexible and generic framework within which distributed applications, built upon a microservice architecture, may be implemented and deployed.

We achieve this by deploying microservices to a CoreOS cluster of machines (complete with etcd for service discovery and fleet for controlling services and specifying affinity rules).

Microservices are implemented using Akka actors that support clustering, Cassandra persistence and data sharding. Interaction with the microservices is mediated using a load balancer that round-robin connects (via circuit-breakers) to microservice REST endpoints.



Jan has previously published a number of articles on his exercise application (see [1], [2], [3] and [4]). However, these articles have thus far been very development focused. By this, I mean that the articles have focused primarily on how we may build components for a specific microservice architecture and then run them within a local (development) environment.

Clearly, doing this succeeds in demonstrating that one has a decoupled collection of microservices that are capable of collaborating with each other (i.e. the beginnings of a viable and scalable distributed application). However, what we do not have yet is any idea as to how aspects such as networking, provisioning and deployment might impact such microservice architectures. For example:

  • what happens if our code has assumed a DNS server is present?
  • what happens if we need to perform network communications between compute nodes (i.e. different virtual instances) and across data centers?
  • how should we set up load balancing to ensure seamless and flexible interaction with the REST endpoints for our microservices?
  • in the face of failing compute nodes or Docker containers, how do we ensure reliable monitoring and recovery?

This post addresses these shortcomings by abstracting the core code components of the exercise application into a series of discrete code modules (see the project's lib directory). We present an application framework that is capable of supporting the implementation and deployment of distributed applications (based on a microservice architecture). For ease of presentation, we do not focus on the exercise application, but instead work with a simpler Hello World style of distributed application.

Microservice Architectures

Microservice architectures have become popular in recent years as they offer the opportunity to modularly and elastically scale applications. They achieve this by decomposing an application's functionality into a discrete set of decoupled microservices. As a result, the number of microservice instances that are active at any point in time may be elastically scaled (up or down) according to factors such as demand and availability.

A microservice architecture displays the following characteristics:

  • set of collaborating (micro)services, each of which implements a limited set of related functionality (there is typically no central control)
  • microservices communicate with each other asynchronously over the network (i.e. decoupled application) using language agnostic APIs (e.g. REST)
  • microservices are developed and deployed independently of each other
  • microservices use their own persistent storage area (c.f. sharding) with data consistency maintained using data replication or application level events.

However, care needs to be exercised when decomposing an application into microservices, since it can be all too easy to refactor application complexity elsewhere!

Microservice Implementation Overview

In this post, I present and discuss a framework within which one may:

  • implement microservice architectures based upon Akka clustering and persistence
  • provision, deploy and control such distributed applications on cloud-based infrastructure (e.g. AWS or Rackspace clouds).

For this article, we will support distributed applications with:

  • service discovery and affinity rules
    • service discovery will allow microservice components to locate each other dynamically with minimal knowledge of the overall application architecture - here, etcd will be used for the implementation
    • affinity rules will be attached to each microservice to control where services execute and run (thus, services are able to consume resources and colocate to nodes with supporting services, having minimal knowledge of the overall application architecture) - here, unit files (via fleet) will be used for the implementation
  • an ability to delay actual service startup and to clean up resources following service suspension
    • sidekick services will be used for this
  • microservice components executing in virtualised application containers (e.g. Docker)
  • an ability to network containers across compute nodes
    • here we will use network overlays (e.g. as provided by Weave)
  • support for Akka clustering, Cassandra persistence and REST APIs.

What we will omit here is any discussion of:

  • pipelines for performing message/data validation and verification
  • how to abstract Akka service discovery clients and registration into monitoring agents
  • centralisation of logging and monitoring
  • networking and application security
    • for the moment, we'll make the overly simplistic assumption that our local application subnets are isolated from the outside world!

Microservice Framework Implementation

Our implementation is spread across two Github projects:

  • the deployment project
    • this provides an implementation for the provisioning and deployment framework for a simple Hello World style microservice
  • and the application project
    • this provides the Scala/Akka implementation framework for implementing a simple Hello World style microservice
    • code is organized around a series of library modules (see the project's lib directory), with the Hello World microservice in the main module.

The following subsections will discuss each of these projects in greater detail.

Microservice Deployment

Each (micro)service is to be deployed within a Docker container; containers will, in turn, be networked together using Weave. As a result, we simplify our networking configuration by logically viewing all Docker containers as being on a common subnet. For the purposes of this post, we will not utilize WeaveDNS and so intentionally work with plain IP addresses. As a result, we adopt the following naming conventions to simplify network configuration:

  • each compute node will be numbered with a value in the range 1-254
    • these numbers encode the network name for each compute node (independently of its provisioning type) and map to the 4th byte of the IPv4 address
  • each provisioning type will be numbered with a value in the range 0-254
    • provisioning types are used to define how a given compute node will be provisioned (e.g. here we will encode cassandra provisioning to be 0 and akka provisioning to be 1) and map to the 3rd byte of the IPv4 address.

A future enhancement will utilize WeaveDNS in order to replace these number-centric naming conventions.
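To make this numbering convention concrete, the following sketch maps a provisioning type and compute node number to an IPv4 address on the 10.42.0.0/16 subnet used by the unit files later in this post. The object and function names here are illustrative assumptions, not part of the framework.

```scala
// Sketch of the numbering convention described above, assuming the 10.42.0.0/16
// subnet used elsewhere in this post. Names here are illustrative only.
object NodeAddressing {
  def nodeAddress(provisioningType: Int, nodeNumber: Int): String = {
    // provisioning types map to the 3rd byte of the IPv4 address
    require(provisioningType >= 0 && provisioningType <= 254, "provisioning type must be in 0-254")
    // compute node numbers map to the 4th byte of the IPv4 address
    require(nodeNumber >= 1 && nodeNumber <= 254, "compute node number must be in 1-254")
    s"10.42.$provisioningType.$nodeNumber"
  }
}
```

For example, akka provisioning (type 1) on compute node 3 yields the address 10.42.1.3.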

When compute nodes are provisioned (e.g. vagrant up or similar), environment variables (e.g. METADATA and CLOUD_CONFIG) are used to specify the cloud-config file to be run and what metadata should be associated with this compute node. Metadata is used within affinity rules and specifies how a compute node has been provisioned (and so what resources and services it may offer or support).

Provisioning is defined using cloud-config files. These are written as Ruby ERB templates - thus allowing common provisioning code to be shared. As a result, for most microservices, provisioning boils down to defining how to download and install Docker container images for the respective microservice:

<%= ERB.new(File.read("cloud-config/default.erb"), 0, "<>", '_default').result(binding) %>

# Cloud config data for creating Akka compute nodes (no Weave DNS service is launched here!)
#   - downloads application Docker image
#   - tags compute node with metadata

    - name: install-app.service
      command: start
      enable: true
      content: |
        [Unit]
        Description=Download Hello World Docker image

        [Service]
        # Login and download our Akka container image from the Docker repository
        ExecStartPre=/usr/bin/docker login -u "${USERNAME}" -p "${PASSWORD}" -e "${EMAIL}"
        ExecStartPre=/usr/bin/docker pull <%= @docker[:app] %>
        ExecStart=/bin/echo Docker Image Installed
        # Only download on akka deployment nodes

Control over our microservices is accomplished using fleet (typically by SSHing into a clustered compute node). Fleet provides a distributed implementation of systemd and is able to automatically restart microservices that fail (assuming suitable resources are available). So, in addition to downloading microservice Docker images, the provisioning files also need to ensure that service scripts are installed. Due to the distributed nature of fleet (i.e. service scripts can be initiated from any compute node in the cluster), we install all service scripts on all compute nodes.

Each microservice has its own unit file describing how to start and stop that service:

  @image                 = @docker[:app]
  @description           = "CoreOS Akka Cluster Application: HelloWorld"
  @roles                 = "hello-world"
  @ip_address            = "10.42.1.%i"
  @service_discovery_key = @service_discovery[:akka]
  @affinity_rules        = "MachineMetadata=akka=true"

<%= ERB.new(File.read("unit-files/default-akka.erb"), 0, "<>", '_unit').result(binding) %>

Notice how such unit files specify:

  • the Docker image holding the microservice (for installation purposes)
  • the Akka role that the service offers (amongst other things, this determines the type of Akka sharding region that will be launched)
  • the provisioning type number (here, the 1 in the third byte of the IP address) and the compute node number (here specified via the %i template placeholder in the fourth byte)
  • the service discovery key that this microservice will register itself under.
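Putting these elements together, a rendered fleet unit file might look roughly as follows. This is a sketch only: the actual directives are generated by the ERB templates (which also wire in Weave networking and service discovery registration), and the image name is an assumption based on the configuration shown below.

```
[Unit]
Description=CoreOS Akka Cluster Application: HelloWorld

[Service]
# Illustrative only: pull and launch the microservice container
ExecStartPre=/usr/bin/docker pull carlpulley/helloworld:v0.1.0-SNAPSHOT
ExecStart=/usr/bin/docker run --name helloworld-%i carlpulley/helloworld:v0.1.0-SNAPSHOT
ExecStop=/usr/bin/docker stop helloworld-%i

[X-Fleet]
# Affinity rule: only schedule this unit onto akka-provisioned machines
MachineMetadata=akka=true
```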

In addition to our application specific service scripts, a number of auxiliary service scripts are installed:

  • a cluster seeding service script - running this allows an Akka cluster to be seeded and so to form (this is a one-shot script)
  • a Cassandra service script - running this installs a Cassandra cluster (this is needed for Akka persistence)
  • a vulcand service script - running this installs the vulcand load balancer (this is used to interface external REST API requests to microservices within our Akka cluster).

Finally, it is worth pointing out that general application configuration is performed in (for example) the file helloworld.rb:

# Hello World docker images
@docker = {
  :app => "carlpulley/#{@application_name}:v0.1.0-SNAPSHOT"
}

# Hello World service templates (sidekicks are inferred by naming convention)
@service_templates = @service_templates + [
  "#{@application_name}/#{@application_name}@.service"
]

# Domain from which load balancer accepts REST API requests
@domain = "#{@application_name}.example.com"

Microservice Implementation

Service discovery allows microservices to locate each other with minimal knowledge of the overall application. In order to achieve this, the application registers itself upon startup and, on a normal shutdown, also unregisters itself. Sidekick services (here implemented within the deployment framework) are used to continuously monitor each microservice and, should it be judged unresponsive, to unregister that microservice.
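The register/unregister lifecycle can be sketched with a minimal in-memory stand-in for the service discovery store. This is an illustration only: the real framework talks to etcd, and this sketch deliberately ignores etcd's REST protocol, TTLs and directory structure.

```scala
// Minimal in-memory stand-in for the service discovery store (etcd in the real
// framework). This sketches only the register/lookup/unregister lifecycle.
object ServiceRegistry {
  private var entries = Map.empty[String, String]

  // a microservice registers its address under a service discovery key on startup
  def register(key: String, address: String): Unit =
    entries = entries + (key -> address)

  // ...and unregisters on a normal shutdown (or a sidekick does so on failure)
  def unregister(key: String): Unit =
    entries = entries - key

  // collaborators locate a service knowing only its key
  def lookup(key: String): Option[String] = entries.get(key)
}
```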

An application gains access to the underlying service discovery store by extending the WithEtcd trait (of the etcd module). In doing this, application code then communicates with etcd via a REST API that is implemented within the etcd.Client class.

trait WithEtcd {
  this: Configuration =>

  lazy val etcd = new Client(config.getString("etcd.url"))
}

Each microservice exposes itself to the "outside" world via a REST API (accessed via a load balancer). Application code achieves this by implementing the BootableService trait (in the api module) as follows:

trait Service extends Directives with Configuration with BootableService {
  import HelloWorld._

  implicit val timeout: Timeout = Timeout(config.getDuration("application.timeout", SECONDS).seconds)

  override def boot(address: Address, handlers: ActorRef*) = {
    require(handlers.nonEmpty, "At least one routing handler needs to be specified")

    super.boot(address, handlers: _*) + RestApi(
      route = Some({ ec: ExecutionContext => applicationRoute(handlers.head)(ec) })
    )
  }

  private[api] def applicationRoute(actorRef: ActorRef)(implicit ec: ExecutionContext) = {
    path("ping" / IntNumber) { index =>
      get {
        complete {
          (actorRef ? Ping(index)).mapTo[Pong].map(_.message)
        }
      }
    }
  }
}
Notice here how the boot function returns an instance of the RestApi case class. This case class is used to aggregate routing information, thus allowing Spray routes to be partitioned within a microservice.
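The aggregation performed by RestApi can be sketched as follows. The field types here are simplified stand-ins (the real class carries Spray routes and lifecycle callbacks), so treat this as an illustration of the `+` combinator rather than the actual API.

```scala
// Simplified sketch of a RestApi-style aggregate: `+` merges the routing and
// lifecycle information contributed by each layer of the boot chain.
// Field types are illustrative stand-ins for the real Spray routes/callbacks.
case class RestApiSketch(
  routes: List[String] = Nil,
  start:  List[() => Unit] = Nil,
  stop:   List[() => Unit] = Nil) {

  def +(that: RestApiSketch): RestApiSketch =
    RestApiSketch(routes ++ that.routes, start ++ that.start, stop ++ that.stop)
}
```

Each trait in the boot chain contributes its own RestApiSketch, and `super.boot(...) + RestApi(...)` accumulates them into a single description of the service's REST surface.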

The load balancer is used to physically wire the outside world to each microservice REST endpoint. In order to do this, microservices need to register their URL paths with vulcand. This is achieved using the start and stop functions of the RestApi case class. By separating the balancer registration code from the REST API code, we are able to support microservices that only expose REST endpoints internally.

trait BalancedService extends Service with WithLoadBalancer {
  this: WithEtcd =>

  import WithLoadBalancer._

  private val log = Logger(this.getClass())
  val balancer = LoadBalance(config.getString("application.domain"))(etcd)
  val pingUpstream = "hello-world-ping"

  override def boot(address: Address, handlers: ActorRef*) = super.boot(address, handlers: _*) + RestApi(
    start = Some({ () => start(address) }),
    stop  = Some({ () => stop(address) })
  )

  private[api] def start(address: Address): Unit = {
    log.debug(s"Starting REST routes using address $address")

    balancer + (MicroService("hello-world") -> Location("/ping/.*", pingUpstream))
    balancer ++ (pingUpstream -> Endpoint(s"http://${address.host.getOrElse("")}:${config.getInt("application.port")}"))
  }

  private[api] def stop(address: Address): Unit = {
    log.debug(s"Stopping REST routes using address $address")

    balancer -- (pingUpstream -> Endpoint(s"http://${address.host.getOrElse("")}:${config.getInt("application.port")}"))
  }
}

As the microservice starts, it needs to:

  • join an Akka cluster
  • start up the Spray REST API server
  • register its advertised services with the service discovery code.

For the most part, this is simply a matter of extending the abstract class BootableCluster (in the cluster module) as follows:

class Main extends BootableCluster(ActorSystem("HelloWorld")) with api.BalancedService with Configuration with WithEtcd with WithApi {
  cluster.registerOnMemberUp {
    // Register and boot the microservice when member is 'Up'
    val handler = ClusterSharding(system).start(
      typeName = HelloWorld.shardName,
      entryProps = HelloWorld.shardProps,
      idExtractor = HelloWorld.idExtractor,
      shardResolver = HelloWorld.shardResolver
    )
    val api = boot(cluster.selfAddress, handler)

    system.registerOnTermination {
      // shut down the REST APIs and unregister the service (elided here)
    }
  }
}
Notice here how we use the cluster.registerOnMemberUp callback to boot (i.e. register the REST URL paths with the load balancer) and start the microservice's REST APIs. Also note how we register our main application (i.e. the HelloWorld actor) with the cluster sharding coordinator.

Additional customisation is available as follows:

  • service discovery may be customized by overriding the register and unregister functions
  • constraints on when we may join a cluster (e.g. a minimal number of microservices need to be ready to join) may be controlled using the JoinConstraint mixin trait.
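As an illustration of the second point, a join constraint might gate cluster joining on a minimum number of registered microservices. The actual JoinConstraint API is not shown in this post, so the names and semantics below are assumptions.

```scala
// Illustrative sketch of a join constraint: gate cluster joining on a minimum
// number of registered microservices. The real JoinConstraint mixin's API is
// not shown in this post, so names and semantics here are assumptions.
trait JoinConstraintSketch {
  def minNodes: Int

  def canJoin(registeredNodes: Int): Boolean = registeredNodes >= minNodes
}

// Example: only join once at least three microservices have registered.
object MinimumOfThree extends JoinConstraintSketch {
  val minNodes = 3
}
```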


So, in conclusion, provisioning a Scala/Akka distributed application here consists of (for each type of compute node):

  • defining a unique provisioning type (i.e. a number in the range 0-254)
  • defining a provisioning script to (essentially) download the microservice's Docker image

and, for each microservice:

  • defining a unit service script to start and stop the microservice.

Whilst implementing a Scala/Akka microservice consists of:

  • implementing a persistent actor
  • implementing a REST Spray API
  • optionally, implementing load balancer registration
  • finally, wiring everything up by extending the abstract class BootableCluster and adding in callbacks using cluster.registerOnMemberUp.

A demonstration of the framework in action shows:

  • provisioning 3 helloworld-typed compute nodes and 1 cassandra-typed compute node (with 1 helloworld node tagged as suitable for the load balancer)
  • SSHing into a compute node and launching 4 akka microservices, 1 vulcand microservice and 1 cassandra microservice
  • auto-seeding the Akka cluster
  • load balanced interaction with the distributed application
  • cluster sharding routing messages to the correct microservice
  • evidence of actor spin-up and passivation.

As always, code is available on the following Github repositories:

  • the deployment repository - this holds the Hello World provisioning code
  • the application repository - this holds the Hello World distributed application implementation

If you want to see a more practical application of the framework, then check out the lift branches of the above Github repositories. Finally, if you want to try out the framework with your own code, then the master branch provides just the framework with no application "clutter"!

References to Relevant Posts by Jan Machacek
