Lagom on Kube

This tutorial describes how to bootstrap a production-ready Lagom microservices cluster in a Kubernetes environment.

[Image: Lagom on Kube]

The Problem

Lagom is a great Java- and Scala-based reactive microservices framework. It is free to use in both development and production. However, as detailed later in this doc, deploying Lagom requires either some extra heavy lifting by the end user or Lightbend's ConductR (which is free to use for up to three nodes).

With this tutorial, you can bootstrap a Kubernetes cluster hosting Lagom microservices that is both free to use and production-ready.

Out of the box, the Lagom framework lacks the following functionality and attributes required for a production-ready microservices deployment:

  • Service orchestration
  • Scalability
  • Monitoring
  • Resilience
  • Load balancing
  • Service discovery

As it turns out, all of the above can be achieved with Docker and Kubernetes. (I'm pretty sure it's also possible with Docker Swarm or Apache Mesos, but those approaches are outside the scope of this tutorial.)

The Solution

  • Service orchestration: we package all of the Lagom services into Docker containers, which are then 'orchestrated' (i.e. deployed, scaled, managed, etc.) through standard Kubernetes tools (see the manifest sketch after this list).

  • Scalability is provided on two levels:

    • Within a Kubernetes cluster, scaling is performed by Deployments and ReplicationControllers
    • AWS offers Auto Scaling groups for further scalability when a Kubernetes cluster hits its limits
  • Monitoring is provided on two levels as well:

    • Heapster (with InfluxDB and Grafana) to monitor containers and nodes
    • New Relic to monitor everything at the service level
  • Resilience

    • Kubernetes Pods can be configured to track the health status of Docker containers (liveness and readiness probes)
    • Deployments keep the number of active containers in line with the current load
  • Load balancing is provided by Kubernetes Services, which select a Pod at random for each request (more complex load-balancing strategies are available via Ingress or an external load balancer, but these are outside the scope of this document)

  • Service discovery was the only part that had to be implemented rather than just configured. It is based on the free-to-use Lagom development-mode service registry, extended with a number of improvements that make it resilient, persistent and compatible with the Kubernetes architecture.
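
As a rough illustration, here is a minimal sketch of how a single Lagom service could be described for Kubernetes: a Deployment keeps several replicas of the Docker image running and restarts unhealthy containers, and a Service load-balances requests across the Pods. The service name, image, ports, health-check path and the SERVICE_LOCATOR_URL variable are assumptions for illustration, not taken from this repository.

```yaml
# Hypothetical manifests for one Lagom service packaged as a Docker image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-lagom
spec:
  replicas: 3                          # scaled by changing this value or attaching an autoscaler
  selector:
    matchLabels:
      app: hello-lagom
  template:
    metadata:
      labels:
        app: hello-lagom
    spec:
      containers:
        - name: hello-lagom
          image: registry.example.com/hello-lagom:1.0.0   # hypothetical image
          ports:
            - containerPort: 9000
          env:
            - name: SERVICE_LOCATOR_URL                   # hypothetical: address of the service registry
              value: "http://service-registry:8000"
          readinessProbe:              # the Pod only receives traffic once this check passes
            httpGet:
              path: /_status           # hypothetical health endpoint
              port: 9000
            initialDelaySeconds: 20
          livenessProbe:               # the container is restarted if this check keeps failing
            httpGet:
              path: /_status
              port: 9000
            initialDelaySeconds: 60
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: hello-lagom
spec:
  selector:
    app: hello-lagom
  ports:
    - port: 80
      targetPort: 9000                 # requests to the Service are spread across the Pods
```

Scaling then comes down to changing `replicas` (or adding a HorizontalPodAutoscaler), while the Service keeps a stable cluster-internal address in front of however many Pods are running.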

The Technologies

  • AWS as a cloud platform
  • kops to bootstrap Kubernetes on AWS
  • Kubernetes for service orchestration
  • Ansible for configuration management
  • Docker as the container runtime
  • Lagom as a microservices framework
  • Heapster for container and node monitoring
  • New Relic for application monitoring
  • Cassandra as persistent storage for the service registry
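
To give a flavour of how Cassandra is usually exposed inside the cluster, the snippet below sketches a headless Service through which the service registry and the Lagom services could reach the Cassandra nodes by DNS; the actual Pods would be managed by a StatefulSet (not shown). The names are assumptions, not taken from this repository; 9042 is Cassandra's default CQL port.

```yaml
# Hypothetical headless Service for the Cassandra nodes backing the service registry.
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  clusterIP: None        # headless: DNS returns the individual Pod addresses
  selector:
    app: cassandra
  ports:
    - name: cql
      port: 9042         # default CQL native transport port
```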

The Description

The demo infrastructure is described in this document.