Microservices Made Easier with Cloud Foundry’s Lattice and Diego
Key Summary
This article from InApps Technology, authored by Phu Nguyen, explores how Cloud Foundry’s open-source components, Diego and Lattice, simplify microservices management by providing efficient container orchestration and workload scheduling. Presented at SpringOne 2GX, the article details their combined capabilities for managing large-scale microservices with minimal complexity. Key points include:
- Microservices Context:
- Trend: Microservices are increasingly adopted, requiring robust orchestration for multiple containers running diverse services.
- Solution: Diego and Lattice offer a streamlined platform for managing microservices in a Cloud Foundry environment, reducing the learning curve.
- Diego Overview:
- Function: A scheduling system for container-based workloads, supporting both single tasks and long-running processes (LRPs), including persistent systems like databases.
- Components:
- Brain: Manages workload scheduling across cells (VMs running containers).
- Bulletin Board System (BBS): Tracks the system’s current state and desired state for apps, ensuring alignment.
- Auctioneer: Assigns workloads to cells via an auction-based system, where cells “bid” based on available CPU resources, selecting the optimal cell to run tasks.
- Lattice Overview:
- Function: A lightweight, cluster-based workload manager that complements Diego, including Loggregator for monitoring container health and status.
- Flexibility: Supports evolving cluster layers independently, allowing developers to update tasks or LRPs without altering namespaces.
- How They Work Together:
- Workload Management: Diego’s Receptor communicates desired workloads to the BBS, which matches them against the actual cluster state. The Converger adjusts the system to align actual and desired states, stopping or starting tasks as needed.
- Polling: Lattice uses polling to assess system state, ensuring workloads meet desired configurations.
- Container Orchestration: Integrates with Garden (a Go-based container orchestrator for Linux) and Warden (for managing container namespaces and processes), simplifying setup and execution.
- Scalability and Tools:
- Scalability: Lattice supports both small and large-scale deployments, using Terraform for infrastructure configuration and X-Ray.CF dashboard for visualizing and managing clusters.
- Ease of Use: Eliminates stringent access controls, allowing developers to focus on coding rather than permissions or debugging.
- Applications:
- Versatility: Supports diverse workloads, including user-facing apps, NoSQL/SQL databases, and large server clusters, suitable for both enterprise and small teams.
- Load Balancing: Dynamically distributes workloads across cells to ensure uptime and resource efficiency.
- InApps Insight:
- Diego and Lattice align with modern microservices orchestration trends, similar to Mesosphere Marathon and Mantl, offering lightweight, scalable solutions for container management.
- InApps Technology can integrate Diego and Lattice into client projects, leveraging their compatibility with Go-based APIs (e.g., Garden) and tools like Terraform to deliver efficient, scalable microservices architectures, complementing frameworks like Vue.js or Meteor Galaxy.
Microservices have been the talk of the software development and operations world recently. As more software development teams look into using microservices, the need to orchestrate large groups of containers running multiple services at once becomes crucial.
At SpringOne 2GX in September, Cloud Foundry platform engineer Matt Stine spoke about how Cloud Foundry’s Diego, paired alongside Lattice — both open source — could work together to offer a new, simplified way to manage microservices in a Cloud Foundry environment.
Diego provides scheduling for container-based workloads. It can be used to manage both single tasks and long-running processes (LRPs), even those that run indefinitely, such as database systems. Lattice is a lightweight, cluster-based workload manager. Lattice also includes Loggregator to compile logs on the health and status of running containers.
Together these components make an entire platform for managing large numbers of microservices, one without a steep learning curve.
Brains of the Operation
Lattice clusters are composed of VMs running individual containers, called cells. These cells are distributed units available to execute tasks within a container, and they are monitored by software that Diego controls, called the brain, which schedules workloads onto cells and orchestrates their functions.
All cells are monitored by the brain to ensure that services continue to run. Cells that are not performing well or that have failed altogether have their workloads re-balanced elsewhere to ensure overall system uptime remains consistent. Diego uses a bulletin board system (BBS) that keeps track of both a baseline that reflects how the system is currently operating, as well as a set of specifications for the desired states for building, testing, or running an app in production.
The brain takes an unusual approach to assigning workloads: it auctions each container’s workload to cells based on the resources available in the system. The auctioneer, as this component is called, offers cells the opportunity to bid on the right to run a task or LRP, and thus become that task’s representative.
A bid reports the resources the cell’s VM has available, such as CPU. The auctioneer then collects bids from the Lattice cells, scores them with a scoring algorithm, and chooses a winner based on these variables. The winning cell runs the workload.
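The bidding flow can be sketched in Go, the language Diego itself is written in. The `Bid` fields and the scoring formula below are illustrative assumptions for this sketch, not Diego’s actual auction protocol:

```go
package main

import "fmt"

// Bid is a hypothetical report of the resources a cell has free.
// Field names are illustrative; the real Diego auction messages differ.
type Bid struct {
	Cell        string
	FreeMemMB   int
	FreeDiskMB  int
	RunningLRPs int
}

// score favors cells with more free resources and fewer running
// instances. The real auctioneer uses its own algorithm; this is a sketch.
func score(b Bid) int {
	return b.FreeMemMB + b.FreeDiskMB/10 - b.RunningLRPs*100
}

// pickWinner collects bids, scores each one, and returns the best cell,
// mirroring the auction flow described above.
func pickWinner(bids []Bid) string {
	winner, best := "", -1<<31
	for _, b := range bids {
		if s := score(b); s > best {
			winner, best = b.Cell, s
		}
	}
	return winner
}

func main() {
	bids := []Bid{
		{Cell: "cell-1", FreeMemMB: 512, FreeDiskMB: 4096, RunningLRPs: 3},
		{Cell: "cell-2", FreeMemMB: 2048, FreeDiskMB: 8192, RunningLRPs: 1},
	}
	fmt.Println("winner:", pickWinner(bids))
}
```

The point of the design is visible even in this toy version: the decision is made from self-reported capacity, so no central component needs a full model of every cell.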
Pictured: An overview of Diego.
Cell representatives are limited in that they are only able to run what a task’s API tells them to. The instructions are passed along via JSON-formatted messages. The cell representative communicates with an executor, which then runs the requested action.
Lattice works well with Garden, a container orchestrator available for those running Linux-based systems. Written in Go, Garden also lets users deploy Garden containers using BOSH with a Linux-based backend.
Because Garden is a series of Go interfaces, running it on Linux requires a bit of active setup before deploying one’s first Garden container. BTRFS tools make setting up a loopback simple, after which developers can build out and test their Garden containers for functionality.
To further enhance a development stack on Garden with Lattice, a Warden RootFS can be provisioned. Warden is a crucial component of container management, as it offers a simple API for handling instanced environments. Diego dovetails with the Garden API to orchestrate namespaces and processes running in containers. A task asking for a workload has no idea it is being run on Linux, which keeps setup simple when utilizing a VM environment for development.
Cell representatives can ask Garden to run tasks on containers in a way that is independent of any particular platform. Lattice delves into the specifics of the tasks to be run before spinning up containers, which allows layers of a Lattice cluster to evolve apart from other layers. This gives developers more flexibility, since they can update the tasks to be run or assign new LRPs without changing individual namespaces.
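Since Garden is a set of Go interfaces, the platform independence described above can be sketched as follows. `Backend`, `fakeBackend`, and `startTask` are names invented for this sketch, not Garden’s real API, which is far richer:

```go
package main

import "fmt"

// Backend is a hypothetical, much-reduced stand-in for the kind of Go
// interface Garden exposes: callers run a process in a container
// without knowing which platform backs it.
type Backend interface {
	Run(handle string, path string, args ...string) (string, error)
}

// fakeBackend stands in for a Linux backend during development or testing.
type fakeBackend struct{}

func (fakeBackend) Run(handle, path string, args ...string) (string, error) {
	return fmt.Sprintf("[%s] ran %s %v", handle, path, args), nil
}

// startTask is written only against the interface, so the task has no
// idea whether it is ultimately run on Linux, as described above.
func startTask(b Backend, handle string) (string, error) {
	return b.Run(handle, "/bin/echo", "hello")
}

func main() {
	out, err := startTask(fakeBackend{}, "cell-rep-1")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Swapping the fake for a real Linux backend requires no change to `startTask`, which is the flexibility the interface-based design buys.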
Lattice and Diego Workload Management
When a task is posted to Diego, the BBS attempts to match the desired state of the cluster as closely as possible against its actual state at that moment.
When events are scheduled, Diego’s Receptor informs the BBS of the desired workload, while the cell representatives inform the BBS of the actual workload; other events are captured and stored as well. When running multiple microservices, some events take longer to run or to return user data, so Lattice uses polling to assess the state of the system and reconcile the desired state of a workload against its actual operational state.
Another feature of the brain, called the converger, works to bring the workload’s actual state as close as possible to the desired state. It also informs cell representatives of tasks that can be stopped, while instructing the auctioneer to start contacting cell representatives for bids on new workloads.
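The converger’s reconciliation of desired against actual state might look like the sketch below. Mapping an LRP guid to a plain instance count is a simplification of what the BBS actually stores, used here only to show the diffing idea:

```go
package main

import "fmt"

// converge compares desired and actual instance counts per LRP guid and
// returns the instances to start and stop to bring them into alignment.
func converge(desired, actual map[string]int) (starts, stops map[string]int) {
	starts, stops = map[string]int{}, map[string]int{}
	for guid, want := range desired {
		if have := actual[guid]; have < want {
			starts[guid] = want - have // the auctioneer would solicit bids for these
		}
	}
	for guid, have := range actual {
		if want := desired[guid]; have > want {
			stops[guid] = have - want // cell reps would be told to stop these
		}
	}
	return starts, stops
}

func main() {
	desired := map[string]int{"web": 3, "worker": 1}
	actual := map[string]int{"web": 1, "stale-job": 2}
	starts, stops := converge(desired, actual)
	fmt.Println("start:", starts, "stop:", stops)
}
```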
Working With Lattice At-Scale
As a container scheduler, Lattice not only works well for smaller instances, but can be used to deploy multiple clusters as well, by using Terraform, a tool for configuring and launching large-scale infrastructure deployments.
Working with Lattice and Diego requires setting up a virtual machine. Developers who prefer not to work in the Lattice command-line interface can use the X-Ray.CF dashboard to build a particular cluster visually. X-Ray displays pulsing cells for containers whose workloads are starting up, and load balancing then spreads work across the different containers.
As more cells are fired up, the number of available cells can exceed the number needed. Lattice also allows users to deploy a single cluster, with no stringent access controls that would take time away from development just to gain permission to deploy, debug, or manage an active cluster.
Lattice is a structure that enables software developers to accomplish many tasks at once: a set of scaffolding upon which users can run applicable microservices, including large clusters of servers, user-facing applications, NoSQL or SQL databases, and much more. Depending on a user’s needs, Lattice can serve both enterprise-level deployments and smaller teams experimenting with microservices for their application management.
Feature image: “Chrome Alum Crystals” by Paul is licensed under CC BY-SA 2.0.
Source: InApps.net