The Case for Running Monolithic Applications in Docker Containers
Laura is the Director of Engineering at Codeship and a Docker Captain. Her primary focus has been on fortifying Codeship’s Docker infrastructure and improving the overall CI/CD experience. Prior to her involvement with Codeship and Docker, she worked on HPE’s public cloud offering and on the OpenStack project.
In order to have a complex functioning system, you first need to have a simple functioning system. Your monolithic application didn’t start out as a monolith; it was a simple solution that grew in complexity. Most projects that start fresh with a multi-service approach will fail. It’s impossible to plan for a large amount of complexity when you don’t even know what the simple solution looks like.
As you’re evaluating the path your migration project may take, it’s easy to get caught in a state of paralysis. It begins when you hear Docker being touted as a great tool for deploying and running microservices. So it’s easy to become convinced that if you’re not using microservices, you can’t get started with Docker.
While Docker is a tool that enables microservices workflows, it’s also a good solution for all types of architectures and applications. By packaging your application inside a Docker image, you can modernize your development and deployment workflows to ship code more frequently, and make sure your customers are using the best you have to offer.
Docker is no longer a new shiny thing reserved only for early adopters. It’s proven that it has staying power in the enterprise market, with huge customers like Visa and eBay running production workloads in Docker. But most companies don’t start using Docker only with a greenfield project. When we discuss the path to Docker adoption with customers — especially our own Codeship Pro users — the first step we suggest is to simply get what they already have running, inside container images. From there, the necessary abstractions become clearer.
Here’s what you don’t need in order to start running your application with Docker:
- A full microservices architecture.
- A completely perfect CI/CD pipeline (though you will need testing, to enable both confident deployments and refactoring).
- A perfectly-tuned orchestration system.
- Persistent data in containers.
- Super-optimized, multi-stage images for every environment.
All complex systems have to start with something simple and boring first. Small, incremental evolution is a great approach to take here. Get the main service, or only service, of your application running inside a container. Start with building the image because you can’t run anything without it. “It works!” is good enough to start with. Rely on tools like Image2Docker to aid in Dockerfile authoring, or start with tried-and-tested base images like the latest Ubuntu LTS release.
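As a concrete starting point, a first-pass Dockerfile for a hypothetical Python monolith (the runtime, paths, and entry point here are illustrative, not prescriptive) can be nothing more than:

```Dockerfile
# Start from a well-known base image rather than optimizing up front
FROM ubuntu:22.04

# Install the runtime the application already depends on
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt

# Run the app exactly as you did on the VM
CMD ["python3", "app.py"]
```

It isn’t optimized, and it doesn’t need to be yet. “It works!” is the bar.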
Just use the same database you were using before. It doesn’t have to be in a container. Next, add some alerts and monitoring. At Codeship, we love Librato.
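One way to sketch this: the containerized app simply points at the database host you already run (the hostname, port, and environment variable below are placeholders for however your app reads its config):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-monolith .

# Run the container against the existing, non-containerized database
docker run -d -p 8080:8080 \
  -e DATABASE_URL="postgres://db.internal.example.com:5432/production" \
  my-monolith
```

Nothing about the database has to change for this step to work.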
The path to microservices is often joined by the path to containerization, but it’s important to keep in mind that these shifts are not necessarily bound to each other. Microservices can, and usually should, come later. While it’s true that Docker often catalyzes the transition (because the team wants the benefit of the Docker toolset), it’s not impossible for these changes to happen independently from one another.
You can do microservices without Docker. And you can run monoliths in Docker! And no one will tell you that you’re Doing It Wrong.
Embrace Neutral Changes
Each decision you make on the way to containerization should have a neutral to positive impact on your application. Neutrality isn’t a bad thing. In fact, isolating the amount of change you introduce to your system can help you reach your goal faster because the transition path is simplified.
Here’s what I mean: Maybe you’re holding off containerizing your monolithic app because you want it to be highly available, and you want to avoid a situation where one container serves as a single point of failure. But your single VM running the same code is also a single point of failure. The decision to run it in a container is neutral, or even neutral trending positive. You’re trading one single point of failure for another one. But by going through the exercise, you’re nudging your application — and your team — closer to the goal of having a highly available application running inside containers.
“A complex system that works is invariably found to have evolved from a simple system that worked.” — John Gall
Similarly, you may want to start with a one-node Swarm cluster (a.k.a., “Baby Swarm”), where a single node is a manager and runs workload tasks. This is a neutral change — still a single point of failure — but it trends positive because it readies your application for highly-available scaling. Your team gains experience with operating a Swarm cluster without having to deeply understand the added complexity of a multi-node cluster.
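Standing up a one-node Swarm is only a couple of commands; the service name and flags below are illustrative:

```shell
# Turn this single host into a one-node Swarm (manager and worker in one)
docker swarm init

# Run the monolith as a Swarm service; Swarm restarts it if it crashes,
# and the same definition later scales out when you add nodes
docker service create --name monolith --replicas 1 -p 8080:8080 my-monolith
```

Later, `docker swarm join` on additional hosts and `docker service scale monolith=3` turn this neutral change into a positive one, without rewriting the service definition.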
Positive changes are easily visible because you should have some concrete metric that you’re measuring against. Changes with significant positive impact can only happen when you’ve already made a neutral change in order to facilitate them. Optimizing your Dockerfile to reduce image size, or using new features like multi-stage builds, can only happen if you’ve first made a Dockerfile. Using the example of image size, it’s simple to track the impact because this change is measured in bytes. This small change can ripple through your system; it then reduces image build time, cuts down on pushing and pulling time, and inevitably leads to a faster release cycle.
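As a sketch of that kind of optimization, a multi-stage build keeps compilers and build tools out of the final image, which is where the measurable size reduction comes from (the Go app here is purely illustrative):

```Dockerfile
# Stage 1: build with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: ship only the compiled binary on a minimal base
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
```

Comparing `docker image ls` before and after gives you the change in bytes, which is exactly the kind of concrete metric worth tracking.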
A continuous integration/continuous deployment (CI/CD)-based workflow is also an easily measurable positive change. By adding visibility into bugs and failures before you deploy, your team can address defects more effectively, reducing customer-facing issues. You can measure this by uptime, a reduced number of incidents, or less frequent pages to your on-call engineer. The speed of deployments will also increase, meaning you’re able to ship code at a higher frequency. Build times are easily measurable. As you continue to optimize and evolve your Docker images, your testing suite, and your CI/CD pipeline, your goal will be to see software quality increase while build and deployment times decrease.
But keep a close eye on any decision that could have a negative impact, because it can disguise itself as neutral or as “necessary technical debt.” For example, don’t start using containers in production without monitoring and metrics.
Negative changes are ones that:
- Reduce visibility into the system, especially if the component is customer-facing.
- Add complexity that won’t allow for future positive changes.
- Introduce volatility into your system, such as not pinning version numbers in your Dockerfile.
Plan the Next Stage of Evolution
The noted system theorist John Gall wrote, “A complex system that works is invariably found to have evolved from a simple system that worked.” This pattern is especially true for your containerized application. By focusing on neutral or positive changes, it’s easier to build out a simple solution that can grow in complexity.
But running really fast won’t get you where you want to be if you don’t have a clear target. As with any system evolution, it’s important for you to identify strategic priorities and measurements before you start, and keep revisiting them with your team as new patterns are introduced and new features are built. If you can’t or don’t want to measure something, it’s probably not worth building.
The perfect highly-available application doesn’t exist; it’s a set of goalposts that keeps moving and changing. Docker is an important first step in getting to the next evolution of your architecture, and it comes with a rich ecosystem of tools to support developers working on similar problems. You also set your team up for easier onboarding since new developers can use tools like Docker Compose to get up and running quickly.
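For instance, a small docker-compose.yml lets a new developer bring up the app and a throwaway database with one command (service names, ports, and credentials here are illustrative):

```yaml
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      # Inside Compose, "db" resolves to the database container below
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: postgres
```

With this in the repository, onboarding becomes `docker compose up` instead of a page of setup instructions.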
There are countless talks and blog posts about migrating your traditional app to use Docker, and even a dedicated track at DockerCon where speakers share tips and guide you through common scenarios.
Docker isn’t magical dust that will suddenly make your applications run perfectly, or even something that can make it easy to decompose your monolith into smaller services. In fact, it’s likely that the process of containerization can surface deeper architectural issues for your application. That’s why starting small is so important. Docker can help you see those issues earlier and enables you to isolate and streamline your development and deployment practices to keep moving toward those goalposts of a resilient highly-available application.
Packaging your application inside a Docker image modernizes your development and deployment workflows, letting you ship code more frequently and put the best you have to offer in front of your customers. You should run your legacy monoliths in Docker. And you should want to.
InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.
Feature image: Standing Stones of Stenness, Scotland, from a recovered slide by Greg Willis, licensed under Creative Commons 2.0.