Lyft’s Envoy Proxy Server Helped Move the Company to a Service-Oriented Architecture
At the Microservices Practitioner Summit held in San Francisco on January 31, Matt Klein, software “plumber” at Lyft, delved into how the ride-sharing service moved its monolithic applications to a service-oriented architecture (SOA) by way of Envoy, a home-grown, self-contained proxy server and communications bus. Envoy was open sourced last September; already several companies are interested in becoming contributors, and Lyft itself is looking for ways to build a developer community.
“When I joined Lyft, people were actively fearful of making service calls,” Klein said. They feared that service calls would fail or bring high latency, both of which could slow an application’s performance. Thus was born Envoy.
The proxy architecture provides two key pieces missing in most stacks moving from monolithic legacy systems to SOA — robust observability and easy debugging. Having these tools in place allows developers to focus on business logic.
Klein shared the purpose of Envoy: “The network should be transparent to applications. When network and application problems do occur it should be easy to determine the source of the problem.”
If developers can’t understand where the root cause is coming from, they won’t trust the system, explained Klein. Until now, good debugging tools have been hard to find.
The plan was to implement a lot of features all in one place. Co-locating the proxy, or sidecar, next to each and every application in the system eliminates the need for per-language translation, so several apps, all written in different languages, can use Envoy. The app talks to Envoy, Envoy processes the data, then returns the response to the application.
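The sidecar pattern can be sketched as follows: instead of calling a peer service directly, the application always targets its co-located proxy and names the logical destination separately. This is a minimal sketch; the port number, header convention, and service names are illustrative assumptions, not Lyft’s actual configuration.

```python
# Sketch of addressing a peer service through a local sidecar proxy.
# The egress port and the Host-header convention are assumptions for
# illustration, not Envoy's or Lyft's actual configuration.

EGRESS_PROXY = "127.0.0.1:9001"  # assumed local sidecar egress listener

def via_sidecar(service: str, path: str) -> tuple[str, dict]:
    """Return (url, headers) for a call routed through the local sidecar.

    The request always targets the co-located proxy; the logical
    destination service travels in the Host header, which the proxy
    uses to pick an upstream cluster.
    """
    url = f"http://{EGRESS_PROXY}{path}"
    headers = {"Host": service}
    return url, headers

url, headers = via_sidecar("users", "/v1/profile/42")
```

Because every app speaks plain HTTP to a local address, the same pattern works regardless of the application’s language or framework.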
This approach sounded great, said Klein, but it turned out to be really, really hard to do.
Complexity = Confusion
Just think about it: Companies have been implementing cloud architecture and microservices piecemeal as the technology advances, leaving a hodge-podge of services, languages and frameworks.
A typical system, said Klein, might be using three to five different languages (e.g., Java, Go, Scala, PHP, Python) on different frameworks. In addition, libraries for service calls are per-language: typically every language has a different library for making the calls and a different way of exposing stats and observability data.
So over time, an organization can end up with lots of protocols, tons of databases, and different caching layers. And that’s before we get to infrastructure, which can be based on virtual servers, infrastructure services, containers, and load balancers.
This leads to chaotic output, with each system producing its own logs and stats, making figuring out what is going on very difficult from an operations standpoint.
What people don’t understand is how all these components come together, Klein explained. At Lyft, developers simply could not tell where a failure was happening. Was it in the app? Did Amazon Web Services fail? Was there a networking issue? It was impossible to tell.
Not surprisingly, this confusion leads to a lot of pain. Building trust from developers is critical, said Klein, but was nearly impossible in this environment.
Companies know SOA is the future, he explained, but on a day-to-day, rubber-meets-the-road basis, the change is going to create a lot of hurt. And that hurt is mostly about debugging.
The most important point, according to Klein, is that from a distributed-systems standpoint, there are a lot of best practices for things like retries, timeouts, circuit breaking, rate limiting, etc., but in a piecemeal system like this, what often happens is a partial implementation of best practices.
Klein decided to build Envoy from scratch. While there are fully consistent service discovery systems available (ZooKeeper, Consul), they are hard to run at scale, and most companies using them have a team of people managing them. New Relic provides observability and analytics but primarily focuses on the visualization side (graphing, monitoring, analytics, etc.).
Klein explained that Envoy is primarily a communication bus for large SOA. It handles things like rate limiting, load balancing, circuit breaking, service discovery, and active/passive health checking, etc. As a byproduct of this, it generates a ton of observability data, which could be fed into a system like New Relic for visualization.
Envoy’s advanced load balancing is robust: retries, timeouts, circuit breaking, rate limiting, shadowing, outlier detection, etc. According to Klein, load balancing is an area that is often missed: either people are not using it, or they’re not using it correctly.
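Of the features listed above, circuit breaking is perhaps the least familiar. The core idea can be sketched in a few lines: after a run of consecutive failures, stop sending traffic to an upstream for a cooldown period, then probe again. This is a simplified sketch with made-up thresholds; Envoy’s real circuit breakers track connections, pending requests, and retries per upstream cluster.

```python
class CircuitBreaker:
    """Minimal circuit-breaker sketch: open the circuit after a run of
    consecutive failures, and allow traffic again after a cooldown.
    The thresholds here are illustrative assumptions, not Envoy's
    defaults; Envoy tracks connections, pending requests, and retries
    per upstream cluster."""

    def __init__(self, max_failures: int = 5, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def allow(self, now: float) -> bool:
        """Should a request be sent right now?"""
        if self.opened_at is None:
            return True
        if now - self.opened_at >= self.reset_after:
            self.opened_at = None  # half-open: let a probe through
            self.failures = 0
            return True
        return False

    def record(self, success: bool, now: float) -> None:
        """Record the outcome of a request."""
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = now
```

Putting this logic in the proxy, rather than in each per-language library, is what keeps the behavior consistent across the mesh.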
Most importantly, he stressed, Envoy provides best-in-class observability showing statistics, logging, and tracing.
For Envoy, it doesn’t matter where in the system you are; the mesh just works no matter where the code is running. Whether Lyft developers are writing code on a dev box, on a laptop, in staging, or in production, it just works.
Further, it allows developers to unlock interesting scenarios in development, where the services don’t even know about the networking. This, said Klein, can be very, very powerful.
No service call in the Lyft stack happens without running through Envoy. From an operations perspective, it just makes sense to use a single piece of software to get fully distributed tracing, logging, and statistics.
Lyft developers questioned why they needed Envoy to do something conceptually simple like retry, but Klein pointed out that retries are one of the best ways to take down your system through exponential overload.
“Nothing,” he said, “is easy in a complex system.”
Envoy is designed from the get-go to treat service discovery as lossy. The code assumes that hosts will come and go, and layers both active and passive health checking on top.
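The interplay between lossy discovery and health checking can be sketched as follows: discovery supplies a possibly stale snapshot of hosts, while health reports — whether from active probes or from passively observed request failures — decide which of those hosts actually receive traffic. The class below is an illustrative sketch with an assumed failure threshold, not Envoy’s actual data structures.

```python
class HostSet:
    """Sketch of an eventually-consistent view of an upstream cluster:
    discovery adds and removes hosts lossily, while active health
    checks (probes) and passive ones (observed request failures)
    decide which hosts actually receive traffic. The threshold is an
    illustrative assumption."""

    def __init__(self, unhealthy_after: int = 3):
        self.unhealthy_after = unhealthy_after
        self.failures: dict[str, int] = {}  # host -> consecutive failures

    def discovered(self, hosts: list[str]) -> None:
        """Reconcile with a possibly stale/incomplete discovery snapshot,
        keeping failure counts for hosts we already knew about."""
        self.failures = {h: self.failures.get(h, 0) for h in hosts}

    def report(self, host: str, healthy: bool) -> None:
        """Fed by both active probes and passive per-request outcomes."""
        if host in self.failures:
            self.failures[host] = 0 if healthy else self.failures[host] + 1

    def healthy_hosts(self) -> list[str]:
        """Hosts still eligible for load balancing."""
        return [h for h, f in self.failures.items() if f < self.unhealthy_after]
```

Because traffic eligibility comes from health checks rather than from discovery alone, a stale or incomplete discovery snapshot degrades gracefully instead of breaking routing.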
The Technical Breakdown
Each instance of Envoy is co-located with a single service. The Envoy instances communicate with each other and may also be accessing external services or discovery services.
From the service perspective, the service is only aware of the local Envoy instance.
Envoy is written in C++ for performance reasons and is an L3/L4, or byte-oriented, proxy: TCP connections come in, the bytes are operated on and pushed back out. This layer can be used for simple things like a basic HTTP proxy or for more complex protocols such as MongoDB, Redis, stunnel, or TCP rate limiting.
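The essence of a byte-oriented L3/L4 proxy fits in a few dozen lines: accept a connection, open one to the upstream, and pump bytes in both directions with no knowledge of the protocol on top. The sketch below uses Python threads for brevity; Envoy does the equivalent in C++ on a non-blocking event loop.

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src signals end-of-stream,
    then propagate the half-close. This is the whole job of an
    L3/L4 proxy: bytes in, bytes out, protocol-agnostic."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer may already be gone

def handle(client: socket.socket, upstream_addr: tuple) -> None:
    """Proxy one accepted connection to the configured upstream."""
    upstream = socket.create_connection(upstream_addr)
    t = threading.Thread(target=pump, args=(upstream, client), daemon=True)
    t.start()
    pump(client, upstream)
    t.join()
    client.close()
    upstream.close()

def serve(listener: socket.socket, upstream_addr: tuple) -> None:
    """Accept loop: one handler per connection."""
    while True:
        client, _ = listener.accept()
        threading.Thread(target=handle, args=(client, upstream_addr),
                         daemon=True).start()
```

Everything protocol-aware — HTTP routing, Redis, MongoDB — is layered on top of this byte-pumping core as filters.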
It was written to be HTTP/2 first, but it is a fully transparent proxy, including support for gRPC and a nifty gRPC HTTP/1.1 bridge. The HTTP L7 filter architecture makes it easy to plug in different functionality.
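The filter-chain idea can be sketched simply: each filter sees the request in order and can mutate it or stop the chain by producing a response early, with a terminal router filter at the end. The filter names, request shape, and fixed request ID below are illustrative assumptions, not Envoy’s C++ filter API.

```python
from typing import Callable, Optional

# Sketch of an HTTP filter chain in the spirit of an L7 filter
# architecture. The filter names, request shape, and fixed request ID
# are illustrative assumptions, not Envoy's actual filter API.

Request = dict
Response = dict
Filter = Callable[[Request], Optional[Response]]

def run_filters(filters: list, request: Request) -> Response:
    """Run each filter in order; a filter may short-circuit the chain
    by returning a response (e.g. a rate limiter rejecting)."""
    for f in filters:
        early = f(request)
        if early is not None:
            return early
    # Stand-in for the terminal router filter that proxies upstream.
    return {"status": 200, "body": "routed upstream"}

def add_request_id(request: Request) -> None:
    """Decorating filter: tag the request for tracing (fixed ID for demo)."""
    request.setdefault("headers", {})["x-request-id"] = "abc123"

def rate_limit(request: Request) -> Optional[Response]:
    """Rejecting filter: stop over-limit requests before routing."""
    if request.get("over_limit"):
        return {"status": 429, "body": "too many requests"}
    return None

resp = run_filters([add_request_id, rate_limit], {"path": "/v1/rides"})
```

Because filters compose, features like rate limiting, auth, and tracing can each live in one small, independently testable unit.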
Deploying Envoy on Your System
Klein suggested that a company deploying Envoy should take small steps. Lyft itself started small and rolled it out incrementally, taking a year to get fully deployed.
He recommends running one thread per core, calling the design “embarrassingly parallel.” There is a perception that this is wasteful, he explained, but in reality, health checks using plaintext calls do not consume a lot of resources. At very, very large scale this approach may not be ideal for active health checking, but for most companies the usage is not an issue.
The end result is that developers are able to focus on what they do best — deliver code.
To get started using Envoy, check out this tutorial from Datawire.