How Reddit Solved DevOps’ ‘Three-Stooges-in-a-Door’ Problem


This month a staff software engineer at Reddit shared a real-world example of how microservices have helped improve Reddit’s resilience — a thoughtful case study drawn from his own experience of handling sudden spikes in search requests. It’s a great example of sharing what you know, helping others in the larger community just by walking through interesting problems and solutions you’ve encountered. But it’s also an example of how a long-standing problem can find new solutions in the world of microservices for DevOps.

And best of all, he illustrated it all perfectly with a funny metaphor from the Three Stooges.

On LinkedIn, Rajiv Shah describes himself as a backend distributed systems and search engineer (after more than a year as a senior software engineer II for machine learning). But in his post, Shah first succinctly explained the chaos of the Three Stooges to readers unfamiliar with the classic Hollywood comedy trio.

“They often attempted to collaborate on simple daily tasks but invariably ended up getting in each other’s way and injuring each other,” Shah wrote. For example, walking through a door, they all collide, “and ultimately, no one could get through.”

“Just like forcing Stooges through a doorway, we’ve encountered similar patterns pushing requests through a distributed microservices architecture.”

Taming the ‘Thundering Herd’

It’s another way to describe a decades-old issue that has also been called the Thundering Herd problem. The phenomenon is so common it received its own entry in “The Jargon File,” a seminal compendium of programmer culture (last updated in 2003) that describes its occurrence in Unix systems.

There’s also a Wikipedia page for the Thundering Herd problem, detailing how it’s been handled in the Linux kernel and the I/O completion ports of Microsoft Windows. (The page notes that some systems even try randomizing the wait times before a retry — to break the synchronization that might otherwise happen when the thundering herd returns.)
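
As a rough illustration of that jittered-retry idea, a client can back off exponentially and then sleep a random fraction of the backoff window before retrying, so that clients which failed at the same moment don’t all come back at the same moment. This is a minimal, generic sketch; the attempt counts, delays, and function names are illustrative and not taken from any of the systems discussed here:

```python
import random
import time

def fetch_with_jitter(do_request, max_attempts=5, base_delay=0.5):
    """Retry a request with exponential backoff plus random ("full") jitter."""
    for attempt in range(max_attempts):
        try:
            return do_request()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # The randomness spreads retries out in time, breaking the
            # synchronization that produces a thundering herd of retries.
            ceiling = base_delay * (2 ** attempt)
            time.sleep(random.uniform(0, ceiling))
```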


And the phenomenon came up last year in a talk at the LISA conference of the USENIX computing systems association by Laura Nolan, formerly one of Google’s staff site reliability engineers in Ireland. (The talk’s title? “What Breaks Our Systems: A Taxonomy of Black Swans.”)

 

The classic example is when several processes are scheduled to run overnight, and the engineers creating them all unthinkingly choose the stroke of midnight as their start time, Nolan explained. But other examples include mobile clients that all suddenly start updating at the same specific time, or simply an unusually large batch of jobs kicking off all at once.

“I worked at Google, so we had to worry about people starting up 10,000 worker MapReduces — especially the interns!” Nolan said.

She also saw it happen at Slack, when restarting a server suddenly triggered a massive wave of Slack clients reconnecting (and then querying for their lists of channels, users, and any recent messages…).

“The real defense is to not fall into the trap of thinking, ‘Oh well, how would I get a thundering herd to my service?’” Nolan said. “Because as we’ve seen, there are all sorts of different ways this can happen.”

And something similar has happened at Reddit. Yes, the teams have implemented a system where responses can be cached (to avoid making a second full trip if an identical request arrives later). But Shah’s post explains that those responses also have a “Time to Live,” and if an outage lasts longer than that, they expire and the cache effectively ends up flushed.

The end result? “When the site recovers, we get inundated with requests… many of which are duplicates, made within a short period of time,” he writes. But with zero already-cached responses to handle those duplicates, “this causes such a flood of traffic that none of the requests succeeds within the request timeout, so no responses get cached; and the site promptly faceplants again.”

In fact, Shah has also heard the phenomenon referred to as a “cache stampede” (as well as “the Dogpile Effect”).

Diving into the Details

Reddit’s solution? Moving the deduplicating/caching to a different point in its stack — specifically, to its microservice level — along with “a web stack that can handle many concurrent requests,” Shah wrote. The engineers then implement code ensuring that no two duplicates are ever inadvertently processed concurrently (using a distributed lock). All that’s left is to check for an already-retrieved response, and to create a new one only if nothing is found in the cache.
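
Shah’s post walks through real code; purely as a hypothetical sketch of the flow just described (check the cache, otherwise take a per-request lock, re-check the cache, and only then do the expensive work), the logic might look something like this. The cache, acquire_lock, and fetch_from_backend helpers here are stand-ins, not Reddit’s actual APIs:

```python
def get_response(request_key, cache, acquire_lock, fetch_from_backend, ttl=60):
    """Deduplicate identical concurrent requests behind a distributed lock."""
    cached = cache.get(request_key)
    if cached is not None:
        return cached                    # fast path: the work was already done

    # The distributed lock forces duplicate requests into an orderly line.
    with acquire_lock(request_key):
        # Re-check after acquiring the lock: another waiter may have
        # populated the cache while this request was queued.
        cached = cache.get(request_key)
        if cached is not None:
            return cached

        response = fetch_from_backend(request_key)   # the one "real" trip
        cache.set(request_key, response, ttl)
        return response
```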


Shah wrote that the reduction in load has been dramatic — especially in an environment handling tens of thousands of requests.

“Think of deduplication as forcing The Stooges to form an orderly line at the doorway to the kitchen. Then the first Stooge enters the kitchen and exits with a bowl of lentil soup, and that bowl of soup gets cached.”

“Then the other two Stooges get cached bowls of soup.”

Although really, they’re all actually getting the exact same bowl of soup, Shah acknowledges in a follow-up comment. So he offers an alternate metaphor. “Maybe instead of a kitchen, it’s an art studio? The first Stooge waits, then gets a drawing. The other two Stooges get photocopies of that drawing?”

Reddit’s API gateway collates all the incoming requests from different platforms into a standard form for easier processing (while throwing out any superfluous variables that just aren’t relevant). But when they reach the microservices level, deduplication ends up getting handled using a simple programming construct known as a hash table — where a value gets paired with a unique identifier (a key) that can retrieve it later. This creates an easy way to spot duplicate requests, since identical requests map to the same identifier.

And “deduplication” can also involve other operations on the original request. (For example, discarding the tracking parameters in the URL, conforming with the request’s Content-Type header, adjusting for the time of day.) “Phrased differently, you should include every variable that could affect the response in any way,” Shah wrote.
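
To make that keying concrete, one hypothetical approach is to normalize the request down to just the variables that can affect the response and hash the result; requests that differ only in irrelevant details then collapse onto the same key. The field names and the utm_ filter below are purely illustrative, not Reddit’s actual normalization rules:

```python
import hashlib
import json

def dedup_key(method, path, params, content_type):
    """Build a deduplication key from everything that can affect the response."""
    # Drop variables that don't change the response (e.g. tracking parameters).
    relevant_params = {k: v for k, v in params.items() if not k.startswith("utm_")}
    canonical = json.dumps(
        {"method": method, "path": path,
         "params": relevant_params, "content_type": content_type},
        sort_keys=True,   # stable ordering so identical requests hash identically
    )
    return hashlib.sha256(canonical.encode()).hexdigest()
```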

 

Because the engineers are working at the microservices level, it’s also easy for them to customize individual responses. And this stage also gives them a chance to log and monitor the requests, or set up responses to specific events. In the post’s grand finale, Shah called microservices “your last line of defense between your users and your underlying datastores.”

He added, “We believe this solution to be a natural fit, easy to reason about, flexible, maintainable, and resilient.”


Reddit’s Stacks


Shah provides code snippets from an example where the distributed locks are implemented using Pottery’s implementation of Redis’s Redlock algorithm. (In its GitHub repository, Pottery describes itself as “Redis for Humans” and “a Pythonic way to access Redis… useful for accessing Redis more easily, and also for implementing microservice resilience patterns, and it has been battle-tested in production at scale.”) This lets Shah use Python’s threading.Lock API (along with a handy built-in timeout that releases the lock if its holder dies prematurely).
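
As a minimal sketch of how that might fit together (the Redis URL, key name, and release time here are placeholders, and the units of auto_release_time have varied across Pottery releases, so check the version you install):

```python
from redis import Redis
from pottery import Redlock

redis_client = Redis.from_url("redis://localhost:6379/0")

# A Redlock behaves like a threading.Lock, but it lives in Redis, so every
# instance of the microservice sees the same lock for a given request key.
lock = Redlock(key="dedup:example-request-key",
               masters={redis_client},
               auto_release_time=10)  # auto-releases if the holder dies

with lock:
    # Only one worker across the fleet runs this block for the key at a time;
    # duplicate requests queue here and proceed serially once it's released.
    ...  # re-check the cache, then make the expensive backend call if needed
```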

Shah also shared some details about Reddit’s technology stack, acknowledging how useful its engineers have found the gevent Python library for handling a large number of requests with a small number of instances. It lets the Reddit engineers tuck all the duplicate requests into a large queue (each one waiting its turn to get past the distributed lock), he wrote, “and then for those queues to be drained as threads acquire the lock and execute serially.” And all without ever exhausting the pool of available threads.
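
As a single-process stand-in for that pattern (a real deployment would use the distributed lock and shared cache described above rather than the in-process dictionary and semaphore used here), a gevent-based sketch might look like this:

```python
from gevent import monkey
monkey.patch_all()          # patch blocking I/O so greenlets yield cooperatively

import gevent
from gevent.lock import Semaphore

cache = {}
lock = Semaphore()          # in-process stand-in for the distributed lock

def handle(request_key):
    """One greenlet per incoming request; duplicates wait cheaply on the lock."""
    if request_key in cache:
        return cache[request_key]
    with lock:
        if request_key not in cache:        # re-check after waiting in the queue
            gevent.sleep(0.1)               # pretend this is the backend call
            cache[request_key] = f"response for {request_key}"
    return cache[request_key]

# Thousands of duplicate requests fan in, but the "backend" work runs only once.
jobs = [gevent.spawn(handle, "search?q=cats") for _ in range(10_000)]
gevent.joinall(jobs)
```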

 

Reddit has some experience running that gevent library with a combination of the Django and Flask frameworks — but also with a combination of Python 3 and its own Baseplate library (defined in its GitHub repository as “Reddit’s specification for the common shape of our services”).

But Shah’s post added that there are lots of ways to implement his particular solution to the Three-Stooges-in-a-doorway problem. “You can make it work for any web or microservice stack that can handle many concurrent input/output-bound requests” — as long as there’s some way to share data between all the back-end instances (for the distributed lock, and for the cached response).

Reframing the Problem

When Shah shared his post in Reddit’s programming forum, it received more than 1,600 upvotes (along with 186 comments). And the post even drew a positive reaction from Chris Slowe, Reddit’s CTO and founding engineer. Slowe posted on LinkedIn, “I honestly like the reframing from the ‘Thundering Herd’ to the ‘Three Stooges’ Problem, as it much better assesses how you feel when you run up against it.”

Ultimately, Shah is hoping that devs everywhere can get some benefit from his Three Stooges analogy.

As he put it in his blog post, “We hope that you’ll use it to make your own microservice boundary doorways more resilient to rowdy slapstick traffic.”
