
Serverless and Demand Spikes – InApps Technology


Thundra sponsored this post.

Emrah Samdan

Emrah is VP of Product at Thundra. He is enthusiastic about serverless, observability and chaos engineering.

In these extraordinary times that modern enterprises find themselves in, the ability to scale digital infrastructure seamlessly to meet volatile demand is as critical as ever. However, lean and distributed engineering teams will find it difficult to uphold the traditional “onsite” operational model of infrastructure management.

This article explores serverless technology as a potential solution for a flexible, scalable infrastructure that can be managed effectively, and remotely, by a lean and nimble engineering group. After looking at the scalability constraints of the traditional monolithic architecture, we’ll describe four key areas where serverless provides significant advantages: separation of concerns into microservices, support for DevOps automation, effortless scalability, and cost-efficiency.

For more detailed information on these issues, we invite you to download our white paper.

Traditional Infrastructure: The Monolith

Figure 1: Traditional monolithic application

As shown in Figure 1, the monolithic architecture comprises a cluster of compute nodes, each of which represents a logical server with an operating system that must be patched and maintained. A configuration management tool such as Ansible or Chef can automate this to some degree, but the environment has additional infrastructure and code to manage. To update the core application, in whole or in part, you must take down portions of the fleet — a dangerous and error-prone task.


Scaling to meet a sudden increase in user traffic is either a lengthy process that impacts the user experience, or a costly one, since infrastructure must be over-provisioned in anticipation of demand spikes. And implementing concurrency correctly in a distributed system is a non-trivial task that only becomes harder as the system grows. Vertical scaling (adding more RAM/CPU) hits hard limits, horizontal scaling (adding more nodes) adds complexity, and the database layer needs special attention of its own.

Traditional compute fleets also carry heavier administrative and operational burdens that are hard to manage with a leaner, distributed engineering group.

Separation of Concerns: Enter Microservices

Figure 2: Microservices application

In a microservices architecture, the functionality of the core application is broken into individual, stateless components, which enables scalability for both the overall system and the individual modules. This modular architecture makes it easy to identify, isolate and remediate bottlenecks without impacting overall application performance. With the managed service ecosystem around AWS Lambda, engineering teams can also easily scale the database (DynamoDB), caching (ElastiCache) or the compute itself (Lambda) without complex automation logic. Last but not least, with the compute layer separated into discrete, stateless functions, the application never has to “go down for maintenance” to deploy new features.
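
To make the idea of a discrete, stateless component concrete, here is a minimal sketch of such a function as a Python Lambda handler behind API Gateway; the event shape and names are illustrative assumptions, not taken from the article.

```python
import json

def handler(event, context):
    # Each invocation is independent: nothing is kept in local state between
    # requests, which is what lets the platform run many copies in parallel.
    body = json.loads(event.get("body") or "{}")
    order_id = body.get("order_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "accepted"}),
    }
```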

Engineering Culture: Building for Operational Excellence

Managing a monolithic infrastructure typically follows a sysadmin approach. Servers are taken out of rotation for patching, deployments or troubleshooting. New capacity can be provisioned with some degree of automation via configuration management and auto-scaling, but constant testing and configuration are needed to keep operating system images up to date and compatible.

The sysadmin model, however, faces serious challenges when transitioning to a distributed and leaner engineering paradigm. For example, it may prove impossible to coordinate an all-hands-on-deck response to an unexpected surge in user traffic.

By contrast, the serverless application stack lends itself well to a distributed team working in a DevOps culture. The 12-factor principles are a first-class consideration right from the initial design specifications. Everything is automated and configuration is stored in code: no manual intervention is required to scale the platform to handle millions of requests, and concurrency and scalability come as batteries-included features.
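
As one illustration of configuration stored in code, the sketch below declares a Lambda function with the AWS CDK in Python. The stack name, handler module and sizing values are illustrative assumptions; a team could just as well use SAM, the Serverless Framework or Terraform.

```python
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class OrderServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Runtime, memory and timeout live in version control, so scaling
        # and configuration changes go through code review like any feature.
        _lambda.Function(
            self, "ProcessOrder",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="process_order.handler",
            code=_lambda.Code.from_asset("src"),
            memory_size=256,
            timeout=Duration.seconds(10),
        )

app = App()
OrderServiceStack(app, "OrderServiceStack")
app.synth()
```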


Scaling and Concurrency: Meeting Demand

To meet dynamic demand requirements, the application infrastructure must be built from the start with a focus on scalability and concurrency. Traditional compute resources can offer both, but historically this has required herculean engineering efforts — including fragile hacks, which ultimately result in unstable systems. They also present considerable provisioning and capacity-planning challenges.

Cloud computing and its on-demand, pay-per-use service model revolutionized the software industry by decoupling capacity management from the management of the underlying infrastructure. However, the cloud computing model does not relieve engineering teams of the responsibility to manage scalability. EC2 Auto Scaling groups are designed more for high availability and resiliency than for real-time, dynamic responses to traffic surges. Even when auto-scaling is linked to Amazon CloudWatch metrics, scalability has to be carefully planned and managed to meet application performance and user experience requirements.
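
For context, wiring an Auto Scaling group to a CloudWatch-backed target-tracking policy might look like the boto3 sketch below; the group name and target value are illustrative assumptions, and choosing the right metric and threshold is exactly the planning work described above.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: try to keep average CPU across the group near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-fleet",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```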

Nor is cloud computing a silver bullet for database-layer scaling and concurrency. Ensuring that the data is consistent, highly available and performant requires an expensive engineering effort and constant juggling with the CAP theorem.

By contrast, implementing serverless compute resources such as AWS Lambda lets software engineers outsource these operational problems and focus instead on delivering features and a premium user experience. With Lambda, you don’t have to configure or manage scaling: AWS handles it automatically and offers effectively unlimited capacity for most workloads, subject only to account-level concurrency limits. AWS also offers a robust ecosystem of managed resources to support a serverless compute infrastructure.

For the database layer, DynamoDB is a fully managed NoSQL database that offers on-demand capacity pricing as well as highly performant auto-scaling. Combined with services like ElastiCache, CloudFront and API Gateway, it lets agile teams of developers deploy a complex fleet of services with a minimal investment in operations and infrastructure engineering.
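
As a rough illustration, provisioning an on-demand DynamoDB table takes a single API call; the table and attribute names below are illustrative assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# BillingMode PAY_PER_REQUEST means no capacity planning: the table is
# billed per read/write request and scales with traffic.
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```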

Cost Efficiency: Paying for What You Need

Even with modern cloud-scaling capabilities, engineering organizations often find themselves caught between a rock and a hard place when it comes to dealing with demand surges and capacity allocation: keep the entire fleet of instances fully scaled and incur potentially enormous usage bills, or stay scaled down and try to plan and engineer scaling policies and automation to deal with spikes in demand.


One of the key benefits of the serverless model is its cost-efficient billing. Most serverless platforms (Lambda included) bill only for execution time, typically metered in millisecond increments. Contrast that with a standard EC2 instance, which is billed per second with a one-minute minimum each time an instance launches.

If a function or application feature never takes more than 150ms to fully execute, it will be far more cost-effective to isolate it in a serverless function and execute it only when needed. There are numerous examples of software companies and engineering teams saving as much as 90% on infrastructure costs by using serverless infrastructure.
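
A back-of-the-envelope calculation illustrates the gap. The sketch below assumes illustrative us-east-1 list prices (roughly $0.20 per million Lambda requests plus $0.0000166667 per GB-second, and about $0.0104 per hour for a small always-on EC2 instance); check current pricing before relying on the numbers.

```python
# Hypothetical workload: one million 150 ms invocations per month on a
# 128 MB function, compared with a small EC2 instance running all month.
requests_per_month = 1_000_000
duration_s = 0.150
memory_gb = 0.128

lambda_compute = requests_per_month * duration_s * memory_gb * 0.0000166667
lambda_requests = (requests_per_month / 1_000_000) * 0.20
lambda_cost = lambda_compute + lambda_requests

ec2_cost = 0.0104 * 24 * 30  # one small instance, 720 hours

print(f"Lambda: ${lambda_cost:.2f}/month  EC2: ${ec2_cost:.2f}/month")
# Under these assumptions the serverless version costs well under a dollar
# a month, versus roughly $7.50 for the idle instance.
```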

The operational overhead of a stateless, serverless infrastructure is a fraction of that of a server-based model — with no operating system to patch, no network ports to firewall, and no storage devices to fill up with log files and stack dumps.

Leveraging Serverless to Meet Demand

Serverless technology providers offer a highly scalable, highly concurrent computing ecosystem within a flexible, usage-based billing model. When enterprises face unpredictable and unprecedented demand for their digital services, along with a fluid and distributed workforce, they are better served by adopting not only the technology but the culture as well.

Amazon Web Services is a sponsor of InApps Technology.

Feature image via Pixabay.





Source: InApps.net
