Red Hat sponsored this post.
There are no silver bullets in IT. For instance, it’s easy to think the public cloud can relieve all your business continuity worries. And major cloud providers certainly offer stronger data resilience than many on-premises data centers. But failures can and do happen, from major disruptions in business services, to backend outages that affect consumers.
Spreading workloads across multiple clouds, as many organizations now do, only adds complexity. Your mission-critical data no longer resides in a single location, and where the data sits can change frequently. Your disaster recovery has become more complicated.
Containerization and Kubernetes container orchestration can help. Kubernetes enables you to more quickly and easily shift applications among environments. If a container fails, Kubernetes can automatically restart it or schedule a replacement so the application keeps running.
It’s one reason Kubernetes has been embraced for production environments. In fact, 85% of enterprises say Kubernetes is key to cloud native application strategies, according to a recent Red Hat study.
But managing mission-critical applications with Kubernetes raises the bar on data resilience. For starters, those applications often require persistent data storage. What’s more, a microservices architecture segments applications into collections of discrete services that interact. All the data associated with each microservice is necessary to compose the application and its outputs. If an application is truly mission-critical, any downtime can affect your organization’s operations, revenues and reputation.
Fortunately, the right Kubernetes platform should give you the tools to optimize data resilience in your containerized, multicloud environment.
Take a Snapshot
Snapshots are fundamental to data resilience. They give you point-in-time copies of data that you can quickly and easily go back to if needed. For instance, they’re helpful during tests, patches and upgrades if you encounter a problem and need to return to a prior state.
The Container Storage Interface (CSI) plays an important role in enabling snapshots in Kubernetes. CSI is a standardized plug-in that enables connectivity between your container orchestration tool and your data storage systems.
An effective Kubernetes distribution will allow you to use CSI to manage your snapshot functionality. You can achieve customizable, point-in-time snapshots of persistent data volumes, making it faster and easier for you to return to a prior state.
CSI abstracts away the complexity of individual storage backends. Because it hides storage- and vendor-specific operations behind a common interface, you don’t need to be a storage specialist to create snapshots.
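In practice, CSI snapshots are driven by two Kubernetes objects: a VolumeSnapshotClass that names a CSI driver, and a VolumeSnapshot that references the persistent volume claim to capture. A minimal sketch, in which the driver, class and claim names are placeholders rather than any particular vendor’s values:

```yaml
# Declares how snapshots are taken: which CSI driver to use and what
# happens to the storage-side snapshot when the object is deleted.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass            # placeholder name
driver: example.csi.vendor.com   # placeholder; your storage vendor's CSI driver
deletionPolicy: Delete
---
# Requests a point-in-time snapshot of an existing persistent volume claim.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: app-data   # the PVC to snapshot (placeholder)
```

Applying the VolumeSnapshot triggers the CSI driver to create the snapshot on the backing storage system, with no vendor-specific tooling required.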
Note that snapshots exist only on the storage system where they’re created; you can’t simply “send” a snapshot to another location. A disaster recovery or backup application can, however, capture a snapshot’s contents into a backup, and that backup can then be copied elsewhere.
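Returning to a prior state on that same storage system amounts to creating a new persistent volume claim whose data source is the snapshot. A minimal sketch, assuming a snapshot named app-data-snap and a storage class backed by the same CSI driver (both hypothetical names):

```yaml
# Provisions a new volume pre-populated with the snapshot's contents.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restored
spec:
  storageClassName: csi-storageclass    # placeholder; must use the same CSI driver
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: app-data-snap                 # the snapshot to restore from
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                     # must be at least the snapshot's size
```

Pointing the application’s workload at the restored claim completes the rollback.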
Gaining Protection from APIs
Snapshots aren’t the only area where you can take advantage of Kubernetes-specific integrations for data resilience. Backups can also benefit.
With Kubernetes, your applications run in containers, while related data is kept in persistent storage provisioned from a storage class. To recover from failures, your backup solution must protect data in the context of the containers and the applications running in them. To work with container clusters, the backup also needs cluster-resource context: that way, when applications are restored, recovery tools can re-create the namespaces and reconnect them to their persistent data.
Without such APIs, you’d need a complete understanding of the Kubernetes namespace to restore your environment to a prior state, and manually restoring and re-associating every resource could take days.
But with the right APIs, you can create an application-consistent backup image, along with the persistent data volumes and metadata that describes the associated cluster resources. These types of APIs can allow for application portability across both clusters and cluster versions. Your backups can handle local application failures and restore to alternate clusters if an entire cluster fails.
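Velero, a widely used open source Kubernetes backup tool (and the basis of Red Hat’s OADP operator), illustrates this kind of API: a single Backup resource captures a namespace’s cluster resources together with its persistent volume data. A minimal sketch; the backup and application namespace names are hypothetical:

```yaml
# Backs up everything in the "myapp" namespace, including volume data.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: myapp-backup
  namespace: velero          # the namespace where Velero runs
spec:
  includedNamespaces:
    - myapp                  # placeholder application namespace
  snapshotVolumes: true      # capture persistent volume data as well
  ttl: 720h0m0s              # retain the backup for 30 days
```

A matching restore (`velero restore create --from-backup myapp-backup`) can target the original cluster or, if the entire cluster is lost, an alternate cluster with access to the same backup storage location.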
Syncing (and Asyncing) Your Data Replication
Finally, an effective Kubernetes data services solution will provide tools for both synchronous and asynchronous data replication. Synchronous replication simultaneously writes data to a local disk and a remote disk. If the local disk experiences an error, the system fails over to the replicated data as quickly as possible. Synchronous replication depends on prerequisites like high bandwidth and low latency.
Asynchronous replication, in contrast, initially writes data to primary disk storage, then replicates it at set intervals to another location. It can be executed over WAN-latency connections.
Your Kubernetes platform should enable “metro” data recovery for synchronous replication, where latencies don’t exceed single-digit-millisecond round-trip time between container platform nodes in different locations. It should also support “regional” data recovery, for asynchronous replication across container clusters. In both cases, the replication should be enabled by a Kubernetes data services solution separate from your backup or disaster recovery application.
Kubernetes has become the orchestration platform of choice for CIOs managing mature containerized environments. The technology is empowering organizations to optimize their applications to deliver greater functionality and richer user experiences. Even better, with the right data services, Kubernetes can help you achieve these goals while maintaining the data resilience your business depends on.