AB Periasamy
AB Periasamy is the CEO and co-founder of MinIO. One of the leading thinkers and technologists in the open source software movement, AB was a co-founder and CTO of GlusterFS, which was acquired by Red Hat in 2011. Following the acquisition, he served in the office of the CTO at Red Hat prior to founding MinIO in late 2015. AB is an active angel investor and serves on the board of H2O.ai and the Free Software Foundation of India. He earned his BE in Computer Science and Engineering from Annamalai University.

As a general rule, when people think about object storage they think about one thing — the price per TB/GB. Though a legitimate cost metric, it has the effect of making object storage one-dimensional and relegates it to archival use cases. Further, it distorts the value associated with this increasingly important part of the enterprise tech stack.

Frankly, the legacy object storage players are to blame. For years they have under-innovated on the technology front in favor of making ever-cheaper appliances. While these old school vendors might argue that is what customers wanted, they would be wrong.

The evidence can be found in the $25 billion in revenue that Amazon Web Services racked up last year — the vast majority of it in high performance, primary object storage. If we conservatively attribute $20 billion to S3 storage service, we can also safely say that S3 is likely as big as the rest of the appliance market combined. Throw in the similarly priced and rapidly growing Azure Blob and Google Cloud revenue and the case becomes clear — cost is but one consideration.

That is why modern enterprises focus on a broader set of metrics — metrics that emphasize performance, operational efficiency, flexibility and price — not just price alone. They recognize that putting your data on ice reduces its value to the organization. At a time when the goal is to maximize the value of the organization’s data, the appliance vendor’s approach seems antithetical.

What should enterprises be considering? Well, they fall into five broad categories:

  1. Performance
  2. Scalability
  3. S3 compatibility
  4. Failure response
  5. Consistency

These five elements, in addition to cost, are what define the new metrics in object storage. They are the super six. Let us look at them in turn.

Performance

Object storage has not traditionally been known for performance. In the race to the bottom on price, appliance vendors continually sacrificed performance. Case in point: they use terms like "glacial" to describe their product offerings.

Modern object storage changes that.

From Amazon to MinIO, we are seeing speeds that approach, and even surpass, Hadoop. The new metrics for object storage are read and write speeds in the tens of GB/s on HDD and 35+ GB/s on NVMe. This throughput is plenty fast for Spark, Presto, TensorFlow, Teradata, Vertica, Splunk and the rest of the modern computational frameworks in the analytics stack. The fact that MPP databases are targeting object storage is evidence that object storage is increasingly the primary storage.

If your object storage system can’t deliver these speeds then you can’t interact with all of your data and can’t extract the appropriate value from it. Even if you pull the data out of your traditional object store to an in-memory processing framework you still need the throughput to shuttle data in and out of that memory — you simply cannot get that throughput from legacy object appliances.

This is a key point. The new performance metric is throughput, not latency. This is what is required for data at scale — something that is the norm in the modern data infrastructure.

It should be noted that while performance benchmarks are a useful proxy, you do not truly know what performance looks like until you have run your specific application in that environment. Only then can you understand whether the bottleneck is the storage software, the drives, the network or the compute layer.
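As a rough illustration of measuring throughput rather than latency, here is a minimal, single-threaded probe against an S3-compatible endpoint using boto3. The endpoint URL, bucket name, object size and count are placeholder assumptions, and real benchmarks run many parallel clients across many nodes, so treat this as a sketch of the measurement, not a benchmark suite.

```python
# Minimal throughput probe against an S3-compatible endpoint (single-threaded sketch).
# Assumptions: boto3 installed, credentials in the environment, and the bucket
# "perf-test" already exists on the placeholder endpoint below.
import time
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")  # point at your object store

BUCKET = "perf-test"
OBJECT_SIZE = 64 * 1024 * 1024          # 64 MiB per object
COUNT = 16
payload = b"x" * OBJECT_SIZE

start = time.time()
for i in range(COUNT):
    s3.put_object(Bucket=BUCKET, Key=f"bench/obj-{i}", Body=payload)
elapsed = time.time() - start
print(f"PUT throughput: {(COUNT * OBJECT_SIZE) / elapsed / 1e9:.2f} GB/s")

start = time.time()
for i in range(COUNT):
    s3.get_object(Bucket=BUCKET, Key=f"bench/obj-{i}")["Body"].read()
elapsed = time.time() - start
print(f"GET throughput: {(COUNT * OBJECT_SIZE) / elapsed / 1e9:.2f} GB/s")
```

Even a toy probe like this makes the point: the number that matters is aggregate bytes per second moved into and out of the store, not the round-trip time of a single request.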

Scalability

Scalability is usually measured as the number of petabytes that fit into a single namespace. Every vendor claims zettabyte scale, but that claim hides the fact that massive, monolithic systems become brittle, complex, unstable and expensive as they grow.

The new metric for scalability is how many different namespaces or tenants you can handle.

This metric is taken directly from the hyper-scalers — where the building blocks are small but scale to the billions. It is, in short, the cloud native way.

When the building blocks are small everything can be understood and optimized more effectively — security, access control, policy management, lifecycle management, non-disruptive upgrades and updates and ultimately performance. The size of the building block is a function of the manageability of the failure domain. This is how highly resilient systems are architected.


Multitenancy has multiple dimensions in the modern enterprise. While it certainly refers to how enterprises organize access to data and applications it also refers to the applications themselves and how they are logically isolated from each other.

A modern approach to multitenancy has the following characteristics:

  • Tenants can grow from a few hundred to a few million in a short span of time.
  • Tenants are fully isolated from each other, enabling them to run different versions of the same object storage software with different configurations, permissions, features, security and service levels. This is an operational fact of life as new servers, updates and geographies are rolled out.
  • Tenants are elastic and provisioned on demand.
  • Every operation is API driven and automated without a human in the loop looking at a dashboard.
  • The software is light enough to be containerized and leverages industry-standard orchestration services like Kubernetes.
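To make the "API driven and automated" point concrete, here is a hedged sketch of tenant provisioning against an S3-compatible endpoint with boto3. The bucket naming convention, versioning setting and lifecycle rule are illustrative assumptions, not a prescription; in a Kubernetes deployment the same automation would typically live in an operator or a CI pipeline rather than a script.

```python
# Hypothetical, fully automated tenant provisioning: one bucket per tenant,
# versioning enabled, and a lifecycle rule to clean up stale multipart uploads.
# Endpoint, names and numbers are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

def provision_tenant(tenant: str) -> str:
    bucket = f"tenant-{tenant}"
    s3.create_bucket(Bucket=bucket)
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "abort-stale-multipart",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                }
            ]
        },
    )
    return bucket

# No human in the loop: the same call provisions tenant 1 and tenant 1,000,000.
provision_tenant("acme")
```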

S3 Compatibility

The Amazon S3 API is the de facto standard for object storage, to the point where every object storage software vendor claims compatibility. That said, S3 compatibility is effectively binary: it works in all cases or it doesn't. The metric for S3 compatibility is 1.

What that means is that there are hundreds, perhaps thousands, of corner cases where what you expect to happen does not. This is particularly challenging for proprietary software or appliance vendors, because most of their use cases are straight archival or backup, so the diversity of API calls is quite low and the use cases fairly homogeneous. Clearly this is an area where open source software has a significant advantage: given the size and diversity of the applications, operating systems and hardware architectures it runs against, it has seen most of the corner cases.

If you build applications, this matters. You will need to test your applications against those vendors. Open source makes it easy to assess vendor claims and determine how each platform performs against your applications. If a vendor's software is good enough to serve as a gateway, and is used as one by others, you can have confidence it will serve your needs too. One last point on open source and S3: open source means enterprises avoid vendor lock-in and gain transparency. This provides comfort that the solution will be around far longer than any single deployment.

A few more quick points on S3 compatibility.

If you are running any big data application, S3 Select provides orders-of-magnitude improvements in performance and efficiency by using SQL to extract only what is needed from the object store.
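For illustration, here is a minimal S3 Select call with boto3. The endpoint, bucket, key and column names are placeholder assumptions; the point is that only the filtered rows cross the wire instead of the entire object.

```python
# A minimal S3 Select sketch: push the filter down to the object store.
# Bucket, key and column names are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")  # any S3-compatible endpoint

resp = s3.select_object_content(
    Bucket="analytics",
    Key="events/2021/records.csv",
    ExpressionType="SQL",
    Expression="SELECT s.user_id, s.amount FROM s3object s "
               "WHERE CAST(s.amount AS FLOAT) > 100",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# Only the matching rows are returned, not the whole object.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```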

Additionally, support for bucket notifications is key. Bucket notifications facilitate serverless computing, a critical component of any microservices-based, function-as-a-service architecture. Given that object storage is the de facto storage in the cloud, this capability becomes table stakes when exposing your object server to cloud native applications.
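As a hedged sketch, this is roughly what wiring up a notification target looks like with boto3. The bucket name and queue ARN are placeholders; MinIO and AWS use different ARN formats and notification targets, so check your store's documentation for the exact values.

```python
# Subscribe a queue/webhook target to object-created events on a bucket.
# The ARN below is a placeholder, not a real target.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

s3.put_bucket_notification_configuration(
    Bucket="images",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:minio:sqs::PRIMARY:webhook",  # placeholder target ARN
                "Events": ["s3:ObjectCreated:*"],              # fire on every new object
            }
        ]
    },
)
```

From there, every PUT into the bucket can trigger a function, a pipeline step or an index update without any polling.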


Finally, an S3 implementation needs to support the Amazon S3 server-side encryption APIs (SSE-C, SSE-S3, SSE-KMS). Better yet, it should support tamper-proofing that is provably secure. Anything less invites unnecessary risk.
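The three encryption modes map to request parameters on the same PUT call. A minimal sketch with boto3, assuming credentials in the environment and placeholder bucket, key and KMS key names:

```python
# Server-side encryption on PUT. Bucket, object keys and the KMS key id are placeholders.
import os
import boto3

s3 = boto3.client("s3")

# SSE-S3: the object store manages the encryption keys.
s3.put_object(Bucket="secure-data", Key="report-sse-s3.pdf", Body=b"...",
              ServerSideEncryption="AES256")

# SSE-KMS: keys come from a key management service, referenced by id.
s3.put_object(Bucket="secure-data", Key="report-sse-kms.pdf", Body=b"...",
              ServerSideEncryption="aws:kms", SSEKMSKeyId="my-key-id")

# SSE-C: the client supplies (and must retain) the data key on every request.
customer_key = os.urandom(32)  # 256-bit key held by the client, never stored server-side
s3.put_object(Bucket="secure-data", Key="report-sse-c.pdf", Body=b"...",
              SSECustomerAlgorithm="AES256", SSECustomerKey=customer_key)
```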

Failure

Perhaps the most overlooked metric in object storage is how a system handles failure. Failure happens and comes in multiple flavors. An object storage system needs to handle all of them gracefully.

For example, there is the single point of failure. The metric for this is zero.

Unfortunately, many object storage systems will employ “special” nodes that have to be up in order for the cluster to work correctly. These include name nodes or metadata servers. This creates a single point of failure.

Even where there are multiple points of failure, the ability to endure catastrophic failure is paramount. Drives fail. Servers go down. The key is to adopt software that is designed to treat failure as a normal condition. This means that when either a disk or node goes down the software keeps functioning unaffected.

The revolution brought about by inline erasure coding and bitrot protection ensures that you can lose as many disks or nodes as you have parity blocks before the software is no longer capable of returning data. This is generally half the drives.
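As a back-of-the-envelope illustration of that trade-off (the 16-drive, 8-parity split below is an assumed example, not a recommendation for any particular product):

```python
# Worked example of erasure-coding failure tolerance and storage efficiency.
def erasure_profile(total_drives: int, parity: int) -> dict:
    data = total_drives - parity
    return {
        "data_blocks": data,
        "parity_blocks": parity,
        "drives_you_can_lose": parity,              # reads still succeed up to this many failures
        "storage_efficiency": data / total_drives,  # usable fraction of raw capacity
    }

print(erasure_profile(total_drives=16, parity=8))
# {'data_blocks': 8, 'parity_blocks': 8, 'drives_you_can_lose': 8, 'storage_efficiency': 0.5}
```

Half the drives go to parity in this example, which is the cost of being able to lose half of them and keep serving data.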

Failure is rarely tested at scale, but such testing should be mandatory. Simulating failure under load will provide an accurate picture of the aggregate costs of that failure (data loss, time and skills).

Consistency

The metric for consistency is 100%, otherwise known as strict. Consistency is a key component of any storage system, but strict consistency is somewhat rare. For example, Amazon S3's ListObjects is not strictly consistent; it is only eventually consistent.

What do we mean by strict consistency? For all operations after an acknowledged PUT, the following must hold:

  • The updated value is visible on read from any node.
  • The update is protected from node failure with redundancy.

This means that if the plug is pulled in the middle of a write, nothing is lost. The result is that the system never returns corrupt or stale data. This is a high bar and has implications for everything from transactional applications to backup and restore use cases.
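A tiny read-after-write probe makes the bar concrete. This is a sketch with boto3 against a placeholder endpoint and bucket, and it assumes the bucket already exists; a strictly consistent store must pass both assertions immediately after the PUT is acknowledged.

```python
# Read-after-write probe: the new value must be visible on GET and the key on LIST.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

s3.put_object(Bucket="consistency-test", Key="probe", Body=b"v2")

body = s3.get_object(Bucket="consistency-test", Key="probe")["Body"].read()
assert body == b"v2", "stale read after acknowledged PUT"

keys = [o["Key"] for o in s3.list_objects_v2(Bucket="consistency-test").get("Contents", [])]
assert "probe" in keys, "listing lags behind an acknowledged PUT"
```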

Conclusion

These are the new metrics in object storage, and they reflect the usage patterns of modern enterprises, where performance, consistency, scalability, failure domains and S3 compatibility serve as the building blocks for cloud native applications and big data analytics. We encourage readers to use this list, in addition to the consideration of cost, as they build modern data stacks.

Feature image by arielrobin from Pixabay.