Ashish Thusoo
Ashish Thusoo is the CEO and co-founder of Qubole. Before co-founding Qubole in 2011, Ashish led Facebook’s Data Infrastructure team and built one of the largest data processing and analytics platforms in the world. Ashish’s work at Facebook not only helped achieve the bold aim of making data accessible to analysts, engineers and data scientists, but also drove the big data revolution.

The sharp rise in real-time data use has inevitably led to increased investments in streaming analytics. Businesses are ingesting more and more data from sensors, smartphones, IT equipment, websites and other non-traditional sources, and processing this data in real-time to improve operations and better serve customers.

More often than not, data comes in from multiple sources and is collected in an open data lake where it’s combined with existing historical data to deliver business value and results, often with machine learning and AI. The challenge for data engineers is to build streaming data pipelines that allow for rapid experimentation and operate reliably at scale.

When assembling a stream processing stack with a combination of open source and commercial software, there are several hurdles that can hinder the success of a project. This piece explores some of the best practices to overcome those hurdles and build streaming pipelines that are fast, scalable and robust.

Allow for Rapid Experimentation

The ability to try out new models and datasets is critical. The important step here is to reduce the time it takes to acquire and prepare new data sources so they can be combined with other data assets. For example, use pre-packaged connectors to common streaming data sources, which allow you to experiment quickly with different data sets without writing code. These can include connectors for Kafka, Kinesis, S3, S3-SQS, GCS, HIVE, HIVE-ACID, Snowflake, BigQuery, ElasticSearch, MongoDB and Druid.

But connectors alone are not enough for rapid experimentation. It’s also important to use tools that generate pipeline code automatically, for example the code for connecting to a data source, and then let you edit that code and build the business logic through a UI-driven interface.
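
As an illustration, here is a minimal PySpark sketch of pulling a stream in through Spark’s built-in Kafka connector; the broker address, topic name and event schema are placeholders, and swapping in a Kinesis or S3 source is mainly a matter of changing the format and options rather than the downstream logic.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("rapid-experiment").getOrCreate()

# Hypothetical schema for incoming JSON sensor events.
event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("temperature", DoubleType()),
])

# The built-in Kafka connector; a different source would be swapped in by
# changing the format and options, not the downstream logic.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "sensor-events")                # placeholder topic
    .load()
)

# Parse the Kafka payload into typed columns the business logic can work with.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)
```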

Control TCO with Automation

Spark Structured Streaming pipelines are long-running, which means costs can quickly spiral out of control. Applying automation for cluster management can significantly reduce costs while ensuring business SLAs are maintained.

The key is to make full use of the spot and preemptible node types from the major cloud providers alongside their traditional on-demand instances, since these discounted node types let you run workloads far more cost-effectively. Selecting the right instance types, and knowing when to scale them up and down, is a guessing game for data engineers, who tend to oscillate between over-provisioning and under-provisioning. The answer is heterogeneous cluster configurations, combined with automation that adjusts cluster size in real time based on usage patterns and can even predict what cluster size to deploy based on historical data.
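
As one building block, the sketch below shows Spark’s dynamic allocation settings, which let the cluster manager add and remove executors as load changes; the values are illustrative, and spot/preemptible node selection and predictive sizing are typically configured at the platform level rather than inside Spark itself.

```python
from pyspark.sql import SparkSession

# Illustrative settings only; real limits depend on workload and SLA.
spark = (
    SparkSession.builder
    .appName("autoscaled-streaming-job")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")    # small always-on floor
    .config("spark.dynamicAllocation.maxExecutors", "50")   # burst onto cheaper capacity
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .getOrCreate()
)
```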

Allow for Cloud Portability

A Gartner survey last year found that more than 80% of respondents were using more than one public cloud provider. In a world where data is generated and stored in multiple clouds, it’s imperative to have a streaming pipeline strategy that doesn’t lock you into a particular repository, storage format, data processing framework or user interface. Adaptability is key.

That requires technology that provides the same streaming pipeline capabilities on every public cloud you expect to use. Engineers should be able to bring their own code (JARs) and re-create an existing pipeline in each cloud with some reconfiguration, rather than a complete rewrite.
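
One way to keep the code portable is to isolate cloud-specific settings in configuration, as in the sketch below; the bucket and container names are placeholders, and `events` stands in for a streaming DataFrame such as the one built in the earlier Kafka sketch.

```python
import os

# Hypothetical per-cloud storage locations; only the URIs differ across clouds.
CLOUD_PATHS = {
    "aws":   {"lake": "s3a://my-bucket/events/",
              "checkpoint": "s3a://my-bucket/checkpoints/"},
    "gcp":   {"lake": "gs://my-bucket/events/",
              "checkpoint": "gs://my-bucket/checkpoints/"},
    "azure": {"lake": "abfss://container@account.dfs.core.windows.net/events/",
              "checkpoint": "abfss://container@account.dfs.core.windows.net/checkpoints/"},
}

target = CLOUD_PATHS[os.environ.get("CLOUD_PROVIDER", "aws")]

# The business logic in `events` stays identical; only sink and checkpoint URIs change.
query = (
    events.writeStream
    .format("parquet")
    .option("path", target["lake"])
    .option("checkpointLocation", target["checkpoint"])
    .start()
)
```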

Make It Easy to Test and Debug

Since building pipelines is an iterative process, having efficient ways to test and debug them is key. Engineers should be able to test a pipeline with a “dry run” that lasts just a few minutes using a subset of the input data. This helps with verifying connectivity, making sure the data schema is correct, and ensuring the business logic is complete and performs as expected.
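
For example, with Spark Structured Streaming a dry run can be approximated by capping the volume read per trigger and draining only the data currently available; the sketch below assumes the SparkSession and Kafka source from the earlier examples, and all option values are illustrative.

```python
# Read a small slice of the input so the run finishes in minutes.
dry_run = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-events")
    .option("maxOffsetsPerTrigger", "1000")     # cap records per micro-batch
    .load()
)

(
    dry_run.writeStream
    .format("console")                          # inspect output instead of writing to the lake
    .trigger(availableNow=True)                 # drain available data, then stop (Spark 3.3+;
                                                # use trigger(once=True) on older versions)
    .option("checkpointLocation", "/tmp/dry-run-checkpoint")
    .start()
    .awaitTermination()
)
```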

Ensure Data Accuracy and Consistency

Long-running pipelines can experience issues with consistency, accuracy and quality, due to out-of-order data arrival and schema evolution. To address this, look closely at the data transfer process and how pipeline updates are managed. As a best practice, establish pre-defined, periodic “control points” for the data after it’s been stored in the data lake. This ensures you have reliable, ordered and error-checked data (conceptually analogous to the Transmission Control Protocol in internet packet switching).
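
A control point can be as simple as a scheduled validation job that runs against newly landed data before downstream consumers pick it up; the sketch below is illustrative, with hypothetical paths, column names and checks.

```python
from pyspark.sql import functions as F

def control_point(spark, path, expected_columns):
    """Run basic checks on data that landed in the lake for one period."""
    df = spark.read.parquet(path)

    # 1. Schema check: fail fast if an upstream schema change slipped through.
    missing = set(expected_columns) - set(df.columns)
    if missing:
        raise ValueError(f"Schema drift detected, missing columns: {missing}")

    # 2. Completeness check: reject empty batches.
    row_count = df.count()
    if row_count == 0:
        raise ValueError("Control point failed: no rows landed for this period")

    # 3. Ordering check: report the event-time range for the landed data.
    stats = df.agg(F.min("event_time").alias("min_ts"),
                   F.max("event_time").alias("max_ts")).collect()[0]
    return {"rows": row_count,
            "min_event_time": stats["min_ts"],
            "max_event_time": stats["max_ts"]}
```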

In general, eliminate complexity and potential error points when making changes to the code of your data pipelines or applications. The applications will change over time as business needs evolve, requiring continuous data engineering.

Allow for Replaying and Reprocessing Data

When models are updated or business logic changes, users may need to edit the pipeline and replay it from a particular checkpoint.

It’s important to have the capability to replay or reprocess a streaming pipeline efficiently when changes occur, and to be able to do so from different points in the process. This is analogous to editing video — it would be impractical and time-consuming to replay an entire video every time you made a single edit. The same principle applies to streaming data pipelines.
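
With a Kafka-backed pipeline in Spark, for instance, a replay can be started from explicit offsets rather than from the beginning of the topic; the sketch below assumes the SparkSession from the earlier examples, and the offsets, topic and paths are placeholders.

```python
# Re-read from a chosen point in the stream instead of replaying everything.
replayed = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-events")
    # Explicit per-partition starting offsets (used when no checkpoint exists yet).
    .option("startingOffsets", '{"sensor-events": {"0": 123456, "1": 123789}}')
    .load()
)

(
    replayed.writeStream
    .format("parquet")
    .option("path", "s3a://my-bucket/events-reprocessed/")
    # A fresh checkpoint directory so the replay does not collide with the live query.
    .option("checkpointLocation", "s3a://my-bucket/checkpoints-replay/")
    .start()
)
```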

One way to deal with this is to provide an “error sink.” The schema of the input data can evolve and lead to inconsistencies, so it’s important to be able to detect schema mismatches and invalid records, filter those records out, and write the metadata of the bad records — along with the exception message — to a separate storage location. Users can set alerts for when this occurs and prevent data loss by cleansing and reprocessing the erroneous records.
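
A minimal version of this pattern in Spark might route records that fail to parse against the expected schema to a separate location; the sketch below reuses the `raw` DataFrame and `event_schema` from the earlier Kafka example, the paths are placeholders, and a production version would also capture the exception details.

```python
from pyspark.sql.functions import col, from_json, current_timestamp

# from_json yields NULL when a payload cannot be parsed against the schema.
parsed = raw.withColumn("parsed", from_json(col("value").cast("string"), event_schema))

good = parsed.filter(col("parsed").isNotNull()).select("parsed.*")

# Keep the raw payload and a timestamp for bad records so they can be alerted on,
# cleansed and reprocessed later instead of being silently dropped.
bad = (
    parsed.filter(col("parsed").isNull())
          .select(col("value").cast("string").alias("raw_record"),
                  current_timestamp().alias("detected_at"))
)

good.writeStream.format("parquet") \
    .option("path", "s3a://my-bucket/events/") \
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/good/") \
    .start()

bad.writeStream.format("parquet") \
    .option("path", "s3a://my-bucket/error-sink/") \
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/error-sink/") \
    .start()
```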

Minimize Time to Use with Native Streaming

Some applications require true real-time data, making it important to reduce the latency between when data is received and when it can be put to use. Some data warehouses support “microbatching,” which collects data often and in small increments, but this introduces too much latency for true real-time use cases.

Data lakes are more appropriate for these use cases because they support native streaming, where data is processed and made available for analytics as it arrives. The data pipeline transforms data as it is received and triggers the computations required for analytics. Choosing between native streaming and microbatching depends on the needs of your application.
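
In Spark Structured Streaming, this difference shows up in the trigger: a processing-time trigger micro-batches the stream, while the still-experimental continuous mode targets millisecond-level latency but supports only a limited set of operations and sinks. The sketch below reuses the `events` DataFrame from the earlier examples, with placeholder topics and paths.

```python
# Micro-batching: collect and process data every 30 seconds.
micro_batch_query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://my-bucket/events-microbatch/")
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/microbatch/")
    .trigger(processingTime="30 seconds")
    .start()
)

# Continuous processing (experimental, low latency): Kafka sink expects a `value`
# column, so the parsed columns are re-serialized to JSON before writing.
continuous_query = (
    events.selectExpr("to_json(struct(*)) AS value")
    .writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("topic", "enriched-events")
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/continuous/")
    .trigger(continuous="1 second")   # checkpoint interval, not batch interval
    .start()
)
```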

Closing Thoughts

Real-time data provides opportunities for deeper analytics and exciting new use cases, but it’s a relatively new field where engineers need the ability to experiment quickly while controlling costs. Building a stream processing stack that takes advantage of automation and pre-written code will allow your team to develop new ways of analyzing data while maximizing efficiency and innovation. The guidelines above should help make your streaming data projects successful.

Feature image via Pixabay.
