The Missing Link to Fully Automate Your Pipeline

VMware sponsored this post.

Dan Illson

Dan is a cloud advocate for the Cloud Services organization at VMware. At present, Dan is primarily focused on the Kubernetes ecosystem, function-as-a-service frameworks and cloud native application development. He is also concerned with the management and operation of applications in the public cloud. Prior to this role, he had served as an NSBU systems engineer since 2013. Before joining VMware, Dan was a consulting systems engineer at Cisco Systems in the service provider organization. Dan holds B.S. and M.S. degrees in electrical and computer engineering from Drexel University.

It would be hard to find any organization shipping software that is not under pressure to deploy more often, with fewer defects and greater feature velocity. Inconsistent processes and manual operations are some of the common issues preventing teams from realizing their potential with regard to software delivery. In response to these challenges, much has been written about the impact of adopting continuous integration and continuous delivery (CI/CD) practices on software velocity.

However, many applications are not ready for use at the completion of their deployment pipelines. Post-deployment work to “operationalize” or “harden” services is still a fairly common practice and generally slows the overall rate of software delivery. The numbers tell the story: a Riverbed-sponsored study by Enterprise Management Associates (EMA) found that for 63 percent of the organizations surveyed, less than half of the end-to-end continuous deployment process was automated. Only 6 percent characterized their process as 90 to 100 percent automated.

In this post, we describe how an alternative practice, Continuous Verification (CV), can reduce and potentially eliminate these post-deployment actions in order to accelerate the pace and reliability of software delivery.

Bahubali (Bill) Shetti

Bahubali leads a team of Public Cloud Advocates at VMware, focused entirely on best practices for application development and deployment on AWS/Azure/GCP. Bahubali has held previous roles at cloud companies as a developer, product manager and sales engineer. He has been working on various topics and capabilities for the cloud for the last 10 years.

VMware defines CV as:

“A process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.”

This definition depends on the existence of a deployment process, which includes one or more pipelines for CI and/or CD. Where continuous integration and delivery are output-driven, resulting in an artifact or a deployment, continuous verification is an optimization practice. In many organizations, a series of post-deployment activities is required before a deployed application is ready for its intended users to take advantage of its features. Often, these steps have not been included in the pipeline because they are performed manually or are driven by custom scripts that were not designed to be part of a broadly orchestrated process.

Each human-driven interaction with the deployed application creates additional time in which the application is not delivering value to the organization. Shifting these activities into the pipeline increases the overall efficiency of the process and reduces the operational “lag” between deployment and availability. These efficiency gains are realized through an overall reduction in human-driven actions.

Figure 1. Block diagram of a deployment pipeline and post-deployment steps.

CV can augment the process pictured above by moving some or all of the post-deployment steps into feedback loop(s) within the pipeline. Once these activities are codified into the pipeline, they are executed reliably and predictably according to the procedure of the pipeline(s) and the logic defined by the organization. This uniformity of approach and action decreases variability between deployments and increases their reliability. These trends, in turn, allow humans to focus on designing the policies necessary to further increase the scope and efficiency of the organization’s pipeline(s). We’ve written about these factors in greater detail in a piece titled “Fences and Gates.”

Figure 2. “Post Deployment Step 1” from the previous figure has moved into the pipeline.

As these post-deployment steps are incorporated into the pipeline, the overall efficiency increases. These “verification” steps create feedback loops capable of altering the eventual output of the pipeline. As more of these operations occur, the time to deployment (Td) trends lower and the deployments become a more accurate reflection of organizational policy.
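As a rough illustration of what such a verification step can look like in code, the sketch below shows a generic gate that queries an external system and uses the response to decide whether the pipeline proceeds or feeds back (for example, into a rollback). Every name in it (query_external_system, the violations field, the threshold) is a hypothetical placeholder rather than any particular product’s API.

```python
import sys


def query_external_system() -> dict:
    """Hypothetical stand-in for a call to a monitoring, cost, or security API.

    In a real pipeline this would query a tool such as a cost platform,
    a vulnerability scanner, or an APM system and return structured results.
    """
    return {"violations": 0}  # stubbed response for illustration


def verify_deployment(max_violations: int = 0) -> bool:
    """Continuous Verification step: use the external response to decide
    whether the pipeline may proceed."""
    result = query_external_system()
    return result.get("violations", 0) <= max_violations


if __name__ == "__main__":
    if verify_deployment():
        print("Verification passed -- continuing to the next pipeline stage.")
    else:
        print("Verification failed -- feeding back into the pipeline (e.g. rollback).")
        sys.exit(1)  # a non-zero exit code fails this CI/CD stage
```

Exiting with a non-zero status is the conventional way a script signals failure to a CI/CD runner such as Jenkins, so the pipeline’s own logic can decide whether to retry, roll back or halt.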

What are some examples of human-initiated verifications?

  • Verifying that the utilization and/or cost of resources does not exceed authorized limits;
  • Validating that the infrastructure (AWS VPC/AWS EKS/etc.) configuration follows organizational guardrails;
  • Understanding whether developers have exposed vulnerabilities in artifacts from CI;
  • Ensuring the application performs within latency limits on the infrastructure;
  • Confirming the application is authorized to use the correct set of resources and services.

Depending on your internal requirements and processes, some of these post-deployment steps can be integrated into one or multiple stages of a CD pipeline.

In most cases, the cost of deployment is managed on an ongoing basis. While business operations continuously monitors costs (pre- and post-deployment), the check on whether the budget has been exceeded is typically done post-deployment. Automating a simple check with tools like Cloudability, CloudHealth by VMware or Cloudyn in the CD pipeline prior to deployment not only reduces the time to deployment but also removes rollbacks due to overages. Our team has detailed how you can achieve this in one of our blog posts: “Being Budget Conscious in a Continuously Automated World.”
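As a minimal sketch of that pre-deployment budget gate, and assuming a placeholder get_month_to_date_spend() function standing in for whatever API your cost tool (Cloudability, CloudHealth, Cloudyn, etc.) actually exposes, a CD stage could run something like this before the deployment step:

```python
import os
import sys


def get_month_to_date_spend() -> float:
    """Placeholder for a call to your cost tool's API (Cloudability,
    CloudHealth, Cloudyn, ...). The call and its name are hypothetical."""
    return 7350.00  # stubbed value for illustration


# Limits come from the environment so the same script works across pipelines.
MONTHLY_BUDGET = float(os.environ.get("MONTHLY_BUDGET_USD", "10000"))
ESTIMATED_DEPLOY_COST = float(os.environ.get("ESTIMATED_DEPLOY_COST_USD", "250"))

spend = get_month_to_date_spend()
if spend + ESTIMATED_DEPLOY_COST > MONTHLY_BUDGET:
    print(f"Budget gate failed: {spend:.2f} + {ESTIMATED_DEPLOY_COST:.2f} USD "
          f"would exceed the {MONTHLY_BUDGET:.2f} USD budget.")
    sys.exit(1)  # stop the CD stage before the deployment happens
print("Budget gate passed.")
```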

Prior to deploying software, it is important to know whether the infrastructure on which that software resides is correctly configured according to organizational policy. Without that assurance, deployments may fail or be compromised from the start. There are many options available, from open source tools (Cloud Custodian, etc.) to off-the-shelf products (RedLock, VMware Secure State, etc.). An example of this concept in action is detailed in our article, “Implementing a Continuous Security Model in the Public Cloud with Jenkins and Terraform.”
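As one small, hedged example of such a guardrail check (independent of the tools named above), the snippet below uses boto3, the AWS SDK for Python, to flag security groups that allow SSH from anywhere. The specific rule is an illustrative assumption; the real policy should come from your organization’s guardrails.

```python
import sys

import boto3  # AWS SDK for Python; requires AWS credentials in the environment


def find_open_ssh_groups() -> list:
    """Return IDs of security groups that allow SSH (port 22) from 0.0.0.0/0."""
    ec2 = boto3.client("ec2")
    offenders = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            if rule.get("FromPort") == 22 and any(
                ip_range.get("CidrIp") == "0.0.0.0/0"
                for ip_range in rule.get("IpRanges", [])
            ):
                offenders.append(group["GroupId"])
    return offenders


if __name__ == "__main__":
    open_groups = find_open_ssh_groups()
    if open_groups:
        print(f"Guardrail violation: SSH open to the world in {open_groups}")
        sys.exit(1)  # block the deployment stage
    print("Infrastructure guardrail check passed.")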

One very important verification that must be done prior to initiating any deployment is ensuring that the built artifacts do not expose any vulnerabilities. Whether the artifact is an AWS AMI, an executable, a container image, etc., it should be checked post-build with tools like Clair (for container vulnerabilities) or during the build with tools like OWASP Dependency-Check (for Java builds, etc.). There is an abundance of vulnerability analysis tools, and these checks should always be added to the CI process and run again just prior to deployment. We’ve detailed how to analyze vulnerabilities in Docker containers in our blog post: “What did your developer violate today?”
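To make that artifact gate concrete, here is a rough sketch that parses a simplified JSON vulnerability report and fails the build if anything at or above a chosen severity is found. The report format shown is an assumption for illustration; real Clair or OWASP Dependency-Check output differs and would need its own parsing.

```python
import json
import sys

# Severity ranking used by this sketch; map your scanner's levels onto it.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}
FAIL_AT = "HIGH"  # fail the pipeline at HIGH severity or above


def gate(report_path: str) -> int:
    """Count findings at or above FAIL_AT in a simplified report of the form
    [{"id": "CVE-...", "severity": "HIGH"}, ...]. The format is hypothetical."""
    with open(report_path) as fh:
        findings = json.load(fh)
    threshold = SEVERITY_RANK[FAIL_AT]
    return sum(
        1
        for finding in findings
        if SEVERITY_RANK.get(finding.get("severity", "").upper(), 0) >= threshold
    )


if __name__ == "__main__":
    report = sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"
    blocked = gate(report)
    if blocked:
        print(f"{blocked} finding(s) at {FAIL_AT}+ severity -- failing the build.")
        sys.exit(1)
    print("Vulnerability gate passed.")
```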

These are just a few examples of how CV helps organizations to deploy production-ready software more quickly. Many other policy-driven tasks can be moved “into the pipeline” in order to increase the efficiency of the deployment process and bring “production-ready” deployments closer to reality.

To continue the discussion on this topic, or others related to modern applications and public cloud, please visit us at cloudjourney.io.

Feature image via Pixabay.


