How to Improve Your Kubernetes CI/CD Pipelines with GitLab and Open Source

In my previous article, “How Containerized CI/CD Pipelines Work with Kubernetes and GitLab,” I wrote about Kubernetes’ popularity and importance in 2019. I also described the advantages that containerized pipelines with GitLab CI/CD and Kaniko offer. In this post, I would like to introduce more open source projects and GitLab features that help you deploy and run your cloud native application.

Enhance Application Deployments

Nico Meisenzahl

Nico Meisenzahl works as a senior cloud and DevOps consultant at white duck. An elected GitLab Hero and Docker Community Leader, he is currently passionate about topics around Kubernetes, CI/CD, automation, DevOps and the cloud. Nico is also a frequent speaker at conferences, user group events and meetups in Europe and the U.S.

Now let’s get back to application deployment and introduce you to the open source project Kustomize. Kustomize, which is part of the Kubernetes project and sponsored by sig-cli, lets you customize raw and template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. Kustomize is a CLI tool that is also integrated into kubectl by default.


For me, Kustomize is the perfect tool for deploying containerized applications into a Kubernetes cluster using continuous delivery (CD). It lets us define customizations declaratively, so we can deploy our applications to different environments without duplicating our code. Unlike other deployment tools, Kustomize adds minimal overhead and focuses only on the features needed in an automated CD pipeline.

To customize our existing YAML manifests, we only need to define our customizations in a kustomization.yaml, which Kustomize then uses as a ruleset to build the resulting YAML definitions. Let me give you an example (you can review the whole example here):
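A minimal sketch of such a kustomization.yaml might look like this; the base path and patch file names are illustrative assumptions rather than the original example:

```yaml
# kustomization.yaml -- illustrative sketch of a development overlay;
# the base path and patch file names are assumptions, not the original snippet
bases:
  - ../../base            # the untouched, raw YAML manifests
namePrefix: dev-          # prefix all resource names with "dev-"
commonLabels:
  env: dev                # add an env=dev label to every resource
patchesStrategicMerge:
  - replica-count.yaml    # patch the replica count for the dev environment
  - env-vars.yaml         # add environment-specific container variables
```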

This kustomization.yaml is used to deploy an application into a development environment. To that end, it customizes our existing YAML manifest (linked via the bases parameter) and adds some specific configurations:

  • adds an env=dev label to all resources.
  • patches the existing YAML manifest based on the defined files; in this example, it updates the replica count and adds specific container environment variables.
  • adds the name prefix dev- to all resources.

In a complex deployment, we can also define multiple customization definitions. The Kustomize documentation gives a good overview of possible use cases and further customization options.
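A common way to organize multiple customization definitions is one overlay directory per environment on top of a shared base; the layout below is an illustrative convention, not part of the original example:

```
├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   └── kustomization.yaml   # bases: ../../base, namePrefix: dev-, ...
    └── production
        └── kustomization.yaml   # bases: ../../base, namePrefix: prod-, ...
```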

Let’s have a look at how we can integrate Kustomize into a containerized pipeline (you can review the whole example here):
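The pipeline could look roughly like the following sketch; the container image, stage name and the KUSTOMIZE_OVERLAY variable are illustrative assumptions, not the original snippet:

```yaml
# .gitlab-ci.yml -- sketch of a single-job deployment pipeline
deploy:
  stage: deploy
  # any image that ships kubectl works; the original example uses an Alpine-based one
  image: my-registry.example.com/kubectl-alpine:latest
  environment: development   # GitLab's cluster integration injects credentials per environment
  script:
    # -k tells kubectl to use its built-in Kustomize support;
    # the pipeline variable selects which overlay (customization) to deploy
    - kubectl apply -k deployment/$KUSTOMIZE_OVERLAY
```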


Once again, we have a pipeline definition with only one job. The script section is executed in an Alpine-based container that provides kubectl. The job runs the kubectl CLI with the apply parameter; -k tells kubectl to use its Kustomize integration. Both parameters are followed by the path where our deployment files are located. In this example, we use a pipeline variable to define which customizations we would like to deploy.

Secure Your Application Ingress

We are now able to build and deploy our application using containerized pipelines. Next, let's look at how we can secure the application workload running in our Kubernetes cluster.


When integrating our existing Kubernetes cluster with a GitLab project or group, we can opt in to install an Ingress controller. The deployed Ingress controller is called the GitLab Web Application Firewall (WAF). The GitLab WAF provides real-time security monitoring based on an NGINX proxy with the ModSecurity module enabled. The OWASP core rule set, enabled by default, is customized based on GitLab's best practices and configured in detection-only mode. Of course, it is possible to enable further security settings if needed. The Web Application Firewall helps you detect and prevent cross-site scripting as well as SQL injection attacks.
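As one hedged illustration (assuming the GitLab-managed controller is the community NGINX Ingress controller with ModSecurity built in, as it was at the time), a specific Ingress could be switched from detection-only to blocking mode with annotations along these lines; the snippet is not taken from GitLab's documentation:

```yaml
# illustrative Ingress annotations for the NGINX Ingress controller
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      # switch the engine from DetectionOnly to blocking
      SecRuleEngine On
```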

As mentioned above, the GitLab WAF is configured for detection only by default. The Web Application Firewall logs all security-related issues to an audit log (/var/log/modsec/audit.log) inside the Ingress controller pod, which can be forwarded to any log management solution for further analysis or action. An example output of a security issue:

Why You Should Only Care About Your Business Logic

In the last part of this post, I cover serverless, another approach that has attracted a lot of attention over the past year. One way to describe serverless is that you stop caring about your servers and infrastructure and just focus on your business logic, or the problem you would like to solve. With GitLab Serverless you can do exactly that. GitLab Serverless is packed with Knative, Kaniko and Istio, which are all open source projects built on top of Kubernetes and which abstract away the complex details to allow developers to focus on what matters.

GitLab Serverless automatically builds a container image without us providing a Dockerfile, deploys it to Kubernetes and automatically scales it based on user needs. This is done in a Function-as-a-Service (FaaS)-like manner, which also allows us to scale our application to zero to save resources and money on an as-needed basis.

Once we have configured GitLab Serverless on our Kubernetes cluster, we only need two files in our project: a GitLab CI definition, plus either a serverless.yaml describing our function or a Dockerfile describing our containerized application. In the example below, we deploy a Node.js-based function (you can review the whole example here).


The .gitlab-ci.yml that defines the pipeline to build and deploy our function:
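A sketch along the lines of GitLab's Serverless CI template from that era; the template and job names follow GitLab's documentation at the time and may differ in newer versions:

```yaml
# .gitlab-ci.yml -- sketch based on GitLab's Serverless template (circa 2019/2020)
include:
  - template: Serverless.gitlab-ci.yml   # provides the hidden build/deploy jobs

functions:build:
  extends: .serverless:build:functions   # builds the function image with Kaniko
  environment: production

functions:deploy:
  extends: .serverless:deploy:functions  # deploys the function as a Knative service
  environment: production
```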

A serverless.yaml that describes the functions and required runtime:
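Here is an illustrative sketch; the service name, function name, source path and runtime reference are placeholders rather than the original values:

```yaml
# serverless.yaml -- illustrative sketch, not the original example
service: my-functions
description: "Functions deployed via GitLab Serverless"
provider:
  name: triggermesh
functions:
  echo-js:
    handler: echo-js
    source: ./echo-js          # directory containing the Node.js function code
    runtime: https://gitlab.com/gitlab-org/serverless/runtimes/nodejs  # placeholder runtime reference
    description: "A Node.js function that echoes the request body"
```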

GitLab Serverless also provides us with detailed metrics on the scaling of our function.

All examples and code snippets are available here. My previous article, “How Containerized CI/CD Pipelines Work with Kubernetes and GitLab,” details how to create containerized pipelines with GitLab CI/CD and Kaniko. You can also watch a live recording of my talk at GitLab Commit 2020 in San Francisco on containerized pipelines, Kubernetes and open source in general:

Happy deploying!

InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Feature image via Pixabay.



