Some Best Practices for Continuous Security

While containers provide the base for enterprises to move to a DevOps-style software development workflow, enterprises are still wary of the security implications of the emerging technology. At Container Camp UK this September, Red Hat Chief Technologist Chris Van Tuin offered some tips on maintaining continuous security right alongside continuous development and integration.

According to the ClusterHQ State of Containers Usage survey released this June, the biggest drivers of container adoption are increased developer efficiency (39 percent of respondents) and support for microservices (36 percent).

Over the last two years, most companies have been at least considering using some sort of container technology to leverage these benefits. But, until recently, something has been holding back enterprises from adopting containers for the production environment: security.

As with most things enterprise, it’s all about security. But while last year 61 percent cited security of containers as their main uncertainty, a mere 11 percent repeated this concern this year. So what happened to spark such a dramatic shift for DevOps to go for containers? Does security still matter in deciding which technologies to adopt?

Why the Risk?

In a recent survey, TechValidate and Red Hat found these six main security risks enterprises are still worrying over, which Van Tuin spoke about:

  • Employees not taking proper security measures (36 percent)
  • Threat of an outside security breach (32 percent)
  • Containers can be unpatched or unpatchable (14 percent)
  • Threat of an internal attack by an employee (11 percent)
  • Shadow IT or Stealth IT, built and used within an organization without approval (4 percent)
  • BYOD, bringing your own device to work (3 percent)

How do these security risks fit in with containers? Well, of course, they all involve the human element just like containers do, but Van Tuin outlined five main container-related security risks:

  • Kernel exploits: Docker is built on the container technology provided by the Linux kernel and, unlike a VM, that kernel is shared among all containers and their host
  • Denial-of-service (DoS) attacks, making a machine or network unavailable to users
  • Container breakouts, which usually happen when people run untrusted applications with root privileges inside containers
  • Poisoned images: You need to know who built these images; Van Tuin says 64 percent of all official images have security vulnerabilities
  • Compromised secrets

Information security expert Lenny Zeltser echoed the threat of poisoned images in his recent post on the need for vulnerability management, specifically around security patches. He wrote that, traditionally, security patches are installed into a system independently of the application. Since containers integrate the app tightly with its dependencies, the container's image is patched as part of the app deployment process.

He calls this "container sprawl": being able to run multiple instances of applications is great for DevOps, but it indirectly leads to Docker images existing at varying security patch levels.

The usual plan of defense with VMs is to run a vulnerability scanner, but this doesn’t work in a container approach.

“What a container-friendly approach should entail is still unclear. However, it promises the advantage of requiring fewer updates, bringing dev and ops closer together and defining a clear set of software components that need to be patched or otherwise locked down,” Zeltser wrote.
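
Zeltser's patch-by-rebuilding idea can be sketched with standard Docker commands. The image and container names below are hypothetical; the flags are standard:

```shell
# Rebuild the image so updated base layers and dependencies are
# picked up, instead of patching a running container in place.
docker build --pull --no-cache -t myapp:1.3 .

# Replace the old container wholesale with one from the fresh image.
docker stop myapp && docker rm myapp
docker run -d --name myapp myapp:1.3
```

Because the image is the unit of deployment, the security patch level of every running instance is exactly what the image build produced.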

What is great about the recently more rapid adoption of containers among ops, infosec, QA, and security auditors is that there are finally more hands involved with containers, which means more people identifying and filling these security holes.

Overcoming Potential Security Risks

While most of these security risks apply to all container providers, there's no doubt that the ecosystem is dominated by one player: Docker, at 90 percent.


But, as Adrian Mouat wrote in his O’Reilly report on the topic, “While you certainly need to be aware of issues related to using containers safely, containers, if used properly, can provide a more secure and efficient system than using virtual machines (VMs) or bare metal alone.”

Similarly, while Zeltser has written a lot about the potential security risks of Docker and other containers, he argues that the operational benefits that we already touched on and other security benefits can outweigh these risks.


What's important is that, by knowing the potential security risks of Docker, we are able to learn the tools and techniques to overcome them.

Containers make it easier to segregate applications that would traditionally run directly on the same host. For instance, an application running in one container only has access to the ports and files explicitly exposed by another container.
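
That segregation can be seen in a short Docker sketch; the container and image names here are hypothetical:

```shell
# A database container joins a user-defined network but publishes
# no ports to the host.
docker network create appnet
docker run -d --name db --network appnet postgres:15

# The web container can reach "db" over appnet, but from outside the
# host only its explicitly published port 8080 is reachable.
docker run -d --name web --network appnet -p 8080:80 mywebapp:latest
```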

“Containers encourage treating application environments as transient, rather than static systems that exist for years and accumulate risk-inducing artifacts,” Zeltser said.

Because of this, containers make it easier to control what sort of data and software components are installed. And this flexibility, with rapid releases of updates to independent pieces of the code puzzle, allows for much more frequent security patches alongside application updates.

In his talk, Van Tuin added some specific ways to overcome some of the typical security risks that come with Docker.

For one thing, since humans are often the biggest security risk to anything, a company can start by requiring minimum password lengths or enforcing hardware identity keys such as the YubiKeys given out at last year's DockerCon EU.

Another of the main ways Van Tuin suggested to overcome these security risks is through tools like OpenSCAP, which assesses, measures, and enforces security baselines. No matter what type of software you're using, it's important to automate much of the scanning for security vulnerabilities. OpenSCAP also provides a set of tooling and daemons that you can run against your host, at the Docker level, and at the image level.
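
As a rough illustration of that kind of scanning, OpenSCAP ships command-line tools for both hosts and images. The content file and profile below are examples from the SCAP Security Guide and vary by distribution; the image name is hypothetical:

```shell
# Evaluate the host against a SCAP baseline profile.
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_standard \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Scan a Docker image for known CVEs without running it.
oscap-docker image-cve myapp:1.2
```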


He also said that the Kubernetes open source container orchestration tool supports automated builds with security scans.

You can have containers sharing the same hardware and kernel, but you need to be able to tell which containers have gone bad; that's where the right processes and technology come in, so you can isolate a bad container as soon as possible.

One really interesting point Van Tuin drove home was when he asked how many people in the audience had disabled the SELinux (Security-Enhanced Linux) mechanism for supporting access control security policies. The vast majority had, which isn’t wise.

“In a container world, you have to enable SELinux on a system. You need to have an extra level of security so the process can only communicate with the objects it’s designed to communicate with.”
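
In practice, keeping SELinux enforcing looks something like the following; the paths and image name are illustrative:

```shell
# Check the current SELinux mode; "Enforcing" is what you want.
getenforce

# Re-enable enforcement if it was switched off at runtime; to persist
# across reboots, set SELINUX=enforcing in /etc/selinux/config.
sudo setenforce 1

# When bind-mounting host directories into a container, the :Z suffix
# relabels the content so SELinux confines the container's access to it.
docker run -v /var/data:/data:Z myapp:1.2
```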


Van Tuin went on to apply the lovely extended shipping metaphor to how to keep containers secure — from the start.

“From a shipping perspective, you don’t package the container at the dock, you do it at the factory in case something breaks. Kubernetes provides the ability to automatically test that for when it moves to scale.”

The idea is that rather than patching those containers in a production environment, you rebuild a new version, often automatically. Then, once it passes, it moves into testing and then is automatically deployed.

“You can map the new version and remove the old, bad version,” Van Tuin continued. “If there’s an issue with version 1.2, I can automatically roll back to 1.1 if I’d like to.”
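
With Kubernetes, that rebuild-and-roll-back flow maps onto standard deployment commands; the deployment, registry, and image names here are hypothetical:

```shell
# Roll the rebuilt 1.2 image out to a deployment.
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.2

# Watch the rolling update replace old pods with new ones.
kubectl rollout status deployment/myapp

# If 1.2 turns out to be bad, roll back to the previous revision.
kubectl rollout undo deployment/myapp
```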

Beyond these, the Red Hat chief technologist offered some final container security best practices:

  • Only run images from trusted parties.
  • Container apps should drop privileges.
  • The host operating system matters.
  • Apply kernel security fixes, and have tooling in place to automate monitoring for them.
  • Don’t disable SELinux!
  • Examine container images for security flaws.
  • Work toward an automated build and automated deployment.
  • Incorporate security scans into your CI/CD pipeline.
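
Several of these practices come together at image build time. Here is a hedged Dockerfile sketch; the base image, version tags, and paths are illustrative, not prescribed by the talk:

```dockerfile
# Only run images from trusted parties: pin a versioned official base.
FROM registry.access.redhat.com/ubi8/ubi:8.9

# Apply security fixes at build time, so patching happens via rebuilds.
RUN yum -y update && yum clean all

COPY app /opt/app

# Drop privileges: run the app as an unprivileged user, not root.
RUN useradd --system appuser
USER appuser

ENTRYPOINT ["/opt/app/run"]
```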

TNS analyst Lawrence Hecht contributed to this story. 

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.



Source: InApps.net
