Chef Offers Habitat, a New Kind of Application Automation


Should your application deployment automation system prepare your data center infrastructure for the static requirements of the application? Or should it adjust the application to suit the situation in which your infrastructure finds itself at the moment?

Exploring this question, with code in hand, is Chef. In a streaming public conference Tuesday, Chef chief technology officer and co-founder Adam Jacob [pictured above] introduced Chef’s declarative automation system for deploying applications, called Habitat — a system that will work in tandem with Chef’s namesake deployment automation tool and InSpec, the company’s infrastructure testing framework announced last November.

“Everything about an application’s behavior belongs to the application — it should be about the application,” said Jacob during Tuesday’s presentation. “When you think about its point of view, when you design for that build/deploy/manage cycle, everything that the application needs, needs to live at that [application] layer. And it’s different than saying it’s about the infrastructure. When it’s about the infrastructure, we build all this code, and we try to build up toward the application, then we try to bang the application into place. Instead, Habitat says, ‘Nope, it starts at the application,’ and everything that the application needs to do lives with the application.”

One Model

Chef’s stated intent for Habitat is to build a set of instructions that deploy an application in whatever environment happens to be the target at the time. It is not, as Chef’s namesake software has been used to date, a way to configure available infrastructure to suit the demands of an application — or, even more restrictive, to force available resources into a specific condition exclusively suited to the needs of integrating new applications with old ones.

Chef is one of a number of companies investigating ways of best bridging the needs of the application with the resources of the infrastructure. We’ve tackled this issue before, including with HashiCorp’s transition from Vagrant to Otto, with Avi Networks’ application delivery controller architecture, with Shippable’s dynamic container deployment scheme, with Ansible’s bid to substitute for Docker Compose, and with DCHQ’s approach to automating lifecycle management. All of these approaches to application deployment ask you to take on a significant degree of change, leaving something in your existing automation process behind.

“The data is the API”  — Chef’s Adam Jacob.

Jacob told his audience a story about traditional application lifecycle management and the extent to which Chef has been shaped by traditional enterprises to suit their needs. Assuming that a software project was conceived according to a common vision, he said, its evolution has it bouncing from link to link on its chain of designated departments — development, QA, operations, security. Granted, these chain links can all be codified into pipelines. But the fact that software moves from one single point to another, he warned, can give developers a myopic perspective with respect to its user, so that it becomes intended for one department at a time, as opposed to the user at large.

When existing toolsets are reassembled and applied to the task of consolidating the build/deploy/manage cycle, Jacob continued, “what they tend to do is, you have to take a bunch of tools and string them together with a bunch of glue, to try to build this thing that manages the build/deploy/manage cycle. But when you do it, you’re forced to integrate with all the choices that every silo made across the chain. And the side-effect is, the way it feels to you is like a Krazy Glued-together Rube Goldberg machine… because it’s a Krazy Glued-together Rube Goldberg machine.”


The New Model

[Image: an example Habitat plan, shown during the demo]

During Tuesday’s live stream, Chef’s Adam Jacob demonstrated how a Habitat package is constructed to deploy a relatively simple service: the Redis in-memory data store. A Habitat script [like the one shown above] is called a plan, written in a syntax created by Chef that is native to Habitat. Building a plan can be a guided process, with Habitat leading the user step by step through the plan’s generation. For a plan to be authenticated, it requires an origin, which may be the software’s original manufacturer, although it may also be the Habitat Core or, during development, the developer herself, assuming she has her own digital origin key. And rather than the typical Linux configure, make, and make install sequence, an application can undergo Habitat’s make cycle without the use of configure or a configuration script.
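
To make the structure concrete, here is a minimal sketch of what such a plan might look like for Redis, following Habitat’s published plan conventions. The origin, version, maintainer, and checksum values are placeholders for illustration; this is not the plan Jacob used in the demo.

    # plan.sh -- a minimal sketch of a Habitat plan for Redis.
    # The origin, version, maintainer, and checksum below are placeholders.
    pkg_origin=adam
    pkg_name=redis
    pkg_version=3.0.7
    pkg_maintainer="A Developer <dev@example.com>"
    pkg_license=('BSD')
    pkg_source=http://download.redis.io/releases/${pkg_name}-${pkg_version}.tar.gz
    pkg_shasum=replace-with-the-sha256-of-the-source-tarball
    pkg_deps=(core/glibc)
    pkg_build_deps=(core/make core/gcc)
    pkg_bin_dirs=(bin)
    pkg_expose=(6379)

    # Redis has no ./configure step; Habitat's default do_build (a plain `make`)
    # is enough, so only the install callback is overridden here.
    do_install() {
      make PREFIX="${pkg_prefix}" install
    }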

Habitat can be very conversational with its user, almost like one of Microsoft’s “wizards” but without the GUI. The hab setup command, without embellishments, launches this conversational process, which reminds the user what an origin is and asks her to specify one. Then it gives her the option of turning on performance analytics for the build process, which may be useful later in case errors crop up.
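
For the record, that first-run flow boils down to a single command; the comments below paraphrase the kinds of questions it asks rather than reproduce them verbatim.

    # Run once per workstation; Habitat walks through the questions interactively.
    hab setup
    # The prompts cover, roughly:
    #  - which origin to use for the packages you build (e.g., your own name)
    #  - whether to generate an origin key pair if you do not have one yet
    #  - whether to opt in to anonymous usage/performance analytics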

“Plans get built in what we call a studio,” said Jacob. “A studio is a cleanroom environment that takes that plan and that list of dependencies, puts them together, and builds the software in a way that’s safe and fast.”
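
In practice, the studio is entered from the command line. A brief sketch, assuming the hab studio subcommand Habitat shipped with:

    # Enter a clean studio environment from the directory containing the plan.
    hab studio enter
    # Inside the studio, only the plan and its declared dependencies are
    # available, which is what keeps builds repeatable.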

Inside each plan’s directory are the configuration options for the application, covering not just the settings that stay static but anything that is subject to change. Here, these options are specified not in YAML, as some may have expected, but in TOML (Tom’s Obvious, Minimal Language).

In a nod to Windows, Jacob said, “What’s great about TOML is that it looks like an .INI file, which we’ve all been writing since the ‘80s. It’s like a key, a value, and an equal sign, and everything’s cool.” Annotations are important here not only for documenting and explaining these options for human readers but for enabling Habitat’s help system to present explanations to other DevOps professionals at the command line, on demand.
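
As a rough illustration, a plan’s default configuration file might look like the sketch below; the keys and values are made up for the example, and the comments stand in for the annotations Jacob describes.

    # default.toml -- sits alongside the plan in its directory.
    # Each key is a tunable the supervisor can render into the service's
    # configuration at run time; values here are illustrative only.

    # Port the Redis service listens on
    port = 6379

    # Upper bound on simultaneous client connections
    maxclients = 10000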

“It supports complicated data structures, if you need them,” he went on. “Some software — like Apache Web servers, or NGINX — need more complicated data structures to be able to really talk about all the things they could possibly do. They need hashes and arrays, and TOML gives you that with a simple, easy syntax. So it scales up really well.” Plans may contain what Adam Jacob described as “lifecycle hooks,” which customize the behavior of an application depending on the context in which it will run — for example, during health checks.
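
As an example of such a hook, a health check for a Redis service could be sketched roughly as follows; the hook name follows Habitat’s lifecycle-hook convention, while the check itself is hypothetical and not taken from the demo.

    #!/bin/sh
    # hooks/health_check -- a hypothetical health-check hook for a Redis plan.
    # Exit 0 to report "healthy" to the supervisor; non-zero signals a problem.
    redis-cli ping | grep -q PONG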

When Jacob executed hab package build redis, Habitat responded by assembling the necessary dependencies, then downloading Redis, compiling it, and placing the product in an artifact: an immutable package whose contents may be verified by way of checksums. For an artifact to be made available so that its software may be deployed later, Habitat has you send it to a depot (which is a much nicer name than “repository,” which often gets shortened to “repo” anyway) using hab package upload. Chef maintains a public depot of its own.
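
Put together, the build-and-publish flow looks roughly like the commands below; the artifact filename is illustrative of the origin-name-version-release pattern, not the actual file produced in the demo.

    # Build the plan inside the studio; the output is a signed, immutable artifact.
    hab package build redis

    # Send the resulting artifact to a depot so it can be installed elsewhere later.
    # The filename shown is a placeholder following the origin-name-version-release pattern.
    hab package upload ./results/adam-redis-3.0.7-20160614120000-x86_64-linux.hart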

Installing the artifact using the designated plan is accomplished with hab package install origin_name app_name, where origin_name is the identifier for the artifact’s source, and app_name identifies the application. In the demo, Jacob used hab package install adam redis. It does appear that Chef is using “artifact” and “package” interchangeably here, and this may need to be sorted out over time.
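
A short sketch of that install step, plus the run step that would presumably follow it; the hab start form is an assumption drawn from Habitat’s documentation, not something shown in this part of the demo.

    # Pull the package down from the depot and install it locally.
    hab package install adam redis

    # Presumably the service would then be run under a Habitat supervisor,
    # along these lines (an assumption, not part of the demo):
    hab start adam/redis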


Habitat is intentionally designed to be platform neutral, which means, among other things, that it supports containers. Producing a form of the artifact for Docker is a simple string of terms in one command; in the demo, hab package export docker adam redis. For now, Habitat also supports the ACI (App Container Image) and Mesosphere package formats, but Jacob added that Chef intends for Habitat to support Amazon’s AMIs, as well as VM formats, in coming versions.
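
A sketch of the container path, using the export command from the demo; the docker run step, and the assumption that the exported image is tagged with the origin and package name, are illustrative rather than taken from the stream.

    # Export the Habitat package as a Docker image.
    hab package export docker adam redis

    # The export yields a local image tagged after the origin and package
    # (image name assumed), which runs like any other container.
    docker run -it adam/redis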

Clustering by Default

Running a Habitat package involves a component called a supervisor, which introduces a curious new aspect to the system: clustering. In a multi-tenant environment, multiple supervisors running simultaneously can join forces in what Habitat calls a ring. (Obviously, someone at Chef is a Trekker.) When Habitat rings are formed, multiple instances of an application running simultaneously are addressed collectively as service groups. This collective interaction happens automatically, without intervention from a user, an admin, or some other script.

Inevitably, though, as Jacob admitted, the creation of service groups leads to the necessary creation of topologies, which govern the behavior of each application instance once it has been enrolled in a service group. In one such topology, which Jacob described as “leader/follower,” one instance may host the database that receives all the write operations and most of the reads, while the others receive streams of database updates. This is one example of a situation where the same application may end up running in a completely different way once it is installed in the same context with multiple other instances of itself.
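
A sketch of how such a service group might be brought up, assuming the supervisor flags Habitat documented at launch (--topology, --group, and --peer); the group name and peer address are placeholders.

    # First node: start Redis under a supervisor in the leader/follower topology.
    hab start adam/redis --topology leader --group production

    # Additional nodes: join the ring by peering with any existing member;
    # the supervisors then elect a leader among themselves.
    hab start adam/redis --topology leader --group production --peer 10.0.0.1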

Jacob tried to explain how configuration takes place in these instances by saying, “The data is the API.”

“If you think about how you typically build an application,” Adam Jacob told us, “when we talk about an API, there are two kinds: a network API or a REST API, like you’re calling a Web service; and an internal API, where you link a library, and you make function calls to it. In both of those cases, you are required to change the application in order to acquire the functionality that you want.”

In the leader/follower topology, for instance, a supervisor elects one instance of an application the leader, thus designating the others followers by default. Typically, enabling the actual behavior that leader/follower entails could be accomplished by rewriting the application so that an internal API function call contacts a library, creating a new binding.

“When I say, ‘The data is the API,’” he continued, “what I mean is, the supervisor that is running your service handles all of that for you, and it passes the data to the application. But there’s no easy out — the application can’t talk back, it’s not two-way. [Yet] I wrap the application up in this smart supervisor, and that smart supervisor has the information about how to work in that topology, and it manages it for you. One side-effect of that is, you can get that behavior in any application that needs it.”

So Jacob’s metaphor is meant like this: Changing the data which designates how an application would behave as a leader or a follower, in the context of a scale-out system, has the effect of hard-coding an API call into the application and rebuilding it, using traditional means — without having to actually do the coding.
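
A sketch of what “changing the data” might look like in practice, assuming Habitat’s mechanism for applying configuration to a running service group; the command form, the version argument, and the key being changed are assumptions drawn from Habitat’s documentation rather than from the demo.

    # Describe the configuration change as a small TOML snippet...
    echo 'maxclients = 20000' > /tmp/redis_update.toml

    # ...and apply it to the running service group; the supervisors gossip the
    # change and reconfigure each instance (command form is an assumption).
    hab config apply redis.production 1 /tmp/redis_update.toml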

Chef offers a hosted demonstration of Habitat, along with some documentation. The company will discuss the technology in more depth in a June 29 webcast.

The End of Bimodal IT?

In an interview with InApps, Chef’s Adam Jacob and its vice president for business development, Ken Cheney, spoke with us at length about how their company expects Habitat to be integrated into the production environments of organizations — especially those that already use Chef.


“What our customers get out of Chef is that ability to move quickly, to work together, to write infrastructure code and collaborate, in addition to the value of the automation — it’s easier to do compliance, and those sorts of things,” Jacob told us. “What Habitat gives our customers is another option for how to do the application portion of that automation.”

That second option may be tempting, from the perspective of a business analyst looking to align Chef and Habitat with the needs of an everyday enterprise. There’s a concept called “bimodal IT” which suggests that the duality of an IT organization can be successfully managed, such that slow processes, like integration, and fast processes, such as adoption, can coexist. Granted, bimodal is one of the most misinterpreted concepts in business today, even by Gartner itself (a presentation during the recent RSA Conference keynote session elicited audible groans, and a few catcalls, from the audience).

But its basic premise is that not everything or everyone in IT needs to move at the same speed, and thus business processes and development processes may need to be coordinated to take that into account. Arguably, such coordination could take place with the help of an automation system. And the duality of Chef and Habitat could, from one perspective, play into that modality.

Chef’s Adam Jacob had a word for that which involves a notable farm animal and its biological processes.

“It’s bulls**t on its face that, inside some of the most successful companies on the planet, we have these teams of people who don’t know, don’t care, and cannot fundamentally understand that there’s a better, different way of doing their jobs,” he exclaimed. “You’re right that it has to be a cultural phenomenon, and DevOps is what we call that cultural phenomenon. But Habitat makes the lives of everyone in that chain easier — for application developers, for operations people, for QA, for security, it makes everybody’s life easier, partly because it allows them to collaborate with those other team members, because they can talk about the way the application behaves in terms of topologies, update strategies, and all those sorts of things.”

Jacob went on to condemn the notion that code, by its very nature, cannot apply to everyone in an organization — one of the common corollaries of the bimodal IT concept. Indeed, Jacob believes that once the symbology of code is rendered in a simple enough fashion, it can inspire everyone in an organization to communicate their ideas about applications and functionality using common objects.

“If our only destiny is bimodal IT,” he continued, “then what we’re really saying is that half of the infrastructure gets to be awesome, and half the applications are great, and the other halves are garbage, and we’re doomed.”

Ken Cheney expanded on this: “Sure, you may want to have parts of your organization move more slowly, for business reasons — like, they don’t have to push changes all that frequently. But I guarantee, there will come a time where, whether it’s driven by business reasons like the need to have more intimacy with your customer, or you have a security reason — you want that ability to go faster.

“The manageability problem is one of the primary reasons why those teams move slowly,” Cheney continued. “If we can actually improve that manageability across the lifecycle and build/deploy/manage and provide that consistency regardless of where you’re running that workload — whether it’s on a PaaS, on containers, on IaaS, on a VM, on bare metal — you get the same management experience. It’s consistent, regardless of whether you’re building that application, or you’re an ops person managing that application at run time.”

InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.



