Red Hat sponsored this post.
If you’re reading this right now, there was an electrician in your past. Wiring houses, offices and the public spaces we love to walk through, these folks are highly skilled, often union laborers who undergo a great deal of training and accumulate years of experience to do their jobs. Say what you will about software bugs, but they don’t often set fire to a building or fry a television.
Because the stakes are so high for the job, it is incumbent upon the union to train its members properly, and to ensure that those members know the difference between a live wire and a ground line.
The solution has traditionally been training, but the Electrical Training Alliance has undertaken an untraditional new method of delivery, one built on Linux containers, API gateways and CockroachDB.
Wiring Up the World
The Electrical Training Alliance develops the curriculum for the Union Apprenticeship Electrician Program. Becoming a union apprentice requires following a registered curriculum, and the ETA provides that curriculum and guidance to roughly 230 to 240 training programs across the United States and Puerto Rico, which carry out apprenticeships and training for electricians.
Stephen Boyd is an IT architect, analyst and developer at the Electrical Training Alliance, where he oversees the modest IT and software infrastructure teams that maintain the rather large online estate the Alliance runs. He was also the impetus behind the ETA joining OpenShift Commons, the open source community behind OpenShift.
As with most enterprise software and IT projects, Stephen’s journey to OpenShift and CockroachDB began with a new initiative: a project to test microservices and a lot of new services, including OpenShift Dedicated, Tekton pipelines and CI/CD. In the process, handling the data they had to work with became a growing concern.
Stephen led the initiative on ETA’s new grant-based Department of Labor project to build a cloud-based platform.
“These registered [training] programs have to use administration software to track everything: hours, job training assignments, their grades when they go to school. The grant allowed us to create an online program. We needed a way to create an online application that requires a lot of information collection, and to track all that stuff,” he said.
This grant-based project required ETA to create a solution that could rapidly adjust and scale with traffic. They were certain that a monolithic approach would not suit them and wanted something that was easy to support and scale independently based on each microservice, since some applications would get traffic year-round and some would not. To optimize resources, they chose a containerized approach that would allow services to be spun up as needed and closed down when not in use.
The team chose Red Hat’s OpenShift container management platform to orchestrate everything using Kubernetes. They opted for OpenShift Dedicated to incorporate a managed services approach to allow the team to focus on building the application. “All these various things the training centers do, we wanted to do in a way that was dynamic and allowed for a quicker, more responsive mechanism,” Stephen said. This container-based application architecture running on a managed platform was the stack to make it happen quickly and easily.
Initially, Stephen began the digital transformation of the ETA by himself. Today, fortunately, he is no longer alone. He has two internal developers and an external development firm of over a dozen UI designers to help build out the new cloud-hosted training site.
With the burden of managing OpenShift and Kubernetes by hand removed, attention turned to the application architectures themselves. Naturally, transitioning to the cloud can require some changes in the way a team builds its applications.
ETA’s original database solution worked well for small teams and sites, but their new education platform needed to operate at scale — and, in the future, across regions. This started a hunt for a partner that would satisfy these requirements without additional integration work.
“I started looking at partners with OpenShift Dedicated. The one that was seamless to integrate with OpenShift Dedicated was CockroachDB,” he said.
The team has since implemented the CockroachDB Operator in OpenShift Dedicated.
[Kubernetes Operators are a method of packaging and managing Kubernetes-native applications and software. Kubernetes Operators codify operational knowledge about applications and software. They can automate both day 1 and day 2 operations and react to cluster events to deliver a complete life-cycle experience for users.]
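As a rough illustration of what declaring a database through an Operator looks like (this is a minimal sketch based on the public CockroachDB Operator docs, not ETA’s actual manifest; the namespace and sizing values are hypothetical):

```yaml
apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
  name: cockroachdb
  namespace: eta-training        # hypothetical namespace
spec:
  nodes: 3                       # three-node cluster for fault tolerance
  tlsEnabled: true               # operator manages certificates between nodes
  dataStore:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi        # illustrative volume size per node
```

Applying a resource like this is all it takes; the Operator then handles node provisioning, certificates and rolling upgrades, which is the “day 2” automation described above.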
The real trick was that the data flowing behind the scenes in this newly containerized environment was all sensitive, stateful and critical to the users’ progress through their journey to journeyman status.
Stephen and his team continue to work on their digital transformation, but they’ve settled on a great many of the foundational elements at this point. Key among them is certainly CockroachDB as the stateful, container-friendly data store. Elsewhere in the stack are elements such as Red Hat OpenShift API Management and Quarkus, the Kubernetes-native Java stack. When they deploy the services, data is pushed to CockroachDB using Flyway. A big concern was disaster recovery, which they solved by keeping multiple backups at each tier, both local and production: they take backups locally on the OpenShift Dedicated system and also use CockroachDB to push data to an S3 bucket.
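To sketch how the Quarkus-plus-Flyway piece of a stack like this fits together (the property names come from the public Quarkus documentation; the connection details below are placeholders, not ETA’s configuration):

```properties
# application.properties — hypothetical values for illustration
quarkus.datasource.db-kind=postgresql          # CockroachDB speaks the PostgreSQL wire protocol
quarkus.datasource.jdbc.url=jdbc:postgresql://cockroachdb-public:26257/training
quarkus.datasource.username=app_user
quarkus.flyway.migrate-at-start=true           # run pending Flyway migrations when the service starts
```

With `migrate-at-start` enabled, each deployment applies any new SQL migration scripts to CockroachDB automatically, which matches the deploy-time data push described above.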
What does all of this translate to in terms of time savings and efficiencies? Stephen said that provisioning environments has become easier overall, and it’s much simpler to reproduce environments without the typical divergences that create the infamous “it works on my machine” type of issues.
“Having [a container-based system] allowed us to instantly create a new environment, get databases up and running, and even deploy services with Tekton. We’re looking into GitOps with ArgoCD,” said Stephen.
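A Tekton deployment of this kind can be sketched as a Pipeline resource (a minimal, hypothetical example; the task names `buildah` and `openshift-client` are ClusterTasks shipped with OpenShift Pipelines, but the pipeline itself is illustrative, not ETA’s):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: deploy-training-service   # hypothetical pipeline name
spec:
  tasks:
    - name: build-image
      taskRef:
        name: buildah             # builds and pushes the container image
    - name: deploy
      runAfter:
        - build-image             # deploy only after the image is built
      taskRef:
        name: openshift-client    # runs `oc` commands to roll out the service
```

Triggering a PipelineRun against this definition is what lets a new environment go from namespace to deployed service in the timeframes Stephen describes below.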
Also, compared to existing virtualized environments, Stephen said, it’s much simpler to maintain and manage a fleet of containers.
“We’d been proponents of virtualization forever. We use VMware internally at our office, so if it was an environment for a developer, we’d create a VM, then do all the installs. We had snapshots, but it was still a day-long process to get something up and running and patched, plus the maintenance of snapshots. Something like a staging environment would require our data center, which also uses VMs, and we’d have to purchase space and get it running,” he said.
That’s all changed now.
“A separate project just came up two weeks ago. In two days I had a space up and running, the OpenShift namespace was up, and a service was deployed. They added CockroachDB in one day so they could test it. By contrast, with our development partner, it can take two to five weeks to get something done,” he said.
“That’s why we wanted to deal with containers and the power of all these operators. Everything is faster. We are moving at an agile cycle, and we want to push something out there that works, deliver a minimum viable product fast and implement a change fast.”
Enabling this type of speed requires the agility and the environmental consistency that a container-based infrastructure provides.
ETA was able to build a container- and Kubernetes-based platform that ties all the data together and allows the training centers to customize their programs and automate much of their day-to-day operations. This saves the training schools time, helps them provide quality training and gives electricians a streamlined training experience.
Photo by Oronzo Roberti from Pexels.