A short outage this week at the GitLab hosted-code service stirred a mix of fear and sympathy across the tech community and offered a sharp reminder of the importance of testing your backups again and again (and again).

On Tuesday, a GitLab administrator accidentally erased a directory of live production data during a routine database replication. In the process of restoring from the last backup, taken six hours prior, the company discovered that none of its five backup routines worked entirely correctly. The incident report, which GitLab posted online, noted that the erasure affected issues and merge requests but not the git repositories themselves.

The six-hour outage of the online service at GitLab.com affected less than one percent of its users, according to Tim Anglade, GitLab's interim vice president of marketing, and ultimately did not affect any enterprise customers. The bread and butter that people use GitLab for was not affected, he said. No files and no data were lost, just specific types of metadata, such as comments, issues, snippets, and data about projects and users. Up to 707 users potentially lost metadata, according to the incident report.

There were two issues, Anglade explained: one was a straightforward underlying database issue, which GitLab took offline to resolve, and the other was a separate data loss issue, which unearthed a problem with GitLab's restore process. In this case, GitLab was using the PostgreSQL open source database.

@gitlabstatus tweeted a stream of updates

In keeping with company policy, GitLab handled the data loss very transparently, communicating with the GitLab community via tweets, blog posts and even a live-streamed YouTube channel (now offline), sharing the progress of the issue resolution.

Over 2,000 people followed the live stream on YouTube and offered what Anglade said were mostly helpful suggestions, and Jeroen Nijhof is shepherding Issue #1110 to resolution.

In an interview on Wednesday, Anglade conceded that the policy of openness created more concern and fear than expected, but the recovery team stayed committed to letting the community know what was happening every step of the way. Headlines calling it a “meltdown” probably didn’t help much either.

But the dev community has responded with more sympathy than shade as sysadmins recounted their own failed restores and acknowledged how complicated backup processes can be.

In a blog item, Simon Riggs, Chief Technology Officer for the PostgreSQL enterprise support provider 2nd Quadrant, praised GitLab for its handling of the incident: “Thank you for posting this publicly to allow us to comment on this for your postmortem analysis.”

“We’ve been there,” added web developer Leonid Mamchenkov, writing on his blog. “I don’t (and didn’t) have any data on GitLab, so I haven’t lost anything. But as somebody who worked as a system administrator (and backup administrator) for years, I can imagine the physical and psychological state of the team all too well.”

Although the actual impact was minimal, the result is a complete review of GitLab's restore processes.

Quoting the old admin adage that “There is no such thing as a successful backup, there’s only failed backups or successful restores,” Anglade said that now that the database is back online, the ops team is going to review the end-to-end restore process throughout the company.
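To make that adage concrete, here is a minimal sketch of what an automated restore test for a PostgreSQL backup could look like. The backup path, scratch database name, and table list are hypothetical placeholders, and this illustrates the general idea rather than GitLab's actual tooling:

```python
#!/usr/bin/env python3
"""Minimal sketch of an automated restore test for a PostgreSQL backup.

This is not GitLab's actual process; the backup path, database name,
and table list below are hypothetical placeholders. The point is the
adage above: a backup only counts once you have restored it somewhere
and checked that the data is really there.
"""

import subprocess
import sys

BACKUP_FILE = "/backups/latest.dump"       # hypothetical pg_dump -Fc archive
SCRATCH_DB = "scratch_restore_test"        # throwaway database for the test
TABLES_TO_CHECK = ["issues", "merge_requests", "notes"]  # example tables
MIN_EXPECTED_ROWS = 1                      # sanity threshold, tune per table


def run(cmd):
    """Run a command, raising if it fails, and return its stdout."""
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout.strip()


def main():
    # 1. Create a throwaway database so the restore never touches production.
    run(["createdb", SCRATCH_DB])
    try:
        # 2. Restore the most recent custom-format dump into the scratch DB.
        run(["pg_restore", "--no-owner", "--dbname", SCRATCH_DB, BACKUP_FILE])

        # 3. Sanity-check: each critical table should exist and hold rows.
        for table in TABLES_TO_CHECK:
            count = int(run([
                "psql", "-d", SCRATCH_DB, "-At",
                "-c", f"SELECT count(*) FROM {table};",
            ]))
            if count < MIN_EXPECTED_ROWS:
                print(f"FAIL: {table} has only {count} rows", file=sys.stderr)
                sys.exit(1)
            print(f"OK: {table} restored with {count} rows")
    finally:
        # 4. Always clean up the scratch database, even if the checks fail.
        run(["dropdb", "--if-exists", SCRATCH_DB])

    print("Restore test passed: the backup is actually restorable.")


if __name__ == "__main__":
    main()
```

Run on a schedule, a check along these lines turns “we have backups” into “we have verified restores,” which is exactly the gap the incident exposed.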

Although this may seem a bit like closing the barn door after the horses have run away, it’s a proactive step for the company to take. This week’s mistake affected less than 1 percent of its user base. Next time it might not be so lucky.

This isn’t just about human error, Anglade explained, although there’s been a lot of focus on that this week. It’s about making sure everything works together. “We may need to invest in our infrastructure, both technical and human, and we’re going to take a deep look at addressing it. But also the human system in the middle of the technology that makes the process possible.”

Let this be a cautionary tale for companies everywhere. As Mamchenkov said, “I guess I’ll be doing test restores all night today, making sure that all my things are covered…”

Feature image via Pixabay.