GitOps is becoming an essential part of the cloud operating model, enabling the application of DevOps best practices to the management of infrastructure, improving the speed and quality of infrastructure service delivery, and maturing infrastructure operations.
“GitOps” is one of the more recent additions to the DevOps lexicon – a play on the Git version control system and operations. Yet it is more than just another catchy compound buzzword. It describes an operational model for extending DevOps principles to infrastructure operations.
Making GitOps part of your cloud operating model enables you to standardize application and infrastructure management processes, driving both from definitions stored in version-controlled repositories. This not only speeds up the overall application lifecycle, but also improves environment consistency, reliability, scalability, and traceability.
What is GitOps?
The term “GitOps” was originally coined by Weaveworks in 2017 to describe their use of developer tools to drive operational processes. Today, industry definitions for GitOps range from the simplistic to the specific, for example:
- Atlassian describes GitOps as “code-based infrastructure and operational procedures that rely on Git as a source control system”
- GitLab describes it as “an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation”
GitOps in a nutshell
In my last post, “Everything as Code”, we looked at the benefits of managing infrastructure and operating system configuration “as code”, enabling the application of software development best practices like version control to infrastructure.
GitOps takes this a step further, adding additional automation via infrastructure delivery pipelines, resulting in more robust, streamlined operational processes.
Many modern IT organizations already support their cloud operating model with declarative infrastructure management capabilities, and CI/CD for their application lifecycles. Bringing the two sides together is a natural next step.
Operational Use Cases for GitOps
There are many repeatable, time-consuming operational processes where GitOps provides immediate value:
- Provisioning and managing Kubernetes environments (the original Weaveworks use case)
- Iterative cloud template development (for vRealize cloud templates)
- Cloud image management (vSphere templates, AWS custom AMIs and Azure images)
- Transitioning to “immutable infrastructure” to improve patching and update processes
*Immutable infrastructure is the principle whereby infrastructure components are never modified in place; instead, they are replaced with updated versions whenever changes or updates are needed.
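To make the distinction concrete, here is a minimal Python sketch (instance and image names are hypothetical): an update builds a new image version and replaces instances wholesale, rather than patching them in place.

```python
# Illustrative sketch of immutable infrastructure: instances are never
# patched in place; an update builds a new image and replaces instances.
# Instance and image names below are hypothetical.

def update_fleet(fleet: list[dict], new_image: str) -> list[dict]:
    """Replace every instance with a fresh one built from the new image."""
    return [{"name": inst["name"], "image": new_image} for inst in fleet]

fleet = [
    {"name": "web-01", "image": "web-image-v1"},
    {"name": "web-02", "image": "web-image-v1"},
]

# A patch is applied by building "web-image-v2" and swapping instances,
# not by logging into web-01 and changing it by hand.
fleet = update_fleet(fleet, "web-image-v2")
print(fleet)
```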
Additionally, GitOps can act as a service consumption model for developers and lines of business. Once a change is triggered in Git by a developer, it can be applied throughout the environment with no involvement from operations – unless an issue is detected, in which case it can be remediated before the change completes and affects users.
Technology teams can reduce the time spent on provisioning and managing infrastructure resources, and prioritize the proactive delivery of additional services and capabilities to help drive business innovation.
Traditional Operations vs GitOps
|Process|Traditional Operations|GitOps|
|---|---|---|
|Resource provisioning|A mix of manual and automated processes|Fully automated and driven by declarative definitions|
|Configuration changes|Manually implemented at an OS or system level|Automated via the configuration management system|
|Environment documentation|Maintained manually and often out of date|The current environment definition is the documentation|
|Change tracking|Manual and distributed across systems|Tracked in source control|
|Roll-back|Often manual and time-consuming|Automatic (revert to the previous definition)|
|Configuration drift|Often not identified until there is an issue|Automatically detected and remediated|
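The configuration drift row in the table above can be sketched in a few lines of Python: the desired state comes from a version-controlled definition, the actual state from the live environment, and the difference between them is the remediation set. All resource fields and values here are hypothetical.

```python
# Minimal sketch of GitOps-style drift detection. Desired state is what
# the Git-tracked definition declares; actual state is what the live
# environment reports. The fields below are hypothetical.

def detect_drift(desired: dict, actual: dict) -> dict:
    """Return the settings that must change to bring 'actual' back to 'desired'."""
    return {key: value for key, value in desired.items() if actual.get(key) != value}

# Desired state, as parsed from a definition stored in Git.
desired = {"cpu_count": 4, "memory_gb": 16, "ntp_server": "time.example.com"}

# Actual state, as reported by the environment (memory was changed by hand).
actual = {"cpu_count": 4, "memory_gb": 8, "ntp_server": "time.example.com"}

drift = detect_drift(desired, actual)
print(drift)  # the remediation set: {'memory_gb': 16}
```

In a real GitOps toolchain this comparison runs continuously, and the remediation set is applied automatically rather than printed.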
Sounds Great! Now what?
Does it sound like a lot of work to configure and manage? Maybe, but not when you consider that you probably have valuable, time-constrained engineers regularly performing these processes and tasks manually today.
Or worse, that limited testing and gatekeeping in your regular maintenance processes leads to highly visible, expensive, preventable environmental issues and outages.
So how do we get to GitOps? Start by applying the Five GitOps Principles (as defined by the GitOps Working Group) to the lifecycle of an infrastructure resource, like a virtual machine or load balancer:
- Declarative Configuration (define the resource as code)
- Version controlled (use source control to manage the resource definition)
- Automated delivery (provision and manage the resource from the definition using automation)
- Software Agents (implement automated configuration management for the resource)
- Closed loop (build a delivery pipeline that integration-tests resource changes)
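As a rough sketch (hypothetical resource fields and helper functions, with a dict standing in for the live environment), the five principles map onto a single declare, apply, and verify flow:

```python
# Hypothetical sketch of the five GitOps principles applied to one resource.
# DEFINITION stands in for a file stored in Git (principles 1 and 2);
# apply() and verify() stand in for the automation, agents, and pipeline
# gates (principles 3 through 5). Names and fields are illustrative only.

DEFINITION = {  # 1. Declarative Configuration: the resource described as data
    "name": "web-vm-01",
    "cpu_count": 2,
    "memory_gb": 8,
}

environment = {}  # stands in for the live environment's state

def apply(definition: dict) -> None:
    # 3. Automated delivery / 4. Software Agents: converge the environment
    # to the declared state (a real agent would call provider APIs here).
    environment[definition["name"]] = {
        k: v for k, v in definition.items() if k != "name"
    }

def verify(definition: dict) -> bool:
    # 5. Closed loop: check that the environment now matches the definition.
    actual = environment.get(definition["name"], {})
    return all(actual.get(k) == v for k, v in definition.items() if k != "name")

apply(DEFINITION)          # triggered by a commit (2. Version controlled)
assert verify(DEFINITION)  # pipeline gate: fail the run if it didn't converge
```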
vRealize and GitOps
In addition to the automation and multi-cloud capabilities that vRealize is known for, it also includes the foundational capabilities needed to support GitOps and other DevOps best practices. Therefore, if you have vRealize in your environment, you can start exploring GitOps immediately.
|GitOps Principle|Delivered By|
|---|---|
|1. Declarative Configuration|vRealize Automation – Infrastructure as Code|
|2. Version controlled|vRealize Automation – native Git integrations|
|3. Automated delivery|vRealize Automation – automated provisioning|
|4. Software Agents|vRealize Automation SaltStack Config|
|5. Closed loop|vRealize Automation – infrastructure pipelines|
For information on all the ways that vRealize Automation can support your Cloud Operating Model, as well as provide a DevOps platform that includes GitOps capabilities, visit VMware vRealize Automation.
Other posts in this series
- DevOps History
- DevOps Culture
- DevOps Practices
- Principles and Outcomes
- DevOps Technology
- DevOps Processes