As more organizations embrace DevOps as a lean, effective way to deliver new applications, one thing becomes abundantly clear: cultural change is critical for DevOps success. Traditional IT models don’t cut it – you need to create a collaborative, empowering and autonomous culture.
Organizations that have successfully transformed their technology delivery through DevOps consistently recognize that it requires much more than tool and process change. We repeatedly find that the leading indicators of DevOps success correlate directly with the cultural changes required to support innovation and agility. In this post, we look at how traditional IT needs to evolve: letting go of process adherence as a primary measure of success and embracing the cultural changes considered critical to the successful adoption of DevOps practices.
In “Five cultural changes you need for DevOps to work” (2017), McKinsey & Co identified empowerment and autonomy – the cultural cornerstones considered essential by the DevOps community – among the changes required. But how do you change a culture? How does an organization unlearn decades of reactive policies and protections that contribute to the shared history (or bureaucracy) found in many IT organizations? Can cultural change be driven and shaped from within? And can it be done without exposing the risk that these policies and processes were designed to mitigate?
We recommend starting with a handful of applications, building a small, driven team around each, and empowering those teams to make decisions around tools and processes. These early DevOps teams should be encouraged to share practices and outcomes with each other. This should form the foundation of a framework to help other teams evolve, potentially through a central DevOps Advisory Practice or a Target-like “Dojo” (where experts coach teams through 30-day challenges to transition from traditional delivery to using DevOps and Agile principles).
The “DevOps” moniker suggests a tighter relationship between development and operations than has typically existed in many organizations. Traditionally, at best, there are a series of handoffs and processes between development and operational teams for communicating environmental requirements, coordinating application deployments, and for ongoing application management. At worst, the development and operational teams belong to completely separate organizational structures, communicating only when there is a change or issue. This model has led to the widely held operational belief that development teams build software and “throw it over the wall” to be deployed and maintained. Often, it means that these teams have completely different perspectives, objectives, and are measured on different – and sometimes competing – key performance indicators (KPIs).
| Traditional IT | DevOps Collaboration |
| --- | --- |
| Distinct development and operations orgs | Dev and Ops in the same team (or role) |
| Hierarchical teams, built around a core technology | Self-organizing teams, built around the application |
| Responsible for one function across many applications | Responsible for the entire application lifecycle |
| Success measured on isolated metrics | Success measured on shared metrics |
DevOps teams include developers, operational resources, and other critical cross-functional roles, built around a particular service, application, or product. Communication, collaboration, and trust are encouraged by leaders and supported through processes such as the daily standup. Successes are shared, and failures are expected and planned for as a critical component of the innovation cycle. Developers and operational resources are involved in all aspects of the product lifecycle, including infrastructure deployment, application development, and operational functions such as monitoring, troubleshooting, and even security and governance. All team members participate in supporting the environment, resulting in a shared sense of ownership and responsibility. Objectives and priorities are set at the service level, and success is measured through KPIs tied to outcomes and results applicable to the whole team.
For the past two decades or so, technology organizations responsible for infrastructure and operations have been chasing the elusive ideal of centralization and standardization. By moving operational functions into a central organization, the theory goes, we can create teams of technical expertise, extend standards and best practices, and benefit from a central procurement office to drive down costs. And, for a while, organizations did benefit from this model – for example, by standardizing on Oracle or Microsoft SQL Server and building specialized database teams to support them. However, as applications grew in scale and complexity while supporting teams often remained static, “best practices” more often than not became a dumbing down of capabilities, applied across the broadest possible set of use cases. The “standard” infrastructure toolset became the hammer, making every application requirement a nail.
| Traditional IT | DevOps Empowerment |
| --- | --- |
| Centralized technology standards | Empowered teams |
| Limited tool choice | Teams select the “best tool for the job” |
| Bloated, partially implemented suites of tools | Fully implemented, fully embraced tools |
| Set it and forget it | Continuous improvement |
The concept of empowerment is central to DevOps. It champions self-organizing teams, empowered to make decisions about the tools and processes that work best for them. In fact, DevOps.com predicted that 2020 would be the year of DevOps empowerment: empowering engineers with “full-service ownership” (“code it, ship it, own it”) brings increased accountability and continuous improvement to services, resulting in faster release cycles and increased product quality.
What does this mean for those organizations where centralizing functions and enforcing standards is part of their DNA? It may be uncomfortable to consider supporting a proliferation of tools across the organization, having spent decades trying to bring some order to the chaos. Consider instead the joy of finding the right tool in your toolbox when working on a home improvement project, instead of only having a box of hammers at your disposal. And then discovering that you also know how to use that tool properly!
The development of ITIL (the de facto IT service management framework) over the last 30 years focused on supporting the delivery of higher-quality services at lower cost. One of the core components ITIL requires to maintain service quality is change control: a stringent set of processes to ensure that application or environmental changes are not detrimental to service quality. In theory, this framework of checks and balances decreases the risks associated with introducing change to an environment. In reality, it often led to a risk-averse culture and overly cumbersome processes that could ultimately increase risk through change bundling, corner-cutting, and extended delays while awaiting approval from resources with only a cursory understanding of the environment.
| Traditional IT | DevOps Autonomy |
| --- | --- |
| Change review boards | Autonomous teams |
| Manual change requests and roll-back plans | Automated pipelines and testing |
| Multiple layers of oversight | Distributed risk management |
| Can result in finger-pointing | Shared ownership and responsibility |
A common goal of many DevOps teams is to have complete autonomy over their service, making all application and environmental change decisions within the team; including when and how those changes are promoted into production. Many teams aspire to completely automate the entire process, using CI/CD (continuous integration/continuous deployment) to manage risk through frequent, iterative changes with extensive automated tests to gate the promotion of changes through environments. Governance is not bypassed – delivery pipelines like VMware Code Stream can create required change requests and documentation to satisfy ITIL – but the process is fully automated. The quality of the pipeline, including the guardrails and automated testing, determines the level of risk associated with changes promoted through it.
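The gating idea described above can be sketched in a few lines of Python. This is an illustrative sketch only – the environment names, promotion order, and check functions are hypothetical, not the Code Stream API; real pipelines express the same logic declaratively in their pipeline definitions.

```python
# Illustrative sketch of gated promotion: a build advances to the next
# environment only if every automated check at its current gate passes.
# All names here are hypothetical stand-ins for real pipeline stages.

from typing import Callable

PROMOTION_ORDER = ["dev", "staging", "production"]

# Each gate is a list of automated checks that must all pass before
# a build is promoted out of that environment.
GATES: dict[str, list[Callable[[str], bool]]] = {
    "dev":     [lambda build: run_unit_tests(build)],
    "staging": [lambda build: run_integration_tests(build),
                lambda build: run_security_scan(build)],
}

def promote(build: str) -> str:
    """Promote a build through environments, stopping at the first failed gate."""
    current = PROMOTION_ORDER[0]
    for env, next_env in zip(PROMOTION_ORDER, PROMOTION_ORDER[1:]):
        if all(check(build) for check in GATES.get(env, [])):
            current = next_env
        else:
            break  # a failed gate halts promotion; risk stays contained
    return current

# Stand-in checks for the sketch; real ones would invoke test suites and scanners.
def run_unit_tests(build: str) -> bool: return True
def run_integration_tests(build: str) -> bool: return True
def run_security_scan(build: str) -> bool: return "insecure" not in build
```

The quality of the checks at each gate is exactly what determines the risk of a change reaching production – which is the point the paragraph above makes about pipeline guardrails.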
The Harvard Business Review article “How to Give Your Team the Right Amount of Autonomy” (2019) recommends “distributed risk mitigation” (making risk mitigation everyone’s job) as an alternative to multiple levels of oversight. The article describes how the aviation industry transformed its safety record through distributed risk mitigation, “creating a new culture in which risk is everyone’s responsibility and equipping all employees with training on assertiveness and the benefits of advocating the best course of action even though it might involve conflict with others”.
Support Choice, Maintain Control
As development and operations teams work more closely together, solutions must accommodate the needs of both personas. At VMware, we recognize that necessity and build it into our solutions. vRealize Automation, for example, provides self-service, full-stack application automation across multiple public, hybrid, and private cloud endpoints, including IaaS, Kubernetes, and native public cloud services. It includes policy and governance as part of the application definition, defined in code and maintained in your developers’ favorite version-controlled repository.
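To illustrate the “policy as code” idea in the abstract – the field names and rules below are hypothetical examples, not the vRealize Automation schema – governance can be expressed as predicates evaluated against the application definition before anything is deployed:

```python
# Illustrative sketch: governance rules checked against an application
# definition before deployment. Field names and rules are hypothetical.

app_definition = {
    "name": "storefront",
    "cloud": "aws",
    "instance_type": "t3.large",
    "tags": {"owner": "team-checkout", "cost-center": "cc-1234"},
}

# Each policy is a (description, predicate) pair; all must hold.
POLICIES = [
    ("every deployment is tagged with an owner",
     lambda app: "owner" in app.get("tags", {})),
    ("every deployment is tagged with a cost center",
     lambda app: "cost-center" in app.get("tags", {})),
    ("only approved clouds are used",
     lambda app: app.get("cloud") in {"aws", "azure", "on-prem"}),
]

def violations(app: dict) -> list[str]:
    """Return a description of every policy the definition breaks."""
    return [desc for desc, check in POLICIES if not check(app)]
```

In practice the application definition would live as a versioned file in the team’s repository, with checks like these run automatically in the delivery pipeline, so governance travels with the code rather than sitting in a separate review queue.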
We recognize that flexibility is critical to DevOps teams when selecting tools, and that it is increasingly critical that our solutions integrate easily with third-party toolsets through exposed APIs and pre-built providers. VMware Code Stream (the pipeline component of vRealize Automation) integrates natively with most popular open-source developer tools, allowing operational governance of processes without interfering with developer preferences. For visibility and control, vRealize Operations integrates with public and private environments, while CloudHealth brings automated cost and security governance.
And our Tanzu portfolio offers products and services for modernizing applications and infrastructure, supporting customers as they update legacy systems to cloud-native development methods and runtimes.
The DevOps ecosystem is a growing mix of tools, processes, frameworks, and culture. And yet, as history has proven, tools come and go, and processes evolve and are replaced over time. What will stand the test of time are the cultural changes that drive the agility needed to evolve with the ecosystem.
Empowerment and autonomy are central to DevOps, and organizational changes must be made to enable self-organizing teams built around products and applications, with collaboration encouraged and supported by leadership.
Empowerment includes tool and process choice for DevOps practitioners. VMware believes that our solutions must be extensible at their core to support choice, while continuing to bring our expertise in governance, management, and automation to DevOps toolchains.
In the next post “DevOps: Technology – The DevOps Toolchain”, I take a closer look at the ecosystem of tools supporting DevOps processes, including the VMware portfolio of solutions.
“How to Give Your Team the Right Amount of Autonomy,” Harvard Business Review (2019)
“Five cultural changes you need for DevOps to work,” McKinsey Digital (2017)
DevOps at VMware
VMware lives DevOps in many ways: as a practitioner of its principles for software development, as a provider of tools and solutions that support DevOps practices, and as an advisor and implementer of DevOps initiatives across many of our customer organizations:
- VMware transformed to an agile foundation over a 3-year period, embracing a DevOps culture across our engineering teams and sharing our journey with customers.
- VMware solutions, such as vRealize and Tanzu, are an important part of the DevOps Toolchain ecosystem.
- VMware provides consulting and Professional Services to customers looking for assistance at any stage of their transformation journey.
Other posts in this series
- DevOps History
- DevOps Culture
- DevOps Practices
- Principles and Outcomes
- DevOps Technology
- DevOps Processes