Continuous Deployment with VMware Code Stream

Expectations for application and infrastructure delivery are rising faster than teams can deliver; this is where automation comes in. Find out how VMware Code Stream can simplify the end-to-end automation process in this practical case study.

The expectations of application and infrastructure delivery are increasing continually. In this blog, we look at how VMware vRealize Automation Cloud and the Code Stream service can be used to deliver on these expectations, with a focus on continuous deployment, Infrastructure as Code (IaC), and consistent application delivery regardless of the cloud provider.

In previous blogs, we used the Tito application to demonstrate the cloud agnostic capabilities provided by vRealize Automation Cloud. Let’s take the same use case a step further and look at a realistic application pipeline, moving from code to release using VMware Code Stream.

To demonstrate our continuous deployment use case, we will use the Time Traffic Overview application (Tito), a multi-tier, load-balanced application based on Apache and MySQL. This app, developed by Vincent Meoc, uses Google Maps data to determine the optimal travel time from home to work.

Tito web application


Cloud Agnostic Blueprint for Tito

The Tito application is deployed in the Code Stream pipeline via VMware Cloud Assembly. Cloud agnostic blueprints enable deployment to Amazon Web Services, Azure, Google Cloud Platform, VMware Cloud on AWS, or vSphere, with the change of a single tag driving placement constraints to match cloud account capabilities.
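As a minimal sketch of how a single input can drive placement, the mapping below is illustrative only; the tag values and cloud names are assumptions, not the actual blueprint's tags:

```python
def placement_tag(target_cloud):
    """Map a pipeline's target-cloud input to the constraint tag that the
    placement engine would match against cloud account capability tags.
    Tag values here are illustrative, not the real blueprint's tags."""
    tags = {
        "aws": "cloud:aws",
        "azure": "cloud:azure",
        "gcp": "cloud:gcp",
        "vmc": "cloud:vmc",
        "vsphere": "cloud:vsphere",
    }
    if target_cloud not in tags:
        raise ValueError(f"unsupported cloud: {target_cloud}")
    return tags[target_cloud]
```

Changing one input value is then all that is needed to retarget the deployment.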

Blue Green Deployment

In this example we are using a blue-green deployment strategy, with new builds triggered on an infrastructure or application code change in Git via a webhook to Code Stream. UI automation testing and user load testing are used to gate the release, and Route53 is used for traffic management to promote a new deployment. This allows for a blue-green failover scenario via DNS, though alternate strategies such as a canary release could be achieved using weighted distribution.
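To illustrate the difference between failover and weighted (canary) strategies, the sketch below mimics weighted traffic distribution in plain Python. Route53 performs this server-side; the function is purely illustrative:

```python
import random

def pick_endpoint(weights, rng=random.random):
    """Choose an endpoint in proportion to its weight, mimicking how
    weighted DNS records split traffic in a canary release.
    `weights` maps endpoint name -> relative weight; `rng` is injectable
    for deterministic testing."""
    total = sum(weights.values())
    r = rng() * total
    for endpoint, weight in weights.items():
        r -= weight
        if r <= 0:
            return endpoint
    return endpoint  # fall back to the last endpoint on rounding edge cases
```

A blue-green failover is the degenerate case where one endpoint carries weight 0 and the other carries all traffic.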

VMware Code Stream

VMware Code Stream provides release automation and continuous delivery enabling frequent, reliable releases of infrastructure and applications. The new deployment is triggered via a Git commit which results in pipeline execution via a webhook.

Code Stream Pipeline Stages

The diagram below provides an outline of the task sequence of the Code Stream deployment pipeline, broken down into three stages:


Build stage: Our blueprint deploys to any major cloud, with pipeline input variables defining the target cloud. These inputs are bound to constraint tags supplied by Code Stream, which drive the Cloud Assembly placement engine.

Once the deployment is successful, we poll the application load balancer address to check for an HTTP 200 OK response; once received, the application is online and ready for tests to be executed.
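The health-check gate can be sketched as a simple polling loop. The URL, timings, and the injectable `fetch` parameter are illustrative; the real pipeline would use a Code Stream task for this:

```python
import time
import urllib.request
import urllib.error

def wait_for_http_ok(url, timeout=300, interval=10, fetch=urllib.request.urlopen):
    """Poll `url` until it returns HTTP 200 OK or the timeout expires.
    `fetch` is injectable so the loop can be tested without a live server."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with fetch(url) as resp:
                if resp.status == 200:
                    return True  # application is online; tests can start
        except urllib.error.URLError:
            pass  # load balancer or app not up yet; keep polling
        time.sleep(interval)
    return False
```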

Test stage: During this stage, we perform UI and performance testing. UI automation tests are executed using a JavaScript test framework called Cypress, with results tracked via Slack. Load tests are then executed using a Python-based load test framework called Locust. If a load test fails, we can perform an optional, idempotent scale-out using a conditional approval task, then revalidate the user load tests against the scaled web tier.
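The gating logic of the test stage can be sketched as a small decision function. The threshold names and values are assumptions for illustration:

```python
def gate_release(ui_passed, p95_latency_ms, latency_slo_ms=500, scaled_out=False):
    """Decide the next pipeline action after the test stage.
    Returns "release", "scale_out_and_retest", or "fail".
    The 500 ms p95 SLO is an illustrative threshold, not the real gate."""
    if not ui_passed:
        return "fail"  # UI regressions block the release outright
    if p95_latency_ms <= latency_slo_ms:
        return "release"
    # one optional, idempotent scale-out before re-running the load test
    return "fail" if scaled_out else "scale_out_and_retest"
```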

Release stage: The release stage validates all required steps for promotion of the new application instance to production, or performs any required cleanup on failure. We are performing a blue-green failover using DNS to direct traffic to the new online Tito application instance.

Code Stream Integrations

The VMware Code Stream pipeline ties together the many disparate systems initiated from a Git trigger. This enables automation of the build, test and release stages to move from code to production.

VMware Cloud Assembly

Cloud Assembly provides the IaC and cloud agnostic blueprint engine, along with cloud placement and governance. This enables simple consumption regardless of the underlying cloud provider. In the pipeline, we consume the Tito blueprint via the Blueprint service task.


Cypress

Cypress is a JavaScript-based UI automation test framework for web applications. Headless UI tests are executed via the SSH agent, but may also be executed via a workspace container and the CI integration method in VMware Code Stream. Deployment and configuration of Cypress is a prerequisite to the pipeline execution and is automated via a VMware Cloud Assembly blueprint.
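A minimal sketch of driving headless Cypress from the test runner host: the spec path and base URL are illustrative, and the injectable `runner` parameter exists only so the wrapper can be tested without Node.js installed:

```python
import subprocess

def run_cypress(spec, base_url, runner=subprocess.run):
    """Invoke headless Cypress via its CLI and report pass/fail.
    `spec` and `base_url` are illustrative; `runner` is injectable for testing."""
    cmd = [
        "npx", "cypress", "run",
        "--spec", spec,                     # e.g. a Tito UI test spec
        "--config", f"baseUrl={base_url}",  # point tests at the new deployment
    ]
    result = runner(cmd, capture_output=True, text=True)
    return result.returncode == 0
```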


Locust

Locust is an open source, Python-based load testing tool for simulating real user load. Load tests are executed using the VMware Code Stream SSH agent on a test runner instance which has been pre-deployed and configured via a Cloud Assembly blueprint. Pipeline inputs drive the load configuration, including maximum user load, and Locust test specifications are pulled from a Git snippet.
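A sketch of how pipeline inputs might be turned into the headless Locust invocation run over SSH. The file name and values are illustrative; the flags are standard Locust CLI options:

```python
import shlex

def locust_command(host, users, spawn_rate, run_time="5m",
                   locustfile="tito_locustfile.py"):
    """Build the headless Locust command from pipeline inputs.
    `locustfile` and the defaults are illustrative; -u/-r/-t/--headless
    are standard Locust CLI flags."""
    cmd = (
        f"locust -f {locustfile} --headless --host {host} "
        f"-u {users} -r {spawn_rate} -t {run_time} --only-summary"
    )
    return shlex.split(cmd)
```

The resulting argument list is what the SSH task would execute on the test runner instance.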

VMware Wavefront

VMware Wavefront provides real-time application metric monitoring and analytics. Metric collection for the Tito application is via the Telegraf agent, configured using cloud-init within the Cloud Assembly blueprint. The blueprint inputs are passed into the Telegraf configuration as tags, for metric filtering and for association with application-specific dashboards.

We also use the real-time observability of Wavefront during user load test execution. Using event hooks in the Locust test framework, we can forward custom metrics (including successful vs. failed requests and response time) to better understand Tito application behaviour under user load. This also enables us to correlate with Apache and OS metrics to understand bottlenecks and how the application might respond to demand.
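Custom metrics are sent as points in Wavefront's line protocol (`<metric> <value> <timestamp> source=<source> [tag="value" ...]`). The formatter below is a minimal sketch; the metric name, source, and tags are illustrative:

```python
import time

def wavefront_line(metric, value, source, tags=None, ts=None):
    """Format one point in Wavefront's line protocol.
    Example metric/source/tag names are illustrative, not the real pipeline's."""
    ts = int(ts if ts is not None else time.time())
    tag_str = "".join(f' {k}="{v}"' for k, v in (tags or {}).items())
    return f"{metric} {value} {ts} source={source}{tag_str}"
```

A Locust event hook would call a formatter like this for each request and forward the lines to a Wavefront proxy.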

Amazon Route53

AWS Route53 provides DNS for the Tito application. As part of our Code Stream pipeline, we integrate with Route53 to update DNS and repoint to the new application instance using a blue-green failover strategy. The DNS updates are made to Route53 using the Code Stream REST task and Cloud Assembly Action Based Extensibility (ABX), executed conditionally within the pipeline.

The ABX action is executed in AWS Lambda, with the AWS SDK (Boto3) used to interact with Route53 in Python. A new copy of the site is then brought online within approximately 60 seconds of pipeline completion. Note that the same DNS integration could be implemented with Azure or any other cloud DNS provider.
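A minimal Boto3 sketch of the blue-green DNS repoint: the zone ID, record name, and target below are illustrative, and the `client` parameter is injectable only so the function can be exercised without AWS credentials:

```python
def repoint_dns(zone_id, record_name, new_target, ttl=60, client=None):
    """UPSERT a CNAME so production DNS points at the newly released stack.
    zone_id/record_name/new_target are illustrative placeholders."""
    if client is None:
        import boto3  # real AWS call only when no stub client is supplied
        client = boto3.client("route53")
    return client.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "blue-green failover via Code Stream pipeline",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": new_target}],
                },
            }],
        },
    )
```

The low TTL keeps the failover window short once the record is updated.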


Slack

Slack is used for pipeline progress tracking via color-coded, threaded messages to an SRE operations channel. To make the Slack integration reusable, we are using custom integration tasks. These integrations allow code functions with inputs and outputs to be executed in a standard runtime environment such as Node.js or Python.
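A sketch of the message body such a custom task might build for Slack's chat.postMessage API; the channel name and color values are illustrative:

```python
def slack_payload(channel, text, color="#36a64f", thread_ts=None):
    """Build a chat.postMessage body with a color-coded attachment.
    Passing thread_ts keeps all updates for one pipeline run in a single
    thread; channel/color values here are illustrative."""
    payload = {
        "channel": channel,
        "attachments": [{"color": color, "text": text}],
    }
    if thread_ts:
        payload["thread_ts"] = thread_ts  # reply within the existing thread
    return payload
```

Green, yellow, and red attachment colors make the stage status readable at a glance in the operations channel.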


This was an example of a continuous deployment pipeline scenario. Here, we automated an end-to-end application release process for the Tito application, including a blue-green failover scenario to bring a new instance seamlessly online. VMware Code Stream allows for end-to-end automation of your application release, regardless of the tools you need to integrate.

If you are interested in seeing more, take a look at the VMware Hands-On Labs or sign up for a trial.


By Simon Lynch, VMware Staff Architect

Simon Lynch is part of the Advanced Customer Engagement Team:

Amanda Blevins, Mandy Botsko-Wilson, Carlos Boyd, Darryl Cauldwell, Paul Chang, Kim Delgado, Dustin Ellis, Jason Karnes, Phoebe Kim, Andy Knight, Simon Lynch, Chris Mutchler, Nikolay Nikolov, Michael Patton, Raghu Pemmaraju
