Deploying CloudBees Core on VMware Kubernetes

This post is authored by Dan Illson with the assistance of Jeff Fry.

CloudBees Core is a continuous integration and continuous deployment (CI/CD) engine based on the Jenkins open source automation server. This offering extends Jenkins by embedding best practices, rapid onboarding and additional functionality to facilitate security and compliance controls.

Organizations use CloudBees Core to provide a centrally-managed CI/CD service, while maintaining a self-service experience for individual teams.

VMware Kubernetes Engine (VKE) is an enterprise-grade Kubernetes-as-a-Service offering within the VMware Cloud Services portfolio. It provides easy-to-use, secure, cost-effective and fully managed Kubernetes clusters. VKE enables users to run containerized applications without the cost and complexity of implementing and operating Kubernetes.

You can find out more about VKE and request access here.

Cluster Creation and Preparation

Once you have applied for and been granted access to the VKE service, you can begin preparing a VKE cluster.

The first step in the process is to log in and obtain the command line tools needed to interface with VKE and the Kubernetes cluster itself (assuming the tools haven't been previously installed). These tools are:

  • kubectl
  • helm
  • VKE command line package

Kubectl and the VKE command line package can be obtained from the VKE web interface. To access these utilities, log in to the VKE UI and select the ‘Developer center’ link in the vertical navigation bar at the left edge of the screen. Click the ‘Downloads’ tab in the Developer center panel and download both the VKE CLI and kubectl packages for your operating system (see Figure 1 below).
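Once both tools are installed and on your PATH, a quick sanity check confirms they are callable. The kubectl flag below is standard; the exact version flag for the vke binary is an assumption and may vary by release:

kubectl version --client
vke --version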

     
Figure 1. The ‘Downloads’ tab of the VKE Developer Center page.
     
Helm must be acquired separately. You can find detailed instructions for configuring Helm here.
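As a minimal sketch for two common platforms, Helm v2 (current at the time of writing) can typically be installed on macOS via Homebrew, or on Linux via the official installer script; defer to the Helm documentation linked above for the authoritative steps for your platform:

brew install kubernetes-helm

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh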

Once the required tools have been installed, the next step is to set up a Kubernetes cluster. A cluster can be created via the VKE web UI or with the VKE command line package. For a web-based setup, log in to the service and click the link labeled ‘New Smart Cluster’. From there, you will need to select from a few options:

  • Deployment type: Choose ‘Development Cluster’ to minimize spend
  • Region: Select one from the available list
  • Privileged mode: Check this box as it will be required to perform container image builds within the cluster
  • Name: Assign the cluster a descriptive name for your reference

This process can also be performed with the VKE command line package, as described below.

The remainder of the process described in this article is performed via the command line. To begin, log in with the following command:

vke account login -t <organization-id> -r <refresh-token>

The two values surrounded by angle brackets ‘<>’ are placeholders. To find these values, log in to the VKE web interface. From the landing page, select the ‘Developer center’ link in the vertical navigation bar on the left edge of the screen (see Figure 2 below). The organization ID should now be visible in the vke account login command example on the Overview tab. In the image below, the org-id value is redacted by a rectangular bar.

     
Figure 2. The VMware Kubernetes Engine Developer Center. The org-id has been redacted, and the ‘Get Your Refresh Token’ link has been highlighted.
     
To retrieve the necessary refresh-token value, follow the link labeled ‘Get Your Refresh Token’ just above the example command on the right side of the screen. A redacted example of the screen displaying API/refresh tokens is shown below (Figure 3). You can now use the fully populated command to log in to VKE via the command line.
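For illustration, a fully populated login command looks like the following; both values here are hypothetical and must be replaced with your own organization ID and refresh token:

vke account login -t 0a1b2c3d-4e5f-6789-abcd-ef0123456789 -r aBcDeF0123456789gHiJ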

     
Figure 3. An example of the API Tokens view linked from the highlighted example in Figure 2. Information about the token has been redacted in this image.
     
After logging in via the command line package, run the following command to create a new cluster:

vke cluster create --name <cluster-name> --region <region> --privilegedMode

By default, a development cluster is created, so there’s no need to specify that option in this command. As with the UI-driven example, select a name and region according to preference, and enable privileged mode.
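For example, with a hypothetical cluster name and region (the region identifiers actually available to your account are listed in the web UI):

vke cluster create --name cb-core-demo --region us-west-2 --privilegedMode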
Once the cluster has been created, use the following command to gain access to it via the kubectl utility:

vke cluster auth setup <cluster-name>

Replace <cluster-name> with the name chosen when the vke cluster create command was run previously.
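To confirm that kubectl is now targeting the new cluster, standard commands such as the following will show the active context and the cluster’s worker nodes:

kubectl config current-context
kubectl get nodes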

Helm Configuration

To continue the process, users will require Helm, ‘The Package Manager for Kubernetes’. The CloudBees Core components will be installed via a Helm chart. Instructions for installing Helm on a variety of platforms can be found here. Once Helm has been installed and the vke cluster auth setup command from the previous section has been executed, run the following command to install Tiller, the cluster-side component of Helm:

helm init

This will likely trigger a worker node addition within the VKE cluster, so it may take a few minutes before the tiller pod is available and running. To monitor the progress of the tiller pod, run the following command:

kubectl get pods -n kube-system -w
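Once Tiller is up, the watch output will include a line resembling the following; the pod name suffix and timing shown here are illustrative:

NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-6f6fd74b68-x7k2p   1/1     Running   0          2m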

Installing CloudBees Core

The first step in installing the CloudBees Core components onto the prepared VKE cluster is to build the Helm chart. To begin, clone this repository from GitHub; it was created by Jeff Fry, Senior Business Development Engineer at CloudBees. Once the repository is cloned, navigate to the base directory of your local copy and run the following command to build the Helm chart:

helm package ./CloudBeesCore
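helm package writes a versioned .tgz archive, named from the chart’s Chart.yaml, into the current directory. Listing it confirms the build; the version number shown here is hypothetical:

ls *.tgz
cloudbeescore-0.1.0.tgz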

Once the Helm chart is built, two Kubernetes namespaces need to be created: ‘ingress-nginx’ will house the NGINX ingress controller, and ‘cloudbees’ will house the CloudBees Core deployment. A clusterrolebinding object is also necessary to grant the ingress controller the correct permissions.

kubectl create namespace cloudbees
kubectl create namespace ingress-nginx
kubectl create clusterrolebinding nginx-ingress-cluster-rule --clusterrole=cluster-admin --serviceaccount=ingress-nginx:nginx-ingress
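Before moving on, a quick check confirms that all three objects exist:

kubectl get namespace cloudbees ingress-nginx
kubectl get clusterrolebinding nginx-ingress-cluster-rule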


The next step is to install the ingress controller from its own stable Helm chart.

Note: The controller.scope.namespace value has been set to match the Kubernetes namespace that will contain the CloudBees Core components.

helm install --namespace ingress-nginx --name nginx-ingress stable/nginx-ingress --version 0.23.0 --set rbac.create=true --set controller.service.externalTrafficPolicy=Local --set controller.scope.enabled=true --set controller.scope.namespace=cloudbees

It will take a few minutes after this Helm chart is installed for the ingress-nginx service to resolve its external load balancer hostname. The following command checks the status of that value (labeled ‘LoadBalancer Ingress’ in the output):

kubectl describe service nginx-ingress-controller -n ingress-nginx
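In the command’s output, look for a line of this form; the value that appears there is the hostname referred to as <lb-ingress-hostname> in the steps below:

LoadBalancer Ingress:     <lb-ingress-hostname>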

With the ingress controller in place, it’s now time to deploy the Helm chart containing the CloudBees Core components. The following command installs those pieces; it contains a placeholder for which a specific value must be supplied:

helm install cloudbeescore --set cjocHost=<lb-ingress-hostname> --namespace cloudbees

In this case, the namespace for the CloudBees Core deployment is the ‘cloudbees’ namespace created earlier for this purpose. It is also the namespace referenced by controller.scope.namespace during the installation of the NGINX ingress controller chart.

The value that needs to be replaced for your specific installation is <lb-ingress-hostname>. Substitute the ‘LoadBalancer Ingress’ hostname from the output of the previous kubectl describe command. Once this command has been executed, the progress of the rollout can be monitored via this command:

kubectl rollout status sts cjoc --namespace cloudbees

Wait for output of this form (the ID after ‘cjoc-’ will vary):

statefulset rolling update complete 1 pods at revision cjoc-59cc694b8b...


Once the ‘rolling update complete’ message is displayed, run this command to retrieve the initially generated admin password for the CloudBees Core instance:

kubectl exec cjoc-0 --namespace cloudbees -- cat /var/jenkins_home/secrets/initialAdminPassword

Save the value of this output, and navigate to the public URL of the CloudBees Jenkins Operations Center (CJOC): http://<lb-ingress-hostname>/cjoc. The <lb-ingress-hostname> placeholder again needs to be replaced with the value from the kubectl describe service command used previously. Logging in as ‘admin’ with the password revealed by the previous command should kick off the setup wizard for CJOC.

Unless a more permanent license has already been procured, request a trial license via the form within the wizard to get started.

Conclusion

The process outlined above creates a brand new Kubernetes cluster on VKE and installs the necessary components for CloudBees Core within that environment.

Our next blogs will outline the process for creating CI/CD pipelines within a Jenkins master and using a webhook to trigger a pipeline. Stay tuned!

Sign up for a free trial of CloudBees Core or learn more here.
