Amazon Alexa and VMware Cloud on AWS

The power of APIs
“Alexa, add one host to my SDDC” might sound like a voice command from a science-fiction starship, but it’s not.

With fully integrated APIs, the VMware Cloud on AWS interface can be driven by any software that can send an API call.

This article describes how to build a simple AWS Lambda function that lets an Amazon Echo device receive and respond to voice commands.


The architecture is quite simple and involves a voice command to the Echo device requesting some action or status. Alexa triggers a Lambda function that sends an API call to VMware Cloud on AWS and sends a log message to a Slack channel.

The VMware Cloud on AWS APIs are listed at

You need to have access rights and be authenticated to be able to see them.

To call the APIs you also need a refresh token from your SDDC organization and your SDDC and Org IDs.
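The refresh token is exchanged for a short-lived access token before any SDDC call is made. A minimal Python sketch of that exchange is below; the CSP endpoint path is our assumption based on the VMware Cloud Services console and should be checked against the current API documentation.

```python
import json
import urllib.parse
import urllib.request

# Assumed CSP endpoint that trades a refresh token for an access token;
# verify the exact path against the current VMware Cloud Services docs.
CSP_AUTH_URL = ("https://console.cloud.vmware.com/csp/gateway/"
                "am/api/auth/api-tokens/authorize")

def build_auth_request(refresh_token):
    """Build the POST request that exchanges the refresh token."""
    body = urllib.parse.urlencode({"refresh_token": refresh_token}).encode()
    return urllib.request.Request(CSP_AUTH_URL, data=body, method="POST")

def get_access_token(refresh_token):
    """Perform the exchange and return the bearer token used on API calls."""
    with urllib.request.urlopen(build_auth_request(refresh_token)) as resp:
        return json.loads(resp.read())["access_token"]
```

The returned token is then sent as an authorization header on every VMware Cloud on AWS API call, together with the Org and SDDC IDs in the URL path.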

Creating a Lambda Function
The skill we are creating is interactive and takes input parameters, such as the number of hosts to add or remove.

The steps to follow are:

  • Sign in to your AWS account
  • Create a Lambda function
  • Sign in to the Amazon developer portal
  • Create your skill
  • Test it with an Echo device, online simulator or the Echo Simulator at 
    Log in to your AWS console, choose Lambda, and create your function from scratch:

  • Give it a name
  • Use Python 3.6
  • Use the existing role “lambda_basic_execution” for the permissions
  • Click “Create”
  • On the next screen make sure to copy the Amazon Resource Name (ARN) of the function. We will need it later.

    Creating an Alexa Skill
    Log in to the Developer site, then select Alexa and Alexa Skills Kit.

    1.  Skill information
    Add a new skill and choose a name and invocation name.

    The invocation name is important: it is the command you will use to activate the skill.
    Leave Global Fields default and save.

    Scroll up on this screen and you will see the Application ID. Make a note of it; we will use it in our Lambda function.

    2. Interaction Model

    Now launch the skill builder.

    This is where we can teach Alexa our intents and how to recognize our requests.

    We also define slots such as NUMBERS that capture a value: for example, “add two hosts” or “add three hosts”, where “two” or “three” should be recognized as a variable.
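As an illustration, a slot-bearing intent could look like the fragment below. The intent name “AddHost” and slot name “Number” are our own examples, not names from the article’s intents.json; AMAZON.NUMBER is Alexa’s built-in numeric slot type.

```json
{
  "name": "AddHost",
  "slots": [
    { "name": "Number", "type": "AMAZON.NUMBER" }
  ],
  "samples": [
    "add {Number} host",
    "add {Number} hosts"
  ]
}
```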

    The easiest way to populate the intents is to upload the JSON file located at: 

    Intents are in the format:

    {
      "name": "SDDClist",
      "samples": [
        "list my organization SDDCs",
        "list my org",
        "list my organization",
        "my org",
        "my organization",
        "my data center"
      ]
    }
    The name is the event that will be launched in the Lambda function and that will execute code. The example above asks for the SDDC list in your Organization.

    A simple phrase like “list my organization” can be used but we can also say something like:

    “Alexa, ask my demo about my organization”

    In this sentence we have the invocation name “my demo” and the intent “my organization”.

    Once done, apply changes, save the model and build the model.

    3. Configure Service end point

    The next step is to link the Lambda function to the Alexa skill. To do that, we will use the previously saved Lambda ARN as the end point.

    Now we want to use Alexa skills as a trigger in our Lambda function.

    Let’s go back to the AWS Lambda console.

    Select Alexa Skills Kit as the trigger in the left panel, paste the Alexa Application ID, and save.
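Beyond configuring the trigger, it is common practice to also verify the Application ID inside the handler itself, so the function rejects requests coming from any other skill. This is a sketch under that assumption; `EXPECTED_APP_ID` is a placeholder for the Application ID noted earlier.

```python
# Placeholder for the Application ID copied from the developer portal.
EXPECTED_APP_ID = "amzn1.ask.skill.xxxx"

def verify_application(event):
    """Return True only if the request was sent by our own Alexa skill.

    The applicationId path follows the Alexa custom skill request JSON.
    """
    app_id = event["session"]["application"]["applicationId"]
    return app_id == EXPECTED_APP_ID
```

If the check fails, the handler can simply raise an error before doing any work.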

    The next step is to upload the code.
    The complete code is located at: 

    4. Test

    That’s it. You can now test the application using a real Echo device, online simulator or the Echo Simulator at

    Examples of interactions

    Enable the function by saying:
    “Alexa, open my demo”
        Welcome to the VMware Cloud on AWS demo… Ask me.
    “List my organization”
        This is the list of SDDCs in your organization: Adam-SDDC, Nico-SDDC, Gilles-SDDC, Kevin-SDDC

    And so on according to the intents.

    How did we build the Alexa skill?

    Code structure

  • config.ini: File containing the Auth Token, Org ID, SDDC ID and Slack Channel URL
  • intents.json: File containing the JSON representation of Alexa voice commands and responses.
  • The Lambda code and API calls. The file name is IMPORTANT, as it determines how the code will be executed
  • Compressed directory with all “imports” and code. This is the file we upload to the Lambda configuration
    Lambda Handler

    The main handler contains four events:

  • New session started: print log only
  • Session launch: speak “welcome” response
  • Session event: determine event and act upon it
  • Session end: speak “Good bye” response and close
    Event example: Get SDDC Status

    On recognition of the voice command “get my SDDC status”, the function get_sddc_status() is called. This function in turn calls get_sddc_data(), which makes the API call to VMware Cloud on AWS so our SDDC data variables can be populated.

    Then build_speechlet_response is filled in and spoken by Alexa.
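The flow above can be sketched as follows. The names lambda_handler, get_sddc_status, get_sddc_data and build_speechlet_response come from the article; the intent name “SDDCstatus”, the stubbed SDDC data and the exact response wording are our assumptions, and the real get_sddc_data would call the VMware Cloud on AWS API instead of returning canned values.

```python
def build_speechlet_response(text, end_session=False):
    """Wrap text in the JSON envelope Alexa expects to speak back."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def get_sddc_data():
    # Stub: in the real skill this performs the VMware Cloud on AWS API call.
    return {"name": "Nico-SDDC", "sddc_state": "READY"}

def get_sddc_status():
    """Build the sentence Alexa speaks for 'get my SDDC status'."""
    sddc = get_sddc_data()
    return "Your SDDC %s is %s" % (sddc["name"], sddc["sddc_state"])

def lambda_handler(event, context):
    """Route the four Alexa event types described above."""
    request = event["request"]
    if event["session"].get("new"):
        print("New session started")  # new session: log only
    if request["type"] == "LaunchRequest":
        return build_speechlet_response(
            "Welcome to the VMware Cloud on AWS demo. Ask me.")
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "SDDCstatus":
            return build_speechlet_response(get_sddc_status())
        return build_speechlet_response("Sorry, I did not get that.")
    if request["type"] == "SessionEndedRequest":
        return build_speechlet_response("Good bye", end_session=True)
```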

    Alexa and open microphone security
    For security reasons, the microphone on Echo devices cannot stay open for more than 8 seconds. While the session is running, you have 8 seconds to ask Alexa something. If you don’t speak, a “re-prompt” built into the “speechlet” will prompt you again.

    The total is then 16 seconds; if you don’t speak during this time, Amazon Echo will close the session.
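In the response JSON, the re-prompt lives in a dedicated "reprompt" field next to the main speech output. A minimal sketch, with field names following the Alexa custom skill response format (the helper name is ours):

```python
def build_speechlet_with_reprompt(text, reprompt_text):
    """Response that speaks `text`, then, after about 8 silent seconds,
    speaks `reprompt_text` and listens for roughly 8 more seconds."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "reprompt": {
                "outputSpeech": {"type": "PlainText", "text": reprompt_text}
            },
            "shouldEndSession": False,
        },
    }
```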

    A workaround is to create a “WAIT” or “HOLD-ON” event that buys time, because adding or removing hosts takes more than 8 seconds in ZEROCLOUD (around 10 minutes in a real environment).
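Such a WAIT event only needs to reply briefly and keep the session open, which resets Alexa’s listening window. This is an illustrative sketch; the handler name and the spoken text are our own, not from the article’s code.

```python
def on_wait_intent():
    """Hypothetical WAIT / HOLD-ON intent: answer briefly and keep the
    session open so the user gets a fresh 8-second listening window."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText",
                             "text": "OK, still working on it."},
            "shouldEndSession": False,
        },
    }
```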

    If you are using the online test simulator, this is not an issue.

