Installing and Using vSphere Integrated OpenStack (VIO) with Kubernetes


How to install vSphere Integrated OpenStack (VIO) with Kubernetes Without OpenStack 

Ironically, the product is named vSphere Integrated OpenStack (VIO) with Kubernetes; however, you don’t actually need VIO to install the product.  As it turns out, you can install VIO with Kubernetes directly into vSphere and get up and running with your first Kubernetes cluster in about an hour.

This article will show you how to obtain the VIO with Kubernetes 4.0 virtual appliance, import it into your vCenter environment, add an SDDC provider, and get up and running with your first Kubernetes cluster.  Additionally, we’ll show you some of the options you have for managing the cluster once it’s deployed.

What is VIO with Kubernetes?

First, it might be helpful to discuss what VIO is.  VIO, or vSphere Integrated OpenStack, is VMware’s vendorized – if that’s a word – version of OpenStack.  It follows the open source OpenStack distribution schedule fairly closely, lagging about three to four months behind official OpenStack releases.

VIO comes to the table with vSphere networking (NSX) and storage drivers that enable vSphere shops to run OpenStack on top of their enterprise infrastructure utilizing the expertise they already have on staff for enterprise infrastructure management, while enabling the API-driven programmatic consumption model that OpenStack is famous for.  VIO registers as a vCenter solution and allows vSphere administrators to deploy OpenStack clusters from directly within the vSphere Web Client.

Version 3.x and prior came with vSphere Enterprise Plus editions for free, with support costing extra.  Starting with 4.x and later, VIO is now a paid SKU completely unto itself, which makes some sense given the extra development time and effort going into the product.

Which brings us to VIO with Kubernetes.  VIO with Kubernetes is essentially an add-on module intended to work with VIO.  It is a separate management interface that enables the creation, management, and deletion of Kubernetes clusters on-demand.  All of the infrastructure automation required to provision nodes, networking, and storage for Kubernetes is self-contained within the VIO with Kubernetes management server.  Thus, when an administrator creates a new cluster, all of the hard work is accomplished for you, which brings a lot of value to the table for infrastructure teams that may not know the ins and outs of Kubernetes installation.

Under the hood, the cluster installation steps are accomplished utilizing Ansible to configure the nodes with predefined setups.  You can configure Cloud Providers which are the infrastructure interfaces used by VIO with Kubernetes to create the VM instances that will act as masters, workers, and routers.  When integrated with NSX, VIO with Kubernetes can create virtual networks on the fly, placing Kubernetes nodes behind a Kubernetes router, which filters traffic to the containers hosted on the cluster. 

Once the cluster is built, dev teams can manage Kubernetes clusters using the kubectl commands and APIs they’re most likely already familiar with.

Download and Deploy the VIO with Kubernetes OVA

As is now pretty routine with VMware products, VIO with Kubernetes can be downloaded from the VMware downloads site using an account which is entitled for VIO 4.x.  The OVA filename under the initial 4.0 release is VMware-OpenStack-for-Kubernetes-  The OVA can be found by clicking “View Products & Downloads,” then clicking “All Products,” and finally clicking the “View Downloads” link next to “VMware Integrated OpenStack,” which is located under the “Infrastructure & Operations Management” heading.

VIO with Kubernetes OVA download

Ensure the version is set to 4.0, and click the “Go To Downloads” link located next to the VMware Integrated OpenStack 4.0.0 product.  For this walkthrough, we aren’t actually deploying OpenStack, so it does not need to be downloaded or installed.  The virtual appliance we’re looking for is called “VMware Integrated OpenStack for Kubernetes Virtual Appliance 4.0.”  Locate that file and click the “Download” button.

Once the download is complete, you’re ready to import the OVA.  If you like scripting, the following OVF Tool command can be used to deploy the appliance (broken across lines with cmd.exe ^ continuations for readability):

"c:\Program Files\VMware\VMware OVF Tool\ovftool.exe" -n=viok01 ^
  --X:enableHiddenProperties ^
  --X:logFile='$logfile' ^
  --X:logLevel=verbose ^
  --allowExtraConfig ^
  -ds=NFS-01 ^
  --net:"Management Network"=VMPublic ^
  --powerOffTarget --powerOn --overwrite ^
  --prop:root_pwd="P@ssword1" ^
  --prop:vami.ip0.VMware_Integrated_OpenStack_with_Kubernetes_Appliance="" ^
  --prop:vami.netmask0.VMware_Integrated_OpenStack_with_Kubernetes_Appliance="" ^
  --prop:vami.gateway.VMware_Integrated_OpenStack_with_Kubernetes_Appliance="" ^
  --prop:vami.DNS.VMware_Integrated_OpenStack_with_Kubernetes_Appliance="" ^
  --prop:vami.domain.VMware_Integrated_OpenStack_with_Kubernetes_Appliance="lab.local" ^
  --prop:vami.searchpath.VMware_Integrated_OpenStack_with_Kubernetes_Appliance="lab.local" ^
  "C:\Users\User\Downloads\VMware-OpenStack-for-Kubernetes-" ^
  "vi://user@lab.local:P@ssword1@"

In the above example, the values shown (VM name, datastore, port group, passwords, IP settings, file paths, and the vCenter target) should be changed to match your environment.  The properties being passed should be self-explanatory.

If deploying the OVA by hand, you can utilize the following steps:

  1. Log in to the vSphere Web Client with an account that has administrative privileges to import OVAs.
  2. Click a cluster or host system object in the inventory.
  3. Right-click the cluster or host and click “Deploy OVF Template…”
  4. Browse for the VMware-OpenStack-for-Kubernetes- OVA file in the OVF deployment wizard
    Deploy OVF Step 1 - Choose File
  5. Click the “Next” button
  6. Rename the virtual machine that will be created by the import and select a VM folder location
    Deploy OVF Step 2 - Choose name and location
  7. Click the “Next” button
  8. Select the cluster or compute resource in which to place the VM
    Deploy OVF Step 3 - Select compute resource
  9. Click the “Next” button
  10. Review the details, and click the “Next” button
    Deploy OVF Step 4 - Review details
  11. Accept the EULA by clicking the “Accept” button.
    Deploy OVF Step 5 - Accept EULA
  12. Click the “Next” button
  13. Select the datastore on which to place the VM, and change the virtual disk format to “Thin provision”
    Deploy OVF Step 6 - Select storage
  14. Click the “Next” button
  15. Select the port group to which the VIO with Kubernetes VM’s “Management Network” NIC will be attached.
    Deploy OVF Step 7 - Select networks
  16. Click the “Next” button
  17. Enter valid values for the IP address, gateway, subnet mask, domain, domain search path, and root password
    Deploy OVF Step 8 - Customize template
  18. Click the “Next” button
  19. Review the OVF deployment settings, and click the “Finish” button
    Deploy OVF Step 9 - Review & finish
  20. Monitor the deployment status in the vSphere Web Client Recent tasks pane
    Deploy OVF - Recent tasks in vCenter
  21. Wait for the ready screen to show in the VM console
    VIO with Kubernetes ready screen at console

Configuring VIO with Kubernetes

Once the VM ready screen is up, you can open a browser and point it to https://[ip-address-of-VIO-w-k8s-appliance].  The login screen of the VIO with Kubernetes Management UI is shown.

VIO with Kubernetes 4.0.0 Login Screen

Enter the username root and the password given at the OVF deployment step.

Create a Cloud Provider

Upon first login, there are no configured cloud providers or clusters.  In order to provision a Kubernetes cluster, you will first need to create the Cloud Provider.  In this example, we have omitted the OpenStack infrastructure, and instead opted to use the SDDC infrastructure option – that is to say, we will be deploying directly on top of vSphere.

Use the following steps to create an SDDC Cloud Provider (Note: you can only have one SDDC Cloud Provider per VIO with Kubernetes instance; therefore, if you want to manage multiple vCenters, you would need to provision multiple VIO with Kubernetes management appliances).

  1. Click the “Cloud Providers” link located on the left of the management console.
  2. Click the “Deploy New Provider” button
  3. In the first step, it’s possible to use a JSON file to provision a provider. We’re going to skip this step, but it’s useful to save your provider configuration at the end as JSON in case there are failures.  Then you can easily reuse the configuration to try again.
    Deploy Infrastructure Provider - Step 1
  4. Click the “Next” button
  5. Enter a provider name, either the FQDN of the vCenter server you’re connecting to, or perhaps just “vCenter.” Select “SDDC” as the Provider type.
    Deploy Infrastructure Provider Step 2 - Provider name and type
  6. Click the “Next” button
  7. Enter the connection details for the vCenter server, including the FQDN or IP address, a service account username, and the password for it. If you are using self-signed certs, check the box for “Ignore the vCenter Server certificate validation” option
    Deploy Infrastructure Provider - Step 3 - Authentication
  8. Click the “Next” button
  9. Select an available ESXi cluster on which to deploy the Kubernetes cluster node VMs later on. Initially, VIO with Kubernetes will deploy a template VM to this cluster.
    Deploy Infrastructure Provider - Step 4 - Add vSphere Cluster
  10. Click the “Next” button
  11. Select the datastores you would like to use as storage for the Kubernetes cluster nodes when they’re deployed
    Deploy Infrastructure Provider - Step 5 - Datastores
  12. Click the “Next” button
  13. Select the “NSX-V Networking” network type for the deployment. Distributed switch would be for a basic vSphere deployment without NSX.  NSX-V should be used for SDDC provider types, and NSX-T can be used for OpenStack providers.  In this example, we’ll show NSX-V deployment.
    Deploy Infrastructure Provider - Step 6 - Networking
  14. Enter the NSX manager FQDN or IP address, username and password.
  15. If using self-signed certificates, check the “Ignore the NSX-V SSL certificate validation” checkbox.
  16. Select the transport zone to use (usually only one per deployment anyway)
  17. Select the Edge resource pool and datastore – these will be used to deploy NSX edges to use as routers
  18. Select the virtual distributed switch on which to create NSX virtual wires on-demand when Kubernetes clusters are created.
  19. Click the “Next” button
  20. Select the management network to use as the outside interface for the Kubernetes clusters. NSX edges will be connected to this network.
    Deploy Infrastructure Provider - Step 7 - Management Network
  21. Enter the network CIDR for the management network
  22. Enter a static IP allocation range, which is used to hand out IPs to NSX edges that translate for Kubernetes cluster nodes. Each cluster deployment will use a minimum of 3 of these IP addresses, and you cannot modify this without destroying all clusters and the SDDC provider, so it is essential to size it appropriately up front.
  23. Enter the network’s gateway and the DNS server to apply to networked interfaces
  24. Click the “Next” button
  25. Select the “Local Admin User” authentication source. Alternately, you can connect to an Active Directory LDAP source, if you prefer
    Deploy Infrastructure Provider - Step 8 - Authentication Source
  26. Enter a cluster admin username to be created, and an associated password.
  27. Click the “Next” button
  28. Verify the information in the Configuration Summary screen
    Deploy Infrastructure Provider - Step 9 - Summary
  29. Click the “Download Provider JSON” button to save a copy of the configuration to disk
  30. Click the “Finish” button to kick off the SDDC provider creation task

The SDDC Provider creation process can take 15 or 20 minutes to fully complete.
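The static IP allocation range from the networking step above is worth sizing before you build the provider, since it cannot be changed later without destroying all clusters and the provider itself.  A rough sizing sketch, using the three-IP-per-cluster minimum stated above (the cluster count and headroom values below are hypothetical):

```shell
# Each Kubernetes cluster consumes a minimum of 3 edge IPs from the static range.
CLUSTERS=5          # hypothetical number of clusters you expect to run
IPS_PER_CLUSTER=3   # minimum per cluster, per the provider wizard
HEADROOM=2          # hypothetical spare addresses for rebuilds

RANGE_SIZE=$((CLUSTERS * IPS_PER_CLUSTER + HEADROOM))
echo "$RANGE_SIZE"   # 17
```

Err on the large side; unused addresses in the range cost nothing, but an exhausted range means rebuilding the provider.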

Create a Kubernetes Cluster

Once the SDDC Provider has been created, you can then deploy Kubernetes clusters.

Use the following steps to deploy a Kubernetes cluster:

  1. Click the “Clusters” link located to the left of the management portal
  2. Click the “Deploy New Cluster” button or the “New” link to start the new cluster deployment task
  3. Similar to the new provider section, you can upload a JSON to drive the configuration, and you will have the option on the summary page to save the configuration to disk. For now, just click the “Next” button
    Deploy Cluster - Step 1 - Intro
  4. Select the SDDC provider you previously created
    Deploy Cluster - Step 2 - Provider Selection
  5. Click the “Next” button
  6. Check the “Use default node profile” checkbox
    Deploy Cluster - Step 3 - Node Profile Selection
  7. Click the “Next” button
  8. Enter a cluster name, the number of master and worker nodes, and the DNS server for this cluster
    Deploy Cluster - Step 4 - Cluster Data
  9. Select either an exclusive or shared cluster type. Exclusive clusters are intended to be wholly owned by a single tenant, while a shared cluster type binds users to a namespace to achieve multitenancy on the Kubernetes cluster.
  10. Click the “Next” button
  11. Select the users to enable on the cluster. If you chose a shared cluster, you can also create a namespace in this step and add the users to the namespace
    Deploy Cluster - Step 5 - User & Group
  12. Click the “Next” button
  13. Verify the settings on the Kubernetes cluster deployment summary page
    Deploy Cluster - Step 6 - Summary
  14. Click the “Download Cluster JSON” button to save the cluster configuration to disk
  15. Click the “Finish” button to kick off the cluster deployment
  16. Wait while the cluster is shown as “Creating” (noted under hwlkube02 below)
    Creating Kubernetes Cluster
  17. Within the vSphere environment, you will see several tasks kicking off, including cloning of VMs and creation of NSX virtual wires
    vCenter Recent Task - Kubernetes Cluster Creation
  18. A Kubernetes router will also be deployed
    Kubernetes Router Deployment

Wait for the task to complete and the cluster to show as “Active” in the management interface.

Scaling Clusters After Deployment

It is possible to scale out the number of worker nodes after a cluster has been deployed.  Click the ellipsis next to the cluster name in the Clusters screen, and select the “Scale Cluster” option.

Scale Cluster Menu Option

Enter the new number of nodes, and click the “OK” button.

Scale Cluster Dialog

Wait while the cluster is scaled.  New VM(s) are created in vCenter, and the cluster shows as “Updating” in the management portal.

Cluster Deployment - vCenter Recent Tasks

Cluster Creating Status

Once the operation is complete, the new worker nodes are available for use.
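You can also confirm the new workers from the command line with kubectl get nodes.  As a sketch of checking that output for Ready workers (the node names, ages, and versions below are illustrative, not from a real cluster):

```shell
# Illustrative `kubectl get nodes` output after scaling from 1 to 2 workers.
NODES='NAME       STATUS   AGE   VERSION
master-0   Ready    2h    v1.7.4
worker-0   Ready    2h    v1.7.4
worker-1   Ready    5m    v1.7.4'

# Count worker rows that report a Ready status.
WORKER_COUNT=$(printf '%s\n' "$NODES" | awk '/^worker/ && $2 == "Ready" {n++} END {print n+0}')
echo "$WORKER_COUNT"   # 2
```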

Using The Kubernetes Cluster

Now that we’ve finished the deployment of a new cluster, it’s time to test it out.

Installing kubectl on Windows

The CLI of choice for interacting with Kubernetes is kubectl.  To install it on Windows, use the Chocolatey package manager.  To install Chocolatey, open a PowerShell prompt running as administrator and execute the following command:

Set-ExecutionPolicy Bypass; iex ((New-Object System.Net.WebClient).DownloadString(''))

Once Chocolatey is installed, use it to install kubectl using the following command:

choco install kubernetes-cli

After that’s complete, you can run kubectl from the PowerShell session you’re already in.  The first step is to configure your kubectl settings.  This is done using the kubectl config command set.  The following commands will get you going with your new cluster:

kubectl config set-cluster demo1 --server=https://[Cluster-IP-Address] --insecure-skip-tls-verify
kubectl config set-credentials kubeadmin --username=kubeadmin --password=[PASSWORD]
kubectl config set-context demo1 --cluster=demo1 --namespace=default --user=kubeadmin
kubectl config use-context demo1
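The four commands above simply write entries into your kubeconfig file (~/.kube/config by default).  The result looks roughly like the following sketch, using the same placeholder cluster address and credentials as above:

```yaml
# Approximate kubeconfig produced by the kubectl config commands above.
apiVersion: v1
kind: Config
clusters:
- name: demo1
  cluster:
    server: https://[Cluster-IP-Address]
    insecure-skip-tls-verify: true
users:
- name: kubeadmin
  user:
    username: kubeadmin
    password: "[PASSWORD]"
contexts:
- name: demo1
  context:
    cluster: demo1
    namespace: default
    user: kubeadmin
current-context: demo1
```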

Be sure to substitute the correct IP address and password for the user account (and the username itself, if you chose a different one during cluster creation).

Test the config using the following command:

kubectl get pods

It should return no error and no pods.

Now you can run a simple vanilla nginx container to test the installation:

kubectl run nginx2 --image=nginx --port=80

Expose the deployment so you can hit the container from the outside:

kubectl expose deployment nginx2 --type=NodePort --name=nginx2

Enumerate the services to see the port mapping:

kubectl get services

View the nginx2 listing for the port mapped to port 80.  Open a web browser, pointing to the Kubernetes cluster management IP address on the mapped port and verify that the default nginx start page is showing.
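The mapped NodePort is the high-numbered port after the colon in the PORT(S) column.  As a sketch of pulling it out of the service listing (the sample output below is illustrative; your port number will differ):

```shell
# Illustrative `kubectl get services` output for the nginx2 service.
SVC_OUTPUT='NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
nginx2    10.0.0.42    <nodes>       80:31847/TCP   1m'

# Extract the NodePort (the number after the colon) from the nginx2 row.
NODE_PORT=$(printf '%s\n' "$SVC_OUTPUT" | awk '/^nginx2/ {split($4, p, "[:/]"); print p[2]}')
echo "$NODE_PORT"   # 31847
```

You would then browse to http://[cluster-management-IP]:[NodePort] to reach the container.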

Accessing the Kubernetes Web Dashboard

You can also manage the Kubernetes cluster using the dashboard.  It is accessible via HTTPS in a web browser at the Kubernetes cluster management IP address.  You can also reach it via a link from the VIO with Kubernetes management portal.  Simply click the Kubernetes cluster object to view extended properties, then click the “Dashboard” link in the top right of the page, and the Kubernetes cluster dashboard will be displayed.
