Deploying and Operationalizing Kubernetes with Pivotal Container Service (PKS)

Every business, and most organizations and agencies, is now at its core a software provider and developer.  Given the nature of doing business in today’s world, technology sits at the center of every interaction between companies and their customers, and between missions and their end users.

Even if an organization is not specifically in the business of writing software or providing IT services, it is all but certain that it will have to provide both in service of its mission and its users.

Users expect to be able to order, set up, consume, and decommission services over the Internet or other networks.

Because software development is now ubiquitous, large transformations have taken place, driving the modern software delivery pipeline down the path of containers.  Containers can be easily ported between environments, quickly created and destroyed, and carry a small footprint.

Containers provide further segmentation and isolation in much the same manner that virtual machines do: they take an operating system, divide it into isolated cells, and perform functions within each one.  Virtual machines run as separate processes on a hypervisor, much as containers run separately within an operating system.  These layers can be stacked upon one another in multiple ways and orders.

Most organizations and public clouds run containers on top of virtual machines, so the layering works something like this: the hypervisor runs on bare metal; the operating system runs on the hypervisor; the container runs on the operating system; and the application runs in the container.

Orchestration Solutions for Containers

There are several options for orchestrating containers.  The need to identify related containers as stacks for particular services and application instances, and to define valid operational states, is well recognized.  The job of a container orchestrator or scheduler is to maintain the desired operational state of all the containers in those stacks, deploy them on demand, provide self-healing, and spread load across cluster nodes or provide locality when required.

The de facto standard for container orchestration and scheduling is rapidly becoming Kubernetes.
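As a minimal sketch of how Kubernetes expresses that desired operational state, consider a Deployment manifest like the one below (the name and container image are placeholders, not from the original text):

```yaml
# Hypothetical example: a Deployment declaring a desired state of three
# replicas of an nginx-based web service. Kubernetes continuously
# reconciles actual state toward this declaration, recreating pods that
# fail (self-healing) and spreading them across cluster nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder name
spec:
  replicas: 3               # desired state: three running pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, the scheduler places the pods across available nodes; if a node fails, the controller recreates the missing replicas elsewhere, keeping the cluster converged on the declared state.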

For most organizations, their path on the container journey can be summarized in one of four ways:

  1. Haven’t started yet, don’t know about containers, don’t care – this is a very small pool of application development organizations.
  2. Have gotten their feet wet and tried some containerization, but haven’t fielded anything – most organizations have at least gotten to this point.
  3. Have standardized on a container solution but haven’t fully deployed it at scale – most organizations are arriving here, understanding they will probably use Kubernetes but not yet knowing how.
  4. Have fully deployed at scale and conquered the problem (for the most part) – this is also a somewhat small group in the grand scheme.

Operationalizing Kubernetes with Pivotal Container Service (PKS)

Any organization that has reached stage 3 or 4 in the list above has probably encountered at least one problem with operationalizing Kubernetes, if not several.  These problems might include:

  • Reliability
  • Deployment Scaling
  • Logging
  • Complexity of Deployment
  • Networking
  • Monitoring
  • Storage
  • Security

In addition to these pain points inherent in standing up and operationalizing Kubernetes, an organization must also concern itself with the underlying infrastructure and additional supporting components: the physical compute, storage, and networking.

Most organizations are already familiar with running vSphere for compute, and many are familiar with vSAN for storage. PKS layers on top of these components and manages the networking, Kubernetes, and container registry components for you.  This makes it much simpler to deploy enterprise-grade Kubernetes on infrastructure you already know how to operate.

The Simplest Method to Deploy Kubernetes at Scale in a Production Environment

PKS provides the simplest method to deploy production Kubernetes at scale.  It layers on top of infrastructure you already have and already know how to run, which makes it far more valuable because you can use tools and expertise already at your fingertips.  It also provides a supported commercial release of open source Kubernetes, coupled with value-add components that solve most of the problems inherent in designing and operationalizing a Kubernetes deployment.  On top of all of that, PKS plays nicely with many of the other VMware SDDC stack components to form a full-featured solution that other on-premises Kubernetes offerings can’t touch.