Introduction to Kubernetes, Part 1
Kubernetes manages container environments, automating many manual orchestration tasks and providing options to extend an environment's capabilities.
October 25, 2019
The adoption of container-based applications continues to increase across the industry as new and existing applications are built using cloud-native principles. Container orchestrators provide a control plane for managing container environments and alleviate many of the associated management challenges.
Google released Kubernetes in 2014 as an open-source container orchestration system drawing on its internal cluster manager, Borg. Since then, Kubernetes has become the dominant container orchestration platform, with a thriving community and large investments from many industry giants.
Open-source and vendor interest in the Kubernetes project has enabled deep integration with external services and infrastructure platforms through the development of plugins. Plugins extend the platform's functionality while maintaining a common frontend API for consumption.
Key terms
Pods: Kubernetes does not run or manage workloads as individual containers; instead, workloads are managed as an object called a 'Pod'. A pod contains one or more containers, and each pod has its own IP address, which is shared among the containers within the pod (a minimal example manifest appears after this list).
Master Nodes: A master is one of the two server roles within Kubernetes, providing the control plane functionality through an exposed API entry point. Masters ensure that the environment's current state matches the desired state. By default, pods are not placed on master nodes.
Worker Nodes: The scheduler service places pods onto worker nodes, where they run.
Container Runtime: A container runtime is required on both master and worker nodes. Container runtimes provide the environment in which containers execute. Several options are available, such as Docker, rkt, and runc.
Namespace: A virtual construct for dividing a cluster's resources into smaller logical partitions and for scoping access control.
Object: An object is the name given to an entity stored within the cluster's shared state database. An object represents both the desired state (a record of intent) and the current state of an entity; the Kubernetes system works to ensure that the current state matches the desired state.
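To make these terms concrete, the sketch below declares a namespace and a pod as Kubernetes objects. The names (team-a, example-pod), the container images, and the port are illustrative assumptions; the two containers in the pod share the pod's single IP address.

    # A namespace object that logically partitions the cluster.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a
    ---
    # A pod object: the desired state (record of intent) that Kubernetes
    # works to keep running. Both containers share the pod's IP address.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
      namespace: team-a
    spec:
      containers:
      - name: web                # main application container (image assumed)
        image: nginx:1.17
        ports:
        - containerPort: 80
      - name: log-sidecar        # helper container in the same pod
        image: busybox:1.31
        command: ["sh", "-c", "tail -f /dev/null"]

Applying a manifest like this (for example, with kubectl apply -f) records the desired state in the cluster's shared state database; the control plane then works to make the current state match it.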
Small container environments can be managed by hand without significant overhead, but manual management quickly becomes overwhelming as the number of containers and hosts increases. Container orchestration platforms, such as Kubernetes, manage container environments, automating many of the manual tasks and providing options to extend an environment's capabilities.
The list below highlights some of the container environment management features that Kubernetes provides:
Container Scheduling: The scheduler service places pods in a distributed manner across applicable worker nodes, and pods are removed from worker nodes when they are no longer required (see the Deployment sketch after this list).
Dynamic Networking: Networking is a complex challenge for container administrators. Pods require IP addressing and routing to function but may only exist for seconds. Kubernetes uses plugins to integrate with external networking platforms to dynamically configure the network state.
Services: Services logically group pods that match specified selection criteria. A service acts much like a load balancer, providing a frontend entry point to the functionality provided by the pods behind it (see the Service sketch after this list).
Health Checks: By default, traffic is sent to a pod once all of its containers have started. Health checks add two further types of validation: readiness and liveness. Readiness checks validate whether a pod is ready to receive traffic; a service will not send traffic to a pod that is not ready. Liveness checks validate whether a pod is still alive; a failed pod is removed and redeployed (see the probe sketch after this list).
Access Control: All API requests must be performed by an authenticated account, whether that account is a user, a service account, or another resource. Kubernetes uses a 'deny by default' approach to access control, so an account must be granted the required permissions before it can perform an API request (see the RBAC sketch after this list).
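As a sketch of scheduling, the hypothetical Deployment below asks for three replicas of a pod; the scheduler places each replica onto a suitable worker node, and the replicas are removed when the Deployment is scaled down or deleted. The name, labels, and image are assumptions.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-deployment
      namespace: team-a
    spec:
      replicas: 3                  # three pod replicas for the scheduler to place
      selector:
        matchLabels:
          app: example
      template:
        metadata:
          labels:
            app: example           # label used by the selector (and by services)
        spec:
          containers:
          - name: web
            image: nginx:1.17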
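The hypothetical Service below illustrates selector-based grouping: it provides a single frontend entry point and forwards traffic on port 80 to any pod carrying the label app: example (such as the replicas above). The name, label, and ports are assumptions.

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      namespace: team-a
    spec:
      selector:
        app: example               # groups all pods carrying this label
      ports:
      - port: 80                   # port exposed by the service
        targetPort: 80             # port the containers listen on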
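The container fragment below sketches both probe types inside a pod or Deployment spec; the /ready and /healthz paths, port, and timings are assumptions and would need to match whatever the application actually exposes.

    # Fragment of a pod template's container list (paths and timings assumed).
    containers:
    - name: web
      image: nginx:1.17
      readinessProbe:              # the pod receives service traffic only while this passes
        httpGet:
          path: /ready
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:               # the container is restarted if this keeps failing
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 20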
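The RBAC sketch below grants a hypothetical service account read-only access to pods in a single namespace, reflecting the deny-by-default model: without a binding like this, the account's API requests would be refused. The role, binding, and service account names are assumptions.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: team-a
    rules:
    - apiGroups: [""]              # "" refers to the core API group
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: team-a
    subjects:
    - kind: ServiceAccount
      name: example-sa             # hypothetical service account
      namespace: team-a
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io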
There are several common methods for consuming Kubernetes:
Platform as a Service (PaaS): Many cloud providers offer pre-configured Kubernetes environments as a service that can be consumed on demand. The underlying platform is managed by the provider, reducing management overhead and allowing the business to focus on development and processes. Configuration options are often limited with a PaaS solution; these limitations vary between providers and should be assessed in detail.
Vendor packaged solution: Many software vendors offer pre-packaged Kubernetes solutions that typically include features such as web UI management consoles, out-of-the-box integrations with other products from the same vendor, support contracts, and simplified management tasks such as upgrades and certificate management. Vendor solutions often trail the Kubernetes release cycle by a few weeks while the solution is validated against the latest release.
Do it yourself: Kubernetes is a free, open-source project, which means you can deploy and manage the solution yourself. The DIY approach offers the highest level of flexibility at the cost of management and engineering overhead. Kubernetes is a complex platform with many moving components, requiring strong skills across many technical and non-technical domains.
Keep an eye out for Part 2 of this article, which will run in a few weeks.