How Kubernetes Provides Networking and Storage to Applications

Compute, storage and network are the foundations of any infrastructure service. In Kubernetes, nodes are the compute building block, and they supply the foundational network and storage resources to the pods running in the cluster. Those network and storage services are delivered by software-defined, container-native plugins designed for Kubernetes.
The network component enables pod-to-pod, node-to-pod, pod-to-service, and external client-to-service communication. Kubernetes follows a plugin model for implementing networking. Kubenet is the default network plugin and is simple to configure. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.
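To make the pod-to-service and external client paths concrete, here is a minimal sketch (all names, images and the LoadBalancer type are illustrative assumptions, not taken from this article): a Deployment runs a set of web pods, and a Service in front of them provides a stable in-cluster address and, through the cloud provider, an external endpoint.

```yaml
# Illustrative only: a Deployment of web pods and a Service routing traffic to them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer        # exposes the service to external clients; ClusterIP covers in-cluster traffic
  selector:
    app: web                # routes to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```

Other pods reach these pods through the service's cluster IP or DNS name, while external clients arrive through the load balancer provisioned by the cloud provider.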

Kubernetes can support a host of plugins based on the Container Network Interface (CNI) specification, which defines how containers are connected to the network and how network resources are released when a container is deleted. There are many implementations of CNI, including Calico, Cilium, Contiv, Weave Net and more. Virtual networking available in public clouds is also supported through the CNI specification, which makes it possible to extend cloud network topologies and subnets to Kubernetes clusters.
Some CNI-compliant network plugins, such as Calico, implement network policies that isolate pods and enforce strict routing rules, bringing firewall-like controls to the pods and namespaces of a Kubernetes cluster.
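As a hedged illustration of such a policy (the labels and port are assumptions, and the rules are only enforced when a policy-capable plugin such as Calico is installed), the NetworkPolicy below isolates database pods so that only front-end pods may reach them:

```yaml
# Illustrative NetworkPolicy: deny all ingress to "db" pods except traffic from "frontend" pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db               # the pods being isolated
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only these pods may connect
    ports:
    - protocol: TCP
      port: 5432
```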
Kubernetes Storage
Persistent storage is exposed to Kubernetes via persistent volumes, and pods consume those volumes through persistent volume claims. Storage administrators provision storage by creating persistent volumes from existing network-attached storage (NAS), storage area network (SAN), direct-attached storage (DAS), solid-state drives (SSDs), non-volatile memory express (NVMe) or flash disk arrays. Developers and DevOps teams then request a portion of that storage through the persistent volume claims associated with their pods.
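For example, here is a minimal sketch of that hand-off (the NFS server, sizes and names are illustrative): an administrator publishes an NFS-backed persistent volume, and a claim then requests a portion of that capacity on behalf of a pod.

```yaml
# Illustrative: an administrator-provisioned NFS volume and a claim against it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-001           # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com  # hypothetical NFS server
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```

A pod then references the claim by name in a persistentVolumeClaim volume, so it never deals with the NAS details directly.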
Kubernetes comes with storage primitives to expose storage from existing nodes. One such primitive is the volume type, which makes the underlying storage accessible to pods. Examples of volume types include emptyDir and hostPath, each serving a specific use case: emptyDir provides scratch space, while hostPath makes local directories on the node available to pods. Because both are tightly coupled to a single node, they offer neither high availability nor fault tolerance. Overlay storage layers pool storage volumes from block devices, NAS and SAN to expose external storage to Kubernetes objects.
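A short sketch of both volume types (the image, command and paths are illustrative):

```yaml
# Illustrative pod using emptyDir for scratch space and hostPath for node-local data.
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
    - name: node-logs
      mountPath: /var/log/host
  volumes:
  - name: scratch
    emptyDir: {}             # exists only while the pod runs on that node
  - name: node-logs
    hostPath:
      path: /var/log         # tied to the node, hence no fault tolerance
```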
To offer high availability and container-native storage capabilities, Kubernetes introduced plugins that let storage vendors expose their platforms to containerized workloads. Block storage from public cloud providers, distributed file systems based on NFS and GlusterFS, and a few commercial storage platforms have plugins included in the open source upstream distribution of Kubernetes. Storage administrators create a storage class for each type of storage engine based on its performance characteristics. Persistent volumes and claims can then be created from these storage classes for different types of workloads. For example, a relational database management system (RDBMS) may be associated with a storage class that delivers higher input/output operations per second (IOPS), while a content management system (CMS) may target a distributed storage engine through a different storage class.
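As a hedged example (the class names are invented, and the parameters shown apply to the in-tree AWS EBS provisioner; other provisioners take different parameters), two storage classes might model those tiers:

```yaml
# Illustrative storage classes: a high-IOPS tier for an RDBMS and a standard tier for a CMS.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-iops            # hypothetical name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1                  # provisioned-IOPS volumes
  iopsPerGB: "50"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard             # hypothetical name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                  # general-purpose volumes
```

The RDBMS claim would then set storageClassName: fast-iops, while the CMS claim would point at standard.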
[Figure: Overlay Storage of Kubernetes: Exposing Storage to Pods and Containers. Source: Janakiram MSV]
Similar to CNI, the Kubernetes community has defined a specification for storage, the Container Storage Interface (CSI), which encourages a standard, portable approach to implementing and consuming storage services in containerized workloads.
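With CSI, a storage class simply names the vendor's CSI driver as its provisioner, and consumption through persistent volume claims stays the same. A brief sketch, assuming the AWS EBS CSI driver is installed (the class name and parameters are illustrative):

```yaml
# Illustrative CSI-backed storage class; the provisioner is the name registered by the installed CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gp3                # hypothetical name
provisioner: ebs.csi.aws.com   # assumed CSI driver
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```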
A Lightweight Network Stack Built to Scale
With its lineage in Borg, Kubernetes is designed for hyperscale workloads. Its modern architecture ensures optimal utilization of infrastructure resources. Additional worker nodes can be easily added to an existing cluster with almost no change to the configuration. Workloads will be able to immediately take advantage of the CPU, memory and storage resources of new nodes.
The idea of grouping a related set of containers together as a pod, and treating that pod as the unit of deployment and scaling, results in better performance. For example, co-locating a web server and a cache container in the same pod reduces latency and improves performance. Containers within a pod share the same execution context, enabling them to use interprocess communication, which reduces overhead.
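A minimal sketch of that co-location (the images are illustrative): both containers run in one pod and share its network namespace, so the web server reaches the cache at localhost.

```yaml
# Illustrative pod co-locating a web server with a cache.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache       # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: cache
    image: redis:7           # reachable from the web container at localhost:6379
    ports:
    - containerPort: 6379
```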
Pods that belong to the same ReplicaSet and deployment scale rapidly; it takes just a few seconds to scale a deployment to hundreds of pods. Pods are scheduled onto nodes based on resource availability and the desired state of the configuration. By configuring a Horizontal Pod Autoscaler (HPA), Kubernetes can automatically scale a deployment out and back in.
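For instance, a hedged sketch of an HPA (the target deployment, replica bounds and CPU threshold are illustrative):

```yaml
# Illustrative HPA scaling the "web" deployment on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # the deployment to scale
  minReplicas: 3
  maxReplicas: 100
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```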

When running in elastic infrastructure environments, Kubernetes can use the Cluster Autoscaler to add nodes to, and remove nodes from, the cluster. Combined with the HPA, this technique can efficiently manage dynamic autoscaling of both the workload and the infrastructure.
The lightweight networking stack and service discovery of Kubernetes are designed for scale. They can handle tens of thousands of endpoints exposed by services for internal and external consumption.
The Kubernetes ecosystem and community continue to innovate to make the platform suitable for hyperscale workloads.
