Personal cheat sheet on Kubernetes (K8s)


1 Kubernetes Theory

Kubernetes is an orchestrator for deploying containerized apps. Said another way, Kubernetes is a prevalent open-source system for automating the deployment, scaling, and management of containerized applications.

The most common containers are implemented using Docker, so first a review of Docker containers:

1.1 Docker Image Files

Where Docker stores definitions of isolated operating systems. They are a recipe for what an operating system needs to run, and contain things like

  • install ubuntu
  • install apache

1.2 Docker Containers

They are instances of images. So, an image is like a class, and a container is like a particular instance of that class. These are the objects you will be dealing with most. You can also think of docker images as golden images that are then used to create docker containers.

They communicate with each other as if they were real servers on a network, so via TCP/IP or UDP typically.

Why containers, or container apps, or containerized applications? A container is a packaging mechanism for code that contains the application code itself along with all the dependencies (down to the version) of that application. Therefore containers can run on any platform, as all the requirements are included/packaged together. So they are easier to deploy, and easy to migrate from one platform to another.

Large monolithic applications are increasingly giving way to micro-services that interact over the network.

This also allows one to stay current by upgrading individual microservices easily and independently of one another. The changes to the system are small, but many. Kind of achieves continual improvement…

But, it creates a lot of fragmentation / container sprawl. The management of this sprawl needs to be addressed with some container orchestration.

1.2.1 Kubernetes (Manage your container sprawl)

Kubernetes is an open source orchestrator for deploying containerized applications, originally developed by Google. It is the de facto standard for container orchestration. It tells your system:

  1. what components comprise your app, and
  2. how they interact between themselves over the network (what ports, what addresses, etc)
  3. how the application should behave at runtime, for instance how many replicas are needed, and how it scales.
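These three declarations map naturally onto a Kubernetes manifest. A minimal sketch, with the app name, image, and port all invented for illustration:

```yaml
# app.yaml -- hypothetical Deployment declaring what runs and how many replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # a component of your app
spec:
  replicas: 3               # how many copies should run
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080         # how it is reached on the network
```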

Kubernetes has become a tool to develop, deploy and maintain cloud native distributed apps.

From "Kubernetes: Up and Running: Dive into the Future of Infrastructure": "Reliable, scalable distributed systems." More and more services are delivered over the network via APIs. These APIs are often delivered by a distributed system, the various pieces that implement the API running on different machines, connected via the network and coordinating their actions via network communication.

Because we rely on these APIs increasingly for all aspects of our daily lives (e.g., finding directions to the nearest hospital), these systems must be highly reliable. They cannot fail, even if a part of the system crashes or otherwise fails. Likewise, they must maintain availability even during software rollouts or other maintenance events.

Finally, because more and more of the world is coming online and using such services, they must be highly scalable so that they can grow their capacity to keep up with ever-increasing usage without radical redesign of the distributed system that implements the services.

2 Kubernetes overview

Modern apps are distributed, in containers, in hybrid-clouds, that contain microservices.

Container Orchestration:

  1. deploying
  2. Scheduling
  3. Scaling
  4. Updating
  5. ? missed this.

Kubernetes is hard. Experts are rare. 96% of organizations can't manage Kubernetes on their own. For that reason, cloud providers offer Kubernetes as a managed service, or KaaS. (GCP GKE, AWS EKS, and Azure AKS are the main ones.)

KaaS can run on:

  1. Microsoft Azure via the Azure Kubernetes Service (AKS).
  2. Google Cloud Platform via the Google Kubernetes Engine (GCP GKE).
  3. Amazon AWS via the Elastic Kubernetes Service (AWS EKS).

DKS has a GUI called "Docker Enterprise Universal Control Plane"

2.1 Benefits of Kubernetes

  • Islands of compute
  • Provides automated scheduling, ensuring that components are running, automatic restarting or expanding of containers, automated roll-out of new app versions, as well as roll-back of these upgrades.
  • Bin-packing (packing more workloads onto a single host to get better utilization of compute resources)

2.2 Why do we need security for Kubernetes

Initially Kubernetes was a tool for stateless workloads, stateless containers, that did not need protection.

Now however many containers have persistent volumes and state is maintained on those. Kubernetes needs to keep track of what volumes store persistent data, and Kubernetes then needs to ensure that these volumes are protected and secured.

More workloads moving into production means compliance with security regulations, and management of the type of security needed for these different containerized workloads.

Applications developers like to move quickly into production, however operations security managers need to see that the new versions are secure.

As more need for data protection arises, you need more security on the containers.

2.3 What needs protection (Kubernetes)

Kubernetes itself needs protection. For instance etcd, Kubernetes' own database, keeps info on the application state of the containers. It needs protection.

Although stateless workloads on a container can be very simply killed and redeployed, this is only a subset of applications/workloads.

Configuration data needs protection.

Persistent Volumes. Kubernetes has done a good job at abstracting storage that is cloud agnostic. You may change the storage class for an application but the way you do backups and protection would be identical whether you are using Google Cloud, Azure, AWS or other cloud providers, or private on prem clouds.
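The storage-class abstraction above is why backup procedures stay identical across clouds. A sketch of a PersistentVolumeClaim (the claim name and class name are invented; only storageClassName differs between providers):

```yaml
# pvc.yaml -- hypothetical claim; swap storageClassName per cloud, nothing else
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # e.g. a GKE-, EKS-, or AKS-provided class
  resources:
    requests:
      storage: 10Gi
```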

Kubernetes also does a good job on security policies and network policies, which can be driven through infrastructure as code in a very declarative approach.

The end-user view really looks to protect:

  1. Persistent data
  2. Configurations
  3. Operator-customized resources (part of the application state, stored in etcd)

3 Kubernetes Features

3.1 Velocity

Speed of deployment of new features, hourly, while maintaining reliability with no downtime.

3.2 Immutability

Once a container is created, you cannot change it. So, kill it and replace it with a newer version.

Containers and Kubernetes encourage developers to build distributed systems that create an immutable infrastructure. Once an "artifact" is created, users cannot change it. This contrasts with traditional infrastructures where changes are applied to an existing system as "incremental updates".

With Kubernetes, rather than making an incremental update to a service, an entirely NEW, complete image is built, and the update becomes a simple replacement of the entire image with the new image, in a single operation.

Contrast that to a system that has an accumulation of upgrades and changes. The current state of the infrastructure is no longer a single artifact but the whole string of sequential upgrades including operator changes.
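That replace-the-whole-image update is expressed as a one-line change to the manifest. A hypothetical sketch (image names invented):

```yaml
# Only the image tag changes; Kubernetes rolls out brand-new containers
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:2.0   # was example.com/my-app:1.0
```

Applying the edited file replaces the running containers, and if the new image misbehaves, kubectl rollout undo deployment/my-app brings back the previous one.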

3.2.1 No Incremental Changes.

This philosophy of creating a brand-new image comes with the benefit that the old image is still there as a fall-back if the new image has bugs. New images are also easy to understand and fix (or redo).

3.3 Immutable Container Images

They are the crux of everything you can build with Kubernetes. No changes! Just new images.

3.4 Declarative Configuration a.k.a. Desired State Management

Everything in Kubernetes is a declarative configuration object. These represent the desired state of the system. It is up to Kubernetes to ensure that the desired state is implemented and that the running state matches the desired state. Kubernetes calls this "Desired State Management".

This is very different from an imperative configuration, where the configs are a bunch of instructions/cookbooks on what actions need to be done. I have always liked the air-traffic-controller analogy, where the controller simply declares "flight AC545, come to heading 270, altitude 8000 ft, speed 280 knots". That is much different from "flight AC545, reduce throttle by 30%, right flaps 20%, bank right…"

Because the state of the system is defined in these declarative configs, it is easy to put them into a version control system, and very easy to back-out of changes and go back to a previous state, or two or three back.

3.5 Self Healing

Kubernetes is an online self-healing system. On a regular basis, Kubernetes compares its current state to the declared desired state. Any mismatch is noted and acted upon. Because Kubernetes is in control of making the detailed changes to match a desired state from the beginning, it has all the native capabilities to move the system back to the desired state.

Contrast that to the traditional cook-book list of steps given one condition and a different series of steps given another condition, let alone knowing, after the steps have been executed, whether the desired state was actually attained or not. Yikes!

Plus, the old system relies on operators to run the cook-book steps according to a planned and documented multitude of recovery procedures.

4 Decoupling

Kubernetes is a distributed architecture where each component is separated from others by defined APIs and service load balancers. The APIs provide the buffer and isolation between the implementers and the users of the system.

4.1 Load Balancers

Load balancers make it possible to isolate the implementers from the users. That makes it easy to scale the system, because adding processes in one area does not affect other areas. Adjusting and/or reconfiguring other components is NOT needed when you increase the scale.

4.2 Microservices

Decoupling services via APIs not only makes it easy to scale; it also allows each team to focus on a small microservice and make it efficient, error-free, and stable, including implementing the changes needed.

5 Easy Scaling of Apps and Clusters

Scaling is trivial for Kubernetes because of the features of decoupling, self-healing, declarative configuration, and immutability of containers. All one needs to do is specify how many replicas are needed in one line of your container config, tell Kubernetes the new desired, declarative state, and Kubernetes does the rest.
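That one line is the replicas field. A sketch (the previous count is invented):

```yaml
# Bumping capacity is a single declarative change:
spec:
  replicas: 10   # was 3; Kubernetes converges the running state to match
```

The imperative shortcut kubectl scale deployment/my-app --replicas=10 edits the same field.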

5.1 Autoscaling

You can even let Kubernetes autoscale for you as the load demands. Kubernetes simply takes the immutable containers and increases their count with the tied-in load balancers.
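Autoscaling on load is itself declared as an object. A hypothetical HorizontalPodAutoscaler (names and thresholds invented):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods when average CPU exceeds 80%
```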

The manual chore becomes simply monitoring resources and scaling up the cluster itself with more resources as needed. Because even the machines in the cluster are identical, Kubernetes helps you with this task too. You can simply image a new machine and join it to the cluster.

5.2 Forecasting Future Growth

a.k.a. capacity planning, which is also more accurate using Kubernetes, for these reasons:

  1. Decoupling the teams from the specific machines they are using allows the aggregate services to be shared among all the available machines.
  2. Each service does not need a dedicated resource with reserve capacity for planned expansion that may or may not be needed.
  3. Statistically averaged load across many services is achieved and the aggregate utilization of existing machines can be much higher.

6 Kubernetes Microservices

From kubernetes.io,

A Service in Kubernetes is a REST object, similar to a Pod. Like all of the REST objects, you can POST a Service definition to the API server to create a new instance.

For example, suppose you have a set of Pods that each listen on TCP port 9376 and carry a label app=MyApp:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

This specification creates a new Service object named “my-service”, which targets TCP port 9376 on any Pod with the app=MyApp label.

Kubernetes assigns this Service an IP address (sometimes called the “cluster IP”), which is used by the Service proxies (see Virtual IPs and service proxies below).

The controller for the Service selector continuously scans for Pods that match its selector, and then POSTs any updates to an Endpoints object also named "my-service".

Note: A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field. Port definitions in Pods have names, and you can reference these names in the targetPort attribute of a Service. This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for deploying and evolving your Services. For example, you can change the port numbers that Pods expose in the next version of your backend software, without breaking clients.

The default protocol for Services is TCP; you can also use any other supported protocol.

As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
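A sketch combining the two ideas above, multiple port definitions and a named targetPort (all names invented):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http-web   # refers to a named containerPort in the Pods
    - name: metrics
      protocol: TCP
      port: 9100
      targetPort: 9100
```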

Kubernetes provides numerous tools and APIs to make it easy to build these decoupled microservices architectures.

  1. Pods or groups of containers Can group together container images developed by multiple teams into a single deployable unit, called a pod.
  2. Built-in load balancing Kubernetes services provide load balancing, naming, and discovery to isolate one microservice from another
  3. Namespaces Namespaces provide isolation and access control, so that each microservice can control the degree to which other services interact with it.
  4. Ingress Objects Provide an easy-to-use frontend that can combine multiple microservices into a single externalized API frontend.
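For instance, an Ingress might fan a single external API out to two backing services. A hypothetical sketch (host, paths, and service names invented):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-frontend
spec:
  rules:
    - host: api.example.com        # single externalized API front
      http:
        paths:
          - path: /orders          # routed to the orders microservice
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
          - path: /users           # routed to the users microservice
            pathType: Prefix
            backend:
              service:
                name: users
                port:
                  number: 80
```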

The decoupling of the application container image from the machine it runs on permits different microservices to be colocated on the same machine without interfering with each other. Combine that with the self-healing, health-checking features of Kubernetes, and you can guarantee a consistent application rollout despite the proliferation of microservices.


6.1 KAAS (Kubernetes as a Service)

On big enough teams, with enough resources, the decoupling of the apps from the Kubernetes services can warrant a dedicated team managing the hardware and Kubernetes services for a whole bunch of different app-developer teams.

Smaller companies, though, may want to delegate the hardware and Kubernetes services management to the cloud so they can focus on just the app development. This is called KaaS, or Kubernetes as a Service.

KaaS can run on:

  1. Microsoft Azure via the Azure Kubernetes Service (AKS).
  2. Google Cloud Platform via the Google Kubernetes Engine (GCP GKE).
  3. Amazon AWS via the Elastic Kubernetes Service (AWS EKS).

Left off on chapter 1, and "Abstracting your Infrastructure"

7 Working with Containers (creating and operating)

That is what Kubernetes does: it creates, deploys, and manages distributed applications. Despite all the types of apps out there, distributed apps have one thing in common: they all consist of one or more programs that run on individual machines, accept input, manipulate data, then return the results. So, step 1 is figuring out how to build the application container images that make up our distributed applications.

7.1 Issues with traditional apps

Applications usually have a language runtime, libraries, and your source code. Often applications utilize external libraries such as libc and libssl. These external libraries are generally shipped as shared components in the OS that you have installed on a particular machine.

Problems can occur when an application developed on a programmer's laptop has a dependency on a shared library that isn't available when the program is rolled out to the production OS running on a Linux, or other, server.

So there are complex installation scripts that try to check all variations and options so that dependencies are all met. Add to that the fact that running multiple traditional apps on a single machine means these apps have to agree on a set of libraries, because they are shared libraries used by all of them.

Docker has a registry that makes it easy to package an application along with all of its library dependencies and push that image to a remote registry, where it can later be pulled by others. There are many Linux container images on Docker Hub that have been downloaded over 10 million times.

8 Kubernetes Clusters

Clusters contain several components like:

  • Pods: Groups of containers on the same node that are created, scheduled and deployed together.
  • Labels: Key-value tags that are assigned to elements like pods, services, and replication controllers to identify them.
  • Services: Used to give names to Pod groups. They can act as load balancers to direct traffic to running containers.
  • Replication Controllers: Frameworks specifically designed to ensure that, at any given moment, a certain number of pod replicas are scheduled and running.
  • K8s cluster services (that talk to the kubelets on each worker). "K8s" is also pronounced "Kate's".
  • worker = a node that runs a small "shim" or "agent" called the kubelet
  • Configuration files in YAML format, for example app1.yaml:

    Deployment
      Pod1
        Container Image 1
        Container Image 2
        Replicas: 3
      Pod2
        Container Image 3
        Replicas: 2
      Replica Sets
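The app1.yaml outline above might look like this as a real manifest. A hypothetical sketch (image names invented; in practice a Deployment template describes one Pod shape, and the Deployment manages the ReplicaSet for you):

```yaml
# app1.yaml -- hypothetical: Pod1 with two containers, replicated 3 times
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod1-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod1
  template:
    metadata:
      labels:
        app: pod1
    spec:
      containers:
        - name: container-1
          image: example.com/image-1:1.0
        - name: container-2
          image: example.com/image-2:1.0
```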

8.1 kubectl commands

Running Docker, you get Kubernetes too. To control Kubernetes from the command line, use the kubectl command.

Some examples: kubectl get all --all-namespaces -o wide

9 Kubernetes Commands

helm is like sed + kubectl apply. It is not just a package manager for Kubernetes; it can also serve as a lifecycle manager, and more.

9.1 helm and tiller

helm is the package manager for Kubernetes, similar to apt-get, yum, dnf, brew, pip, etc. tiller is the component installed on the Kubernetes cluster (in Helm v2; removed in Helm v3) that accepts commands from your helm client and enforces the resulting configurations on the cluster.

9.2 helm commands

helm --help

helm init --help

helm repo list
helm repo update
helm upgrade chart-name
helm rollback
helm search wordpress
helm install stable/wordpress

k get po --all-namespaces (aliased to kubectl? check YouTube knova)

> could not find tiller

So let's tell helm where its tiller is: run helm init (see helm init --help).

9.3 values.yaml

Where you can define all your cluster values. These values are then pulled into the deployment.yaml and service.yaml files rather than having those files define individual values. For example:

deployment:
  image: node/mongodb
  replicas: 2
service:
  type: NodePort
  port: 8080

Then your deployment.yaml file would not hard-code image: node/mongodb, but would use image: {{ .Values.deployment.image }}; likewise replicas: 2 becomes replicas: {{ .Values.deployment.replicas }}.

Also, your service.yaml file would have entries like type: {{ .Values.service.type }} and port: {{ .Values.service.port }}.
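Putting it together, a hypothetical templates/deployment.yaml might look like this (Helm's template syntax; the release and container names are invented):

```yaml
# templates/deployment.yaml -- values substituted from values.yaml at install time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: {{ .Values.deployment.image }}
```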

9.4 kube-system

It is the namespace for objects created by the Kubernetes system itself.

9.5 helm charts ?

The best charts are in the CenterForOpenScience/helm-charts repo. Charts are like dependency lists for each app, including all the configuration settings for the apps you are deploying on a K8s cluster.

9.6 repos

You can push a chart to a repo so that others can benefit from the configurations you have created: helm package to package the chart, then push the package up to the repo.

9.7 helm also includes Sprig (a template function library)

9.8 kubernetes manifest ?

9.9 Application Container Images

10 Cloud based Kubernetes Clusters

You can run Kubernetes clusters in your local environment, or run them in the cloud. Several cloud providers offer tools and hooks to make that easy. This compares the cloud offerings.

10.1 Pricing Structure

GKE and AKS provide cluster management for free: Master node management and machines running it are not billed. You pay only for what you run, like virtual machines, bandwidth, storage, and services.

Amazon EKS, on the other hand, costs $0.20 per hour for each deployed cluster, in addition to all the other services you will need. In a 30-day month, that comes out to $144 extra. Keep in mind that AWS bills even for testing and staging cluster environments.

10.2 On which cloud provider should I deploy my Kubernetes Cluster?

From this blog on Kubernetes Cost Guide: AWS vs GCP vs Azure vs Digital Ocean "AWS, GCP, Microsoft Azure and Digital Ocean are in the top 6 environments, enterprises deploy Kubernetes workloads on. OpenStack and on-premise are the other two. AWS, GCP and Azure also have managed Kubernetes offerings, while Digital Ocean's is on the way. This coupled with the cloud first trend makes it safe to assume that most enterprise Kubernetes workloads are en-route to the cloud."

Yearly cost for a 100-core, 400 GB Kubernetes cluster:

                                         AWS        GCP        Azure      Digital Ocean
Direct Deployment
(on-demand instances)                    $50,882    $32,040    $43,730    $25,920
Direct Deployment
(70% reserved instances)                 $37,974    $29,883    $31,830    -
Managed Kubernetes
(EKS, GKE, AKS - on-demand instances)    $50,064    $30,874    $42,048    -
Managed Kubernetes
(EKS, GKE, AKS - 70% reserved)           $37,156    $28,718    $30,148    -

“The above table shows you yearly cloud provider costs for a 100 core, 400 GB Kubernetes cluster. Even though Digital Ocean doesn’t provide either a managed Kubernetes offering or discounts for reserved instances, it still has the lowest cost for running our cluster. GCP is a close second. When our cluster exclusively leverages on-demand instances, GCP’s cost is 37% lower than AWS and 27% lower than Azure. GKE cost, when deployed on on-demand instances is also 38% and 27% lower as compared to EKS and AKS, respectively.”

The "direct deployment" above refers to workloads deployed directly onto virtual machine instances. The "managed Kubernetes" above refers to ones leveraging managed Kubernetes services like AWS EKS, GCP GKE or Azure AKS.

11 AWS EKS

Amazon Web Services, Elastic Kubernetes Service From : aws.amazon.com "Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure. Amazon EKS is certified Kubernetes conformant so you can use existing tooling and plugins from partners and the Kubernetes community. Applications running on any standard Kubernetes environment are fully compatible and can be easily migrated to Amazon EKS. Amazon EKS supports both Windows Containers and Linux Containers to enable all your use cases and workloads."

Supports the kubectl command-line utility, e.g.: aws eks --region ${region} update-kubeconfig --name ${cluster}

12 GCP GKE

Google Cloud Platform, Google Kubernetes Engine From cloud.google.com "Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications. It brings our latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate your time to market.

Launched in 2015, Kubernetes Engine builds on Google's experience of running services like Gmail and YouTube in containers for over 12 years. Kubernetes Engine allows you to get up and running with Kubernetes in no time, by completely eliminating the need to install, manage, and operate your own Kubernetes clusters."

It is the first one that was available, and is the most advanced.

Supports the kubectl command-line utility, e.g.: gcloud container clusters get-credentials ${cluster}

13 Azure AKS

AKS = Azure Kubernetes Service From azure.microsoft.com, "The fully managed Azure Kubernetes Service (AKS) makes deploying and managing containerized applications easy. It offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. Unite your development and operations teams on a single platform to rapidly build, deliver, and scale applications with confidence."

Microsoft had experience with the older "Azure Container Service". ACS supported not only Kubernetes, but Docker Swarm and Apache Mesos as well.

Supports the kubectl command-line utility, e.g.: az aks get-credentials --resource-group ${RS} --name ${cluster}

14 Comparing GKE, EKS, AKS

From logz.io blog:

Feature               GKE                    AKS                       EKS
Latest K8s version    1.13.6                 1.13.5                    1.12.6
(June 2019;
upstream is v1.14)
Automatic updates     Master and nodes       On-demand; master and     On-demand; manual,
                                             nodes upgraded together   via command line and
                                                                       steps; nodes must be
                                                                       updated manually
CLI support           Supported              Supported                 Supported
Resource monitoring   Stackdriver (paid,     Azure Monitor for         Third party only
                      with free tier)        Containers and
                                             Application Insights
Auto-scaling nodes    Yes                    Under preview             Yes
Node groups           Yes                    No                        Yes
HA clusters           Yes                    In development            No
RBAC                  Yes                    Yes                       Yes
Bare-metal nodes      No                     No                        AWS

15 Docker Swarm vs Kubernetes

Swarm is apparently known to be more scalable than Kubernetes: containers can be deployed faster, both for large clusters and at high cluster-fill stages.

16 Virtual Kubelet (https://virtual-kubelet.io)

A kubelet is the agent that represents a Kubernetes node. And since a node is just a backend doing two things (see below), does the node have to be a virtual machine? No, it doesn't. You could run this in the cloud. That is a virtual kubelet.

Instead of having a virtual machine that provides the backend for your containers and pods to run on, let's just farm that out to our cloud provider and let them provide that.

You can have a virtual kubelet join ACI, which lets you spin up as many containers and pods as you want, and it provides the networking.

For example, Microsoft Azure could provide the two functions for you: spinning up containers and providing the networking. We already have managed Kubernetes offerings that manage the control structure: Azure Kubernetes Service on the Microsoft cloud, Elastic Kubernetes Service from AWS, and GKE on the Google cloud.

A virtual kubelet is similar to just saying to the cloud provider, you take care of our clusters, pods, nodes, network, we will run the virtual kubelet to control what we want and how much.

16.1 Kubernetes node

A kubernetes node is a backend for kubernetes that provides two things.

  1. networking
  2. container runtime: starts and stops your pods and containers, and manages their life cycle

A node is basically a virtual machine. It runs the kubelet, maybe docker or containerd as the container runtime (something to start and stop the containers), and runs kube-proxy for the networking.

17 Kubernetes Services

They provide a stable and reliable network for our unreliable Kubernetes pods. We almost always think of them as providing traffic routing to our pods. They come in four types:

  1. ClusterIP
  2. NodePort
  3. LoadBalancer (integrates with our cloud service provider and their load balancer)
  4. ExternalName

All work towards getting traffic to the right pod.
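A sketch of the second type, a NodePort Service (names and ports invented); the other types differ mainly in the type field:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort        # ClusterIP / LoadBalancer / ExternalName also go here
  selector:
    app: my-app
  ports:
    - port: 80          # stable Service port inside the cluster
      targetPort: 8080  # the pods' port
      nodePort: 30080   # exposed on every node (30000-32767 range)
```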

18 Docker Enterprise 3.0 Webinar - Part V Docker Kubernetes Service DKS

Topic is Docker Kubernetes Service.

18.1 Capabilities

Docker Application is a new packaging option

18.2 Docker Kubernetes Service, DKS

19 Project Velero

An open-source project (from Heptio) that was acquired by VMware. Download Velero for free and run it on your Kubernetes environment.

A tool for backing up, restoring and migrating Kubernetes applications.

  • Velero backups are focused on all the objects in the etcd database.
  • Velero packages them up, including the persistent-volume snapshots associated with the application.
  • Velero can then restore the application as a whole, including the etcd objects, containers, and all the persistent volumes themselves.
  • Velero discovers all the components/containers and where they are.
  • An etcd database backup typically can't be done by users (you would have to pause Kubernetes for a while).
  • Then it enumerates them all and packages them up.

It can restore whole clusters or parts of a cluster, and migrate a cluster. It can also be used to back up Kubernetes itself, so that Kubernetes can be upgraded and then have all the containers and the etcd database restored on the new, upgraded platform.

20 Networking for Kubernetes

From an Oracle session at KubeCon. The network needs to satisfy the following requirements:

  • All containers can communicate with all other containers without NAT
  • All nodes can communicate with all containers without NAT and vice-versa
  • The IP a container sees itself as, is the same IP that others see it as

So to get to a (flannel-style) overlay network you can build up through these 4 steps:

20.1 Step 1) Single network namespace

20.2 Step 2) Single node, 2 network namespaces

20.3 Step 3) Multiple nodes, same L2 network

20.4 Step 4) Multiple nodes, overlay network
