Kyle Edwards

Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Orchestrates and monitors containers; ensures scaling and uptime; handles networking and storage.

Describe your desired state; controllers watch the system and reconcile it toward that state. Kubernetes resources are the primitives used to define the system.

Pods, controllers (ReplicaSets, Deployments), services (persistent access points into pod-based applications), storage

Containers

Resources

Kubernetes as a Developer

kubectl version
kubectl cluster-info
kubectl get all
kubectl run [container] --image=[image]
kubectl port-forward [pod] [ports]
kubectl expose ...
kubectl create [resource]
kubectl apply -f [file]

# Web UI
kubectl apply -f [dashboard-yaml-url]
kubectl describe secret -n kube-system
# Locate account-token
kubectl proxy
# Go to dashboard URL

Pods

apiVersion: v1
kind: Pod
metadata:
	name: my-nginx
spec:
	containers:
	- name: my-nginx
	  image: nginx:alpine
	  livenessProbe:
		  # a probe takes exactly one handler: httpGet, exec, or tcpSocket
		  # exec:
		  #   command...
		  httpGet:
			  path: /index.html
			  port: 80
		  initialDelaySeconds: 15
		  timeoutSeconds: 2
		  periodSeconds: 5
		  failureThreshold: 1

# dry run
kubectl create -f file.pod.yml --dry-run=client --validate=true

# create if not exists
kubectl create -f file.pod.yml

# create or update if exists (just use this)
kubectl apply -f file.pod.yml

kubectl delete -f file.pod.yml

kubectl describe pod [pod]

# Enter shell
kubectl exec -it [pod] -- sh

# Edit changes in place
kubectl edit pod [pod]

Labels are important to link up different resources
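For example, a Service finds its Pods by matching labels. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: my-nginx        # the link between resources
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: my-nginx        # routes to any pod carrying this label
  ports:
  - port: 80
```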

Get IP Address of Pod

kubectl get pod {name} -o yaml | grep podIP

ReplicaSet

Deployments

Probes

Probes are diagnostics run periodically by the kubelet.

Note: It’s important to put resource constraints on pod specs to ensure node health.
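A sketch of such constraints on a container spec (the values here are illustrative, not recommendations):

```yaml
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
    resources:
      requests:          # guaranteed minimum; used by the scheduler
        memory: "64Mi"
        cpu: "100m"
      limits:            # hard cap enforced on the node
        memory: "128Mi"
        cpu: "250m"
```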

… run worker nodes on compute instances in a private subnet, with a VPN to connect … expose services through a public load balancer

- rolling updates (the default when using Deployments)
- blue-green deployments
- canary deployments
- rollbacks
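Rolling-update behavior can be tuned on the Deployment spec; a sketch (values are illustrative):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # extra pods allowed above the desired count during an update
      maxUnavailable: 0  # pods that may be down during an update
```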

Services

- single point of entry to one or more pods
- since pods are ephemeral, pod-specific IP addresses cannot be relied on
- services establish a fixed IP to abstract pod IPs from consumers
- pods and services are linked by labels
- load balances between pods
- the worker node's kube-proxy creates a virtual IP for each service
- load balances at layer 4 (TCP/UDP over IP)
- services are not ephemeral

Types of Services

Port Forwarding

You can port-forward into multiple kinds of resources. However, because pods are ephemeral, it’s better to forward into deployments or services.

kubectl port-forward pod/{name} {extPort}:{port}
kubectl port-forward deployment/{name} {extPort}:{port}
kubectl port-forward service/{name} {extPort}

YAML

apiVersion: v1
kind: Service
metadata:
	# name, labels, etc...
	name: nginx # Gives a DNS entry within the cluster
	labels:
		app: nginx
spec:
	type:
		# ClusterIP, NodePort, LoadBalancer, ExternalName
	selector:
		# Pod template label(s)
		app: nginx
	ports:
	- name: http
	  port: 80
	  targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
	# name, labels, etc...
spec:
	type: NodePort
	selector:
		app: nginx
	ports:
	- port: 80
	  targetPort: 80
	  nodePort: 31000 # Optional for NodePort
---
apiVersion: v1
kind: Service
metadata:
	# name, labels, etc...
spec:
	type: LoadBalancer
	selector:
		app: nginx
	ports:
	- port: 80
	  targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
	name: external-service
spec:
	type: ExternalName
	externalName: api.extern.com
	ports:
	- port: 18000

Get Service IP

This is not necessary within a cluster, as the service name is a local DNS name.

kubectl get services

Test Connection Between Pods and Services

kubectl exec {pod} -- curl -s http://{service|podIP}
kubectl exec -it {pod} -- sh
> apk add curl
> curl -s http://{service|podIP}

Storage

Can store state/data and share it between pods and containers with Volumes. Pod file system is ephemeral. Pods can have multiple volumes, and containers use a mount path to access a volume.

Volumes

Can be tied to a pod’s lifetime. Containers access a volume through a mount path.

Types:

- emptyDir: empty directory scoped to the pod’s lifetime
- hostPath: mounts a directory from the worker node’s filesystem
- nfs / cloud storage: network-attached, outlives the pod
- configMap / secret / persistentVolumeClaim: other resources surfaced as volumes

containers:
...
	volumeMounts:
	- name: {name}
	  mountPath: /usr/share/...
	  readOnly: true
# Look for Volumes
kubectl describe pod {pod}

# See volume mounts
kubectl get pod {pod} -o yaml
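A minimal sketch of an emptyDir volume shared between two containers in one pod (names and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: html             # emptyDir lives exactly as long as the pod
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: writer           # sidecar writes into the same volume
    image: alpine
    command: ["sh", "-c", "while true; do date > /html/index.html; sleep 10; done"]
    volumeMounts:
    - name: html
      mountPath: /html
```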

Poking Around Host Docker

PersistentVolumes

Cluster-wide storage resource that relies on network-attached storage, works with cloud, NFS, etc. Does not have a lifetime limited by a pod.

An administrator sets up the PersistentVolume resource; a user then creates a PersistentVolumeClaim resource, uses that claim in the pod template, and defines the mount path.

- accessModes
- capacity
- resource requests

Node affinity constrains which nodes the volume can live on.
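The admin/user flow sketched end to end (the NFS server and all names are illustrative):

```yaml
# Admin: the cluster-wide storage resource
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.local   # hypothetical NFS server
    path: /exports/data
---
# User: claim a slice of storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
# Pod template references the claim and defines the mount path
apiVersion: v1
kind: Pod
metadata:
  name: pv-demo
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: data
      mountPath: /data
```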

StorageClasses

Dynamically provision storage

A PVC can reference a StorageClass, which provisions the PV on demand when the claim is made.
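A sketch of a StorageClass and a PVC that references it (the provisioner shown is the AWS EBS CSI driver; swap in whatever your cluster uses):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com   # cloud/CSI-specific
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  storageClassName: fast       # the PV is provisioned on demand
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```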

StatefulSet

Provides ordering and stable-identity guarantees; good for databases; keeps pod naming predictable (pod-0, pod-1, …).
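A minimal StatefulSet sketch: pods come up in order as db-0, db-1, … and each gets its own claim from volumeClaimTemplates (all names are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless service giving each pod stable DNS
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15-alpine
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC per pod: data-db-0, data-db-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```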

ConfigMaps and Secrets

ConfigMaps

Key-value pairs, exposed as environment variables or accessed via a ConfigMap volume.

apiVersion: v1
kind: ConfigMap
metadata:
	name: app-settings
	labels:
		app: app-settings
data:
	enemies: aliens
	lives: "3"
	enemies.cheat: "true" # ConfigMap data values must be strings
	enemies.cheat.level: noGoodRotten

# config file
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten

# create configmap; the file name becomes the key, the file contents the value
kubectl create configmap {name} --from-file={path}

# each key=value line in the file becomes its own entry
kubectl create configmap {name} --from-env-file={path}

Using ConfigMaps

# get contents
kubectl get configmap {name} -o yaml # "cm" is shorthand for configmap

spec:
	template:
		...
		spec:
			containers:
			- ...
			  env:
			  - name: ENEMIES
			    valueFrom:
				    configMapKeyRef:
					    name: app-settings
					    key: enemies

Load the entire ConfigMap into a container spec:

spec:
	template:
		...
		spec:
			containers:
			- ...
			  envFrom:
			  - configMapRef:
				    name: app-settings

…or create a volume where config variables are files:

Note: The advantage of this is that the files are changed in place without requiring a pod restart.

spec:
	template:
		...
		spec:
			volumes:
			- name: app-config-volume
			  configMap:
				  name: app-settings
			containers:
			- ...
			  volumeMounts:
			  - name: app-config-volume
			    mountPath: /etc/config

Secrets

Sensitive data that can be provided securely to containers. Just like ConfigMaps, they can be mounted as files or set as environment variables. They are stored in tmpfs on the worker nodes, not written to disk.

kubectl create secret generic {name} --from-literal={key}={value}
kubectl create secret generic {name} --from-file={key}={path_to_file}

kubectl create secret tls {name} --cert={path_to_cert} --key={path_to_key}
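Consuming a Secret in a pod spec, either as an env var or as a volume (secret and key names are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: nginx:alpine
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret    # e.g. created with kubectl create secret generic
          key: password
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secrets            # each key becomes a file under the mount path
    secret:
      secretName: db-secret
```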

Warning: Secrets in manifest files are only base64 encoded, and not secure.

Best Practices

How does SOPS fit into this?

Where to run Kubernetes?

Control plane has its own core cluster pods within the cluster itself. The control plane node is tainted by default so other pods are not run on it. (Eating your own dog food?)

Use kubectl explain {resource} for quick documentation. You can drill down like kubectl explain pod.spec.containers, things like that…

apiVersion: apps/v1
kind: Deployment
metadata:
	name: hello-world
spec:
	replicas: 1
	selector:
		matchLabels:
			app: hello-world
	template:
		metadata:
			labels:
				app: hello-world
		spec:
			containers:
			- image: gcr.io/google-samples/hello-app:1.0
			  name: hello-app

Generate resource manifests quickly with the --dry-run flag.

kubectl create deployment hello-world \
  --image=gcr.io/google-samples/hello-app:1.0 \
  --dry-run=client -o yaml > deployment.yaml

kubectl expose deployment hello-world \
  --port=80 --target-port=8080 \
  --dry-run=client -o yaml > service.yaml

When you run kubectl apply with a new or updated manifest, you submit the changes to the API server, which validates and stores them in etcd. The controller manager watches for new or changed resources, and the scheduler assigns unscheduled pods to a node, writing the decision back into etcd.

The kubelet watches the API server for updates, pulls container images, runs the containers, and wires up networking with kube-proxy.