Kubernetes#
Kubernetes Essentials#
Interactive Diagram
Working with K8s objects
Log rotation
In order for Kubernetes to pull your container image, you first need to push it to an image registry like Docker Hub. To avoid storing your Docker Hub password unencrypted in $HOME/.docker/config.json when you docker login to your account, use a credentials store: a helper program lets you interact with such a keychain or external store. If you're doing this on your laptop with Docker Desktop, it already provides a store. Otherwise, use one of the stores supported by docker-credential-helpers. Then docker login from your terminal or from the Docker Desktop app.
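As a rough sketch, wiring up a helper just means pointing the credsStore key of your Docker config at it. The pass-backed helper below is only an assumption for illustration (any helper from docker-credential-helpers works, and Docker Desktop configures its own automatically):
# Assumes docker-credential-pass is installed and on your PATH.
# Note: this overwrites an existing config.json; merge by hand if you already have one.
cat > $HOME/.docker/config.json <<End-of-message
{
  "credsStore": "pass"
}
End-of-message
# Credentials entered here now go to the helper instead of plain-text config.json
docker login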
Designing Applications for Kubernetes#
This section implements the 12-Factor App design methodology and is based on a Cloud Guru course. It uses Ubuntu 20.04 Focal Fossa LTS and the Calico network plugin instead of Flannel. The example cluster has 1 control plane node and 2 worker nodes.
Prerequisites#
Building a Kubernetes Cluster#
Reference
Installing kubeadm
Creating a cluster with kubeadm
Warning: kubeadm sometimes doesn't work with the latest and greatest version of Docker right away.
- kubeadm simplifies the process of setting up a K8s cluster.
- containerd manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments.
- kubelet handles running containers on a node.
- kubectl is a tool for managing the cluster.
If you wish, you can set an appropriate hostname for each node.
On the control plane node:
sudo hostnamectl set-hostname k8s-control
On the first worker node:
sudo hostnamectl set-hostname k8s-worker1
On the second worker node:
sudo hostnamectl set-hostname k8s-worker2
On all nodes, set up the hosts file to enable all the nodes to reach each other using these hostnames.
sudo nano /etc/hosts
On all nodes, add the following at the end of the file. You will need to supply the actual private IP address for each node.
<control plane node private IP> k8s-control
<worker node 1 private IP> k8s-worker1
<worker node 2 private IP> k8s-worker2
Log out of all three servers and log back in to see these changes take effect.
On all nodes, set up containerd. You will need to load some kernel modules and modify some system settings as part of this process.
# Enable these kernel modules when the server starts up
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# Enable them right now without having to restart the server
sudo modprobe overlay
sudo modprobe br_netfilter
# Add network configurations K8s will need
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply them immediately
sudo sysctl --system
Install and configure containerd.
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
# Generate the contents of a default config file and save it
sudo containerd config default | sudo tee /etc/containerd/config.toml
# Restart containerd to make sure it's using that configuration
sudo systemctl restart containerd
On all nodes, disable swap.
sudo swapoff -a
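# Note (standard Ubuntu practice, not from the course): swapoff -a only lasts until reboot.
# To keep swap disabled across reboots, comment out any swap entries in /etc/fstab:
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab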
# Some required packages
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
# Set up the package repo for K8s packages. Download the key for the repo and add it
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# Configure the repo
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
# Install the K8s packages. Make sure the versions for all 3 are the same.
sudo apt-get install -y kubelet=1.24.0-00 kubeadm=1.24.0-00 kubectl=1.24.0-00
# Make sure they're not automatically upgraded. Have manual control over when to update K8s
sudo apt-mark hold kubelet kubeadm kubectl
On the control plane node only, initialize the cluster and set up kubectl access.
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.24.0
# Config File to authenticate and interact with the cluster with kubectl commands
# These are in the output of the previous step
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify the cluster is working. The node will show a NotReady status because we haven't configured the networking plugin yet.
kubectl get nodes
Install the Calico network add-on.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Get the join command (this command is also printed during kubeadm init; feel free to simply copy it from there).
kubeadm token create --print-join-command
Copy the join command from the control plane node. Run it on each worker node as root (i.e. with sudo).
sudo kubeadm join ...
On the control plane node, verify all nodes in your cluster are ready. Note that it may take a few moments for all of the nodes to reach the Ready state.
kubectl get nodes
Installing Docker#
Reference
Install Docker Engine on Ubuntu
Docker credentials store: to avoid storing your Docker Hub password unencrypted in $HOME/.docker/config.json when you docker login and docker push your images.
On the system that will build Docker images from source code e.g. a CI server, install and configure Docker.
For simplicity we'll use the control plane server just so we don't have to create another server for this exercise.
Create a docker group. Users in this group will have permission to use Docker on the system:
sudo groupadd docker
Install required packages.
Note: Some of these packages may already be present on the system, but including them here will not cause any problems:
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
Set up the Docker GPG key and package repository:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install the Docker Engine:
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli
# Type N (default) or enter to keep your current containerd configuration
Test the Docker setup:
sudo docker version
Add cloud_user to the docker group in order to give cloud_user access to use Docker:
sudo usermod -aG docker cloud_user
Test your setup:
docker version
I. Codebase#
One codebase tracked in revision control, many deploys
Keep your codebase in a version control system like Git. There's a one-to-one relationship between the codebase and the app.
If there are multiple codebases it's not an app - it's a distributed system where each component is an app.
Factor shared code into libraries which can be included through the dependency manager.
A deploy is a running instance of the app.
Your apps can be implemented as containers/pods, built and deployed independently of other apps.
II. Dependencies#
Explicitly declare and isolate dependencies
Don't rely on implicit existence of system-wide packages. Declare all dependencies, completely and exactly, via a dependency declaration manifest. With containers, your app and its dependencies are deployed as a unit, allowing it to run almost anywhere - a desktop, a traditional IT infrastructure, or the cloud.
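A minimal sketch of what this looks like for a Node.js app packaged as a container. The base image, file layout, and entry point below are assumptions for illustration, not the course's actual Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/app
# package.json and package-lock.json are the explicit dependency declaration manifest
COPY package*.json ./
# npm ci installs exactly the versions pinned in the lockfile
RUN npm ci --omit=dev
# The app and its dependencies now travel together as a single image
COPY . .
CMD ["node", "src/server/index.js"]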
III. Config#
Store config in the environment
ConfigMaps and Secrets#
Encrypting Secret Data at Rest
ConfigMaps
Secrets
Create a production Namespace:
kubectl create namespace production
Get base64-encoded strings for a db username and password:
echo -n my_user | base64
echo -n my_password | base64
Example: Create a ConfigMap and Secret to configure the backing service connection information for the app, including the base64-encoded credentials:
cat > my-app-config.yml <<End-of-message
apiVersion: v1
kind: ConfigMap
metadata:
name: my-app-config
data:
mongodb.host: "my-app-mongodb"
mongodb.port: "27017"
---
apiVersion: v1
kind: Secret
metadata:
name: my-app-secure-config
type: Opaque
data:
mongodb.username: dWxvZV91c2Vy
mongodb.password: SUxvdmVUaGVMaXN0
End-of-message
kubectl apply -f my-app-config.yml -n production
Create a temporary Pod to test the configuration setup. Note that you need to supply your Docker Hub username as part of the image name in this file. This passes configuration data in environment variables, but you could also do it in files that show up on the container's filesystem.
cat > test-pod.yml <<End-of-message
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: my-app-server
image: <YOUR_DOCKER_HUB_USERNAME>/my-app-server:0.0.1
ports:
- name: web
containerPort: 3001
protocol: TCP
env:
- name: MONGODB_HOST
valueFrom:
configMapKeyRef:
name: my-app-config
key: mongodb.host
- name: MONGODB_PORT
valueFrom:
configMapKeyRef:
name: my-app-config
key: mongodb.port
- name: MONGODB_USER
valueFrom:
secretKeyRef:
name: my-app-secure-config
key: mongodb.username
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: my-app-secure-config
key: mongodb.password
End-of-message
kubectl apply -f test-pod.yml -n production
Check the logs to verify the config data is being passed to the container:
kubectl logs test-pod -n production
Clean up the test pod:
kubectl delete pod test-pod -n production --force
IV. Backing services#
Treat backing services as attached resources
The app makes no distinction between local and third-party services. To the app, both are attached resources, accessed via a URL or other locator/credentials stored in the config. You should be able to swap out a local MySQL database with one managed by a third party (such as Amazon RDS) without any changes to the app's code.
By implementing attached resources as containers/pods you achieve loose coupling between those resources and the deploy they are attached to.
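A minimal sketch of the MongoDB backing service as an attached resource in the cluster. The Service name matches the mongodb.host value used in the ConfigMap in this example, but the image tag and the rest of the manifest are assumptions, not the course's k8s-my-app-mongodb.yml:
apiVersion: v1
kind: Service
metadata:
  name: my-app-mongodb
spec:
  selector:
    app: my-app-mongodb
  ports:
  - port: 27017
    targetPort: 27017
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-mongodb
  template:
    metadata:
      labels:
        app: my-app-mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:4.4   # assumed image and tag
        ports:
        - containerPort: 27017
Swapping this out for a managed MongoDB would then only require changing mongodb.host (and the credentials) in the config, not the app's code.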
V. Build, release, run#
Strictly separate build and run stages
Build, Release, Run with Docker and Deployments#
Reference
Labels and Selectors
Deployments
Example: After you docker build and docker push your image to a repository, create a deployment file for your app.
- The selector selects pods that have the specified label name and value.
- template is the pod template.
- This example puts 2 containers in the same pod for simplicity, but in the real world you'll want separate deployments to scale them independently.
cat > my-app.yml <<End-of-message
apiVersion: v1
kind: ConfigMap
metadata:
name: my-app-config
data:
mongodb.host: "my-app-mongodb"
mongodb.port: "27017"
.env: |
REACT_APP_API_PORT="30081"
---
apiVersion: v1
kind: Secret
metadata:
name: my-app-secure-config
type: Opaque
data:
mongodb.username: dWxvZV91c2Vy
mongodb.password: SUxvdmVUaGVMaXN0
---
apiVersion: v1
kind: Service
metadata:
name: my-app-svc
spec:
type: NodePort
selector:
app: my-app
ports:
- name: frontend
protocol: TCP
port: 30080
nodePort: 30080
targetPort: 5000
- name: server
protocol: TCP
port: 30081
nodePort: 30081
targetPort: 3001
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app-server
image: <Your Docker Hub username>/my-app-server:0.0.1
ports:
- name: web
containerPort: 3001
protocol: TCP
env:
- name: MONGODB_HOST
valueFrom:
configMapKeyRef:
name: my-app-config
key: mongodb.host
- name: MONGODB_PORT
valueFrom:
configMapKeyRef:
name: my-app-config
key: mongodb.port
- name: MONGODB_USER
valueFrom:
secretKeyRef:
name: my-app-secure-config
key: mongodb.username
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: my-app-secure-config
key: mongodb.password
- name: my-app-frontend
image: <Your Docker Hub username>/my-app-frontend:0.0.1
ports:
- name: web
containerPort: 5000
protocol: TCP
volumeMounts:
- name: frontend-config
mountPath: /usr/src/app/.env
subPath: .env
readOnly: true
volumes:
- name: frontend-config
configMap:
name: my-app-config
items:
- key: .env
path: .env
End-of-message
Deploy the app.
kubectl apply -f my-app.yml -n production
Create a new container image version to test the rollout process:
docker tag <Your Docker Hub username>/my-app-frontend:0.0.1 <Your Docker Hub username>/my-app-frontend:0.0.2
docker push <Your Docker Hub username>/my-app-frontend:0.0.2
Edit the app manifest my-app.yml to use the 0.0.2 image version and then:
kubectl apply -f my-app.yml -n production
Get the list of Pods to see the new version rollout:
kubectl get pods -n production
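You can also watch the rollout and, if needed, roll it back with standard kubectl commands (the deployment name comes from the manifest above):
kubectl rollout status deployment/my-app -n production
kubectl rollout history deployment/my-app -n production
# Roll back to the previous image version if the new one misbehaves
kubectl rollout undo deployment/my-app -n production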
VI. Processes#
Execute the app as one or more stateless processes
Processes with stateless containers#
Edit the app deployment my-app.yml. In the deployment Pod spec, add a new emptyDir volume under volumes:
volumes:
- name: added-items-log
emptyDir: {}
...
Mount the volume to the server container:
containers:
...
- name: my-app-server
...
volumeMounts:
- name: added-items-log
mountPath: /usr/src/app/added_items.log
subPath: added_items.log
readOnly: false
...
Make the container file system read only:
containers:
...
- name: my-app-server
securityContext:
readOnlyRootFilesystem: true
...
Deploy the changes:
kubectl apply -f my-app.yml -n production
Persistent Volumes#
Reference
Persistent Volumes (PV)
local PV
Create a StorageClass that supports volume expansion as localdisk-sc.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: localdisk
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true
kubectl create -f localdisk-sc.yml
persistentVolumeReclaimPolicy says how storage can be reused when the volume's associated claims are deleted.
- Retain: Keeps all data. An admin must manually clean up and prepare the resource for reuse.
- Recycle: Automatically deletes all data, allowing the volume to be reused.
- Delete: Deletes underlying storage resource automatically (applies to cloud only).
accessModes can be:
- ReadWriteOnce: The volume can be mounted as read-write by a single node. Still can allow multiple pods to access the volume when the pods are running on the same node.
- ReadOnlyMany: Can be mounted as read-only by many nodes.
- ReadWriteMany: Can be mounted as read-write by many nodes.
- ReadWriteOncePod: Can be mounted as read-write by a single Pod. Use ReadWriteOncePod if you want to ensure that only one pod across the whole cluster can read or write to the PVC. This is only supported for CSI volumes.
Create a PersistentVolume in my-pv.yml.
kind: PersistentVolume
apiVersion: v1
metadata:
name: my-pv
spec:
storageClassName: localdisk
persistentVolumeReclaimPolicy: Recycle
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /var/output
kubectl create -f my-pv.yml
Check the status of the PersistentVolume.
kubectl get pv
Create a PersistentVolumeClaim that will bind to the PersistentVolume as my-pvc.yml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
storageClassName: localdisk
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
kubectl create -f my-pvc.yml
Check the status of the PersistentVolume and PersistentVolumeClaim to verify that they have been bound.
kubectl get pv
kubectl get pvc
Create a Pod that uses the PersistentVolumeClaim as pv-pod.yml.
apiVersion: v1
kind: Pod
metadata:
name: pv-pod
spec:
restartPolicy: Never
containers:
- name: busybox
image: busybox
command: ['sh', '-c', 'echo Success! > /output/success.txt']
volumeMounts:
- name: pv-storage
mountPath: /output
volumes:
- name: pv-storage
persistentVolumeClaim:
claimName: my-pvc
kubectl create -f pv-pod.yml
Expand the PersistentVolumeClaim and record the process.
kubectl edit pvc my-pvc --record
...
spec:
...
resources:
requests:
storage: 200Mi
Delete the Pod and the PersistentVolumeClaim.
kubectl delete pod pv-pod
kubectl delete pvc my-pvc
Check the status of the PersistentVolume to verify that it has been successfully recycled and is available again.
kubectl get pv
VII. Port binding#
Export services via port binding
Port Binding with Pods#
Note
Challenge: Only one process can listen on a given port per host. So how can every app on the host get a unique port?
- In K8s, each pod has its own network namespace and cluster IP address.
- That IP address is unique within the cluster even if there are multiple worker nodes in the cluster. That means ports only need to be unique within each pod.
- Two pods can listen on the same port because they each have their own unique IP address within the cluster network.
- The pods can communicate across nodes simply using the unique IPs.
Get a list of Pods in the production namespace:
kubectl get pods -n production -o wide
Copy the IP address of the application Pod.
Example: Use the IP address to make a request to the port on the Pod that serves the frontend content:
curl <Pod Cluster IP address>:5000
VIII. Concurrency#
Scale out via the process model
Concurrency with Containers and Scaling#
By using services to manage access to the app, the service automatically picks up the additional pods created during scaling and routes traffic to those pods. When you have services alongside deployments you can dynamically change the number of replicas and K8s will take care of everything.
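As a side note, you can also scale imperatively with kubectl scale (a standard kubectl command; the deployment and namespace names are the ones used throughout this example). The steps below edit the manifest instead so the replica count stays in version control:
kubectl scale deployment my-app --replicas=3 -n production
kubectl get deployment my-app -n production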
Edit the application deployment my-app.yml.
Change the number of replicas to 3:
...
replicas: 3
Apply the changes:
kubectl apply -f my-app.yml -n production
Get a list of Pods:
kubectl get pods -n production
Scale the deployment up again in my-app.yml.
Change the number of replicas to 5:
...
replicas: 5
Apply the changes:
kubectl apply -f my-app.yml -n production
Get a list of Pods:
kubectl get pods -n production
IX. Disposability#
Maximize robustness with fast startup and graceful shutdown
Disposability with Stateless Containers#
Reference
Security Context
Pod Lifecycle
Deployments can be used to maintain a specified number of running replicas, automatically replacing Pods that fail or are deleted.
Get a list of Pods:
kubectl get pods -n production
Locate one of the Pods from the my-app deployment and copy the Pod's name.
Delete the Pod using the Pod name:
kubectl delete pod <Pod name> -n production
Get the list of Pods again. You will notice that the deployment is automatically creating a new Pod to replace the one that was deleted:
kubectl get pods -n production
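The other half of this factor is fast startup and graceful shutdown. A minimal sketch of the standard knobs on the deployment's Pod spec (the field names are regular Kubernetes API fields; the values and the preStop sleep are arbitrary choices, not from the course):
spec:
  # How long K8s waits after sending SIGTERM before it sends SIGKILL
  terminationGracePeriodSeconds: 30
  containers:
  - name: my-app-server
    lifecycle:
      preStop:
        exec:
          # Give the server a moment to finish in-flight requests before SIGTERM
          command: ["sh", "-c", "sleep 5"]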
X. Dev/prod parity#
Keep development, staging, and production as similar as possible
Dev/Prod Parity with Namespaces#
K8s namespaces allow us to have multiple environments in the same cluster. A namespace is like a virtual cluster.
Create a new namespace:
kubectl create namespace dev
Make a copy of the my-app app YAML:
cp my-app.yml my-app-dev.yml
NodePort values need to be unique within the cluster, so we need to choose different ports for the dev services to avoid conflicting with production.
Edit the my-app-svc service in the my-app-dev.yml file to select different nodePorts. You will also need to edit the my-app-config ConfigMap to reflect the new port.
Set the nodePorts on the service:
apiVersion: v1
kind: Service
metadata:
name: my-app-svc
spec:
type: NodePort
selector:
app: my-app
ports:
- name: frontend
protocol: TCP
port: 30082
nodePort: 30082
targetPort: 5000
- name: server
protocol: TCP
port: 30083
nodePort: 30083
targetPort: 3001
Update the configured port in the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-app-config
data:
mongodb.host: "my-app-mongodb"
mongodb.port: "27017"
.env: |
REACT_APP_API_PORT="30083"
Deploy the backing service setup in the new namespace:
kubectl apply -f k8s-my-app-mongodb.yml -n dev
kubectl apply -f my-app-mongodb.yml -n dev
Deploy the app in the new namespace:
kubectl apply -f my-app-dev.yml -n dev
kubectl get pods -n dev
Once all the Pods are up and running, you should be able to test the dev environment in a browser at
<Control Plane Node Public IP>:30082
XI. Logs#
Treat logs as event streams
Logs with K8s Container Logging#
Reference
Logging Architecture
Kubectl cheatsheet
K8s captures log data written to stdout by containers. We can use the K8s API, kubectl logs, or external tools to interact with container logs.
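A few handy variations of kubectl logs (all standard flags; the label and container names are the ones from this example app):
# Follow the log stream live
kubectl logs -f <Pod name> -n production -c my-app-server
# Show only the last 100 lines
kubectl logs <Pod name> -n production -c my-app-server --tail=100
# Aggregate logs from all Pods with the app=my-app label
kubectl logs -l app=my-app -n production --all-containers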
Edit the source code for the server, e.g. src/server/index.js. There is a log function that writes to a file. Change this function to simply write log data to the console:
log = function(data) {
console.log(data);
}
Build a new server image because we changed the source code:
docker build -t <Your Docker Hub username>/my-app-server:0.0.4 --target server .
Push the image:
docker push <Your Docker Hub username>/my-app-server:0.0.4
Deploy the new code. Edit my-app.yml and change the image version for the server to the new image:
containers:
- name: my-app-server
image: <Your Docker Hub username>/my-app-server:0.0.4
kubectl apply -f my-app.yml -n production
Get a list of Pods:
kubectl get pods -n production
Copy the name of one of the my-app deployment Pods and view its logs, specifying the Pod, namespace, and container:
kubectl logs <Pod name> -n production -c my-app-server
XII. Admin processes#
Run admin/management tasks as one-off processes
Admin Processes with Jobs#
A Job, e.g. a database migration, runs a container until its execution completes. Jobs handle retrying execution if it fails. Jobs have a restartPolicy of Never because once they complete they don't run again.
This example adds the administrative job to the server image but you could package it into its own image.
Edit the source code for the admin process, e.g. src/jobs/deDeuplicateJob.js. Locate the block of code that begins with // Setup MongoDb backing database.
Change the code to make the database connection configurable:
// Setup MongoDb backing database
const MongoClient = require('mongodb').MongoClient
// MongoDB credentials
const username = encodeURIComponent(process.env.MONGODB_USER || "my-app_user");
const password = encodeURIComponent(process.env.MONGODB_PASSWORD || "ILoveTheList");
// MongoDB connection info
const mongoPort = process.env.MONGODB_PORT || 27017;
const mongoHost = process.env.MONGODB_HOST || 'localhost';
// MongoDB connection string
const mongoURI = `mongodb://${username}:${password}@${mongoHost}:${mongoPort}/my-app`;
const mongoURISanitized = `mongodb://${username}:****@${mongoHost}:${mongoPort}/my-app`;
console.log("MongoDB connection string %s", mongoURISanitized);
const client = new MongoClient(mongoURI);
Edit the Dockerfile to include the admin job code in the server image.
Add the following line after the other COPY directives for the server image:
...
COPY --from=build /usr/src/app/src/jobs .
Build and push the server image:
docker build -t <Your Docker Hub username>/my-app-server:0.0.5 --target server .
docker push <Your Docker Hub username>/my-app-server:0.0.5
Create a Kubernetes Job to run the admin job:
Create de-duplicate-job.yml. Supply your Docker Hub username in the image tag:
apiVersion: batch/v1
kind: Job
metadata:
name: de-duplicate
spec:
template:
spec:
containers:
- name: my-app-server
image: <Your Docker Hub username>/my-app-server:0.0.5
command: ["node", "deDeuplicateJob.js"]
env:
- name: MONGODB_HOST
valueFrom:
configMapKeyRef:
name: my-app-config
key: mongodb.host
- name: MONGODB_PORT
valueFrom:
configMapKeyRef:
name: my-app-config
key: mongodb.port
- name: MONGODB_USER
valueFrom:
secretKeyRef:
name: my-app-secure-config
key: mongodb.username
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: my-app-secure-config
key: mongodb.password
restartPolicy: Never
backoffLimit: 4
Run the Job:
kubectl apply -f de-duplicate-job.yml -n production
Check the Job status:
kubectl get jobs -n production
kubectl get pods -n production
Use the Pod name to view the logs for the Job Pod:
kubectl logs <Pod name> -n production
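Once the Job has completed, it can be deleted; deleting the Job also removes its Pod (the Job name comes from the manifest above):
kubectl delete job de-duplicate -n production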
Kubernetes Dashboard#
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
cat << EOF > dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
EOF
kubectl apply -f dashboard-adminuser.yaml
kubectl proxy
# Kubectl will make Dashboard available at
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl -n kubernetes-dashboard create token admin-user
# Now copy the token and paste it into the Enter token field on the login screen.
Helm#
You can package your charts and publish them on a repository, which can be any HTTP server. Use the name of the folder containing your Chart.yaml file.
helm package ./mychart
Move the packaged chart into your charts repo (here hosted on GitHub) and regenerate the repository index:
mv *.tgz ~/github/USER/charts
helm repo index ~/github/USER/charts
# [commit and push to GitHub]
You can then list your charts repo on the CNCF Artifact Hub. You can search for charts on the website or via the command line:
helm search hub [KEYWORD] --list-repo-url --max-col-width [uint] -o [table|json|yaml]
Example: search for speedtest charts in the default hub (Artifact Hub):
helm search hub speedtest --list-repo-url --max-col-width 55 -o table
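To consume charts from a repository listed on Artifact Hub, add the repo and search it locally. The Bitnami repo is shown here because its RabbitMQ chart comes up below:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/rabbitmq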
Referencing services#
"Normal" (not headless) Services are assigned DNS A and/or AAAA records, depending on the IP family or families of the Service, with a name of the form:
my-svc.my-namespace.svc.cluster-domain.example
.
This resolves to the cluster IP of the Service.
Headless Services (without a cluster IP) are also assigned DNS A and/or AAAA records, with a name of the form my-svc.my-namespace.svc.cluster-domain.example.
Unlike normal Services, this resolves to the set of IPs of all of the Pods selected by the Service.
Clients are expected to consume the set or else use standard round-robin selection from the set.
By default, a client Pod's DNS search list includes the Pod's own namespace and the cluster's default domain.
A pod in another namespace can resolve either <service-name>.<namespace> or <service-name>.<namespace>.svc.cluster.local.
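A quick way to sanity-check these DNS names from inside the cluster is a throwaway busybox Pod (assuming the default cluster.local cluster domain and the service/namespace names used in this example):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-app-svc.production.svc.cluster.local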
About Bitnami charts#
They create a default fully qualified app name, common.names.fullname. The RabbitMQ Service uses it as its name. They truncate it at 63 characters because some Kubernetes name fields are limited to this (by the DNS naming spec). If the release name contains the chart name, it will be used as the full name. Otherwise, by default it's <Release Name>-<Chart Name>.