Kubernetes Cluster Setup

Basic Requirements

You have to configure your OpenStack CLI as described in the dedicated section. The examples are based on this configuration; if your configuration is different, adapt them accordingly.

You will also have to install the Magnum client. Magnum is the OpenStack component that manages Kubernetes provisioning.

$ source ~/.venv/openstack-cli/bin/activate
(openstack-cli) $ pip install python-magnumclient

Danger

Provisioning a k8s cluster on Cloud@VD requires additional privileges that are not granted to normal users. Contact us for more details.

Danger

Magnum uses a specific API to store trusts; unfortunately, trusts cannot be managed with application credentials. To deploy a k8s cluster through Magnum, you have to store your login information instead of an application credential in your clouds-public.yaml and clouds.yaml files, as in the following example. For security reasons, it is recommended not to store your password in the file, but you will then have to type it for every openstack command.

$ cat ~/.config/openstack/clouds-public.yaml
[...]
virtualdata-pwd:  
    auth:
        auth_url: https://keystone.lal.in2p3.fr:5000/v3
    identity_api_version: 3
$ cat ~/.config/openstack/clouds.yaml
[...]
  pwd:
    cloud: virtualdata-pwd
    auth:
      domain_name: u-psud
      project_name: <my-project>
      username: <username>

Cluster Configuration

Templates

Magnum uses templates to define the kind of k8s infrastructure you want to start. By default, VirtualData provides a set of public templates. You can, of course, tweak those templates, but this is not documented on this page. Our templates follow our own naming convention k8s-<version>-<node-os-version>[-lb] where:

  • version is the k8s version
  • node-os-version is the base OS of the k8s nodes
  • -lb is present if a load balancer is configured in front of k8s

To list the available templates, run the following command:

$ openstack coe cluster template list
+--------------------------------------+----------------------+------+
| uuid                                 | name                 | tags |
+--------------------------------------+----------------------+------+
[...]
| 4cad1f92-dc10-434a-966a-7166fb6db3e4 | k8s-1.30-coreos40-lb | None |
+--------------------------------------+----------------------+------+

Auto-scaling

Magnum supports k8s auto-scaling. This means that Magnum monitors k8s usage and starts (or stops) a node when more (or fewer) resources are required. When starting a cluster, you have to specify the minimum and maximum number of nodes for this cluster.

Note

Each node consumes resources from your project quota when it is started. Make sure enough resources are available or the scaling will fail.

Multi-master

Kubernetes supports multi-master configurations. At least 2 masters are mandatory to allow rolling upgrades of the Kubernetes infrastructure.

Our first Kubernetes infrastructure

Starting our cluster

openstack coe cluster create my-k8s       \
  --cluster-template k8s-1.30-coreos40-lb \
  --keypair          vd-key               \
  --master-count     2                    \
  --node-count       2                    \
  --timeout          30                   \
  --labels           min_node_count=2     \
  --labels           max_node_count=5     \
  --merge-labels

Get your k8s status

OpenStack provides two monitoring values:

  • status, which is the status of the Kubernetes infrastructure from the OpenStack point of view. If it is CREATE_COMPLETE, all virtual machines, Cinder volumes and load balancers are up and running.
  • health_status, which is the status from the k8s point of view. If it is HEALTHY, all pods are running and the cluster is usable.

$ openstack coe cluster show my-k8s -c status -c health_status
+---------------+-----------------+
| Field         | Value           |
+---------------+-----------------+
| status        | CREATE_COMPLETE |
| health_status | HEALTHY         |
+---------------+-----------------+

Retrieve k8s credentials

The credentials to access k8s are generated by OpenStack during cluster creation and are independent of your Cloud@VD credentials. Once your cluster is running, retrieve them to access your cluster.

$ mkdir -p ~/.k8s/my-k8s
$ openstack coe cluster config my-k8s --dir ~/.k8s/my-k8s/
$ export KUBECONFIG=~/.k8s/my-k8s/config
$

Get k8s cluster information

To check your k8s cluster status, just run:

$ kubectl cluster-info
Kubernetes control plane is running at https://<ip-add>:6443
CoreDNS is running at https://<ip-add>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

OpenStack resources for k8s

Of course, all k8s resources run on OpenStack. You can list the underlying OpenStack resources, such as the load balancers and the servers.

Note

To use the openstack loadbalancer commands, you have to install the Octavia client with pip install python-octaviaclient

$ openstack loadbalancer list
+---------+-------------+------------+----------+---------+---------+---------+
| id      | name        | project_id | vip[...] | pr[...] | op[...] | pr[...] |
+---------+-------------+------------+----------+---------+---------+---------+
| 68[...] | my-k8s[...] | 5e426[...] | 10.[...] | ACTIVE  | ONLINE | amphora  |
| 80[...] | my-k8s[...] | 5e426[...] | 10.[...] | ACTIVE  | ONLINE | amphora  |
+---------+-------------+------------+----------+---------+---------+---------+
$  openstack server list
+-------+-----------------------+-+-----------------------------+---------+-+
| ID    | Name                  | | Networks                    | Image   | |
+-------+-----------------------+-+-----------------------------+---------+-+
| [...] | my-k8s[...]-node-0    | | my-k8s=10.[...], 157.[...]  | fe[...] | |
| [...] | my-k8s-[...]-node-1   | | my-k8s=10.[...], 157.[...]  | fe[...] | |
| [...] | my-k8s-[...]-master-1 | | my-k8s=10.[...], 157.[...]  | fe[...] | |
| [...] | my-k8s-[...]-master-0 | | my-k8s=10.[...], 157.[...]  | fe[...] | |
+-------+-----------------------+-+-----------------------------+---------+-+

Deploying an application

For this documentation, we will start a basic web service with an nginx frontend and a MySQL database. A set of templates is available in a GitLab repository; you can clone it and use it as an example.

The templates have been created to work out of the box with Cloud@VD but can be used on any OpenStack infrastructure. Some parts of the .yml files are OpenStack specific, as k8s needs to provision volumes and ingress control through OpenStack services.

Nginx

nginx installation

In this section, we will install Nginx in your Kubernetes cluster using a Deployment. The Deployment configuration defines the number of replicas, container image, ports, and resource limits for the nginx containers.

nginx deployment

We will use the nginx/deployment.yml file from the repository to create the nginx deployment.
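
The repository file is the reference; as a sketch, it looks roughly like the following (the app label name is an assumption, the replica count, image and resource requests match the values used later in this tutorial):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx            # label name is an assumption; keep it consistent with the service
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"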

$ kubectl apply -f nginx/deployment.yml
$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           5m17s
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-785d6689b-2twqw   1/1     Running   0          15m
nginx-785d6689b-j4j88   1/1     Running   0          15m

The Deployment starts two nginx pods based on the nginx:latest Docker image, but the service is not yet available outside the k8s cluster. The next step of our tutorial is to expose the HTTP port (80).

Expose nginx service

After creating the deployment, apply the service configuration file to expose nginx to the network.
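
A minimal sketch of what nginx/service.yml may contain (a LoadBalancer service on port 80; the selector is an assumption and must match the deployment's pod labels):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer        # Octavia provisions a load balancer for this service
  selector:
    app: nginx              # assumption: must match the deployment's pod labels
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP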

$ kubectl apply -f nginx/service.yml
$ kubectl get service
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
[...]
nginx        LoadBalancer   10.254.108.111   <pending>     80:30966/TCP   5s

If you query the service status just after the apply, the external IP will show as <pending>. You have to wait a few minutes for the external IP to become available.

$ kubectl get service  
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
[...]
nginx        LoadBalancer   10.254.108.111   157.136.xxx.yyy   80:30966/TCP   2m33s

By following these steps, you will have a fully functional nginx deployment in your Kubernetes cluster. The deployment ensures that two replicas of the nginx container are running, and the service exposes nginx to handle incoming traffic.

You can test it with the curl command:

$ curl http://157.136.xxx.yyy > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   615  100   615    0     0  31185      0 --:--:-- --:--:-- --:--:-- 32368

Custom HTML pages

We will create a ConfigMap to hold the HTML content and mount it into the nginx pods. This will overwrite the default index.html file provided by the nginx:latest Docker image.
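
A sketch of what nginx/configmap.yml may look like, given the ConfigMap name and the page content used later in this tutorial:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-index
data:
  index.html: |
    Hi VirtualData !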

$ kubectl apply -f nginx/configmap.yml
$ kubectl get ConfigMap  
NAME               DATA   AGE
kube-root-ca.crt   1      11d
nginx-index        1      50s

To link the ConfigMap containing the HTML content to the nginx pods, you need to modify the deployment. To do that, you can use nginx/deployment-with-config-map.yml which is based on nginx/deployment.yml.

$ diff -u nginx/deployment.yml nginx/deployment-with-config-map.yml
--- nginx/deployment.yml        2024-09-28 19:21:27
+++ nginx/deployment-with-config-map.yml        2024-09-28 19:24:54
@@ -22,3 +22,10 @@
           requests:
             memory: "64Mi"
             cpu: "250m"
+        volumeMounts:
+        - name: nginx-index
+          mountPath: /usr/share/nginx/html
+      volumes:
+      - name: nginx-index
+        configMap:
+          name: nginx-index
$ kubectl apply -f nginx/deployment-with-config-map.yml
deployment.apps/nginx configured

Now, the nginx service serves your custom index.html file.

$ curl http://157.136.xxx.yyy
Hi VirtualData !
$

OpenStack resources for a deployment

Of course, compute resources are provided by the my-k8s-[...]-node-x virtual machines, but the LoadBalancer service for nginx has been started by k8s as an OpenStack Octavia service.

$ openstack loadbalancer list
+---------+-------------------------------------+-+-------------+-+-+----------+
| id      | name                                | | vip_address | | | provider |
+---------+-------------------------------------+-+-------------+-+-+----------+
| 79[...] | kube_service_[...]_default_nginx    | | 10.[...]    | | | amphora  |
+---------+-------------------------------------+-+-------------+-+-+----------+

MySQL

Danger

This part and the following ones have not been tested.

Installation

In this section, we will install MySQL in your Kubernetes cluster using a Deployment. The Deployment configuration defines the container image, ports, and resource limits for the MySQL containers. Additionally, it will include persistent storage to ensure data durability.

To create the MySQL Deployment, apply the mysql/deployment.yml file from the repository.
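
As a sketch, such a deployment might look like the following; the image tag, Secret name and PersistentVolumeClaim name are illustrative assumptions, not the repository content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret      # hypothetical Secret holding the root password
              key: password
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-pvc        # hypothetical PVC provisioned as a Cinder volume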

$ kubectl apply -f mysql/deployment.yml
$

Finally, apply the Service configuration to expose MySQL:

kubectl apply -f mysql/service.yml
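
For reference, mysql/service.yml could be as simple as a ClusterIP service on port 3306 (the selector is an assumption and must match the deployment labels):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP           # internal only; the database does not need a public IP
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306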

MySQL Configuration

After applying the deployment, you can verify that the MySQL pods are running by executing:

kubectl get deployments
kubectl get pods

To check if the MySQL service is running correctly and exposed, you can use the following command:

kubectl get services

Ingress Controller Installation

Problem

Currently, in our example, our nginx server uses one public IP of its own. If we multiply the number of web sites, we will end up consuming one IP per site, leaving fewer IPs available. To address this, we will set up a reverse proxy using the Octavia Ingress Controller to direct users to the correct application based on the URL used.

Installing the Octavia Ingress Controller

Exposure with a reverse proxy

We will install the Octavia Ingress Controller in the cluster and configure it to handle HTTP(S) traffic, routing requests to the appropriate services based on the URL.

Compatibility matrix

To use the Octavia Ingress Controller, the Kubernetes cluster has to meet the following requirements:

  • Communication between Octavia Ingress Controller and Octavia is needed.
  • Octavia stable/queens or higher version is required because of features such as bulk pool members operation.
  • OpenStack Key Manager (Barbican) service is required for TLS Ingress, otherwise Ingress creation will fail.

Install Octavia Ingress Controller

Follow the instructions provided in the Octavia Ingress Controller documentation to install the controller.

Apply the service account configuration defined in serviceaccount.yaml for the Octavia Ingress Controller.

kubectl apply -f octavia-ingress-controller/serviceaccount.yaml

Note

Apply the configuration settings specified in config.yaml for the Octavia Ingress Controller.

$ kubectl apply -f octavia-ingress-controller/config.yaml
$

Deploy the Octavia Ingress Controller using the settings defined in deployment.yaml.

$ kubectl apply -f octavia-ingress-controller/deployment.yaml
$

Apply the Ingress resource configuration defined in ingress.yaml to configure traffic routing for the Octavia Ingress Controller. In this file, you have to define the URL used to access the service, as in the sketch below.
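
A minimal sketch of such an Ingress resource (the host, resource name and ingress class are assumptions; the class must match what the controller is configured to handle, see the Octavia Ingress Controller documentation):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "openstack"   # assumption: class handled by the controller
spec:
  rules:
  - host: nginx-yh-server.ijclab.in2p3.fr      # the URL used to reach the service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx                        # the nginx service created earlier
            port:
              number: 80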

$ kubectl apply -f octavia-ingress-controller/ingress.yaml
$

Load Balancer Configuration

When the reverse proxy is deployed, it dynamically obtains a load balancer IP address from OpenStack. During the initial deployment of the reverse-proxy pods, it is recommended not to specify a fixed IP address for the load balancer in the configuration. Ensure that the IP address assigned by OpenStack is reachable and meets your requirements.

To find the IP address, use the following command:

kubectl get ingress

Add this IP address to the "octavia-ingress-controller/ingress.yaml" file.

Alternatively, you can directly access your nginx web server without using the octavia-ingress-controller. When launching the nginx service, a load balancer is automatically configured for nginx.

To retrieve the IP address of the nginx service, use:

$ kubectl get service
$

You can then make an HTTP request using the IP address of the nginx service.

In our tutorial, we will use the ingress controller to limit the use of IPs, so we are going to delete the load balancer on the nginx service by switching it to the NodePort type:

$ kubectl patch service nginx-service -p '{"spec": {"type": "NodePort"}}'
$

DNS

Now that an IP is allocated to the ingress-controller pods, we have to link the hostnames used in the ingress routes to that IP: create DNS records (CNAME or A records) pointing to the IP that OpenStack allocated for the ingress controller.

$ kubectl get ingress
$

TLS

Github documentation

kubectl create secret tls tls-secret \
  --cert nginx-yh-server.ijclab.in2p3.fr.crt \
  --key nginx-yh-server.ijclab.in2p3.fr.key
kubectl apply -f octavia-ingress-controller/default-backend.yaml
kubectl apply -f octavia-ingress-controller/ingress-update.yaml
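
For reference, the part of ingress-update.yaml that enables TLS might look like this excerpt (the host and secret name come from the commands above; the rest of the Ingress resource is unchanged):

spec:
  tls:
  - hosts:
    - nginx-yh-server.ijclab.in2p3.fr
    secretName: tls-secret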

Now we can access http://nginx-yh-server.ijclab.in2p3.fr and https://nginx-yh-server.ijclab.in2p3.fr

Dynamically manage our cluster with k9s

GitHub Project k9s

Installation 1

Installation with snap
snap install k9s --devmode

Installation 2

Download the binary

For example, if you are using a 64-bit Linux system:

wget https://github.com/derailed/k9s/releases/download/v0.28.2/k9s_Linux_amd64.tar.gz
# Extract the binary
tar -xzf k9s_Linux_amd64.tar.gz
# Make it executable
chmod +x k9s
# Move it to a bin directory
sudo mv k9s /usr/local/bin/