Deploying Charmed Kubernetes with OpenStack Integrator

Introduction

Charmed Kubernetes is Canonical's Kubernetes distribution, which uses Juju to deploy Kubernetes to a wide range of environments.

This guide will explain how to deploy Charmed Kubernetes onto OpenStack, and how to use the OpenStack Integrator to leverage OpenStack-provided Persistent Volumes and Load Balancers for use by Kubernetes.

Set up the OpenStack cloud and Juju controller

juju add-cloud --client openstack

Enter the following information when prompted (alternatively, the cloud can be defined in a YAML file, as shown in the sketch after this list):

  • cloud type: openstack
  • endpoint: your Keystone API endpoint URL
  • cert path: none
  • auth type: userpass
  • region: RegionOne (default value)
  • API endpoint URL for the region: leave blank to reuse the cloud endpoint
  • Enter another region? (y/N): N
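
Alternatively, the same cloud can be defined in a YAML file and added non-interactively; a minimal sketch, assuming a hypothetical Keystone endpoint (replace it with your own) saved as openstack-cloud.yaml:

clouds:
  openstack:
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: https://keystone.example.com:5000/v3

juju add-cloud --client openstack ./openstack-cloud.yaml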

Add OpenStack credentials

juju autoload-credentials
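
juju autoload-credentials typically picks up the standard OS_* environment variables, so source your OpenStack RC file first. A minimal sketch, assuming the RC file is saved at ~/openrc (hypothetical path); if nothing is detected, fall back to the interactive juju add-credential:

source ~/openrc                 # exports OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, ...
juju autoload-credentials
juju add-credential openstack   # fallback: enter the same values interactively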

Upload image

juju deploy glance-simplestreams-sync --to 0 --channel 2023.2/stable --config use_swift=false
juju integrate glance-simplestreams-sync:identity-service keystone:identity-service
juju integrate glance-simplestreams-sync:certificates vault:certificates
juju run glance-simplestreams-sync/leader sync-images

You can obtain the image ID with the following command:

openstack image list
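
If you prefer to script the next step, you can capture the image ID directly instead of copying it by hand; this sketch assumes the synced image name contains "jammy":

export IMAGE=$(openstack image list -f value -c ID -c Name | awk '/jammy/ {print $1; exit}')
echo $IMAGE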

Generate image metadata

mkdir ~/simplestreams
export IMAGE=<IMAGE_ID>
juju metadata generate-image -d ~/simplestreams -i $IMAGE -s jammy -r RegionOne -u <OPENSTACK_API_ENDPOINT>

Set up a private network

openstack network create --internal user1_net

openstack subnet create --network user1_net --dns-nameserver 8.8.8.8 \
   --subnet-range 192.168.0.0/24 \
   --allocation-pool start=192.168.0.10,end=192.168.0.99 \
   user1_subnet
openstack router create user1_router
openstack router add subnet user1_router user1_subnet
openstack router set user1_router --external-gateway ext_net
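
The bootstrap command in the next step also needs the ID of the external (provider) network; you can look it up with:

openstack network list --external
# or capture it directly, assuming the external network is named ext_net
EXT_NET_ID=$(openstack network show -f value -c id ext_net)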

Create Juju controller on OpenStack

juju bootstrap --debug \
   --config network=user1_net \
   --config external-network=<external_network_id> \
   --bootstrap-constraints allocate-public-ip=true \
   --bootstrap-constraints instance-type=m1.small \
   --bootstrap-series jammy \
   --metadata-source $HOME/simplestreams/ \
   openstack openstack

While bootstrap is running, open another terminal and assign a floating IP to the bootstrap instance so that client machines can reach the Juju controller:

FLOATING_IP=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack server add floating ip <server_id> $FLOATING_IP
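
If you need to find the <server_id> of the bootstrap instance, list the servers in the project; this assumes the controller is the only instance whose name starts with juju-:

openstack server list -f value -c ID -c Name | grep juju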

Deploy Charmed Kubernetes

Add new Juju model

juju add-model --config default-series=jammy k8s openstack
juju switch openstack:k8s

Create openstack-overlay.yaml

description: Charmed Kubernetes overlay to add native OpenStack support.
applications:
  kubeapi-load-balancer: null
  openstack-integrator:
    annotations:
      gui-x: "600"
      gui-y: "300"
    charm: openstack-integrator
    num_units: 1
    constraints: "cores=1 mem=1G root-disk=15G"
    trust: true
relations:
  - ['openstack-integrator', 'kubernetes-control-plane:openstack']
  - ['openstack-integrator', 'kubernetes-worker:openstack']
  - ['openstack-integrator', 'kubernetes-control-plane:loadbalancer']

Create cilium-overlay.yaml

description: Charmed Kubernetes overlay to add Cilium CNI.
applications:
  calico: null
  cilium:
    charm: cilium
  kubernetes-control-plane:
    options:
      allow-privileged: "true"
      sysctl: &sysctl "{net.ipv4.conf.all.forwarding: 1, net.ipv4.conf.all.rp_filter: 0, net.ipv4.neigh.default.gc_thresh1: 128, net.ipv4.neigh.default.gc_thresh2: 28672, net.ipv4.neigh.default.gc_thresh3: 32768, net.ipv6.neigh.default.gc_thresh1: 128, net.ipv6.neigh.default.gc_thresh2: 28672, net.ipv6.neigh.default.gc_thresh3: 32768, fs.inotify.max_user_instances: 8192, fs.inotify.max_user_watches: 1048576, kernel.panic: 10, kernel.panic_on_oops: 1, vm.overcommit_memory: 1}"
  kubernetes-worker:
    options:
      sysctl: *sysctl
relations:
- [cilium:cni, kubernetes-control-plane:cni]
- [cilium:cni, kubernetes-worker:cni]

Deploy Kubernetes

juju deploy charmed-kubernetes --channel=1.28/stable --overlay openstack-overlay.yaml --trust --overlay cilium-overlay.yaml

If resources are limited, you can use the smaller kubernetes-core bundle for testing instead:

juju deploy kubernetes-core --channel=1.28/stable --overlay openstack-overlay.yaml --trust --overlay cilium-overlay.yaml
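
Either bundle takes a while to settle; you can follow progress with a simple watch loop:

watch -n 10 juju status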

Note that Charmed Kubernetes ships with predefined instance constraints for its applications, so OpenStack must offer flavors that can satisfy them. The constraints can be overridden in advance via an additional overlay, for example:

applications:
  "kubernetes-worker":
    num_units: 1
    constraints: cores=2 mem=4G root-disk=20G
  "kubernetes-control-plane":
    num_units: 1
    constraints: cores=2 mem=4G root-disk=20G
  "etcd":
    num_units: 1
    constraints: "cores=1 mem=2G root-disk=20G"
  "easyrsa":
    num_units: 1
    constraints: "cores=1 mem=1G root-disk=15G"

Once the deployment completes, the juju status output will look something like this (using kubernetes-core as an example):

Model  Controller  Cloud/Region         Version  SLA          Timestamp
k8s    openstack   openstack/RegionOne  3.1.6    unsupported  00:46:56Z

App                       Version        Status  Scale  Charm                     Channel      Rev  Exposed  Message
cilium                    1.12.5,1.12.5  active      2  cilium                    stable        24  no       Ready
containerd                1.6.8          active      2  containerd                1.28/stable   73  no       Container runtime available
easyrsa                   3.0.1          active      1  easyrsa                   1.28/stable   48  no       Certificate Authority connected.
etcd                      3.4.22         active      1  etcd                      1.28/stable  748  no       Healthy with 1 known peer
kubernetes-control-plane  1.28.4         active      1  kubernetes-control-plane  1.28/stable  321  yes      Kubernetes control-plane running.
kubernetes-worker         1.28.4         active      1  kubernetes-worker         1.28/stable  134  yes      Kubernetes worker running.
openstack-integrator      yoga           active      1  openstack-integrator      stable        69  no       Ready

Unit                         Workload  Agent  Machine  Public address  Ports       Message
easyrsa/0*                   active    idle   0/lxd/0  252.82.3.157                Certificate Authority connected.
etcd/0*                      active    idle   0        192.168.0.82    2379/tcp    Healthy with 1 known peer
kubernetes-control-plane/0*  active    idle   0        192.168.0.82    6443/tcp    Kubernetes control-plane running.
  cilium/1*                  active    idle            192.168.0.82                Ready
  containerd/1*              active    idle            192.168.0.82                Container runtime available
kubernetes-worker/0*         active    idle   1        192.168.0.68    80,443/tcp  Kubernetes worker running.
  cilium/0                   active    idle            192.168.0.68                Ready
  containerd/0               active    idle            192.168.0.68                Container runtime available
openstack-integrator/1*      active    idle   3        192.168.0.52                Ready

Machine  State    Address       Inst id                               Base          AZ    Message
0        started  192.168.0.82  91545e2c-0bbc-475d-9528-fd4742efa0b3  ubuntu@22.04  nova  ACTIVE
0/lxd/0  started  252.82.3.157  juju-572a8e-0-lxd-0                   ubuntu@22.04  nova  Container started
1        started  192.168.0.68  4c3aaf88-05fc-4de2-95fb-d7abaf75535d  ubuntu@22.04  nova  ACTIVE
3        started  192.168.0.52  386403bf-ed3d-4efd-8206-4d77693a7e29  ubuntu@22.04  nova  ACTIVE

Retrieve kubeconfig

juju ssh kubernetes-control-plane/leader -- cat config > ~/.kube/config
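
If ~/.kube does not exist yet, create it with mkdir -p ~/.kube before running the command above. Then confirm that the client can reach the cluster (assuming kubectl is installed locally):

kubectl get nodes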

At this point, kubectl get pods -A should include the following pods:

ubuntu@juju-572a8e-k8s-0:~$ kubectl get pods -A
NAMESPACE                         NAME                                                      READY   STATUS      RESTARTS   AGE
ingress-nginx-kubernetes-worker   default-http-backend-kubernetes-worker-5c79cc75ff-cvqw7   1/1     Running     0          14m
ingress-nginx-kubernetes-worker   nginx-ingress-controller-kubernetes-worker-bc7zc          1/1     Running     0          12m
kube-system                       cilium-7ndz7                                              1/1     Running     0          14m
kube-system                       cilium-operator-577bfbbd5b-5fmvj                          1/1     Running     0          14m
kube-system                       cilium-operator-577bfbbd5b-8d4m4                          1/1     Running     0          14m
kube-system                       cilium-zb7dp                                              1/1     Running     0          14m
kube-system                       coredns-59cfb5bf46-6tpcg                                  1/1     Running     0          16m
kube-system                       csi-cinder-controllerplugin-684cfb8c48-6qcxp              6/6     Running     0          16m
kube-system                       csi-cinder-nodeplugin-7pxjl                               3/3     Running     0          14m
kube-system                       csi-cinder-nodeplugin-wsp9z                               3/3     Running     0          15m
kube-system                       hubble-generate-certs-394f790584-t7j48                    0/1     Completed   0          16m
kube-system                       kube-state-metrics-78c475f58b-8cjvv                       1/1     Running     0          16m
kube-system                       metrics-server-v0.6.3-69d7fbfdf8-xc2xv                    2/2     Running     0          16m
kube-system                       openstack-cloud-controller-manager-gdgng                  1/1     Running     0          2m24s
kubernetes-dashboard              dashboard-metrics-scraper-5dd7cb5fc-bjq29                 1/1     Running     0          16m
kubernetes-dashboard              kubernetes-dashboard-7b899cb9d9-kxmmt                     1/1     Running     0          16m

Test OpenStack Integrator

Finally, verify that the OpenStack Integrator is working properly.

Storage Integration

Create PVC

kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: cdk-cinder
EOY

Run kubectl get pv; you should see that a corresponding PV has been created:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-d302df77-7cbc-4a7b-af7f-5373f91abbd3   1Gi        RWO            Delete           Bound    default/testclaim   cdk-cinder              15s

Run openstack volume list; you can see that Cinder has created a backing volume:

+--------------------------------------+------------------------------------------+-----------+------+-------------+
| ID                                   | Name                                     | Status    | Size | Attached to |
+--------------------------------------+------------------------------------------+-----------+------+-------------+
| 37734a31-5786-48c2-9757-f4782e6cdfd6 | pvc-d302df77-7cbc-4a7b-af7f-5373f91abbd3 | available |    1 |             |
+--------------------------------------+------------------------------------------+-----------+------+-------------+
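
To confirm that the volume can actually be attached and written to, you can run a short-lived test pod that mounts the claim; a minimal sketch (the pod name and image are arbitrary):

kubectl create -f - <<EOY
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && cat /data/hello && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: testclaim
EOY

Once the pod is Running, openstack volume list should show the volume as in-use.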

Load Balancer Integration

Create test pods and expose via Load Balancer

kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0
kubectl scale deployment hello-world --replicas=5
kubectl expose deployment hello-world --type=LoadBalancer --name=hello --port=8080

At this point a load balancer is created. Check the external IP with kubectl get svc hello -o wide:

NAME    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE     SELECTOR
hello   LoadBalancer   10.152.183.41   192.168.99.144   8080:30777/TCP   6m16s   app=hello-world

Access the service via the external IP

curl 192.168.99.144:8080
Hello Kubernetes!

You can also confirm on the OpenStack side that the load balancer has been created:

openstack loadbalancer list
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| id                                   | name                                                                   | project_id                       | vip_address  | provisioning_status | operating_status | provider |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| 4cb1c8da-3c71-4fcf-9b13-23f6f21e0336 | openstack-integrator-5a087e572a8e-kubernetes-control-plane             | 4badc745662a485b8957de81ae403ee2 | 192.168.0.78 | ACTIVE              | ONLINE           | ovn      |
| 5cc3a0ce-b798-4b38-a1aa-33f637327560 | kube_service_kubernetes-df70v6ftc5r56zmdyd68zps0cwdmizal_default_hello | 4badc745662a485b8957de81ae403ee2 | 192.168.0.46 | ACTIVE              | ONLINE           | ovn      |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
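
Once the tests pass, you can clean up; the load balancer and the Cinder volume are removed automatically when the corresponding Kubernetes objects are deleted (the storage class uses a Delete reclaim policy, as shown above):

kubectl delete pod pvc-test        # only if you created the test pod above
kubectl delete pvc testclaim
kubectl delete svc hello
kubectl delete deployment hello-world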

Charmed Kubernetes deploys smoothly and comes with useful add-ons pre-installed, such as ingress-nginx, but from a configuration standpoint, it feels no better than Kops.

Want to learn how to deploy Kubernetes with Kops? Check out this article.
