Using k0s

Source: Imre kasutab arvutit

Introduction

TODO

How it works

TODO

Preparing the k0s host

# apt-get install apparmor iptables curl

Installation

The following describes several variations of installing a k0s system.

Installation - vanilla

Key points

  • the system runs without any k0s configuration file
  • no storage class solution for persistent volumes (PV) is installed as part of this setup
  • no ingress controller is installed as part of this setup

Per https://docs.k0sproject.io/v1.27.5+k0s.0/install/, installation and startup in the simplest case:

# curl -sSLf https://get.k0s.sh | sudo sh
Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.27.5+k0s.0/k0s-v1.27.5+k0s.0-amd64
k0s is now executable in /usr/local/bin
# k0s install controller --single
# k0s start
# k0s status
# k0s kubectl get nodes

After this, the k0s Kubernetes system is usable. For connecting over the network, e.g. with Lens Desktop or with the kubectl utility from a workstation, the credentials in the following file can be used:

/var/lib/k0s/pki/admin.conf
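For example, a minimal sketch of using that file from a workstation; the host address 192.168.10.164 and the target path are assumptions, and the cluster-side commands are shown commented out:

```shell
# Copy the k0s admin kubeconfig to the workstation and point kubectl at it.
# The host address below is hypothetical; adjust to your environment.
mkdir -p "$HOME/.kube"
# scp root@192.168.10.164:/var/lib/k0s/pki/admin.conf "$HOME/.kube/k0s-admin.conf"
export KUBECONFIG="$HOME/.kube/k0s-admin.conf"
# kubectl get nodes
```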

Stopping the k0s processes and removing them from the system:

# k0s stop
# k0s reset
# reboot

Installation - vanilla + openebs

Key points

First, a k0s configuration file is created:

# mkdir /etc/k0s
# k0s config create > /etc/k0s/k0s.yaml

and the k0s.yaml file then includes, among other things, the following section (these directives do not necessarily appear literally one after another; they are spread across the file as appropriate):

spec:
  extensions:
    storage:
      type: openebs_local_storage
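For reference, a minimal complete configuration file containing just this extension might look as follows (a sketch; field names per the k0s documentation):

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  extensions:
    storage:
      type: openebs_local_storage
```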

Starting the k0s system proceeds much as in the vanilla case, but the install must be done with the -c option:

# k0s install controller --single -c /etc/k0s/k0s.yaml

The expected result is that the openebs storage classes exist in the system:

# k0s kubectl get storageclass
NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device     openebs.io/local   Delete          WaitForFirstConsumer   false                  6d2h
openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  6d2h

The storage class can be used, for example, with the following nginx YAML:

apiVersion: v1
kind: Namespace
metadata:
  name: web

---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath
  resources:
    requests:
      storage: 512Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: web
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx 
        name: nginx
        volumeMounts:
        - name: persistent-storage
          mountPath: /var/lib/nginx
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: nginx-pvc

To deploy, run:

# kubectl apply -f create-pvc.yaml

As a result, the deployment is created; in particular, PV/PVC resources are used (a PV is not a namespaced resource):

# k0s kubectl get pvc -n web
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
nginx-pvc   Bound    pvc-4bba23d7-eeb6-4485-b1df-b2b4c6657665   512Mi      RWO            openebs-hostpath   58s

# k0s kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS       REASON   AGE
pvc-4bba23d7-eeb6-4485-b1df-b2b4c6657665   512Mi      RWO            Delete           Bound    web/nginx-pvc   openebs-hostpath            49s

To delete, run:

# kubectl delete -f create-pvc.yaml

Installation - vanilla + metallb

Key points

For installation, additionally include the following section in the custom k0s configuration file:

spec:
  extensions:
    helm:
      repositories:
      - name: metallb
        url: https://metallb.github.io/metallb
      charts:
      - name: metallb
        chartname: metallb/metallb
        namespace: metallb

On success, additional Kubernetes custom resources appear:

# k0s kubectl api-resources | grep metall
addresspools                                   metallb.io/v1beta1                     true         AddressPool
bfdprofiles                                    metallb.io/v1beta1                     true         BFDProfile
bgpadvertisements                              metallb.io/v1beta1                     true         BGPAdvertisement
bgppeers                                       metallb.io/v1beta2                     true         BGPPeer
communities                                    metallb.io/v1beta1                     true         Community
ipaddresspools                                 metallb.io/v1beta1                     true         IPAddressPool
l2advertisements                               metallb.io/v1beta1                     true         L2Advertisement

and install k0s in the usual way, taking the custom configuration into account. Then create resources of type IPAddressPool and L2Advertisement, using a suitable IP range (an L2Advertisement without a spec advertises all address pools):

# cat metallb-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb
spec:
  addresses:
  - 192.168.10.120-192.168.10.124

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb

and apply it:

# k0s kubectl apply -f metallb-pool.yaml

As a result, the corresponding resources exist:

# k0s kubectl get IPAddressPool -n metallb
NAME         AGE
first-pool   9m51s

# k0s kubectl get  L2Advertisement -n metallb
NAME      AGE
example   9m54s

To deploy a service that uses MetalLB, the following manifest can be used:

# cat create-metallb-base-service.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.53-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

and run:

# k0s kubectl apply -f create-metallb-base-service.yaml

As a result, the service can be reached from a browser at http://192.168.10.120/ (the specific IP is chosen from within the pool however MetalLB happens to pick it); it can be queried like this:

# k0s kubectl get service -n web
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
web-server-service   LoadBalancer   10.106.230.54   192.168.10.120   80:32482/TCP   6s
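A quick reachability check from the workstation can be sketched as follows; the IP address is whichever one MetalLB assigned (here the EXTERNAL-IP from the output above), and the actual request is shown commented out:

```shell
# Probe the LoadBalancer-assigned address on port 80.
URL="http://192.168.10.120/"
# curl -sI "$URL" | head -n 1
echo "$URL"
```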

Installation - nginx ingress controller + NodePort

Key points

To install the solution, run:

# k0s kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/baremetal/deploy.yaml
# k0s kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-pmsdz        0/1     Completed   0          12m
ingress-nginx-admission-patch-7g225         0/1     Completed   0          12m
ingress-nginx-controller-5d45d7c8c4-rrntc   1/1     Running     0          12m
# k0s kubectl get services -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.93   <none>        80:30798/TCP,443:30764/TCP   13m
ingress-nginx-controller-admission   ClusterIP   10.104.36.218   <none>        443/TCP                      13m
# k0s kubectl -n ingress-nginx get ingressclasses
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       13m
# k0s kubectl -n ingress-nginx annotate ingressclasses nginx ingressclass.kubernetes.io/is-default-class="true"

For deployment, the following can be used:

# cat create-nodeport-based-ingress-service.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.53-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-server-ingress
  namespace: web
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-server-service
            port:
              number: 5000

To deploy, run:

# k0s kubectl apply -f create-nodeport-based-ingress-service.yaml

To observe the result, add the line '192.168.10.164 web.example.com' to the /etc/hosts file on the workstation; the site is then reachable via the dynamically assigned NodePort values shown in the output above.
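The dynamically assigned NodePort can also be queried directly rather than read off the service listing; a sketch, where the jsonpath filter selects the port entry whose service port is 80 (cluster command shown commented out):

```shell
# Extract the NodePort mapped to the ingress controller's HTTP port.
JSONPATH='{.spec.ports[?(@.port==80)].nodePort}'
# k0s kubectl -n ingress-nginx get service ingress-nginx-controller -o jsonpath="$JSONPATH"
echo "$JSONPATH"
```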

Installation - nginx ingress controller + LoadBalancer

Key points

Install the nginx ingress controller:

# k0s kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

check that the ingress controller pods are running:

# k0s kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-q6xb5        0/1     Completed   0          54s
ingress-nginx-admission-patch-48gkt         0/1     Completed   0          54s
ingress-nginx-controller-5c778bffff-5qpkh   1/1     Running     0          54s

and:

# k0s kubectl get services -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.36.202    <none>        80:30497/TCP,443:30424/TCP   3m57s
ingress-nginx-controller-admission   ClusterIP   10.108.143.186   <none>        443/TCP                      3m57s

and:

# k0s kubectl -n ingress-nginx get ingressclasses
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       4m51s

and:

# k0s kubectl -n ingress-nginx annotate ingressclasses nginx ingressclass.kubernetes.io/is-default-class="true"
ingressclass.networking.k8s.io/nginx annotated

Finally, adjust the nginx controller NodePort deployment: change the service type from NodePort to LoadBalancer:

# k0s kubectl edit service ingress-nginx-controller -n ingress-nginx
service/ingress-nginx-controller edited
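The same change can also be made non-interactively with kubectl patch instead of the editor; a sketch using a merge patch (cluster command shown commented out):

```shell
# Switch the ingress controller service type to LoadBalancer in one step.
PATCH='{"spec":{"type":"LoadBalancer"}}'
# k0s kubectl -n ingress-nginx patch service ingress-nginx-controller -p "$PATCH"
echo "$PATCH"
```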

and verify that the change took effect:

# k0s kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.105.213.93   192.168.10.120   80:30798/TCP,443:30764/TCP   20m
ingress-nginx-controller-admission   ClusterIP      10.104.36.218   <none>           443/TCP                      20m

where

  • note the EXTERNAL-IP value 192.168.10.120
  • from the workstation, browse to https://web.example.com/; this DNS name must resolve to the IP address 192.168.10.120.

To create the deployment, use:

# cat create-loadbalancer-based-ingress-service.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-2-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web-2
  template:
    metadata:
      labels:
        app: web-2
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.53-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-2-server-service
  namespace: web
spec:
  selector:
    app: web-2
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-2-server-ingress
  namespace: web
spec:
  ingressClassName: nginx
  rules:
  - host: web-2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-2-server-service
            port:
              number: 5000

To observe the result, add the line '192.168.10.120 web-2.example.com' to the /etc/hosts file on the workstation (the name must resolve to the load balancer's EXTERNAL-IP); the site is then reachable at

http://web-2.example.com/ https://web-2.example.com/

Installation - nginx ingress controller + hostport

Key points

A vanilla k0s system is a suitable starting point for this installation, i.e. no ingress controller and no MetalLB present. First, download the nginx ingress controller deploy.yaml file:

# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/baremetal/deploy.yaml

In the file, locate the existing Deployment section (there is only one), find a suitable place in it for the parameter 'hostNetwork: true', and add:

spec:
  template:
    spec:
      hostNetwork: true

Then deploy the ingress controller with the customized configuration:

# k0s kubectl apply -f deploy.yaml

As a result, the host listens on ports 80 and 443 (check e.g. with 'netstat -lnpt'). The expectation is that a browser on the workstation can open https://web.example.com/, with the DNS name resolving to the IP address 192.168.10.164 (i.e. the host's IP address).

TODO

Installation - traefik

Key points

To install the k0s software, run:

# curl -sSLf https://get.k0s.sh | sudo sh

where

  • TODO

Create a default configuration file:

# mkdir /etc/k0s
# k0s config create > /etc/k0s/k0s.yaml

Add to the configuration file:

  • openebs storage
  • metallb
  • traefik

The complete configuration file is shown below; the added sections are marked:

# cat /etc/k0s/k0s.yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  creationTimestamp: null
  name: k0s
spec:
  api:
    address: 192.168.10.182
    k0sApiPort: 9443
    port: 6443
    sans:
    - 192.168.10.182
    - fe80::9867:8bff:fef0:3754
    tunneledNetworkingMode: false
  controllerManager: {}

# added section starts here

  extensions:
    helm:
      repositories:
      - name: traefik
        url: https://traefik.github.io/charts
      - name: bitnami
        url: https://charts.bitnami.com/bitnami
      charts:
      - name: traefik
        chartname: traefik/traefik
        version: "20.5.3"
        namespace: default
      - name: metallb
        chartname: bitnami/metallb
        version: "2.5.4"
        namespace: default
        values: |2
          configInline:
            address-pools:
            - name: generic-cluster-pool
              protocol: layer2
              addresses:
              - 192.168.10.131-192.168.10.135
    storage:
      type: openebs_local_storage

# added section ends here

  installConfig:
    users:
      etcdUser: etcd
      kineUser: kube-apiserver
      konnectivityUser: konnectivity-server
      kubeAPIserverUser: kube-apiserver
      kubeSchedulerUser: kube-scheduler
...

Install (as in the earlier sections, add -c /etc/k0s/k0s.yaml if the configuration file is not picked up by default):

# k0s install controller --single

Start it (the install step above also created the systemd unit configuration):

# k0s start

The systemd unit status can be displayed with:

# systemctl status k0scontroller

The expected result is that Kubernetes is usable, e.g.:

# export KUBECONFIG=/var/lib/k0s/pki/admin.conf
# k0s kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k0s-traefik   Ready    control-plane   2m35s   v1.27.3+k0s
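The helm extension entries in the configuration file are reconciled by k0s as Chart custom resources, whose state can be inspected as sketched below; the resource name follows the k0s helm extension documentation (cluster command shown commented out):

```shell
# List the Chart custom resources managed by the k0s helm extension.
RESOURCE='charts.helm.k0sproject.io'
# k0s kubectl get "$RESOURCE" -n kube-system
echo "$RESOURCE"
```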

Using cert-manager - kubectl

To install cert-manager into the k8s cluster, run the following (see https://cert-manager.io/docs/installation/ → Getting Started → 'kubectl apply'):

# kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml

As a result:

# k0s kubectl get pods --namespace cert-manager
NAME                                      READY   STATUS    RESTARTS        AGE
cert-manager-7476c8fcf4-bl9jp             1/1     Running   2 (9m46s ago)   87m
cert-manager-cainjector-bdd866bd4-59d2v   1/1     Running   4 (9m4s ago)    87m
cert-manager-webhook-5655dcfb4b-5jvxm     1/1     Running   4 (9m4s ago)    87m

Then define the Let's Encrypt staging issuer:

# cat cert-manager-issuer-staging.yaml 
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: imre@auul.pri.ee
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

and:

# cat cert-manager-issuer-prod.yaml 
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: imre@auul.pri.ee
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

and apply them:

# k0s kubectl apply -f cert-manager-issuer-staging.yaml
# k0s kubectl apply -f cert-manager-issuer-prod.yaml
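Whether the issuers registered successfully with the ACME server can be checked from their status; a sketch (cluster commands shown commented out; look for Ready=True in the output):

```shell
# Inspect the ClusterIssuer resources created above.
ISSUER='letsencrypt-staging'
# k0s kubectl get clusterissuer
# k0s kubectl describe clusterissuer "$ISSUER"
echo "$ISSUER"
```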

For example, a deployment like this:

# cat create-loadbalancer-based-ingress-service-4.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-4-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web-4
  template:
    metadata:
      labels:
        app: web-4
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: web-4-server-service
  namespace: web
spec:
  selector:
    app: web-4
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-4-server-ingress
  annotations:
#    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
  namespace: web
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - 80.235.106.153.sslip.io
    secretName: tls-secret-80.235.106.153.sslip.io
  rules:
  - host: 80.235.106.153.sslip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-4-server-service
            port:
              number: 5000

where

  • TODO

Apply the deployment:

# k0s kubectl apply -f create-loadbalancer-based-ingress-service-4.yaml

and verify that the certificate request succeeded:

# k0s kubectl get events -A

and that the certificate was taken into use, by opening https://80.235.106.153.sslip.io/ in a browser.
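Rather than scanning all events, the Certificate resource created by cert-manager's ingress-shim can also be inspected directly; by convention it gets the same name as the tls secretName, which is an assumption worth verifying in your cluster (cluster commands shown commented out):

```shell
# Check the Certificate created for the ingress tls section.
CERT='tls-secret-80.235.106.153.sslip.io'
# k0s kubectl get certificate -n web
# k0s kubectl describe certificate -n web "$CERT"
echo "$CERT"
```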

Using monitoring

TODO

Using certificates manually

TODO

Useful additional materials

Using Velero backups

TODO

RP example - rp-3

apiVersion: v1
kind: Namespace
metadata:
  name: ns-rp-test-3

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dm-rp-test-3
  namespace: ns-rp-test-3
spec:
  selector:
    matchLabels:
      app: lbl-rp-test-3
  template:
    metadata:
      labels:
        app: lbl-rp-test-3
    spec:
      containers:
      - name: cnt-nginx
        image: nginx:latest
        ports:
        - containerPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dm-rp-test-3-url-path-1
  namespace: ns-rp-test-3
spec:
  selector:
    matchLabels:
      app: lbl-rp-test-3-url-path-1
  template:
    metadata:
      labels:
        app: lbl-rp-test-3-url-path-1
    spec:
      containers:
      - name: cnt-nginx-url-path-1
        image: nginx:latest
        ports:
        - containerPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dm-rp-test-3-url-path-2
  namespace: ns-rp-test-3
spec:
  selector:
    matchLabels:
      app: lbl-rp-test-3-url-path-2
  template:
    metadata:
      labels:
        app: lbl-rp-test-3-url-path-2
    spec:
      containers:
      - name: cnt-nginx-url-path-2
        image: nginx:latest
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: svc-rp-test-3
  namespace: ns-rp-test-3
spec:
  selector:
    app: lbl-rp-test-3
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: svc-rp-test-3-url-path-1
  namespace: ns-rp-test-3
spec:
  selector:
    app: lbl-rp-test-3-url-path-1
  ports:
    - protocol: TCP
      port: 5001
      targetPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: svc-rp-test-3-url-path-2
  namespace: ns-rp-test-3
spec:
  selector:
    app: lbl-rp-test-3-url-path-2
  ports:
    - protocol: TCP
      port: 5002
      targetPort: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-rp-test-3
  namespace: ns-rp-test-3
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      - rp-test-3.auul.pri.ee
      secretName: scrt-rp-test-3
  rules:
  - host: rp-test-3.auul.pri.ee
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-rp-test-3
            port:
              number: 5000
      - path: /url-path-1
        pathType: Prefix
        backend:
          service:
            name: svc-rp-test-3-url-path-1
            port:
              number: 5001
      - path: /url-path-2
        pathType: Prefix
        backend:
          service:
            name: svc-rp-test-3-url-path-2
            port:
              number: 5002

RP example - rp-2

apiVersion: v1
kind: Namespace
metadata:
  name: ns-rp-test-2

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dm-rp-test-2
  namespace: ns-rp-test-2
spec:
  selector:
    matchLabels:
      app: lbl-rp-test-2
  template:
    metadata:
      labels:
        app: lbl-rp-test-2
    spec:
      containers:
      - name: cnt-nginx
        image: nginx:latest
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: svc-rp-test-2
  namespace: ns-rp-test-2
spec:
  selector:
    app: lbl-rp-test-2
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-rp-test-2
  namespace: ns-rp-test-2
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      - rp-test-2.auul.pri.ee
      secretName: scrt-rp-test-2
  rules:
  - host: rp-test-2.auul.pri.ee
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-rp-test-2
            port:
              number: 5000

RP example - rp-5 - persistent volume claim (non-template)

apiVersion: v1
kind: Namespace
metadata:
  name: ns-rp-test-5

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rp-test-5
  namespace: ns-rp-test-5
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 300Mi

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-rp-test-5
  namespace: ns-rp-test-5
spec:
  selector:
    matchLabels:
      app: lbl-rp-test-5
  serviceName: svc-rp-test-5
  template:
    metadata:
      labels:
        app: lbl-rp-test-5
    spec:
      containers:
      - name: cnt-nginx-rp-test-5
        image: nginx:latest
        ports:
        - containerPort: 80
        
        volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: vol-rp-test-5

      volumes:
        - name: vol-rp-test-5
          persistentVolumeClaim:
            claimName: pvc-rp-test-5

---
apiVersion: v1
kind: Service
metadata:
  name: svc-rp-test-5
  namespace: ns-rp-test-5
spec:
  selector:
    app: lbl-rp-test-5
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-rp-test-5
  namespace: ns-rp-test-5
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      - rp-test-5.auul.pri.ee
      secretName: scrt-rp-test-5
  rules:
  - host: rp-test-5.auul.pri.ee
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-rp-test-5
            port:
              number: 5000

RP example - rp-6 - persistent volume template

apiVersion: v1
kind: Namespace
metadata:
  name: ns-rp-test-6

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-rp-test-6
  namespace: ns-rp-test-6
spec:
  selector:
    matchLabels:
      app: lbl-rp-test-6
  serviceName: svc-rp-test-6
  template:
    metadata:
      labels:
        app: lbl-rp-test-6
    spec:
      containers:
      - name: cnt-nginx-rp-test-6
        image: nginx:latest
        ports:
        - containerPort: 80
        
        volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: voltmpl-rp-test-6

  volumeClaimTemplates:
  - metadata:
      name: voltmpl-rp-test-6
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 280Mi

---
apiVersion: v1
kind: Service
metadata:
  name: svc-rp-test-6
  namespace: ns-rp-test-6
spec:
  selector:
    app: lbl-rp-test-6
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-rp-test-6
  namespace: ns-rp-test-6
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      - rp-test-6.auul.pri.ee
      secretName: scrt-rp-test-6
  rules:
  - host: rp-test-6.auul.pri.ee
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-rp-test-6
            port:
              number: 5000

Helm

Concepts

  • repo
  • release
  • history

How it works

Key points

  • a helm chart is usually distributed as a .tgz archive file (it can also be a local directory, which is what this example uses)
  • it is worth searching for "helm vs kustomize"; the two appear to overlap conceptually and functionally to some extent
  • at first glance helm may look like yet another gauntlet of structuring/dependencies/substitutions, but the release versioning etc. will hopefully pay off

Creating a Helm chart

In this example, the chart contents consist of six files:

root@ubu1804-eid:~/20231106# find hc-rp-test-23/ -type f -ls
    40276      4 -rw-rw-r--   1 imre     imre          176 Nov  5 21:04 hc-rp-test-23/Chart.yaml
    40279      4 -rw-rw-r--   1 imre     imre          893 Nov  5 21:07 hc-rp-test-23/templates/deployment.yaml
    40278      4 -rw-rw-r--   1 imre     imre          289 Nov  5 21:13 hc-rp-test-23/templates/configmap.yaml
    40281      4 -rw-rw-r--   1 imre     imre          258 Nov  5 21:08 hc-rp-test-23/templates/service.yaml
    40280      4 -rw-r--r--   1 imre     imre          580 Nov  5 21:18 hc-rp-test-23/templates/ingress.yaml
    40277      4 -rw-rw-r--   1 imre     imre          194 Nov  5 19:30 hc-rp-test-23/values.yaml

where

  • Chart.yaml - some of the helm variable values are substituted from here
  • values.yaml - some of the helm variable values are substituted from here

The file Chart.yaml:

# cat hc-rp-test-23/Chart.yaml 
apiVersion: v2
name: hc-rp-test-23
description: My 2nd Helm Chart
type: application
version: 0.2.0
appVersion: "2.0.0"
maintainers:
- email: imre@xxx
  name: testime

The file values.yaml:

# cat hc-rp-test-23/values.yaml 
replicaCount: 1

image:
  repository: nginx
  tag: "1.16.0"
  pullPolicy: IfNotPresent

service:
  name: svc-{{ .Release.Name }}
  type: ClusterIP
  port: 80
  targetPort: 9001

env:
  name: dev

The file hc-rp-test-23/templates/deployment.yaml:

# cat hc-rp-test-23/templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dm-{{ .Release.Name }}
  namespace: ns-rp-test-23
  labels:
    app: lbl-{{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: lbl-{{ .Release.Name }}
  template:
    metadata:
      labels:
        app: lbl-{{ .Release.Name }}
    spec:
      containers:
        - name: cnt-{{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: vol-{{ .Release.Name }}
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: vol-{{ .Release.Name }}
          configMap:
            name: cm-{{ .Release.Name }}-nginx-index

The file hc-rp-test-23/templates/configmap.yaml:

# cat hc-rp-test-23/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-{{ .Release.Name }}-nginx-index
  namespace: ns-rp-test-23
data:
  index.html: |
    <html>
    <h1>Welcome</h1>
    </br>
    <h1>Hi! I got deployed in {{ .Values.env.name }} Environment using Helm Chart - 23th v1 </h1>
    </html>

The file hc-rp-test-23/templates/service.yaml:

# cat hc-rp-test-23/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-{{ .Release.Name }}
  namespace: ns-rp-test-23
spec:
  selector:
    app: lbl-{{ .Release.Name }}
  ports:
    - protocol: {{ .Values.service.protocol | default "TCP" }}
      port: 9001
      targetPort: 80

The file hc-rp-test-23/templates/ingress.yaml:

# cat hc-rp-test-23/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-{{ .Release.Name }}
  namespace: ns-rp-test-23
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      - rp-test-23.auul.pri.ee
      secretName: scrt-{{ .Release.Name }}
  rules:
    - host: rp-test-23.auul.pri.ee
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-{{ .Release.Name }}
                port:
                  number: 9001

To validate the chart material, run the following in the chart directory:

# helm lint .
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

The rendered output, with variable values substituted, can be requested like this:

$ helm template . | less -N
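Individual values can also be overridden on the command line when rendering, which is handy for checking how a value propagates into the templates; a sketch using helm's standard --set flag (helm command shown commented out):

```shell
# Render the chart with a single value overridden and inspect the result.
OVERRIDE='replicaCount=2'
# helm template . --set "$OVERRIDE" | grep -n 'replicas:'
echo "$OVERRIDE"
```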

Helm usage

To install a release, first create the namespace in the usual way:

$ kubectl create ns ns-rp-test-23

and then run the following from one level above the chart directory:

$ helm install rp-test-23 hc-rp-test-23 --namespace=ns-rp-test-23
NAME: rp-test-23
LAST DEPLOYED: Fri Nov  3 00:52:29 2023
NAMESPACE: ns-rp-test-23
STATUS: deployed
REVISION: 1
TEST SUITE: None

Upgrading the release:

$ helm upgrade rp-test-23 hc-rp-test-23 --namespace=ns-rp-test-23
Release "rp-test-23" has been upgraded. Happy Helming!
NAME: rp-test-23 
LAST DEPLOYED: Fri Nov  3 00:54:22 2023
NAMESPACE: ns-rp-test-23
STATUS: deployed
REVISION: 2
TEST SUITE: None

Rolling back to an earlier release:

$ helm rollback rp-test-23 1 --namespace=ns-rp-test-23
Rollback was a success! Happy Helming!

Removing the release:

$ helm uninstall rp-test-23 --namespace=ns-rp-test-23
release "rp-test-23" uninstalled

Listing helm releases:

$ helm list --namespace=ns-rp-test-23
NAME      	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART            	APP VERSION
rp-test-23	ns-rp-test-23	4       	2023-11-05 21:18:46.635210288 +0200 EET	deployed	hc-rp-test-23-0.2.0	2.0.0 

Querying the release history of a helm release:

imre@k0s-imre-test:~/20231101$ helm history rp-test-23 --namespace=ns-rp-test-23
REVISION	UPDATED                 	STATUS    	CHART            	APP VERSION	DESCRIPTION
1       	Sun Nov  5 21:12:53 2023	superseded	hc-rp-test-23-0.2.0	2.0.0      	Install complete
2       	Sun Nov  5 21:14:11 2023	superseded	hc-rp-test-23-0.2.0	2.0.0      	Upgrade complete
3       	Sun Nov  5 21:17:54 2023	superseded	hc-rp-test-23-0.2.0	2.0.0      	Upgrade complete
4       	Sun Nov  5 21:18:46 2023	deployed  	hc-rp-test-23-0.2.0	2.0.0      	Upgrade complete

Miscellaneous commands; to list the helm charts installed across the whole system:

$ helm list -A

Useful additional materials

  • TODO