
Implementation of StatefulSet (stateful Service) based on K8s


The problem StatefulSet addresses:

A Pod created by a Deployment is stateless. Suppose such a Pod has a Volume mounted and then dies: the Replication Controller runs a replacement to keep the service available, but because the Pod is stateless, its link to the Volume is severed when it dies, and the new Pod cannot find the data of the previous one. From the user's point of view the failure of the underlying Pod is invisible, yet once it happens the previously mounted disk can no longer be used.

StatefulSet: a controller that gives each Pod a unique, stable identity and guarantees the order of deployment and scaling.

Pod consistency: covers ordering (start/stop order) and network identity. This consistency is bound to the Pod itself, regardless of which node the Pod is scheduled to.

Stable ordering: for a StatefulSet with N replicas, each Pod is assigned a unique integer ordinal in the range [0, N).

Stable network identity: the hostname of each Pod follows the pattern $(statefulset name)-$(ordinal).
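For example, a StatefulSet named web with serviceName nginx (hypothetical names) produces Pods web-0, web-1, and web-2; through the headless service, each one gets a stable DNS entry of the form:

web-0.nginx.default.svc.cluster.local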

Stable storage: a PV is created for each Pod through volumeClaimTemplates. Deleting the StatefulSet or scaling its replicas down does not delete the associated volumes.
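A minimal volumeClaimTemplates sketch (the claim name and size here are illustrative; the full manifests appear later in this article):

volumeClaimTemplates:        # sits under the StatefulSet spec
- metadata:
    name: data               # illustrative claim name
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi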

(1) Stateless services: RC, RS, Deployment, DS.

Template: Pods created from the template are exactly identical in state (except for name, IP, and domain name).

They can be understood this way: any Pod can be deleted and replaced by a newly generated Pod.

(2) Stateful services: events from one or more earlier sessions must be recorded and used as the basis for subsequent sessions; for example, database services such as MySQL. The Pod's name cannot be changed at will, and the data persistence directories differ too: each Pod has its own unique data persistence directory.

MySQL: master-slave relationship.

If stateless services are compared to cattle, sheep, and other livestock, which can be "given away" (replaced) at any time, then stateful services are like pets: pets, unlike livestock, are not given away at some point, and people tend to care for the same pet for its whole life.

(3) Each Pod corresponds to one PVC, and each PVC corresponds to one PV.

StorageClass: automatically creates PVs.

Still to be solved: automatically creating PVCs.
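Once volumeClaimTemplates is used (see the example below), each replica gets its own PVC, named <template name>-<pod name>. An illustrative check (output truncated):

[root@master yaml]# kubectl get pvc
NAME                      STATUS   VOLUME ...
test-statefulset-test-0   Bound    pvc-...
test-statefulset-test-1   Bound    pvc-...
test-statefulset-test-2   Bound    pvc-...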

Implementation principle

Like the ReplicaSet and Deployment resources, StatefulSet is implemented as a controller. It is managed mainly by three components: StatefulSetController, StatefulSetControl, and StatefulPodControl. StatefulSetController receives add, update, and delete events from both the PodInformer and the StatefulSetInformer and pushes the affected objects onto a work queue:

In its Run method, StatefulSetController starts multiple goroutines; these goroutines take the StatefulSet resources to be processed off the queue and synchronize them. Next, we will introduce the process of Kubernetes synchronizing a StatefulSet.

Example

(1) Create a yaml file for the statefulset

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
  selector:
    app: headless-pod
  clusterIP: None        # headless: no cluster IP is assigned
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myhttpd
        image: httpd
        ports:
        - containerPort: 80

Deployment Pod names: Deployment name + RS hash + random string. Those Pods have no order and can be replaced at will; StatefulSet Pods, by contrast, cannot be replaced arbitrarily.

1. headless-svc: a headless service. Because it has no cluster IP address, it does not do load balancing. StatefulSet requires Pod names to be ordered, and no Pod may be replaced arbitrarily: even after a Pod is rebuilt, its name stays the same. The headless service gives each back-end Pod a name of its own.
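A quick way to see the per-Pod DNS entries the headless service creates (illustrative; busybox is used here only as a throwaway test image):

[root@master yaml]# kubectl run -it --rm dns-test --image=busybox -- nslookup statefulset-test-0.headless-svc

The name should resolve to that Pod's IP, following the <pod>.<service>.<namespace>.svc.cluster.local pattern.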

2. StatefulSet: defines the specific application.

3. volumeClaimTemplates: automatically creates PVCs to provide dedicated storage for the back-end Pods.

Execute it:

[root@master yaml]# kubectl apply -f statefulset.yaml

Check:

[root@master yaml]# kubectl get svc

[root@master yaml]# kubectl get pod    // you can see that these pods are ordered
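Illustrative output (ages will vary):

NAME                 READY   STATUS    RESTARTS   AGE
statefulset-test-0   1/1     Running   0          1m
statefulset-test-1   1/1     Running   0          50s
statefulset-test-2   1/1     Running   0          30s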

I. Create a StorageClass resource object, based on an NFS service

1. Set up the NFS service.

Install the packages required for NFS:

[root@node02 ~]# yum -y install nfs-utils rpcbind

Create a shared directory

[root@master ~]# mkdir /nfsdata

Configure permissions for the shared directory:

[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)

Enable nfs and rpcbind

[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind

Test it

[root@master ~]# showmount -e
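Illustrative output:

Export list for master:
/nfsdata *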

2. Create rbac permissions.

[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default        # required field
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Execute it:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml

3. Create a Deployment resource object to replace the real NFS service with a Pod.

[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata

Execute it:

[root@master yaml]# kubectl apply -f nfs-deployment.yaml

Check:

[root@master yaml]# kubectl get pod

4. Create the yaml file of the storageclass

[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: bdqn        # associate with the Deployment above through the provisioner field
reclaimPolicy: Retain

Execute it:

[root@master yaml]# kubectl apply -f test-storageclass.yaml

Check:

[root@master yaml]# kubectl get sc
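Illustrative output (columns vary with the kubectl version):

NAME           PROVISIONER   AGE
stateful-nfs   bdqn          10s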

II. Solve the problem of automatically creating PVCs

1. Create the yaml file of the statefulset

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /mnt
          name: test
  volumeClaimTemplates:        # automatically create a PVC, providing dedicated storage for each back-end Pod
  - metadata:
      name: test
      annotations:             # specify the storageclass here
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

In this example:

A Service object named headless-svc is created, as indicated by the metadata: name field. The Service targets the application labeled app: headless-pod, as indicated by its selector. The Service exposes port 80 and names it myweb. It controls the network domain and gives the Pods deployed by the StatefulSet their DNS identities.

A StatefulSet named statefulset-test is created with three Pod replicas (replicas: 3). The Pod template (spec: template) indicates that its Pods are labeled app: headless-pod. The Pod specification (template: spec) instructs the StatefulSet's Pods to run one container, myhttpd, using the httpd image. template: spec: volumeMounts specifies a mountPath named test; mountPath is the path in the container where the storage volume is mounted. Through volumeClaimTemplates, each Pod gets a PersistentVolumeClaim named test requesting 100Mi of storage.

Execute it:

[root@master yaml]# kubectl apply -f statefulset.yaml

Check:

[root@master yaml]# kubectl get pod
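Scaling follows the same ordering guarantees. For example (assuming the resources above are running):

[root@master yaml]# kubectl scale statefulset statefulset-test --replicas=2

Pods are removed in reverse ordinal order (statefulset-test-2 goes first), and, as noted earlier, the associated PVCs and volumes are retained.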

Pods are created one at a time, in order; if the first Pod fails to become ready, the later Pods will not be created.
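That one-at-a-time behavior is the default (OrderedReady). If ordering is not needed, the StatefulSet spec also supports parallel startup; a minimal sketch:

spec:
  podManagementPolicy: Parallel        # default is OrderedReady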

[root@master yaml]# kubectl get statefulsets
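Illustrative output (columns vary with the version):

NAME               READY   AGE
statefulset-test   3/3     2m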

2. Verify that the file is created in the data storage.

In the container:

[root@master yaml]# kubectl exec -it statefulset-test-0 /bin/sh
# cd /mnt
# touch testfile
# exit

On the host, check:

[root@master yaml]# ls /nfsdata/default-test-statefulset-test-0-pvc-bf1ae1d0-f496-4d69-b33b-39e8aa0a6e8d/
testfile

III. Exercise

Create a namespace named after yourself and run all of the following resources in it. Use a statefulset resource to run an httpd web service with 3 Pods, where the main page content of each Pod is different, and every Pod needs dedicated data persistence. Try deleting one of the Pods, inspect the newly generated Pod, and compare it with the Pods controlled by the Deployment resource controller earlier. What is the difference?

(1) Create a StorageClass resource object.

Note: the NFS service needs to be enabled.

1. Create the yaml file of the namespace

[root@master yaml]# vim namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: xgp-lll        # name of the namespace

Execute it:

[root@master yaml]# kubectl apply -f namespace.yaml

Check:

[root@master yaml]# kubectl get namespaces

2. Create rbac permissions.

[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: xgp-lll
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: xgp-lll
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Execute it:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml

3. Create a Deployment resource object to replace the real NFS service with a Pod.

[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: xgp-lll
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: xgp
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata

Execute it:

[root@master yaml]# kubectl apply -f nfs-deployment.yaml

Check:

[root@master yaml]# kubectl get pod -n xgp-lll

4. Create the yaml file of the storageclass

[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: xgp        # associate with the Deployment above through the provisioner field
reclaimPolicy: Retain

Execute it:

[root@master yaml]# kubectl apply -f test-storageclass.yaml

Check:

[root@master yaml]# kubectl get sc

(2) Solve the problem of automatically creating PVCs.

1. Create the yaml file of the statefulset

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: xgp-lll
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: xgp-lll
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs
          name: test
  volumeClaimTemplates:        # automatically create a PVC, providing dedicated storage for each back-end Pod
  - metadata:
      name: test
      annotations:             # specify the storageclass here
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Execute it:

[root@master yaml]# kubectl apply -f statefulset.yaml

Check:

[root@master yaml]# kubectl get pod -n xgp-lll

2. Verify the data written in each container.

The first Pod:

[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-0 /bin/bash
root@statefulset-test-0:/usr/local/apache2# echo 123 > /usr/local/apache2/htdocs/index.html

The second Pod:

[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-1 /bin/bash
root@statefulset-test-1:/usr/local/apache2# echo 456 > /usr/local/apache2/htdocs/index.html

The third Pod:

[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-2 /bin/bash
root@statefulset-test-2:/usr/local/apache2# echo 789 > /usr/local/apache2/htdocs/index.html

On the host, check:

[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-0-pvc-ccaa02df-4721-4453-a6ec-4f2c928221d7/index.html
123
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-1-pvc-88e60a58-97ea-4986-91d5-a3a6e907deac/index.html
456
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-2-pvc-4eb2bbe2-63d2-431a-ba3e-b7b8d7e068d3/index.html
789
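Finally, access the pages. A minimal in-cluster check (illustrative; the per-Pod DNS names follow the <pod>.<service>.<namespace>.svc.cluster.local pattern, and busybox is just a throwaway test image):

[root@master yaml]# kubectl run -it --rm curl-test -n xgp-lll --image=busybox -- wget -qO- http://statefulset-test-0.headless-svc.xgp-lll.svc.cluster.local
123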
