

How to use the pod controller in Kubernetes


This article is about how to use the pod controller in Kubernetes. The editor finds it quite practical and shares it here as a reference; follow along below.

The pods we created earlier were defined directly through resource manifests. If such a pod is deleted manually, it is not recreated automatically; a pod created this way is called an autonomous Pod.

We seldom use autonomous Pods in production.

Let's learn about another kind of pod: Pods managed by a controller. The controller strictly maintains the number of pods according to the defined policy: as soon as it finds too few pods, it automatically creates new ones; as soon as it finds too many, it kills the excess Pods.

Pod controllers include the ReplicaSet controller, the Deployment controller (must be mastered), the DaemonSet controller, and the Job controller.

ReplicaSet controller: creates a specified number of Pod replicas for the user, ensures that the number of replicas stays at what the user expects, and supports automatic scale-up and scale-down. A ReplicaSet consists of three main components: 1, the number of pod replicas the user expects; 2, the label selector (which determines which pod replicas the controller manages); 3, the pod resource template (if there are fewer pods than expected, new ones are created from this template).

Deployment controller: a Deployment controls Pods by controlling a ReplicaSet. Deployment supports rolling updates and rollbacks as well as declarative configuration. A Deployment cares only about the group, not about individual pods.

DaemonSet controller: ensures that every node in the cluster runs exactly one copy of the pod (without a DaemonSet, one node may run several copies). If a new node is added to the cluster, a Pod copy is automatically started on it as well.

Job controller: for tasks that run only once; the pod exits normally as soon as the work is finished, and is rebuilt if it terminates without finishing. This is the job controller.
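As an illustration only, a minimal Job manifest might look like the sketch below; the pi-job name and the perl image are hypothetical, not from this article. The pod runs to completion once, and backoffLimit bounds how often a failed pod is rebuilt.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job                 # hypothetical name
spec:
  backoffLimit: 4              # rebuild a failed pod at most 4 times
  template:
    spec:
      restartPolicy: Never     # a Job pod must not restart in place
      containers:
      - name: pi
        image: perl            # assumed image; prints pi once and exits
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]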

StatefulSet controller: manages stateful applications; each pod replica is managed individually and has its own unique identity.
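For orientation, a skeleton StatefulSet might look like the sketch below (the web and nginx-svc names are hypothetical). The required serviceName points at a headless Service, which is what gives each replica a stable identity such as web-0 and web-1.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                    # hypothetical name
spec:
  serviceName: nginx-svc       # headless Service providing stable per-pod DNS names
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1   # reusing this article's demo image
        ports:
        - name: http
          containerPort: 80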

From k8s 1.2 through 1.7, the API could be extended with TPR (Third Party Resources); starting with k8s 1.8, TPR is replaced by CRD (Custom Resource Definitions, user-defined resources).
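As a rough sketch (not from the original article) of what a CRD definition looks like — the stable.example.com group and CronTab kind are hypothetical, and the apiextensions.k8s.io/v1beta1 API matches clusters of this vintage:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct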

ReplicaSet controller

[root@master manifests]# kubectl explain replicaset
[root@master manifests]# kubectl explain rs          # rs is short for replicaset
[root@master manifests]# kubectl explain rs.spec.template
[root@master manifests]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp          2         2         2            0           10d
mytomcat       3         3         3            3           10d
nginx-deploy   1         1         1            1           13d
[root@master manifests]# kubectl delete deploy myapp
deployment.extensions "myapp" deleted
[root@master manifests]# kubectl delete deploy nginx-deploy
deployment.extensions "nginx-deploy" deleted
[root@master manifests]# cat rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:                  # this is the controller's spec
  replicas: 2          # number of replicas
  selector:            # label selector; see kubectl explain rs.spec.selector
    matchLabels:
      app: myapp
      release: canary
  template:            # pod template; see kubectl explain rs.spec.template
    metadata:          # kubectl explain rs.spec.template.metadata
      name: myapp-pod
      labels:          # must match the selector defined above
        app: myapp
        release: canary
        environment: qa
    spec:              # this is the pod's spec
      containers:
      - name: myapp-container
        image: ikubernetes/nginx:latest
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl create -f rs-demo.yaml
replicaset.apps/myapp created
[root@master manifests]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       3m

You can see that READY above is 2, meaning both pods managed by the ReplicaSet controller are running normally.

[root@master manifests]# kubectl get pods --show-labels
NAME          READY   STATUS             RESTARTS   AGE   LABELS
myapp-6kncv   1/1     Running            0          15m   app=myapp,environment=qa,release=canary
myapp-rbqjz   1/1     Running            0          15m   app=myapp,environment=qa,release=canary
pod-demo      0/2     CrashLoopBackOff   2552       9d    app=myapp,tier=frontend

Above are the two pods created by the ReplicaSet controller.

[root@master manifests]# kubectl describe pods myapp-6kncv
IP: 10.244.2.44
[root@master manifests]# curl 10.244.2.44
Hello MyApp | Version: v1 | Pod Name

Edit the ReplicaSet's live configuration (this is not the file you created by hand; it is the copy maintained by the apiserver):

[root@master manifests]# kubectl edit rs myapp

Change replicas inside to 5; the change takes effect as soon as you save.
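Editing the live object is one option; as a sketch of an equivalent one-liner (assuming the ReplicaSet is still named myapp), kubectl scale does the same thing:

[root@master manifests]# kubectl scale rs myapp --replicas=5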

[root@master manifests]# kubectl get pods --show-labels
NAME                   READY   STATUS              RESTARTS   AGE   LABELS
client                 0/1     Error               0          11d   run=client
liveness-httpget-pod   1/1     Running             3          5d
myapp-6kncv            1/1     Running             0          31m   app=myapp,environment=qa,release=canary
myapp-c64mb            1/1     Running             0          3s    app=myapp,environment=qa,release=canary
myapp-fsrsg            1/1     Running             0          3s    app=myapp,environment=qa,release=canary
myapp-ljczj            0/1     ContainerCreating   0          3s    app=myapp,environment=qa,release=canary
myapp-rbqjz            1/1     Running             0          31m   app=myapp,environment=qa,release=canary

Similarly, you can use kubectl edit rs myapp to upgrade the version: change the image inside to ikubernetes/myapp:v2, and the ReplicaSet now describes the v2 version.

[root@master manifests]# kubectl get rs -o wide
NAME    DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES                 SELECTOR
myapp   5         5         5       1h    myapp-container   ikubernetes/myapp:v2   app=myapp,release=canary

However, running pods keep the old image; only pods that are rebuilt afterwards, for example when a Pod is deleted and recreated, come up as v2.
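As a sketch of forcing the update on one pod (the pod name is taken from the earlier listing): deleting it makes the ReplicaSet recreate it from the updated template.

[root@master manifests]# kubectl delete pod myapp-6kncv
pod "myapp-6kncv" deleted
[root@master manifests]# kubectl get pods     # the replacement pod now runs ikubernetes/myapp:v2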

Deployment controller

With the Deployment controller we can update the pod version dynamically.

The rolling update works like this: first a v2 ReplicaSet is built, then the Pods of the v1 ReplicaSet are deleted one by one, so that the automatically recreated pods come up as v2. Once all pods have become v2, the v1 ReplicaSet is not deleted; if you then find a problem with v2, you can fall back to v1.

By default a Deployment retains 10 old ReplicaSet revisions.
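This retention count is the deploy.spec.revisionHistoryLimit field; a minimal sketch of overriding it in a Deployment spec (the value 5 here is arbitrary):

spec:
  revisionHistoryLimit: 5   # keep only the 5 most recent old ReplicaSets (the default is 10)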

[root@master manifests]# kubectl explain deploy
[root@master manifests]# kubectl explain deploy.spec
[root@master manifests]# kubectl explain deploy.spec.strategy    # the update strategy
[root@master ~]# kubectl delete rs myapp
[root@master manifests]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:            # label selector
    matchLabels:       # the labels to match
      app: myapp
      release: canary
  template:
    metadata:
      labels:          # must match the selector above
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created

apply performs a declarative create-or-update: run it again after editing the file, and the live object converges to the file's contents.

[root@master manifests]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   2         2         2            2           1m
[root@master ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myapp-deploy-69b47bc96d   2         2         2       1m

The rs above was created automatically by the Deployment; its name is the Deployment name plus a hash of the pod template.

[root@master ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-7jnwx   1/1     Running   0          19m
myapp-deploy-69b47bc96d-btskk   1/1     Running   0          19m

Modify the configuration file deploy-demo.yaml, change replicas to 3, and then run kubectl apply -f deploy-demo.yaml to make the file's contents take effect.

[root@master ~]# kubectl describe deploy myapp-deploy
[root@master ~]# kubectl get pods -l app=myapp -w

-l filters by label

-w watches, i.e. monitors the list dynamically

[root@master ~]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy-69b47bc96d   2         2         2       1h    myapp        ikubernetes/myapp:v1   app=myapp,pod-template-hash=2560367528,release=canary

Look at the history of rolling updates:

[root@master ~]# kubectl rollout history deployment myapp-deploy
deployments "myapp-deploy"
REVISION   CHANGE-CAUSE
1          <none>

Let's scale the Deployment to 5. We could edit deploy-demo.yaml with vim and change replicas to 5; of course another method also works, for example kubectl patch.

[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec": {"replicas": 5}}'
deployment.extensions/myapp-deploy patched
[root@master manifests]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   5         5         5            5           2h
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-7jnwx   1/1     Running   0          2h
myapp-deploy-69b47bc96d-8gn7v   1/1     Running   0          59s
myapp-deploy-69b47bc96d-btskk   1/1     Running   0          2h
myapp-deploy-69b47bc96d-p5hpd   1/1     Running   0          59s
myapp-deploy-69b47bc96d-zjv4p   1/1     Running   0          59s
mytomcat-5f8c6fdcb-9krxn        1/1     Running   0          8h

Modify the update strategy as follows:

[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec": {"strategy": {"rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0}}}}'
deployment.extensions/myapp-deploy patched

strategy: the update strategy

maxSurge: at most how many pods may exist beyond the desired count during an update

maxUnavailable: at most how many pods may be unavailable during an update. With replicas at 5, maxSurge: 1 and maxUnavailable: 0, the update therefore creates one new pod at a time (never more than 6 in total) and never takes a running pod away before its replacement is ready.

[root@master manifests]# kubectl describe deployment myapp-deploy
RollingUpdateStrategy:  0 max unavailable, 1 max surge

Let's use the set image command to upgrade the myapp image to v3 and immediately mark the myapp-deploy rollout as paused. Resources paused by the pause command are not reconciled by the controller; the "kubectl rollout resume" command resumes them.

[root@master manifests]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
[root@master ~]# kubectl get pods -l app=myapp -w

Resume the paused rollout:

[root@master ~]# kubectl rollout resume deployment myapp-deploy
deployment.extensions/myapp-deploy resumed

Watch the update continue (one pod is deleted and replaced at a time):

[root@master manifests]# kubectl rollout status deployment myapp-deploy
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp-deploy" rollout to finish: 1 old replicas are pending termination...
deployment "myapp-deploy" successfully rolled out
[root@master manifests]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy-69b47bc96d   0         0         0       6h    myapp        ikubernetes/myapp:v1   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-6bdcd6755d   5         5         5       3h    myapp        ikubernetes/myapp:v3   app=myapp,pod-template-hash=2687823118,release=canary
mytomcat-5f8c6fdcb        3         3         3       12h   mytomcat     tomcat                 pod-template-hash=194729876,run=mytomcat

You can see that myapp has both a v1 and a v3 ReplicaSet; the v1 one is kept at zero replicas so a rollback remains possible.

[root@master manifests]# kubectl rollout history deployment myapp-deploy
deployments "myapp-deploy"
REVISION   CHANGE-CAUSE
1          <none>
2          <none>

You can see above that there are two revisions in the history.

Let's roll back from v3 to the previous version (when no revision is specified, undo goes back to the previous one).

[root@master manifests]# kubectl rollout undo deployment myapp-deploy --to-revision=1
deployment.extensions/myapp-deploy

In the history, revision 1 has now been replayed as the new revision 3:

[root@master manifests]# kubectl rollout history deployment myapp-deploy
deployments "myapp-deploy"
REVISION   CHANGE-CAUSE
2          <none>
3          <none>

And the image in use is v1 again, i.e. we have fallen back to v1.

[root@master manifests]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy-69b47bc96d   5         5         5       6h    myapp        ikubernetes/myapp:v1   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-6bdcd6755d   0         0         0       3h    myapp        ikubernetes/myapp:v3   app=myapp,pod-template-hash=2687823118,release=canary

DaemonSet controller

You can see which filebeat versions are available at https://hub.docker.com/r/ikubernetes/filebeat/tags/:

[root@node1 manifests]# docker pull ikubernetes/filebeat:5.6.5-alpine
[root@node2 manifests]# docker pull ikubernetes/filebeat:5.6.5-alpine

The filebeat image is pulled on both node1 and node2.

[root@node1 ~]# docker image inspect ikubernetes/filebeat:5.6.5-alpine
[root@master manifests]# kubectl explain pods.spec.containers.env
[root@master manifests]# cat ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
  namespace: default
spec:
  selector:            # label selector
    matchLabels:       # the labels to match
      app: filebeat
      release: stable
  template:
    metadata:
      labels:          # must match the selector above
        app: filebeat
        release: stable
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local   # the redis Service's DNS name
        - name: REDIS_LOG_LEVEL
          value: info
[root@master manifests]# kubectl apply -f ds-demo.yaml
daemonset.apps/myapp-ds created

You can see that myapp-ds is up and running, with two myapp-ds pods because we have two worker nodes. Note that the DaemonSet does not run a pod on the master node, because the master is tainted (a myapp-ds pod can run on the master only if you configure it to tolerate the taint).
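As a sketch of that configuration: the usual approach is to add a toleration for the master taint to the DaemonSet's pod template (the key below is the default taint kubeadm puts on masters):

spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule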

[root@master manifests]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
myapp-ds-5tmdd   1/1     Running   0          1m
myapp-ds-dkmjj   1/1     Running   0          1m
[root@master ~]# kubectl logs myapp-ds-dkmjj
[root@master manifests]# kubectl delete -f ds-demo.yaml
[root@master manifests]# cat ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor          # log storage role
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:                    # this is the container spec
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---                          # three dashes separate resource definitions in one file
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:                  # label selector
    matchLabels:             # the labels to match
      app: filebeat
      release: stable
  template:
    metadata:
      labels:                # must match the selector above
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.6-alpine
        env:
        - name: REDIS_HOST   # environment variable name; value follows
          value: redis.default.svc.cluster.local   # the redis Service's DNS name
        - name: REDIS_LOG_LEVEL
          value: info
[root@master manifests]# kubectl create -f ds-demo.yaml
deployment.apps/redis created
daemonset.apps/filebeat-ds created
[root@master manifests]# kubectl expose deployment redis --port=6379    # creates the Service imperatively; it could also be created from a manifest
service/redis exposed
[root@master manifests]# kubectl get svc    # svc is short for service
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
redis   ClusterIP   10.106.138.181   <none>        6379/TCP   48s
[root@master manifests]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
filebeat-ds-hgbhr        1/1     Running   0          9h
filebeat-ds-xc7v7        1/1     Running   0          9h
redis-5b5d6fbbbd-khws2   1/1     Running   0          33m
[root@master manifests]# kubectl exec -it redis-5b5d6fbbbd-khws2 -- /bin/sh
/data # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address   State
tcp        0      0 0.0.0.0:6379    0.0.0.0:*         LISTEN
tcp        0      0 :::6379         :::*              LISTEN
/data # nslookup redis.default.svc.cluster.local    # DNS resolves the service name to its IP
nslookup: can't resolve '(null)': Name does not resolve
Name:      redis.default.svc.cluster.local
Address 1: 10.106.138.181 redis.default.svc.cluster.local
/data # redis-cli -h redis.default.svc.cluster.local
redis.default.svc.cluster.local:6379> keys *
(empty list or set)
redis.default.svc.cluster.local:6379>
[root@master manifests]# kubectl exec -it filebeat-ds-pnk8b -- /bin/sh
/ # ps aux
PID   USER   TIME   COMMAND
  1   root   0:00   /usr/local/bin/filebeat -e -c /etc/filebeat/filebeat.yml
 15   root   0:00   /bin/sh
 22   root   0:00   ps aux
/ # cat /etc/filebeat/filebeat.yml
filebeat.registry_file: /var/log/containers/filebeat_registry
filebeat.idle_timeout: 5s
filebeat.spool_size: 2048
logging.level: info
filebeat.prospectors:
- input_type: log
  paths:
    - "/var/log/containers/*.log"
    - "/var/log/docker/containers/*.log"
    - "/var/log/startupscript.log"
    - "/var/log/kubelet.log"
    - "/var/log/kube-proxy.log"
    - "/var/log/kube-apiserver.log"
    - "/var/log/kube-controller-manager.log"
    - "/var/log/kube-scheduler.log"
    - "/var/log/rescheduler.log"
    - "/var/log/glbc.log"
    - "/var/log/cluster-autoscaler.log"
  symlinks: true
  json.message_key: log
  json.keys_under_root: true
  json.add_error_key: true
  multiline.pattern: '^\s'
  multiline.match: after
  document_type: kube-logs
  tail_files: true
  fields_under_root: true
output.redis:
  hosts: ${REDIS_HOST:?No Redis host configured. Use env var REDIS_HOST to set host.}
  key: "filebeat"
/ # printenv
REDIS_HOST=redis.default.svc.cluster.local
/ # nslookup redis.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name:      redis.default.svc.cluster.local
Address 1: 10.106.138.181 redis.default.svc.cluster.local

DaemonSet also supports rolling updates.

[root@master manifests]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.5.7-alpine

Explanation: daemonsets filebeat-ds means the DaemonSet is named filebeat-ds, and filebeat=ikubernetes/filebeat:5.5.7-alpine means the container named filebeat is set to the image ikubernetes/filebeat:5.5.7-alpine.
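The rolling behavior itself is governed by the DaemonSet's update strategy; as a sketch, the relevant field and a typical setting look like this (RollingUpdate with maxUnavailable is the knob DaemonSets expose):

[root@master manifests]# kubectl explain ds.spec.updateStrategy

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace at most one node's pod at a time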

Thank you for reading! This concludes the article on "how to use the pod controller in Kubernetes". I hope the content above is helpful and that you can learn more from it. If you think the article is good, share it for more people to see!
