This article introduces "Kubernetes's etcd status data and its backup". Many people run into these situations in real work, so let me walk you through how to handle them. I hope you read carefully and take something away from it!
Kubernetes uses etcd to store the cluster's real-time running data (such as node status information), while other pods are stateless and can drift between nodes according to load and scheduling. etcd itself can be deployed as a decentralized, multi-node, mutually replicating cluster, which removes it as a single point of failure for the whole cluster. Under the default kubeadm deployment, only one etcd instance (etcd-xxx) runs on the master, and you can check its status with kubectl get pod -n kube-system.
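For a quick check (a minimal sketch, assuming a kubeadm-deployed master; the pod name suffix matches your node's hostname), you can list the etcd pod by its component label:

# List the etcd static pod(s) in kube-system; the label comes from the pod metadata shown later
kubectl get pod -n kube-system -l component=etcd -o wide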
1. View etcd service container information
Let's explore how the etcd instance of kubernetes is implemented and managed. On the master node of kubernetes, enter:
kubectl describe pod/etcd-podc01 -n kube-system > etcd.txt
The output is as follows:
Name:               etcd-podc01
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               podc01/10.1.1.181
Start Time:         Mon, 03 Dec 2018 10:42:05 +0800
Labels:             component=etcd
                    tier=control-plane
Annotations:        kubernetes.io/config.hash: bcc0eea4c53f3b70d13b771ad88e31b7
                    kubernetes.io/config.mirror: bcc0eea4c53f3b70d13b771ad88e31b7
                    kubernetes.io/config.seen: 2018-12-05T11:05:31.8690622+08:00
                    kubernetes.io/config.source: file
                    scheduler.alpha.kubernetes.io/critical-pod:
Status:             Running
IP:                 10.1.1.181
Containers:
  etcd:
    Container ID:  docker://8f301c91902a9399f144943013166a09dd0766a9b96c26fe2d8e335418a55cab
    Image:         k8s.gcr.io/etcd:3.2.24
    Image ID:      docker-pullable://registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-etcd@sha256:7b073bdab8c52dc23dfb3e2101597d30304437869ad8c0b425301e96a066c408
    Port:
    Host Port:
    Command:
      etcd
      --advertise-client-urls=https://127.0.0.1:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://127.0.0.1:2380
      --initial-cluster=podc01=https://127.0.0.1:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379
      --listen-peer-urls=https://127.0.0.1:2380
      --name=podc01
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Wed, 05 Dec 2018 11:05:35 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         BestEffort
Node-Selectors:
Tolerations:       :NoExecute
Events:
As you can see, etcd uses the host network and maps its data and certificates onto host directories: the data directory is /var/lib/etcd on the host, and the certificate files are under /etc/kubernetes/pki/etcd.
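As a side note (a sketch, assuming the default kubeadm layout), the etcd pod is a static pod whose manifest lives on the master, so you can also inspect its definition and the mapped host directories directly:

# Static pod manifest that kubelet uses to run etcd (default kubeadm location)
sudo cat /etc/kubernetes/manifests/etcd.yaml
# The two hostPath directories mapped into the container
sudo ls /var/lib/etcd /etc/kubernetes/pki/etcd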
2. View the etcd data and certificate files
On the host, enter sudo ls -l /var/lib/etcd/member/snap to see the snapshot files generated by the etcd service, as shown below:
supermap@podc01:~/openthings/kubernetes-tools/jupyter$ sudo ls -l /var/lib/etcd/member/snap
total 8924
-rw-r--r-- 1 root root     8160 Dec  5 09:19 0000000000000005-00000000001fbdd0.snap
-rw-r--r-- 1 root root     8160 Dec  5 10:37 0000000000000005-00000000001fe4e1.snap
-rw-r--r-- 1 root root     8508 Dec  5 11:42 0000000000000006-0000000000200bf2.snap
-rw-r--r-- 1 root root     8509 Dec  5 12:49 0000000000000006-0000000000203303.snap
-rw-r--r-- 1 root root     8509 Dec  5 13:56 0000000000000006-0000000000205a14.snap
-rw------- 1 root root 24977408 Dec  5 14:13 db
View the certificate file for etcd:
supermap@podc01:~/openthings/kubernetes-tools/jupyter$ ls -l /etc/kubernetes/pki/etcd
total 32
-rw-r--r-- 1 root root 1017 Nov 23 10:08 ca.crt
-rw------- 1 root root 1679 Nov 23 10:08 ca.key
-rw-r--r-- 1 root root 1094 Nov 23 10:08 healthcheck-client.crt
-rw------- 1 root root 1679 Nov 23 10:08 healthcheck-client.key
-rw-r--r-- 1 root root 1127 Nov 23 10:08 peer.crt
-rw------- 1 root root 1679 Nov 23 10:08 peer.key
-rw-r--r-- 1 root root 1119 Nov 23 10:08 server.crt
-rw------- 1 root root 1675 Nov 23 10:08 server.key
These are exactly the same files you would see from the command line inside the pod, because they are the same host directories mounted into the container.
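To confirm this, you can list the same directory from inside the etcd container (a sketch; replace etcd-podc01 with the name of the etcd pod on your master):

# The certificate directory seen from inside the pod matches the host directory above
kubectl exec -n kube-system etcd-podc01 -- ls -l /etc/kubernetes/pki/etcd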
3. Directly access etcd services
Next, let's connect to this instance and view the specific running information.
First, install etcd-client, which is a stand-alone client for etcd.
sudo apt install etcd-client
Then, connect to the etcd instance (endpoints is the address parameter shown above: advertise-client-urls):
sudo etcdctl --endpoints https://127.0.0.1:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key --ca-file=/etc/kubernetes/pki/etcd/ca.crt member list
Note: because the Kubernetes cluster uses https, the three parameters --cert-file, --key-file, and --ca-file need to be specified; the corresponding files are all located in the /etc/kubernetes/pki/etcd directory.
My output here is:
a874c87fd42044f: name=podc01 peerURLs=https://127.0.0.1:2380 clientURLs=https://127.0.0.1:2379 isLeader=true
You can run additional commands here against the instance started by Kubernetes (at runtime it is actually managed by the kubelet service).
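For example (a sketch using the v3 API and the same certificate paths as above), you can list a few of the keys Kubernetes stores under /registry:

# Show the first keys that Kubernetes keeps in etcd (API v3, mutual TLS)
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head -n 20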
4. Backup and recovery
With these details in hand, backing up etcd is not difficult. There are three approaches:
You can directly back up the files under /etc/kubernetes/pki/etcd and /var/lib/etcd (a sketch of this approach follows the notes below).
If you only back up the files under /var/lib/etcd/member/snap, you do not need to stop the service.
If you stop the etcd service first, the service will be interrupted for the duration of the backup.
For a multi-node etcd cluster, you cannot simply back up and restore the directory files this way.
Use docker stop to stop the service before the backup, and start it again afterwards.
By default, etcd generates a snapshot every 10,000 changes (--snapshot-count=10000).
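A minimal sketch of the file-level approach on a single-node kubeadm master (assumptions: Docker is the container runtime, the etcd container name carries the usual k8s_etcd prefix, and /home/supermap/k8s-backup is the backup directory used elsewhere in this article):

# Stop the etcd container, archive the data and certificate directories, then start it again
docker stop $(docker ps -q --filter name=k8s_etcd)
sudo tar czf /home/supermap/k8s-backup/etcd-files-$(date +%Y%m%d_%H%M%S).tar.gz \
  /var/lib/etcd /etc/kubernetes/pki/etcd
docker start $(docker ps -aq --filter name=k8s_etcd)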
Back up with the etcdctl client, as follows (note that snapshot is only supported with ETCDCTL_API=3, and the --cert/--key/--cacert parameter names differ from the API 2 command above):
sudo ETCDCTL_API=3 etcdctl snapshot save "/home/supermap/k8s-backup/data/etcd-snapshot/$(date +%Y%m%d_%H%M%S)_snapshot.db" --endpoints=127.0.0.1:2379 --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" --cacert="/etc/kubernetes/pki/etcd/ca.crt"
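For recovery, the matching command is etcdctl snapshot restore, which rebuilds a data directory from a snapshot file (a sketch; the snapshot file name and restore directory are placeholders, the --name and peer URLs mirror the single-node values shown earlier, and on a kubeadm master you would then point the etcd static pod at the restored directory):

# Restore the snapshot into a fresh data directory (must not be in use by a running etcd)
sudo ETCDCTL_API=3 etcdctl snapshot restore /home/supermap/k8s-backup/data/etcd-snapshot/some_snapshot.db \
  --name podc01 \
  --initial-cluster podc01=https://127.0.0.1:2380 \
  --initial-advertise-peer-urls https://127.0.0.1:2380 \
  --data-dir /var/lib/etcd-restore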
Use kubernetes's cronjob to automate backups on a regular basis.
Using Kubernetes's CronJob for regular automatic backups requires some adjustments to the image and startup parameters. My modified yaml file is as follows:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etcd-disaster-recovery
  namespace: cronjob
spec:
  schedule: "0 22 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: etcd-disaster-recovery
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                    - podc01
          containers:
          - name: etcd
            image: k8s.gcr.io/etcd:3.2.24
            imagePullPolicy: "IfNotPresent"
            command:
            - sh
            - -c
            - "export ETCDCTL_API=3; \
               etcdctl --endpoints=$ENDPOINT \
                 --cert=/etc/kubernetes/pki/etcd/server.crt \
                 --key=/etc/kubernetes/pki/etcd/server.key \
                 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
                 snapshot save /snapshot/$(date +%Y%m%d_%H%M%S)_snapshot.db; \
               echo etcd backup success"
            env:
            - name: ENDPOINT
              value: "https://127.0.0.1:2379"
            volumeMounts:
            - mountPath: "/etc/kubernetes/pki/etcd"
              name: etcd-certs
            - mountPath: "/var/lib/etcd"
              name: etcd-data
            - mountPath: "/snapshot"
              name: snapshot
              subPath: data/etcd-snapshot
            - mountPath: /etc/localtime
              name: lt-config
            - mountPath: /etc/timezone
              name: tz-config
          restartPolicy: OnFailure
          volumes:
          - name: etcd-certs
            hostPath:
              path: /etc/kubernetes/pki/etcd
          - name: etcd-data
            hostPath:
              path: /var/lib/etcd
          - name: snapshot
            hostPath:
              path: /home/supermap/k8s-backup
          - name: lt-config
            hostPath:
              path: /etc/localtime
          - name: tz-config
            hostPath:
              path: /etc/timezone
          hostNetwork: true
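To put it into service (a sketch; the manifest file name etcd-backup-cronjob.yaml is just an example), create the namespace, apply the manifest, and check that the CronJob schedules its pods on podc01:

kubectl create namespace cronjob
kubectl apply -f etcd-backup-cronjob.yaml
# Verify the CronJob and the jobs/pods it creates
kubectl get cronjob,job,pod -n cronjob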
This completes the backup of Kubernetes's etcd master database.
However, fully backing up and restoring a Kubernetes cluster requires some additional operations, and each running application needs its own separate backup procedure.
This is the end of "Kubernetes's etcd status data and its backup". Thank you for reading. If you want to learn more about the subject, you can follow the site, where more practical articles will follow!