
k8s Cluster Version Upgrade


Environment
Cluster version: Kubernetes v1.15.3
Deployment method: kubeadm
OS version: CentOS Linux release 7.6.1810 (Core)
Cluster nodes: ops-k8s-m1, ops-k8s-m2, ops-k8s-m3 (3 masters)

Preparation

Configure the yum repository

cat >/etc/yum.repos.d/k8s.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
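Before moving on, it may be worth a quick sanity check that the new repo is actually visible to yum (an optional step, not in the original write-up):

yum makecache --disableexcludes=kubernetes
yum repolist enabled | grep -i kubernetes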

Change the image registry that was configured when the cluster was initialized. The default is k8s.gcr.io; the upgrade pulls its images from there, so switching to a mirror avoids pull failures caused by network issues. Edit it with:

kubectl edit cm kubeadm-config -n kube-system

# Find imageRepository and change it to the Aliyun mirror registry: registry.aliyuncs.com/google_containers, then save. Only the relevant part of the config is shown below
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
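To confirm the edit was saved, the ConfigMap can simply be read back (a quick optional check):

kubectl -n kube-system get cm kubeadm-config -o yaml | grep imageRepository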

How to find the stable Kubernetes releases

The command below lists all available versions of kubeadm, kubectl, and kubelet; the last release of each minor version is the stable one.

yum list --showduplicates kubeadm kubectl kubelet --disableexcludes=kubernetes
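The list is long; if you only care about one minor release you can filter it, for example to see just the 1.15.x builds (an optional convenience):

yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep '1\.15\.'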

Upgrading Kubernetes

It turns out you cannot jump straight from Kubernetes v1.15.3 to v1.19.16; the gap is too large and kubeadm refuses with an error:

[root@ops-k8s-m1 k8s]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/config] FATAL: this version of kubeadm only supports deploying clusters with the control plane version >= 1.18.0. Current version: v1.15.3
To see the stack trace of this error execute with --v=5 or higher

So the plan was to first upgrade to the stable release of the 1.15.x series, which is v1.15.12.

Upgrading Kubernetes v1.15.3 to v1.15.12

Check the current cluster

[root@ops-k8s-m1 k8s]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
ops-k8s-m1   Ready    master   344d   v1.15.3
ops-k8s-m2   Ready    master   344d   v1.15.3
ops-k8s-m3   Ready    master   344d   v1.15.3

I upgraded ops-k8s-m1 first. Take backups before upgrading; if the node is a virtual machine, a snapshot is even better.

cp -ar /etc/kubernetes /etc/kubernetes.bak
cp -ar /var/lib/etcd /var/lib/etcd.bak
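Copying /var/lib/etcd while etcd is running may produce an inconsistent copy, so an additional safeguard worth considering is a proper etcd snapshot. A minimal sketch, assuming etcdctl is available on the host and the default kubeadm certificate paths are in use:

# optional: take a consistent etcd snapshot in addition to the file copy
ETCDCTL_API=3 etcdctl snapshot save /root/etcd-snapshot-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key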

Evict the pods and mark the node unschedulable

kubectl drain ops-k8s-m1 --ignore-daemonsets --force
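The drain leaves the node cordoned. Once this node has been upgraded and reports Ready again, don't forget to make it schedulable once more (implied but not shown in the original steps):

kubectl uncordon ops-k8s-m1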

Upgrade the kubeadm, kubelet, and kubectl packages

yum install -y kubeadm-1.15.12 kubelet-1.15.12 kubectl-1.15.12 --disableexcludes=kubernetes
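A quick check that yum actually installed the expected releases before continuing (optional):

kubeadm version -o short
kubelet --version
kubectl version --client --short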

Check the upgrade plan

kubeadm upgrade plan

Upgrade

kubeadm upgrade apply v1.15.12


# Upgrade process
[root@ops-k8s-m1 k8s]# kubeadm upgrade apply v1.15.12
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "vv1.15.12"
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.15.12
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
....
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.12". Enjoy!

Restart kubelet

systemctl daemon-reload
systemctl restart kubelet

Check the cluster version again

[root@ops-k8s-m1 k8s]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
ops-k8s-m1   Ready    master   15m    v1.15.12
ops-k8s-m2   Ready    master   344d   v1.15.3
ops-k8s-m3   Ready    master   344d   v1.15.3

Next, upgrade the other two masters using the same approach (steps omitted here; a sketch of the per-node commands follows the output below). Once they're done, check the cluster version again:

[root@ops-k8s-m1 k8s]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
ops-k8s-m1   Ready    master   15m   v1.15.12
ops-k8s-m2   Ready    master   30m   v1.15.12
ops-k8s-m3   Ready    master   30m   v1.15.12
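For reference, since the per-node steps were omitted above: on the remaining control-plane nodes the official kubeadm upgrade procedure uses kubeadm upgrade node instead of kubeadm upgrade apply. A minimal sketch for ops-k8s-m2 (the same applies to ops-k8s-m3), assuming the same drain/uncordon handling as on the first master:

# run on ops-k8s-m2
yum install -y kubeadm-1.15.12 kubelet-1.15.12 kubectl-1.15.12 --disableexcludes=kubernetes
kubeadm upgrade node
systemctl daemon-reload
systemctl restart kubelet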

Upgrading Kubernetes v1.15.12 to v1.16.15

Upgrade commands

yum install -y kubeadm-1.16.15 kubelet-1.16.15 kubectl-1.16.15 --disableexcludes=kubernetes

kubeadm upgrade plan

kubeadm upgrade apply v1.16.15

systemctl daemon-reload

systemctl restart kubelet

kubectl get node

The upgrade went smoothly.

[root@ops-k8s-m1 k8s]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
ops-k8s-m1   Ready    master   26h   v1.16.15
ops-k8s-m2   Ready    master   27h   v1.16.15
ops-k8s-m3   Ready    master   27h   v1.16.15

I tried to be lazy and jump straight to 1.19.16, but it failed again; evidently each minor version has to be upgraded one at a time.

[root@ops-k8s-m1 k8s]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/config] FATAL: this version of kubeadm only supports deploying clusters with the control plane version >= 1.18.0. Current version: v1.16.15
To see the stack trace of this error execute with --v=5 or higher

Upgrading Kubernetes v1.16.15 to v1.17.17

Upgrade commands

yum install -y kubeadm-1.17.17 kubelet-1.17.17 kubectl-1.17.17 --disableexcludes=kubernetes

kubeadm upgrade plan

kubeadm upgrade apply v1.17.17

systemctl daemon-reload

systemctl restart kubelet

kubectl get node

I ran into two problems during this upgrade.

Problem 1:

[root@ops-k8s-m1 k8s]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.15
[upgrade/versions] kubeadm version: v1.17.16
I1111 17:12:50.236748   27033 version.go:251] remote version is much newer: v1.25.4; falling back to: stable-1.17
[upgrade/versions] Latest stable version: v1.17.17
[upgrade/versions] FATAL: etcd cluster contains endpoints with mismatched versions: map[https://192.168.23.241:2379:3.3.15 https://192.168.23.242:2379:3.3.10 https://192.168.23.243:2379:3.3.10]

Roughly, it is complaining that the etcd on the node being upgraded to v1.17.17 ends up at 3.3.15 while the other members are still on 3.3.10; forcing the upgrade gets past this check.

Solution: change the upgrade command to kubeadm upgrade apply v1.17.17 -f (the -f, i.e. --force, flag).

Problem 2: after the upgrade, the node stayed NotReady once kubelet was restarted. The kubelet logs looked like this:

[root@ops-k8s-m1 k8s]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2022-11-14 10:26:16 CST; 34s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 24321 (kubelet)
    Tasks: 15
   Memory: 30.8M
   CGroup: /system.slice/kubelet.service
           └─24321 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=harbor.dragonpa...

Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: "capabilities": {
Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: "portMappings": true
Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: }
Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: }
Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: ]
Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: }
Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: : [failed to find plugin "flannel" in path [/opt/cni/bin]]
Nov 14 10:26:46 ops-k8s-m1 kubelet[24321]: W1114 10:26:46.665687   24321 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
Nov 14 10:26:47 ops-k8s-m1 kubelet[24321]: E1114 10:26:47.178298   24321 kubelet.go:2190] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 14 10:26:49 ops-k8s-m1 kubelet[24321]: E1114 10:26:49.563003   24321 csi_plugin.go:287] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource

Solution: the CNI network plugin was failing to load. The cause was that my old flannel v0.11.0 was no longer compatible after the upgrade, so I bumped flannel to v0.15.0 and that fixed it (the manifest is unchanged; only the image version was updated — see the sketch after the manifest). Here is the flannel manifest:

[root@ops-k8s-m1 network]# cat flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  seLinux:
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.23.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
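Since only the image tag changed, an alternative to re-applying the whole manifest would be to patch the image on the running DaemonSet in place. A sketch for this cluster's amd64 DaemonSet, assuming kubectl set image also matches the install-cni init container by name (otherwise kubectl edit achieves the same result):

kubectl -n kube-system set image daemonset/kube-flannel-ds-amd64 \
  install-cni=quay.io/coreos/flannel:v0.15.0-amd64 \
  kube-flannel=quay.io/coreos/flannel:v0.15.0-amd64
kubectl -n kube-system rollout status daemonset/kube-flannel-ds-amd64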

With these two problems solved, this upgrade also completed successfully.

[root@ops-k8s-m2 ~]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
ops-k8s-m1   Ready    master   4d2h   v1.17.17
ops-k8s-m2   Ready    master   4d2h   v1.17.17
ops-k8s-m3   Ready    master   4d2h   v1.17.17

Upgrading Kubernetes v1.17.17 to v1.18.20

Upgrade commands

yum install -y kubeadm-1.18.20 kubelet-1.18.20 kubectl-1.18.20 --disableexcludes=kubernetes

kubeadm upgrade plan

kubeadm upgrade apply v1.18.20

systemctl daemon-reload

systemctl restart kubelet

kubectl get node

This one also upgraded without issues.

[root@ops-k8s-m3 ~]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
ops-k8s-m1   Ready    master   4d2h   v1.18.20
ops-k8s-m2   Ready    master   4d2h   v1.18.20
ops-k8s-m3   Ready    master   4d2h   v1.18.20

Upgrading Kubernetes v1.18.20 to v1.19.16

Upgrade commands

yum install -y kubeadm-1.19.16 kubelet-1.19.16 kubectl-1.19.16 --disableexcludes=kubernetes

kubeadm upgrade plan

kubeadm upgrade apply v1.19.16

systemctl daemon-reload

systemctl restart kubelet

kubectl get node

This upgrade also went smoothly.

NAME         STATUS   ROLES    AGE    VERSION
ops-k8s-m1   Ready    master   4d3h   v1.19.16
ops-k8s-m2   Ready    master   4d3h   v1.19.16
ops-k8s-m3   Ready    master   4d3h   v1.19.16

The final target version was reached as well, completing the whole journey from v1.15.3 to v1.19.16.

Testing

Run an application as a test. Since all three of my nodes are masters, I first removed the master taints so they can schedule workloads like regular nodes (not recommended in production):

kubectl taint node ops-k8s-m1 node-role.kubernetes.io/master-
kubectl taint node ops-k8s-m2 node-role.kubernetes.io/master-
kubectl taint node ops-k8s-m3 node-role.kubernetes.io/master-
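If the masters should later go back to being control-plane-only, the taint can be restored with the inverse command (sketch for one node; repeat for the others):

kubectl taint node ops-k8s-m1 node-role.kubernetes.io/master=:NoSchedule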

Create a kubebox application

[root@ops-k8s-m1 kubebox]# kubectl apply -f kubebox-deploy.yml
service/kubebox created
[root@ops-k8s-m1 kubebox]# kubectl get pod -n kube-ops
NAME                       READY   STATUS    RESTARTS   AGE
kubebox-676ff9d4f4-bdszh   1/1     Running   0          113s

Accessing it works fine as well.


Check the logs of the components in kube-system for anything abnormal, as well as each node's /var/log/messages. Beyond that, the same method can be used to keep upgrading past v1.19.16 to newer Kubernetes versions.
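A few concrete checks along those lines (standard commands, nothing specific to this cluster):

kubectl get pods -n kube-system -o wide
kubectl -n kube-system logs -l component=kube-apiserver --tail=50
journalctl -u kubelet --since "1 hour ago" | grep -iE 'error|fail'
tail -n 200 /var/log/messages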


Original article: https://www.lishiwei.com.cn

Official documentation: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
