k8s Cluster Setup
0. Downloads
1. The yaml files referenced below are available in the following GitHub repository:
https://github.com/luckylucky421/kubernetes1.17.3/tree/master
2. The images needed to initialize the k8s cluster (referenced below) are available on Baidu Netdisk:
Link: https://pan.baidu.com/s/1k1heJy8lLnDk2JEFyRyJdA
Extraction code: udkj
I. Prepare the Lab Environment
1. Prepare two CentOS 7 virtual machines for the k8s cluster
Operating system: CentOS 7.6 or later. Configuration: 2 CPU cores, 4 GB RAM, two 50 GB disks
Network: bridged networking
master1 192.168.0.6
node1 192.168.0.56
II. Initialize the Lab Environment
1. Configure static IPs
Configure the virtual machines (or physical machines) with static IP addresses so the IPs do not change after a reboot.
1.1 Configure networking on the master1 node
Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so it looks like this:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.6
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
After modifying the configuration file, restart the network service for the changes to take effect:
service network restart
Note: explanation of the ifcfg-ens33 settings:
IPADDR=192.168.0.6
#IP address; must be in the same subnet as your workstation
NETMASK=255.255.255.0
#Subnet mask; must match your workstation's subnet
GATEWAY=192.168.0.1
#Gateway; on your workstation, open cmd and run ipconfig /all to find it
DNS1=192.168.0.1
#DNS server; on your workstation, open cmd and run ipconfig /all to find it
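To confirm the static address took effect after the restart, a quick check (the address and gateway below match this guide's master1 values; substitute your own, and note some gateways do not answer ping):
ip addr show ens33 | grep "inet "
ping -c 3 192.168.0.1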
1.2 Configure networking on the node1 node
Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so it looks like this:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.56
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
After modifying the configuration file, restart the network service for the changes to take effect:
service network restart
2. Configure the yum repositories (run on every node)
(1) Back up the original yum repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
(2) Download the Aliyun yum repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
(3) Rebuild the yum cache
yum makecache fast
(4) Configure the yum repository needed to install k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
(5) Clean the yum cache
yum clean all
(6) Rebuild the yum cache
yum makecache fast
(7) Update packages
yum -y update
(8) Install prerequisite packages
yum -y install yum-utils device-mapper-persistent-data lvm2
(9) Add the Docker CE repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache fast
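To confirm the repository changes took effect, yum repolist should now list the Aliyun base, kubernetes, and docker-ce repositories:
yum repolist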
3. Install base packages (run on every node)
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate
4. Disable the firewalld firewall
Run on every node. CentOS 7 uses firewalld by default; stop the firewalld service and disable it:
systemctl stop firewalld && systemctl disable firewalld
5. Install iptables
Run on every node. If firewalld does not suit you, you can install iptables instead; this step is optional, depending on your needs.
5.1 Install iptables
yum install iptables-services -y
5.2 Stop and disable iptables
service iptables stop && systemctl disable iptables
6. Time synchronization (run on every node)
6.1 Synchronize the time
ntpdate cn.pool.ntp.org
6.2 Add a cron job that synchronizes every hour
1) crontab -e
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
2) Restart the crond service:
service crond restart
7. Disable SELinux (run on every node)
Disable SELinux permanently so it stays off after the machine reboots.
Edit /etc/sysconfig/selinux and /etc/selinux/config and change
SELINUX=enforcing to SELINUX=disabled, or use the sed commands below:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
After modifying these files, reboot the machine; a forced reboot also works:
reboot -f
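After the reboot, a quick check that SELinux is really off (getenforce is part of the standard SELinux tooling):
getenforce
#Expected output: Disabled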
8. Disable the swap partition (run on every node)
swapoff -a
#To disable swap permanently, comment out the swap line in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab
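To confirm swap is fully disabled, the Swap row of free should show all zeros:
free -m | grep -i swap
#Expected output: Swap: 0 0 0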
9. Adjust kernel parameters (run on every node)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
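To confirm the parameters are active, query them directly; both should print 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables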
10. Set the hostnames
On 192.168.0.6:
hostnamectl set-hostname master1
On 192.168.0.56:
hostnamectl set-hostname node1
11. Configure the hosts file (run on every node)
Add the following lines to /etc/hosts:
192.168.0.6 master1
192.168.0.56 node1
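A sketch that appends both entries in one step on each node (skip any host where they are already present):
cat >> /etc/hosts <<EOF
192.168.0.6 master1
192.168.0.56 node1
EOF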
12. Configure passwordless SSH login from master1 to node1
Run on master1:
ssh-keygen -t rsa
#Press Enter at every prompt
cd /root && ssh-copy-id -i .ssh/id_rsa.pub root@node1
#Answer yes when prompted, then enter the root password of node1
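To confirm passwordless login works, the following should print node1 without prompting for a password:
ssh root@node1 hostname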
III. Install a Kubernetes 1.18.6 Cluster with a Single Master Node
1. Install Docker 19.03 (run on every node)
1.1 List the available docker versions
yum list docker-ce --showduplicates |sort -r
1.2 Install version 19.03.7
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
#Check docker's status; active (running) means docker is running normally
systemctl status docker
1.3 Modify the docker configuration file
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
1.4 Restart docker to apply the configuration
systemctl daemon-reload && systemctl restart docker
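To confirm the new configuration is in effect, docker info should now report the systemd cgroup driver:
docker info | grep -i "cgroup driver"
#Expected output: Cgroup Driver: systemd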
1.5 Route bridged packets through iptables and make the settings permanent
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
""" > /etc/sysctl.conf
sysctl -p
1.6 Enable IPVS. Without IPVS, kube-proxy falls back to iptables mode, which is less efficient, so the official documentation recommends loading the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF
modprobe ip_vs
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
2. Install Kubernetes 1.18.6
2.1 Install kubeadm and kubelet on master1 and node1
yum install kubeadm-1.18.6 kubelet-1.18.6 -y
systemctl enable kubelet
Initialize the k8s cluster:
kubeadm init --kubernetes-version=v1.18.6 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.6 --image-repository registry.aliyuncs.com/google_containers
Notes:
--image-repository registry.aliyuncs.com/google_containers selects the Aliyun image mirror, from which we can pull the images for any k8s version;
--kubernetes-version=v1.18.6 specifies the k8s version
When initialization succeeds, the command prints output like the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.6:6443 --token si1c9n.3c5os94xcuzq6wl3 \
    --discovery-token-ca-cert-hash sha256:9d3a35eab0f6badba61ebb833d420902e4f9e0168ee1c1374121668ab382a596
Note: remember the kubeadm join ... command; it is what joins node1 to the cluster and must be run on that node. The token is different every time kubeadm init runs, so record the output of your own run; it is used below.
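If you lose the join command or the token expires (tokens are valid for 24 hours by default), a new one can be printed on master1 at any time:
kubeadm token create --print-join-command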
2.2 Run the following on the master1 node; this grants you permission to operate on k8s resources
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
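A quick sanity check that kubectl can now reach the API server:
kubectl cluster-info
#Should report the control plane running at https://192.168.0.6:6443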
Run on the master1 node:
kubectl get nodes
The output shows the master1 node in NotReady state:
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   8m11s   v1.18.6
kubectl get pods -n kube-system
The output shows the coredns pods stuck in Pending state:
coredns-7ff77c879f-j48h6   0/1   Pending   0   3m16s
coredns-7ff77c879f-lrb77   0/1   Pending   0   3m16s
The node is NotReady and coredns is Pending because no network plugin is installed yet; we need to install calico or flannel. Next we install the calico network plugin on the master1 node.
The images calico needs are quay.io/calico/cni:v3.5.3 and quay.io/calico/node:v3.5.3; they are in the Baidu Netdisk linked at the top of this article.
Manually upload the two image archives to every node and load them with docker load -i:
docker load -i cni.tar.gz
docker load -i calico-node.tar.gz
Run the following on the master1 node:
kubectl apply -f calico.yaml
The contents of calico.yaml are available at the address below; open the link and copy the contents:
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml
If the link above does not open, visit the GitHub address below, clone or download the repository, extract it, and copy the file to the master1 node:
https://github.com/luckylucky421/kubernetes1.17.3/tree/master
Run on the master1 node:
kubectl get nodes
The output now shows STATUS as Ready:
NAME STATUS ROLES AGE VERSION
master1 Ready master 98m v1.18.6
kubectl get pods -n kube-system
Coredns is in Running state as well, which means the calico installation on the master1 node is complete:
NAME READY STATUS RESTARTS AGE
calico-node-6rvqm 1/1 Running 0 17m
coredns-7ff77c879f-j48h6 1/1 Running 0 97m
coredns-7ff77c879f-lrb77 1/1 Running 0 97m
etcd-master1 1/1 Running 0 97m
kube-apiserver-master1 1/1 Running 0 97m
kube-controller-manager-master1 1/1 Running 0 97m
kube-proxy-njft6 1/1 Running 0 97m
kube-scheduler-master1 1/1 Running 0 97m
2.3 Join node1 to the k8s cluster; run on the node1 node
kubeadm join 192.168.0.6:6443 --token si1c9n.3c5os94xcuzq6wl3 \
    --discovery-token-ca-cert-hash sha256:9d3a35eab0f6badba61ebb833d420902e4f9e0168ee1c1374121668ab382a596
Note: the kubeadm join command above is the one generated during initialization in section 2.1
2.4 Check the cluster node status on the master1 node
kubectl get nodes
The output looks like:
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   3m36s   v1.18.6
node1     Ready    <none>   3m36s   v1.18.6
node1 has joined the k8s cluster; this completes the setup of the single-master k8s cluster
2.5 Install traefik
Official documentation: https://docs.traefik.io/
Upload the traefik image archive to every node and load it with docker load -i as shown below; the archive is in the Baidu Netdisk linked at the top of this article:
docker load -i traefik_1_7_9.tar.gz
The image used by traefik is k8s.gcr.io/traefik:1.7.9
1) Generate the traefik certificate; run on master1:
mkdir ~/ikube/tls/ -p
echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = CN
stateOrProvinceName = State orProvince Name (full name)
stateOrProvinceName_value = Beijing
localityName = Locality Name (eg, city)
localityName_value =Haidian
organizationName =Organization Name (eg, company)
organizationName_value = Channelsoft
organizationalUnitName = OrganizationalUnit Name (eg, p)
organizationalUnitName_value = R & D Department
commonName = Common Name (eg, your name or your server's hostname)
commonName_value =*.http://multi.io
emailAddress = Email Address
emailAddress_value =lentil1016@gmail.com
""" > ~/ikube/tls/openssl.cnf
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
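Before using the certificate, a quick inspection of its subject and validity window (standard openssl usage):
openssl x509 -in ~/ikube/tls/tls.crt -noout -subject -dates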
2) Apply the yaml file to create traefik
kubectl apply -f traefik.yaml
The contents of traefik.yaml can be copied from the following address:
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/traefik.yaml
If the address above is unreachable, visit the link below, clone or download the branch, and copy the yaml file to master1 manually:
https://github.com/luckylucky421/kubernetes1.17.3
3) Verify that traefik deployed successfully:
kubectl get pods -n kube-system
traefik-ingress-controller-csbp8   1/1   Running   0   5s
traefik-ingress-controller-hqkwf   1/1   Running   0   5s
3. Install kubernetes-dashboard version 2 (the Kubernetes web UI)
Upload the kubernetes-dashboard image archives to every node and load them with docker load -i as shown below; the archives are in the Baidu Netdisk linked at the top of this article:
docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz
The loaded images are kubernetesui/dashboard:v2.0.0-beta8 and kubernetesui/metrics-scraper:v1.0.1
Run on the master1 node:
kubectl apply -f kubernetes-dashboard.yaml
The contents of kubernetes-dashboard.yaml can be copied from the following address: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/kubernetes-dashboard.yaml
If the address above is unreachable, visit the link below, clone or download the branch, and copy the yaml file to master1 manually:
https://github.com/luckylucky421/kubernetes1.17.3
Verify that the dashboard installed successfully:
kubectl get pods -n kubernetes-dashboard
The output below means the dashboard installed successfully:
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-8xmtf   1/1     Running   0          60s
kubernetes-dashboard-5f98bdb684-ph9wg        1/1     Running   2          60s
Check the dashboard's front-end service:
kubectl get svc -n kubernetes-dashboard
The output looks like:
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP   50s
kubernetes-dashboard        ClusterIP   10.105.253.155   <none>        443/TCP    50s
Change the service type to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort, then save and exit.
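A non-interactive alternative that makes the same change, if you prefer to avoid the editor (a sketch using kubectl patch):
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'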
kubectl get svc -n kubernetes-dashboard
The output looks like:
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP        3m59s
kubernetes-dashboard        NodePort    10.105.253.155   <none>        443:31175/TCP   4m
The service type is now NodePort, so the kubernetes dashboard is reachable at the master1 node IP on port 31175. In my environment the address is:
https://192.168.0.6:31175/
The dashboard login page appears.
3.1 Log in to the dashboard with the default token specified in the yaml file
1) List the secrets in the kubernetes-dashboard namespace
kubectl get secret -n kubernetes-dashboard
The output looks like:
NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s
2) Inspect the token-bearing secret kubernetes-dashboard-token-ngcmg
kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
The output looks like:
...
... token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Copy the value after token: and paste it into the token field on the browser login page:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
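A one-liner that extracts and decodes the same token without scrolling through the describe output (a sketch; the secret name suffix varies per cluster):
kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-ngcmg -o jsonpath='{.data.token}' | base64 -d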
Click Sign in to log in. By default you can only see the contents of the default namespace.
3.2 Create an administrator token that can view resources in every namespace
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
1) List the secrets in the kubernetes-dashboard namespace
kubectl get secret -n kubernetes-dashboard
The output looks like:
NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s
2) Inspect the token-bearing secret kubernetes-dashboard-token-ngcmg
kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
The output looks like:
...
... token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Copy the value after token: and paste it into the token field on the browser login page:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Click Sign in to log in. This time you can view and operate on resources in every namespace.
4. Install the metrics components
Upload the metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz image archives to every node and load them with docker load -i as shown below; the archives are in the Baidu Netdisk linked at the top of this article:
docker load -i metrics-server-amd64_0_3_1.tar.gz
docker load -i addon.tar.gz
metrics-server is version 0.3.1; it uses the image k8s.gcr.io/metrics-server-amd64:v0.3.1
addon-resizer is version 1.8.4; it uses the image k8s.gcr.io/addon-resizer:1.8.4
Run on the k8s master1 node:
kubectl apply -f metrics.yaml
The contents of metrics.yaml can be copied from the following address:
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml
If the address above is unreachable, visit the link below, clone or download the branch, and copy the yaml file to master1 manually:
https://github.com/luckylucky421/kubernetes1.17.3
After all the components above are installed, check that they are healthy; every pod's STATUS should be Running, as shown below:
kubectl get pods -n kube-system -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE
calico-node-h66ll                  1/1     Running   0          51m   192.168.0.56   node1
calico-node-r4k6w                  1/1     Running   0          58m   192.168.0.6    master1
coredns-66bff467f8-2cj5k           1/1     Running   0          70m   10.244.0.3     master1
coredns-66bff467f8-nl9zt           1/1     Running   0          70m   10.244.0.2     master1
etcd-master1                       1/1     Running   0          70m   192.168.0.6    master1
kube-apiserver-master1             1/1     Running   0          70m   192.168.0.6    master1
kube-controller-manager-master1    1/1     Running   0          70m   192.168.0.6    master1
kube-proxy-qts4n                   1/1     Running   0          70m   192.168.0.6    master1
kube-proxy-x647c                   1/1     Running   0          51m   192.168.0.56   node1
kube-scheduler-master1             1/1     Running   0          70m   192.168.0.6    master1
metrics-server-8459f8db8c-gqsks    2/2     Running   0          16s   10.244.1.6     node1
traefik-ingress-controller-xhcfb   1/1     Running   0          39m   192.168.0.6    master1
traefik-ingress-controller-zkdpt   1/1     Running   0          39m   192.168.0.56   node1
If metrics-server-8459f8db8c-gqsks shows Running, the metrics-server component deployed successfully.
You can now run kubectl top pods -n kube-system or kubectl top nodes on the master1 node; an example follows.
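For example, node metrics should now be available (the values will differ per environment):
kubectl top nodes
#Expected columns: NAME, CPU(cores), CPU%, MEMORY(bytes), MEMORY%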