Kubernetes v1.19.0 High-Availability Installation and Deployment


5.2 Initialize k8s-master01

1 Write the configuration file

Run on all nodes

cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "k8s-lb:16443"
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.211.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  # SupportIPVSProxyMode has been GA since v1.11, so this gate is optional on v1.19
  SupportIPVSProxyMode: true
mode: ipvs
EOF
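
Note that controlPlaneEndpoint points at the name k8s-lb on port 16443, the load-balanced VIP. Every node must be able to resolve that name; assuming there is no DNS record for it, a hosts-file entry mapping it to the VIP 172.20.5.10 used later in this guide is enough:

# assumes k8s-lb is not resolvable via DNS; the VIP is the one shown in section 7
cat >> /etc/hosts <<EOF
172.20.5.10 k8s-lb
EOF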

2 Pull the images

Run on all nodes

kubeadm config images pull --config kubeadm.yaml
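
To preview exactly which images will be fetched from the Aliyun mirror before pulling, kubeadm can list them from the same configuration file:

kubeadm config images list --config kubeadm.yaml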

3 Initialize the cluster

Run on k8s-master01

kubeadm init --config kubeadm.yaml --upload-certs
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc
...
...
...
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-lb:16443 --token n7ipit.4dkqd9uxa0b9d153 \
    --discovery-token-ca-cert-hash sha256:41a1353a03c99f46868294c28f9948bbc2cca957d98eb010435a493112ec7caa \
    --control-plane --certificate-key 6ce0872da76396c30c430a0d4e629bee46a508890c29d0f86d7982380c621889

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-lb:16443 --token n7ipit.4dkqd9uxa0b9d153 \
    --discovery-token-ca-cert-hash sha256:41a1353a03c99f46868294c28f9948bbc2cca957d98eb010435a493112ec7caa

Configure environment variables

cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
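
A quick sanity check that kubectl can now reach the API server through the load-balanced endpoint; it should report the control plane at https://k8s-lb:16443:

kubectl cluster-info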

5.3 Install the network plugin

wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
kubectl apply -f calico.yaml
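
The Calico pods take a minute or two to start. Watch them until calico-node is Running on every node and calico-kube-controllers is Running (Ctrl-C to stop watching):

kubectl get pods -n kube-system -w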

5.4 Check the cluster status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   19m   v1.19.0

6 Join nodes to the cluster

6.1 Join the master nodes

Make sure the nodes being joined have completed the initial setup: docker, kubeadm, kubectl, and kubelet are installed, and the required images have already been pulled.
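
The token and certificate key below are taken from the kubeadm init output above. If they have expired (bootstrap tokens last 24 hours by default; the uploaded certificates are deleted after two hours, as the output warns), regenerate them on k8s-master01:

# print a fresh worker join command with a new token
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs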

Run on k8s-master02 and k8s-master03

kubeadm join k8s-lb:16443 --token n7ipit.4dkqd9uxa0b9d153 \
    --discovery-token-ca-cert-hash sha256:41a1353a03c99f46868294c28f9948bbc2cca957d98eb010435a493112ec7caa \
    --control-plane --certificate-key 6ce0872da76396c30c430a0d4e629bee46a508890c29d0f86d7982380c621889
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
...

Check the cluster status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   19h   v1.19.0
k8s-master02   Ready    master   19h   v1.19.0
k8s-master03   Ready    master   19h   v1.19.0

6.2 Join the worker node

Run on k8s-node01

kubeadm join k8s-lb:16443 --token n7ipit.4dkqd9uxa0b9d153 \
    --discovery-token-ca-cert-hash sha256:41a1353a03c99f46868294c28f9948bbc2cca957d98eb010435a493112ec7caa

Query the cluster nodes

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   19h   v1.19.0
k8s-master02   Ready    master   19h   v1.19.0
k8s-master03   Ready    master   19h   v1.19.0
k8s-node01     Ready    worker   17h   v1.19.0
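
Note: kubeadm does not set a role label on worker nodes, so ROLES normally shows <none> for k8s-node01. The worker role above was presumably added by hand with something like:

kubectl label node k8s-node01 node-role.kubernetes.io/worker=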

7 Test cluster high availability

7.1 Check which node currently holds the VIP

[root@k8s-master01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:82:dc:70:b8:00 brd ff:ff:ff:ff:ff:ff
    inet 172.20.5.11/24 brd 172.20.5.255 scope global eth0
       valid_lft forever preferred_lft forever
    # the virtual IP is currently on k8s-master01
    inet 172.20.5.10/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5cf6:fe52:d77c:a6c6/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::6e1c:9620:3254:e840/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::903f:b002:5039:f925/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
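
Assuming the VIP 172.20.5.10 is managed by keepalived (the k8s-lb setup from the earlier sections of this guide), its owner can also be confirmed from the service side:

systemctl status keepalived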

7.2 Power off k8s-master01

[root@k8s-master01 ~]# poweroff

7.3 Verify the VIP failover and cluster functionality

[root@k8s-master02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:f6:59:0d:a6:00 brd ff:ff:ff:ff:ff:ff
    inet 172.20.5.12/24 brd 172.20.5.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.20.5.10/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5cf6:fe52:d77c:a6c6/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::6e1c:9620:3254:e840/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::903f:b002:5039:f925/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
[root@k8s-master02 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   master   19h   v1.19.0
k8s-master02   Ready      master   19h   v1.19.0
k8s-master03   Ready      master   19h   v1.19.0
k8s-node01     Ready      worker   17h   v1.19.0
[root@k8s-master02 ~]# kubectl get po -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP                NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-75d555c48-6h26g   1/1     Running   0          18h   192.168.195.3     k8s-master03   <none>           <none>
calico-node-dknrd                         1/1     Running   1          19h   172.20.5.11       k8s-master01   <none>           <none>
calico-node-klpd8                         1/1     Running   0          19h   172.20.5.13       k8s-master03   <none>           <none>
calico-node-sqps2                         1/1     Running   0          17h   172.20.2.11       k8s-node01     <none>           <none>
calico-node-w4nh9                         1/1     Running   0          19h   172.20.5.12       k8s-master02   <none>           <none>
coredns-546565776c-fhvc8                  1/1     Running   0          19h   192.168.122.131   k8s-master02   <none>           <none>
coredns-546565776c-kf7sm                  1/1     Running   0          19h   192.168.122.129   k8s-master02   <none>           <none>
etcd-k8s-master01                         1/1     Running   1          19h   172.20.5.11       k8s-master01   <none>           <none>
etcd-k8s-master02                         1/1     Running   0          19h   172.20.5.12       k8s-master02   <none>           <none>
etcd-k8s-master03                         1/1     Running   0          19h   172.20.5.13       k8s-master03   <none>           <none>
kube-apiserver-k8s-master01               1/1     Running   1          19h   172.20.5.11       k8s-master01   <none>           <none>
kube-apiserver-k8s-master02               1/1     Running   0          19h   172.20.5.12       k8s-master02   <none>           <none>
kube-apiserver-k8s-master03               1/1     Running   2          19h   172.20.5.13       k8s-master03   <none>           <none>
kube-controller-manager-k8s-master01      1/1     Running   4          19h   172.20.5.11       k8s-master01   <none>           <none>
kube-controller-manager-k8s-master02      1/1     Running   1          19h   172.20.5.12       k8s-master02   <none>           <none>
kube-controller-manager-k8s-master03      1/1     Running   0          19h   172.20.5.13       k8s-master03   <none>           <none>
kube-proxy-cjm2b                          1/1     Running   1          17h   172.20.2.11       k8s-node01     <none>           <none>
kube-proxy-d7hs9                          1/1     Running   1          19h   172.20.5.11       k8s-master01   <none>           <none>
kube-proxy-s57dl                          1/1     Running   0          19h   172.20.5.13       k8s-master03   <none>           <none>
kube-proxy-z8bfl                          1/1     Running   0          19h   172.20.5.12       k8s-master02   <none>           <none>
kube-scheduler-k8s-master01               1/1     Running   3          19h   172.20.5.11       k8s-master01   <none>           <none>
kube-scheduler-k8s-master02               1/1     Running   0          19h   172.20.5.12       k8s-master02   <none>           <none>
kube-scheduler-k8s-master03               1/1     Running   0          19h   172.20.5.13       k8s-master03   <none>           <none>
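
The API server can also be probed directly through the load-balanced endpoint; under the default RBAC rules, /healthz is readable without credentials:

curl -k https://k8s-lb:16443/healthz

A plain ok in the response confirms the VIP is still serving the API after the failover.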

8 Troubleshooting

8.1 Failed to create the registry container

OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:297: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
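
This error is most often reported on CentOS 7 hosts where the stock 3.10 kernel is too old for the docker/runc combination in use. Hedged here since the root cause varies, the common first steps are to check the kernel and restart docker; if the error persists, upgrading the kernel is the frequently suggested fix:

# check the running kernel; very old 3.10 builds are the usual suspect
uname -r
# a docker restart sometimes clears the condition
systemctl restart docker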

8.2 Node stays unhealthy after kubeadm join; kubelet keeps restarting

kubelet: Failed to start ContainerManager Cannot set property TasksAccounting, or unknown property
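
This message means the host's systemd is too old to understand the TasksAccounting property referenced by the kubelet service unit. The widely reported fix on CentOS 7 (a common remedy, not verified against this exact environment) is to update systemd and restart kubelet:

yum update -y systemd
systemctl daemon-reload
systemctl restart kubelet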

9 References

https://mp.weixin.qq.com/s/S01dVNKKg4E41wdKxHDiZQ
