kubeadm 1.23

Resource planning

Role          IP               Components
k8s-master1   192.168.60.128   kubectl-1.23.0, kubelet-1.23.0, kubeadm-1.23.0, docker-ce-20.10.21, keepalived, haproxy
k8s-master2   192.168.60.129   kubectl-1.23.0, kubelet-1.23.0, kubeadm-1.23.0, docker-ce-20.10.21, keepalived, haproxy
k8s-master3   192.168.60.130   kubectl-1.23.0, kubelet-1.23.0, kubeadm-1.23.0, docker-ce-20.10.21, keepalived, haproxy
k8s-node1     192.168.60.131   kubectl-1.23.0, kubelet-1.23.0, kubeadm-1.23.0, docker-ce-20.10.21
harbor        192.168.60.132   docker-ce-20.10.21
vip           192.168.60.200   (virtual IP shared by the masters)

High-availability deployment

haproxy + keepalived (install on every master node)

HAProxy is a free, very fast and reliable solution that provides high availability, load balancing, and proxying for TCP and HTTP-based applications.

yum -y install haproxy

Add the haproxy load-balancing configuration (remember to adjust the IP addresses for your environment). The frontend listens on port 16443 and proxies to port 6443 on all master nodes.

cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# main frontend which proxies to the kube-apiserver backend
#---------------------------------------------------------------------
frontend  apiserver
    mode tcp
    bind *:16443 
    option tcplog
    default_backend             apiserver

#---------------------------------------------------------------------
# backend: the kube-apiserver instances, round-robin balanced
#---------------------------------------------------------------------
backend apiserver
    balance     roundrobin
    server  192.168.60.128  192.168.60.128:6443 check
    server  192.168.60.129  192.168.60.129:6443 check
    server  192.168.60.130  192.168.60.130:6443 check

#---------------------------------------------------------------------
# HAProxy statistics page
#---------------------------------------------------------------------
listen stats
    bind  *:1080
    stats auth admin:wangjian
    stats refresh 5s # refresh every 5 seconds
    stats realm HAProxy\ Statistics
    stats uri /admin
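
Before starting the service, the configuration file can optionally be checked for syntax errors:

haproxy -c -f /etc/haproxy/haproxy.cfg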

Start haproxy

systemctl enable haproxy 
systemctl daemon-reload

systemctl stop haproxy
systemctl start haproxy 
systemctl status haproxy

Access the HAProxy statistics page

Check the backend status; masters that have not been deployed yet will show as DOWN.

http://192.168.60.200:1080/admin

[Image 1]

Log in with admin / wangjian (the credentials from the stats auth line above).

Verify access to the proxied service

If the apiserver has not been deployed yet, these URLs will not respond.

https://192.168.60.200:6443

https://192.168.60.128:6443

[Image 2]
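
Once the control plane is up, the same check can be done from the command line; a quick sketch (the /version endpoint is readable without authentication on a default kubeadm cluster):

curl -k https://192.168.60.200:6443/version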

Install keepalived (on the same nodes as the masters)

yum -y install keepalived

Edit the keepalived configuration file and the health-check script

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF 
! Configuration File for keepalived

global_defs {
   router_id apiserver # arbitrary name; keep it identical on the backup nodes
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER # MASTER marks the primary node; change to BACKUP on backup nodes
    interface ens32 # change to the actual NIC name; the virtual IP is added on this interface
    virtual_router_id 51
    priority 200 # priority; set backup nodes to 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.60.200 # this is the VIP
    }

    track_script {
        check_haproxy
     }

}

EOF

Start keepalived

systemctl enable keepalived 
systemctl start keepalived 
systemctl status keepalived

Use ip a to check whether the virtual IP is bound to the NIC.

[Image 3]

This is the backup node's configuration:

[Image 4]
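
For reference, a sketch of the vrrp_instance block on a backup node; global_defs and the check_haproxy vrrp_script stay the same as on the MASTER, only state and priority differ (ens32 is assumed to be the NIC name, as above):

vrrp_instance VI_1 {
    state BACKUP        # backup node
    interface ens32
    virtual_router_id 51
    priority 100        # lower than the MASTER's 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.60.200
    }
    track_script {
        check_haproxy
    }
}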

This completes the high-availability setup.

Deploy Kubernetes

Base environment configuration

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent

# Set the matching hostname on each host
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-master3
hostnamectl set-hostname k8s-node1

# Update /etc/hosts
cat >> /etc/hosts << EOF
192.168.60.128 k8s-master1
192.168.60.129 k8s-master2
192.168.60.130 k8s-master3
192.168.60.131 k8s-node1
192.168.60.200 k8s-vip
EOF

# Pass bridged IPv4 traffic to iptables chains
cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
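
Note: the two bridge sysctl keys above require the br_netfilter kernel module. If sysctl --system complains that the keys do not exist (a common situation not covered in the original), load the module first:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # load on boot
sysctl --system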

Install docker - omitted here (must be installed on every node); a minimal sketch follows.
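
A rough sketch of what the Docker installation typically involves on CentOS 7; the repo URL, version pinning, and the insecure-registry entry for the harbor host are assumptions to adapt, not steps taken from the original:

yum -y install yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.60.132:80"]
}
EOF
systemctl enable --now docker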

Deploy the master nodes

Download only, without installing

mkdir -p /root/k8s-1.23.0-rpm
yum -y install --downloadonly --downloaddir=/root/k8s-1.23.0-rpm kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0

mkdir -p /root/k8s-1.25.0-rpm
yum -y install --downloadonly --downloaddir=/root/k8s-1.25.0-rpm kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
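
The download commands above assume a Kubernetes yum repository is already configured on the node. A commonly used mirror definition (an assumption; the original does not show which repository it uses):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF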

Install (required on every node)

[Image 5]
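
The screenshot shows the offline installation from the downloaded rpms; a sketch of the equivalent command (assuming the directory created in the download step above):

cd /root/k8s-1.23.0-rpm
yum -y localinstall *.rpm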
Check the version:
kubeadm version
systemctl enable kubelet

The images required by version 1.23 are as follows:

[root@k8s-master1 k8s-1.23.0-rpm]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.0   e6bf5ddd4098   11 months ago   135MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.0   37c6aeb3663b   11 months ago   125MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.0   56c5af1d00b5   11 months ago   53.5MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.0   e03484a90585   11 months ago   112MB
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61   12 months ago   293MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   13 months ago   46.8MB
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12   15 months ago   683kB

[Image 6]
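
The kubeadm init below pulls from the harbor registry at 192.168.60.132:80/library, so these images must be pushed there beforehand. A sketch of one way to mirror them (assuming the library project exists and the registry is reachable; the original does not show this step):

for img in kube-apiserver:v1.23.0 kube-controller-manager:v1.23.0 kube-scheduler:v1.23.0 \
           kube-proxy:v1.23.0 etcd:3.5.1-0 coredns:v1.8.6 pause:3.6; do
  docker pull registry.aliyuncs.com/google_containers/$img
  docker tag  registry.aliyuncs.com/google_containers/$img 192.168.60.132:80/library/$img
  docker push 192.168.60.132:80/library/$img
done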

Initialize master1

kubeadm init \
  --control-plane-endpoint 192.168.60.200 \
  --image-repository 192.168.60.132:80/library \
  --kubernetes-version v1.23.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

To see verbose logs, append:
--v=6
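
Images can also be pulled ahead of time, as the preflight output suggests:

kubeadm config images pull --image-repository 192.168.60.132:80/library --kubernetes-version v1.23.0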

Output of a successful initialization:

[root@k8s-master1 ~]# kubeadm init \
>   --control-plane-endpoint 192.168.60.200 \
>   --image-repository 192.168.60.132:80/library \
>   --kubernetes-version v1.23.0 \
>   --service-cidr=10.1.0.0/16 \
>   --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.60.128 192.168.60.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.60.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.60.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.503467 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jl9y6f.400u2n6ndxyifeta
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta \
	--discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta \
	--discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 

The output above means master1 has initialized successfully. Next, configure the kubectl environment as prompted:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Checking the cluster now shows only one node:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE    VERSION
k8s-master1   NotReady   control-plane,master   3m2s   v1.23.0

With a kubeadm-based installation, all system components run as containers in the kube-system namespace. Check the pod status:

[root@k8s-master1 ~]# kubectl get pods -n kube-system -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
coredns-c97bc75dd-2knt7               0/1     Pending   0          3m34s   <none>           <none>        <none>           <none>
coredns-c97bc75dd-mjl86               0/1     Pending   0          3m34s   <none>           <none>        <none>           <none>
etcd-k8s-master1                      1/1     Running   0          3m46s   192.168.60.128   k8s-master1   <none>           <none>
kube-apiserver-k8s-master1            1/1     Running   0          3m46s   192.168.60.128   k8s-master1   <none>           <none>
kube-controller-manager-k8s-master1   1/1     Running   0          3m49s   192.168.60.128   k8s-master1   <none>           <none>
kube-proxy-7smnh                      1/1     Running   0          3m34s   192.168.60.128   k8s-master1   <none>           <none>
kube-scheduler-k8s-master1            1/1     Running   0          3m46s   192.168.60.128   k8s-master1   <none>           <none>

Preparation on master2

  • Apply the same base system configuration as above
  • If the images are available, import them in advance to save time
  • docker and the k8s-1.23.0 rpms must already be installed
  • Copy the certificate files generated on master1 to master2; because this is one cluster, the certificates must be identical. The files to copy are listed below (a copy sketch follows the listing):
[root@k8s-master2 pki]# pwd
/etc/kubernetes/pki
[root@k8s-master2 pki]# ll
total 24
-rw-r--r-- 1 root root 1099 Nov 25 14:00 ca.crt
-rw-r--r-- 1 root root 1675 Nov 25 14:00 ca.key
drwxr-xr-x 2 root root   34 Nov 25 14:00 etcd
-rw-r--r-- 1 root root 1115 Nov 25 14:00 front-proxy-ca.crt
-rw-r--r-- 1 root root 1679 Nov 25 14:00 front-proxy-ca.key
-rw-r--r-- 1 root root 1675 Nov 25 14:00 sa.key
-rw-r--r-- 1 root root  451 Nov 25 14:00 sa.pub
[root@k8s-master2 pki]# cd etcd/
[root@k8s-master2 etcd]# pwd
/etc/kubernetes/pki/etcd
[root@k8s-master2 etcd]# ll
total 8
-rw-r--r-- 1 root root 1086 Nov 25 14:00 ca.crt
-rw-r--r-- 1 root root 1679 Nov 25 14:00 ca.key
[root@k8s-master2 etcd]# 

Only the files shown above are needed; do not copy anything else.
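
A sketch of copying those files from master1 (run on master1, assuming root SSH access to master2):

ssh root@192.168.60.129 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} \
    root@192.168.60.129:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@192.168.60.129:/etc/kubernetes/pki/etcd/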

Once all of the above is done, join the master cluster.

Join master2

kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta \
> --discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 \
> --control-plane

Log of a successful join:

[root@k8s-master2 kubernetes]# kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta \
> --discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 \
> --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.60.129 192.168.60.200]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.60.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.60.129 127.0.0.1 ::1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

To run kubectl commands on master2 as well, set up the kubeconfig there too:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now check the master cluster again:

[root@k8s-master2 etcd]# kubectl get nodes
NAME          STATUS     ROLES                  AGE    VERSION
k8s-master1   NotReady   control-plane,master   21m    v1.23.0
k8s-master2   NotReady   control-plane,master   3m7s   v1.23.0

Join master3

Joining master3 works exactly the same way as master2.

This is the log of master3 joining successfully:

kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta --discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 --control-plane

[root@k8s-master3 kubernetes]# kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta \
> --discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 \
> --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master3 localhost] and IPs [192.168.60.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master3 localhost] and IPs [192.168.60.130 127.0.0.1 ::1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.60.130 192.168.60.200]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Join the worker node

  • The base system configuration is done
  • docker is installed
  • The k8s-1.23.0 rpms are installed

With those prerequisites met, the node can join the cluster:

kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta \
> --discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 
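If the bootstrap token from the init output has expired (tokens are valid for 24 hours by default), a fresh join command can be printed on any master:

kubeadm token create --print-join-command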

Log of a successful join:

[root@k8s-node1 k8s-1.23.0-rpm]# kubeadm join 192.168.60.200:6443 --token jl9y6f.400u2n6ndxyifeta \
> --discovery-token-ca-cert-hash sha256:47383e3963e5b95aeaee2312d163945397329812a97fb3ff6c4a3884e8988fc8 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster again; the node has joined successfully:

[root@k8s-master2 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   4h53m   v1.23.0
k8s-master2   Ready    control-plane,master   4h43m   v1.23.0
k8s-master3   Ready    control-plane,master   3h57m   v1.23.0
k8s-node1     Ready    worker                 138m    v1.23.0

Deploy the pod network

Without a pod network deployed, all nodes stay NotReady.

For the network deployment itself, see the 1.19.0 article.
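
One thing worth double-checking before applying the manifest (an assumption about the standard Calico manifest, not something shown in the original): the CALICO_IPV4POOL_CIDR setting, if enabled, should match the --pod-network-cidr passed to kubeadm init (10.244.0.0/16).

# Inspect the pool CIDR setting in the manifest before applying it
grep -n -A1 CALICO_IPV4POOL_CIDR calicov3.20.yaml
# If it is set, make sure the value reads 10.244.0.0/16, then apply
kubectl apply -f calicov3.20.yaml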

[root@k8s-master1 ~]# kubectl apply -f calicov3.20.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

Network deployment complete.

Check the nodes again:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
k8s-master1   Ready    control-plane,master   147m   v1.23.0
k8s-master2   Ready    control-plane,master   129m   v1.23.0
k8s-node1     Ready    <none>                 89m    v1.23.0

Deploy the dashboard

Deploy it from any one of the masters.

See the 1.19.0 article for details.

According to the official release notes, the dashboard version matching Kubernetes 1.23 is v2.5.1:

https://github.com/kubernetes/dashboard/releases/tag/v2.5.1

The corresponding images are:

docker pull kubernetesui/dashboard:v2.5.1
docker pull kubernetesui/metrics-scraper:v1.0.8

Install:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
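
The recommended.yaml creates a ClusterIP service, while the URLs below use NodePort 30003, and the token belongs to a dashboard-admin ServiceAccount in kube-system; both steps are described in the 1.19.0 article. A rough sketch of what they amount to (the patch and the account name are assumptions, not taken from this article):

# Expose the dashboard service on NodePort 30003
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30003}]}}'

# Create an admin ServiceAccount and read its token (1.23 still auto-creates token secrets)
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')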

Dashboard access

https://192.168.60.128:30003

https://192.168.60.200:30003

Token:

eyJhbGciOiJSUzI1NiIsImtpZCI6IkZHVDFFSUY1YzQtTXphRGRsZXAyczNnVW1xNkZDQW5DZkVGell2SXNIdHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNXFrZ2wiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDc5MzY2ZmQtYmMxNS00MDkxLTgwNTQtYmYzMTQyODZhZmFhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.SNIQwWcPmIHTeS7cZlSJxIsavkDnxoCASeraD23vN_GcF0wVlrqWYTdQ_rwdPu1KeffZ62U7SbKEIm2CiOL6mn9PdagJPRE8EX5uGBfAXU1m5DYeITqI54pNZtGdif3gf2mO3GSzSX4jyVfsS_f3vl4S6QqM_KJ5SuWakHHVJVelk7Fta32t70LDBjGGwekg98L9p4W90_HP_Ptcjrrmsr2UCnCuA6s7hn0sxDUwUsBDj_bdpusbRmnhT8J-OgwfwhS83EzCtLZYjPPJiTPYI57fEcxBerJQE7w2tOxdYNCXGlxN82tLRlS79wn02kGftdxheSBAz4_Mavfp_sICcQ

If the page does not load, check whether the pods are running:

kubectl get pods,svc -n kubernetes-dashboard

Last resort: delete the dashboard and redeploy it:

kubectl delete -f recommended.yaml

Create an nginx deployment as a test

kubectl create deployment nginx --image nginx:1.20.0
kubectl expose deployment nginx --port=80 --type=NodePort 
[root@k8s-master1 ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-78549cd587-kds5h   1/1     Running   0          54s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        5h11m
service/nginx        NodePort    10.1.248.49   <none>        80:30887/TCP   54s
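
The service is also reachable directly on the NodePort shown above, for example:

curl http://192.168.60.128:30887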


# Expose port 80 on all addresses
nohup kubectl port-forward --address 0.0.0.0 pod/nginx-78549cd587-kds5h 80:80 &
# The pod name nginx-78549cd587-kds5h comes from the kubectl get pod,svc output above

The nginx page is now accessible:

http://192.168.60.200/

[Image 7]