Architecture Diagram
![Figure 1: installing k8s with kubeadm](https://wangjian.run/wp-content/uploads/2024/08/20240829171950905-1724923190-image-1024x580.png)
Environment Preparation
| Cluster Role | Hostname | Operating System | IP Address |
| --- | --- | --- | --- |
| master | k8s-master | CentOS 7.9 | 192.168.15.128 |
| node | k8s-node1 | CentOS 7.9 | 192.168.15.129 |
| node | k8s-node2 | CentOS 7.9 | 192.168.15.130 |
Configure the Aliyun Mirror Repository
https://developer.aliyun.com/mirror/centos
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
System Initialization
#Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
#Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
#Set the hostname (run the matching command on each host)
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
#Some Kubernetes network plugins use a network bridge. To make sure bridged
#traffic is processed by iptables, enable the related kernel parameters:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
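These parameters take effect only while the br_netfilter module is loaded; a minimal sketch (assuming the standard CentOS 7 paths) to load it and apply the settings immediately:
# Load the bridge netfilter module now and on every boot
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# Apply all sysctl configuration files without rebooting
sysctl --system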
#The hosts file must be updated on every node
cat >> /etc/hosts <<EOF
192.168.15.128 k8s-master
192.168.15.129 k8s-node1
192.168.15.130 k8s-node2
EOF
more /etc/hosts
#Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
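ntpdate performs a one-shot synchronization; to keep the clocks aligned over time you could schedule it periodically, for example via cron (the hourly schedule below is only an example):
# Re-sync the clock every hour (example schedule)
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1") | crontab -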
Install Docker
https://developer.aliyun.com/mirror/docker-ce
CentOS 7 (installing with yum)
# Step 1: install the required system utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository information
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start and enable the Docker service
systemctl enable docker.service
systemctl start docker
systemctl status docker
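Depending on the Docker version, Docker may default to the cgroupfs cgroup driver, while the kubelet in recent Kubernetes releases defaults to systemd. If "kubeadm init" later fails with cgroup-driver errors, a commonly used fix (a sketch, not needed on every setup) is to switch Docker to systemd:
# Align Docker's cgroup driver with the kubelet default (systemd)
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker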
# Note:
# The official repository enables only the latest stable packages by default. You can enable
# other package channels by editing the repo file. For example, the test channel is not enabled
# by default; you can turn it on as follows (other channels can be enabled the same way):
# vim /etc/yum.repos.d/docker-ce.repo
# Under [docker-ce-test], change enabled=0 to enabled=1
#
# Installing a specific version of Docker CE:
# Step 1: list the available Docker CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
# Loading mirror speeds from cached hostfile
# Loaded plugins: branch, fastestmirror, langpacks
# docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
# docker-ce.x86_64 17.03.1.ce-1.el7.centos @docker-ce-stable
# docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
# Available Packages
# Step 2: install the chosen version (VERSION is a string from the list above, e.g. 17.03.0.ce-1.el7.centos):
# sudo yum -y install docker-ce-[VERSION]
Install the Kubernetes Packages
https://developer.aliyun.com/mirror/kubernetes
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
This repository currently provides v1.24 – v1.29; newer versions will be added over time.
Because upstream does not expose a sync mechanism, the GPG index check may occasionally fail. In that case, install with:
yum install -y --nogpgcheck kubelet kubeadm kubectl
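Before initializing the cluster, it is worth confirming that the same versions were installed on every node:
kubeadm version -o short
kubelet --version
kubectl version --client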
Install cri-dockerd
With the Kubernetes 1.20 release, the project announced that, to streamline the core code and reduce the maintenance burden, "dockershim" would be removed in version 1.24. At the time Docker did not support CRI, which meant Kubernetes would no longer be able to use Docker as a container runtime. To solve this, Docker partnered with Mirantis to develop an adapter called "cri-dockerd", which handles the communication between the kubelet and Docker.
Therefore, from Kubernetes 1.24 onward, using Docker as the container runtime requires installing cri-dockerd. Packages for each platform are available on the GitHub Releases page (https://github.com/Mirantis/cri-dockerd/releases); download the appropriate one, upload it to all nodes, and install it.
[root@localhost ~]# rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm
After installation, back up and edit the systemd service file so that the Pause image it depends on points to the Aliyun registry:
cp /usr/lib/systemd/system/cri-docker.service /usr/lib/systemd/system/cri-docker.service.backup
vim /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
![Figure 2: installing k8s with kubeadm](https://wangjian.run/wp-content/uploads/2024/08/20240828134627533-1724823987-image-1024x243.png)
systemctl start cri-docker
systemctl enable cri-docker
systemctl status cri-docker
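As a quick sanity check, verify that the CRI socket exists; its path must match the --cri-socket flag passed to kubeadm below:
ls -l /var/run/cri-dockerd.sock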
Install the Master Node
Initialize the control plane with "kubeadm init":
kubeadm init --apiserver-advertise-address=192.168.15.128 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket=unix:///var/run/cri-dockerd.sock
The flags in this command mean the following (an equivalent config-file form is sketched after this list):
--apiserver-advertise-address: the IP address the API Server advertises. If unset, the default network interface is used.
--image-repository: the image registry to pull from. The default, registry.k8s.io, is not reachable from mainland China, so the Aliyun registry is specified here.
--kubernetes-version: the Kubernetes version to install.
--pod-network-cidr: the CIDR range of the Pod network.
--service-cidr: the CIDR range of the Service network.
--cri-socket: the UNIX socket file the kubelet uses to connect to the container runtime.
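For reference, the same initialization can also be written as a kubeadm configuration file. Below is a minimal sketch equivalent to the flags above (the file name kubeadm-config.yaml is arbitrary):
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.15.128
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
kubeadm init --config kubeadm-config.yaml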
A log of a successful run follows:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.15.128 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.15.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.15.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.15.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.004401 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: aqrwo8.base8cd5kmqhmi53
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.15.128:6443 --token aqrwo8.base8cd5kmqhmi53 \
--discovery-token-ca-cert-hash sha256:a04dbbc1319920d9b0e5a44ef3938f7465e61686b6a0528544d4152c910b76b8
Following the instructions in the output above, run these commands to start using the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
These commands copy /etc/kubernetes/admin.conf to $HOME/.kube/config so that kubectl can use it to connect to and manage the Kubernetes cluster.
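Optionally, enable kubectl shell completion to make day-to-day use easier (this assumes the bash-completion package is available in your repos):
yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc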
Check that the master was set up successfully:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 7m39s v1.28.13
Deploy the Worker Nodes
On both worker nodes, run the "kubeadm join" command returned by the master above, adding the --cri-socket flag, to add them to the cluster:
[root@k8s-node1 ~]# kubeadm join 192.168.15.128:6443 --token aqrwo8.base8cd5kmqhmi53 --discovery-token-ca-cert-hash sha256:a04dbbc1319920d9b0e5a44ef3938f7465e61686b6a0528544d4152c910b76b8 --cri-socket=unix:///var/run/cri-dockerd.sock
If the token has expired, you can generate a fresh join command on the master:
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.15.128:6443 --token otj0r9.jiymsyspupya49ci --discovery-token-ca-cert-hash sha256:a04dbbc1319920d9b0e5a44ef3938f7465e61686b6a0528544d4152c910b76b8
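You can also list the existing bootstrap tokens and their expiry times:
kubeadm token list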
On success, the log looks like this:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Now check the nodes from the master:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 93m v1.28.13
k8s-node1 NotReady <none> 30s v1.28.13
In the output above the node status is "NotReady", meaning the node is not yet ready. This is because the kubelet has not found a network plugin yet; it is nothing to worry about at this stage.
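If you want to confirm the cause, a couple of optional diagnostic commands (run the journalctl one on the affected node itself):
kubectl describe node k8s-node1 | grep -i -A3 ready
journalctl -u kubelet --no-pager | tail -n 20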
Node 2 is joined the same way; afterwards, check the nodes from the master again:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 122m v1.28.13
k8s-node1 NotReady <none> 29m v1.28.13
k8s-node2 NotReady <none> 10s v1.28.13
Deploy the Network Plugin
After the images have been imported to each node, run the following commands on the master:
[root@k8s-master ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml
[root@k8s-master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml
[root@k8s-master ~]# vim custom-resources.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16 # Change this value to match the Pod network CIDR passed to "kubeadm init"
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
...
[root@k8s-master ~]# kubectl create -f custom-resources.yaml
Wait a moment, then check the Pod objects:
[root@k8s-master ~]# kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-85955d4f5b-rgtnt 1/1 Running 0 2m42s
calico-node-7dntf 1/1 Running 0 2m42s
calico-node-glw6m 1/1 Running 0 2m42s
calico-node-rzm8z 1/1 Running 0 2m42s
calico-typha-9cc98854c-lxrtz 1/1 Running 0 2m36s
calico-typha-9cc98854c-xskp5 1/1 Running 0 2m42s
csi-node-driver-drkl7 2/2 Running 0 2m42s
csi-node-driver-lb6fk 2/2 Running 0 2m42s
csi-node-driver-t5b6z 2/2 Running 0 2m42s
All Pods show a "Running" status, which means Calico was installed successfully. Checking the nodes again with "kubectl get nodes" now shows a "Ready" status, meaning the nodes are ready:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 3h16m v1.28.13
k8s-node1 Ready <none> 103m v1.28.13
k8s-node2 Ready <none> 74m v1.28.13
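As a final check, confirm that the system Pods across all namespaces are healthy:
kubectl get pods -A -o wide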
Deploy the Dashboard
Dashboard is the officially developed web management UI. With it you can manage cluster resources, view application overviews, read container logs, access containers, and more.
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Set the Service type to "NodePort" and specify an access port so it is exposed outside the cluster; modify it as follows:
[root@k8s-master ~]# vim recommended.yaml
…
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # expose the Service as a NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001 # the external access port
  selector:
    k8s-app: kubernetes-dashboard
…
Create the resources in the cluster:
[root@k8s-master ~]# kubectl apply -f recommended.yaml
Check the Pod objects:
[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-5657497c4c-4jlvd 1/1 Running 0 9m50s
kubernetes-dashboard-78f87ddfc-vjcn7 1/1 Running 0 9m50s
All Pods show a "Running" status, so the Dashboard was installed successfully. Open "https://<node IP>:30001" in a browser and you will see the login page:
https://192.168.15.128:30001/#/login
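If the page does not load, it can help to confirm that the Service really is of type NodePort with the expected port mapping:
kubectl get svc -n kubernetes-dashboard kubernetes-dashboard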
![Figure 3: installing k8s with kubeadm](https://wangjian.run/wp-content/uploads/2024/08/20240828174140716-1724838100-image-1024x522.png)
Create a service account and grant it cluster administrator privileges (the account is created in the default namespace here, so the binding must reference default:admin-user):
[root@k8s-master ~]# kubectl create serviceaccount admin-user
serviceaccount/admin-user created
[root@k8s-master ~]# kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=default:admin-user
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
Create a token for the service account:
[root@k8s-master ~]# kubectl create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IktaNEx5NDdteVdOWDVYQmVvV3RzSDhwa1ctNFhaSFE0U29rZk9qNF8yaWsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzI0ODQyMTg1LCJpYXQiOjE3MjQ4Mzg1ODUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWluLXVzZXIiLCJ1aWQiOiJiNThiZjI1Yy01MjQ0LTQyZjUtOGEzYS1kOTE5MWRiZDIxZTcifX0sIm5iZiI6MTcyNDgzODU4NSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6YWRtaW4tdXNlciJ9.MSFukU6d2bZn1OPbznjllM4xsYqQuusBNd8AbX8gChiwO74kwD3JIGNizJZpDq0B1vnLV1PG8zykmF08IzW_Si6ztVMZVXVYMG7v-MnK6qy5kXwx647zmy_LIKufxrUCc6UbdYi5Gw5Oo3NmaVvm-4pdfcRKFw9Q23pQfYJDrTlNRby8gyqHNuEmVvrfth5Ct0NIvzHctHfkyXM_rCWZQHn0FlrIKObi1lIXwX8CrExeVDu8ij683y2s84KihR00k6HMbBFVfccZAAUKLPB-yCqtnADPX5txS9o5c8UGh2x2W54G57R5kOcxa7Z6Dy3QSRR4Td9R6NDkS7KEy1U7NQ
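Tokens issued by "kubectl create token" are short-lived (one hour by default); if the Dashboard session expires too quickly, you can request a longer lifetime:
kubectl create token admin-user --duration=24h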
Copy the printed token into the input box, then click Sign in to reach the Dashboard home page:
![Figure 4: installing k8s with kubeadm](https://wangjian.run/wp-content/uploads/2024/08/20240828175149336-1724838709-image-1024x538.png)
At this point the Kubernetes installation is complete.
Resetting the Kubernetes Environment
If you need to redeploy or tear down the Kubernetes cluster, run:
kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
This command undoes everything kubeadm set up and configured on the current node.
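Note that "kubeadm reset" does not remove CNI configuration files, kubeconfig files, or iptables rules; if you want a fully clean node, a hedged cleanup sketch:
# Optional cleanup of what kubeadm reset leaves behind
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X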