A Detailed Guide to Installing and Configuring Kubernetes (K8S) on CentOS 8
I. Environment Preparation
1. Remove podman
CentOS 8 ships with the podman container runtime by default, which may conflict with Docker, so remove it:
```shell
sudo yum remove podman
```
2. Disable swap
- Temporarily:
```shell
sudo swapoff -a
```
- Permanently, by commenting out the swap entry in /etc/fstab:
```shell
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
```
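To see what the sed expression does without touching the real /etc/fstab, it can be exercised on a scratch copy (the device names below are placeholders, not taken from any real system):

```shell
# Build a scratch fstab with a root entry and a swap entry (placeholder device names)
printf '%s\n' \
  '/dev/mapper/cl-root /    xfs  defaults 0 0' \
  '/dev/mapper/cl-swap swap swap defaults 0 0' > /tmp/fstab.demo

# Same substitution as above: comment out any line mentioning swap
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo
```

The root entry is left untouched, while the swap line gains a leading `#`.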
3. Disable SELinux
- Temporarily:
```shell
setenforce 0
```
- Permanently:
```shell
sudo sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```
4. Disable the firewall
```shell
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
```
II. Installing K8S
1. Configure the base package repository
```shell
sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
```
2. Add the K8S package repository
Save the following as /etc/yum.repos.d/kubernetes.repo:
```ini
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```
The Aliyun mirror does not yet provide Kubernetes packages for CentOS 8, but the CentOS 7 packages work, which is why the baseurl above uses kubernetes-el7-x86_64. If a CentOS 8 repository becomes available, it would be kubernetes-el8-x86_64.
3. Install Docker
```shell
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum -y install docker-ce
```
If the following error appears:
```
Error:
 Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```
it can be resolved by installing containerd.io directly:
```shell
wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
sudo yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm
```
To speed up image pulls, configure the Aliyun registry mirror:
```shell
sudo mkdir -p /etc/docker
sudo vim /etc/docker/daemon.json
```
Set the file contents to the following, then restart Docker (`sudo systemctl restart docker`) for the mirror to take effect:
```json
{
  "registry-mirrors": ["https://mj9kvemk.mirror.aliyuncs.com"]
}
```
4. Install kubectl, kubelet, and kubeadm
Install the three packages, enable kubelet at boot, and start it:
```shell
sudo yum install -y kubectl kubelet kubeadm
sudo systemctl enable kubelet
sudo systemctl start kubelet
```
Check the installed versions:
```shell
kubeadm version
kubectl version --client
kubelet --version
```
At the time of writing, the kubelet version is 1.18.5.
5. Initialize the Kubernetes cluster
```shell
kubeadm init --apiserver-advertise-address=0.0.0.0 \
  --apiserver-cert-extra-sans=127.0.0.1 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.5 \
  --pod-network-cidr=10.18.0.0/16
```
Running it surfaced a problem:
```
[root@localhost admin]# kubeadm init --apiserver-advertise-address=0.0.0.0 \
--apiserver-cert-extra-sans=127.0.0.1 \
--image-repository=registry.aliyuncs.com/google_containers \
--ignore-preflight-errors=all \
--kubernetes-version=v1.18.5 \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.18.0.0/16
W0702 16:23:11.951553 16395 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
```
The [WARNING IsDockerSystemdCheck] warning ("detected 'cgroupfs' as the Docker cgroup driver. The recommended driver is 'systemd'") appears because Docker's cgroup driver does not match kubelet's. Here we change Docker's driver to match kubelet's.
Check Docker's current setting:
```shell
docker info | grep Cgroup
```
The output shows the cgroup driver is cgroupfs, which needs to be changed to systemd. Edit /usr/lib/systemd/system/docker.service and append the following option to the ExecStart command:
```
--exec-opt native.cgroupdriver=systemd
```
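After the edit, the ExecStart line in docker.service looks roughly like this (the flags before the added option vary by Docker version, so treat this as a sketch rather than the exact line):

```ini
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
```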
Then restart Docker and check again; the driver should now report systemd:
```shell
systemctl daemon-reload
systemctl restart docker
docker info | grep Cgroup
```
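An equivalent approach, not used in this post's steps but commonly recommended, is to set the cgroup driver in /etc/docker/daemon.json instead of editing the unit file. Combined with the registry mirror configured earlier, the file would look like:

```json
{
  "registry-mirrors": ["https://mj9kvemk.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

A Docker restart is still required afterwards for the change to apply.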
Now rerun the initialization:
```shell
kubeadm init --apiserver-advertise-address=0.0.0.0 \
  --apiserver-cert-extra-sans=127.0.0.1 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=all \
  --kubernetes-version=v1.18.5 \
  --service-cidr=10.10.0.0/16 \
  --pod-network-cidr=10.18.0.0/16
```
kubeadm pulls its images from k8s.gcr.io by default, which is unreachable from mainland China, so --image-repository points it at the Aliyun mirror instead. But the run still failed:
```
kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=all --kubernetes-version=v1.18.5 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.18.0.0/16
W0702 17:47:00.104450 61229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING Port-2379]: Port 2379 is in use
	[WARNING Port-2380]: Port 2380 is in use
	[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-proxy:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-proxy:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0702 17:47:17.258210 61229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0702 17:47:17.261156 61229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
```
It appears the Aliyun mirror did not yet have the 1.18.5 images, so try 1.18.2 instead:
```shell
kubeadm init --apiserver-advertise-address=0.0.0.0 \
  --apiserver-cert-extra-sans=127.0.0.1 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=all \
  --kubernetes-version=v1.18.2 \
  --service-cidr=10.10.0.0/16 \
  --pod-network-cidr=10.18.0.0/16
```
This time it succeeded:
```
[root@localhost admin]# kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=all --kubernetes-version=v1.18.2 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.18.0.0/16
W0702 17:47:41.592876 62703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING Port-2379]: Port 2379 is in use
	[WARNING Port-2380]: Port 2380 is in use
	[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0702 17:49:34.509168 62703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0702 17:49:34.510843 62703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.003628 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: l21jwf.pjzezg1xmopqoj0p
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.134:6443 --token l21jwf.pjzezg1xmopqoj0p \
    --discovery-token-ca-cert-hash sha256:0b1062f2ec73f8c35c1bfecd857a287b128aba7ae5c0673ea604c9ac7c296a95
```
Run the commands from the output:
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
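As a shortcut for a root shell, kubectl can also be pointed at the admin kubeconfig directly instead of copying it; this is the standard KUBECONFIG mechanism, not a step from the original post:

```shell
# Tell kubectl which kubeconfig to use, for the current shell session only
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```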
Then check the cluster:
```shell
kubectl get node
kubectl get pod --all-namespaces
```
The node shows NotReady because the coredns pods have not started: the cluster still lacks a pod network add-on.
6. Install the Calico network add-on
```shell
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
After a short wait, the node reaches the Ready state.
III. Installing kubernetes-dashboard
Download the official recommended.yaml. The command below works if https://raw.githubusercontent.com is reachable; otherwise, open the link in a browser, copy the contents, and save them locally.
```shell
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
```
The official manifest does not expose the dashboard via NodePort, so edit the file and add two lines to the Service:
```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # add this line
  selector:
    k8s-app: kubernetes-dashboard
```
Then create the resources and check the Service:
```shell
kubectl create -f recommended.yaml
kubectl get svc -n kubernetes-dashboard
```
The dashboard is now reachable in a browser at https://192.168.1.134:30000/#/login, where the IP is the host's own IP.
There are two ways to log in:
- with a Token
- with a Kubeconfig file
Logging in with a Token
1. Create a service account
```shell
kubectl create sa dashboard-admin -n kube-system
```
2. Grant the account access
```shell
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
```
3. Retrieve the token
```shell
ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
echo ${DASHBOARD_LOGIN_TOKEN}
```
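The grep/awk pipeline above simply extracts the second column of the line starting with `token`. It can be sanity-checked against canned sample output without a cluster (a stand-in for `kubectl describe secret`; the token value below is made up):

```shell
# Sample of the relevant part of `kubectl describe secret` output (fabricated token)
sample='ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImRlbW8ifQ'

# Same extraction as in the real commands above
echo "$sample" | grep -E '^token' | awk '{print $2}'
# → eyJhbGciOiJSUzI1NiIsImtpZCI6ImRlbW8ifQ
```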
4. Log in
Enter the token on the login page to sign in.
Note that tokens are valid for 24 hours by default; after expiry, a new token must be generated.
With kubernetes-dashboard in place, applications can be deployed and monitored visually.
Useful token commands:
- List tokens
```shell
kubeadm token list
```
- Create a token
```shell
kubeadm token create
```
- Delete a token
```shell
kubeadm token delete TokenXXX
```
- Print the join command for adding worker nodes to the cluster (as generated during master initialization)
```shell
kubeadm token create --print-join-command
```
or:
```shell
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
```
IV. Deploying Containers with the Dashboard
Using nginx and MySQL as examples, this section shows how to deploy containers from the K8S dashboard.
1. Example: deploying nginx
Fill in the deployment form in the web UI step by step. After clicking "Deploy", wait a while; when all status indicators turn green, the deployment succeeded. At this step you may see the error:
```
0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
```
This happens because, for safety, Kubernetes by default refuses to schedule pods on the master node. Remove the taint from the Linux console to allow it:
```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```
Another possible error is:
```
Back-off restarting failed container
```
In that case, add a command after the image declaration in the Deployment: command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
Once deployed, find the "Services" section below the deployment details; it shows the port exposed to the outside, which can now be used to access nginx.
Seeing the nginx welcome page confirms success.
2. Example: deploying MySQL
Deploying MySQL is similar to nginx, except the MySQL container needs an initial root password, which is set under "Advanced options":
Add an environment variable there with name MYSQL_ROOT_PASSWORD and value 123456 to set the password for MySQL's root account.
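What the form does can also be written as a Deployment manifest. The fragment below is an illustrative sketch: the name, labels, and image tag are assumptions, not values from the original post; only the MYSQL_ROOT_PASSWORD variable comes from the steps above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql                # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7   # assumed image tag
          env:
            - name: MYSQL_ROOT_PASSWORD   # sets the initial root password
              value: "123456"
```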
After deployment succeeds, connect to MySQL to verify it works. For external access, first check the exposed port in the "Services" section.
V. Deleting Containers
To delete a deployed container, click the action menu button for it on the page and choose Delete from the pop-up menu.
- Original author: Witton
- Original link: https://wittonbell.github.io/posts/2020/2020-07-02-CentOS8下超详细安装配置kubernetesK8S/
- Copyright: this work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. For non-commercial reproduction, please credit the source (author and original link); for commercial reproduction, please contact the author for permission.