Kubernetes Cluster Deployment

汽車電子技術(shù) ? 來(lái)源:碼農(nóng)與軟件時(shí)代 ? 作者: 碼農(nóng)與軟件時(shí)代 ? 2023-02-15 10:35 ? 次閱讀

一、集群部署簡(jiǎn)介

1. Kubeadm

Kubeadm是一種Kubernetes集群部署工具,通過(guò)kubeadm init命令創(chuàng)建master節(jié)點(diǎn),通過(guò) kubeadm join命令把node節(jié)點(diǎn)加入到集群中。

kubeadm init大約會(huì)做這些事情:

①預(yù)檢測(cè):檢查系統(tǒng)狀態(tài)(Linux cgroups、10250/10251/10252端口是否可用 ),給出警告、錯(cuò)誤信息,并會(huì)退出 kubeadm init命令執(zhí)行;

②生成證書:生成的證書放在/etc/kubernetes/pki目錄,以便訪問(wèn)kubernetes時(shí)使用;

③生成各組件YAML文件;

④安裝集群最小可用插件。

其他Node節(jié)點(diǎn)使用kubeadm init生成的token,執(zhí)行kubeadm join命令,就可以加入集群了。Node節(jié)點(diǎn)要先安裝kubelet、kubeadm。

有關(guān)kubeadm init和kubeadm join命令的解釋,參考:

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-join

2. Kubelet

kubelet是用來(lái)操作Pod和容器的組件,運(yùn)行在集群的所有節(jié)點(diǎn)上,需要直接安裝在宿主機(jī)上。安裝過(guò)程中,kubeadm調(diào)用kubelet實(shí)現(xiàn)kubeadm init的工作。

3. Kubectl

kubectl is the command-line tool for Kubernetes. With kubectl you can manage the cluster itself and install and deploy containerized applications onto it.
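
A few kubectl commands used throughout this article, for reference (app.yaml is just a placeholder name for whatever manifest you want to deploy):

kubectl get nodes                    # list the nodes and their status
kubectl get pods --all-namespaces    # list Pods in every namespace
kubectl apply -f app.yaml            # create or update the objects described in a manifest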

二、集群安裝

1、環(huán)境準(zhǔn)備

Ubuntu 18.04 LTS, 2 CPU cores, 4 GB of RAM, 20 GB of disk

同樣規(guī)格:3臺(tái)設(shè)備,hostname分別為master、node1、node2

2、Master節(jié)點(diǎn)安裝

(1)設(shè)置hostname

root@k8s:/# hostnamectl --static set-hostname master
root@k8s:/# hostnamectl
   Static hostname: master
Transient hostname: k8s
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e5c0d0f18ba04c0a8722ab9fff662987
           Boot ID: 74af5268dfe74f23b3dee608ab2afe41
    Virtualization: kvm
  Operating System: Ubuntu 18.04.2 LTS
            Kernel: Linux 4.15.0-122-generic
      Architecture: x86-64

(2)關(guān)閉系統(tǒng)Swap:執(zhí)行命令swapoff-a 。

(3)安裝docker社區(qū)版

apt-get update
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get -y update
apt-get -y install docker-ce
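
An optional sanity check after the install:

docker --version
systemctl status docker --no-pager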

(4)安裝 kubelet 、kubeadm 、kubectl工具

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
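
Optionally, pin the three packages so a routine apt upgrade does not move the cluster to a newer Kubernetes version unexpectedly:

apt-mark hold kubelet kubeadm kubectl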

3、Node1節(jié)點(diǎn)安裝

重復(fù)Master節(jié)點(diǎn)安裝的(1)~(4)

4、創(chuàng)建集群

(1)Master節(jié)點(diǎn)執(zhí)行kubeadm init指令

kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16

A successful run logs the following:

root@k8s:~# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.503605 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: e16km1.69phwhcdjaulf060
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


Alternatively, if you are the root user, you can run:


  export KUBECONFIG=/etc/kubernetes/admin.conf


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 \
  --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c

kubeadm init 結(jié)束時(shí),加入集群的命令信息會(huì)被打印出來(lái),直接拷貝到需要加入的節(jié)點(diǎn)執(zhí)行就可以了。

上面是比較順利的步驟,實(shí)際操作過(guò)程中可能會(huì)遇到的如下問(wèn)題:

Unfortunately, an error has occurred:
      timed out waiting for the condition


    This error is likely caused by:
      - The kubelet is not running
      - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)


  If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'


  Additionally, a control plane component may have crashed or exited when started by the container runtime.
  To troubleshoot, list all containers using your preferred container runtimes CLI.


  Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

原因?yàn)椋?/strong>

The cgroup drivers do not match: Docker is using Cgroup Driver: cgroupfs, while Kubernetes (the kubelet) expects systemd.

root@k8s:/# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  scan: Docker Scan (Docker Inc., v0.12.0)


Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 7
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-122-generic
 Operating System: Ubuntu 18.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.852GiB
 Name: master
 ID: TNT3:2XIE:OCQQ:EZXX:DOVR:HJ7G:ASDH:XYFE:VZSO:YA5R:O2TU:IUVO
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

修改的方法為:在/etc/docker/daemon.json文件中(沒(méi)文件則創(chuàng)建)增加如下內(nèi)容:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

重啟docker

systemctl restart docker
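
Before retrying, you can confirm that Docker has picked up the new driver (expected output shown as a comment):

docker info | grep -i 'cgroup driver'
#  Cgroup Driver: systemd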

再次執(zhí)行kube-init,報(bào)錯(cuò):

[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR Port-6443]: Port 6443 is in use
  [ERROR Port-10259]: Port 10259 is in use
  [ERROR Port-10257]: Port 10257 is in use
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
  [ERROR Port-10250]: Port 10250 is in use
  [ERROR Port-2379]: Port 2379 is in use
  [ERROR Port-2380]: Port 2380 is in use
  [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

修改方法為:

root@k8s:~# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      3289/kube-controlle 
tcp        0      0 0.0.0.0:6001            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      3311/kube-scheduler 
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      601/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      948/sshd            
tcp        0      0 0.0.0.0:39451           0.0.0.0:*               LISTEN      1052/xfce4-session  
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      2593/kubelet        
tcp        0      0 30.0.1.180:2379         0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 30.0.1.180:2380         0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 0.0.0.0:5901            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp        0      0 127.0.0.1:36431         0.0.0.0:*               LISTEN      2593/kubelet        
tcp6       0      0 :::21                   :::*                    LISTEN      701/vsftpd          
tcp6       0      0 :::22                   :::*                    LISTEN      948/sshd            
tcp6       0      0 :::10250                :::*                    LISTEN      2593/kubelet        
tcp6       0      0 :::6443                 :::*                    LISTEN      3349/kube-apiserver 
tcp6       0      0 :::35407                :::*                    LISTEN      1052/xfce4-session  
root@k8s:~# kill -9 3349 3311 3289 2593 3300 
root@k8s:~# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:6001            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      601/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      948/sshd            
tcp        0      0 0.0.0.0:39451           0.0.0.0:*               LISTEN      1052/xfce4-session  
tcp        0      0 0.0.0.0:5901            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp6       0      0 :::21                   :::*                    LISTEN      701/vsftpd          
tcp6       0      0 :::22                   :::*                    LISTEN      948/sshd            
tcp6       0      0 :::35407                :::*                    LISTEN      1052/xfce4-session

修訂后,kubeadm init執(zhí)行成功。

# 此時(shí)Master節(jié)點(diǎn)的狀態(tài)為NotReady。
root@k8s:~# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   6m13s   v1.23.1


# 拉取鏡像的信息
root@k8s:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6


# coredns 處于Pending狀態(tài),原因?yàn)?a  target="_blank">網(wǎng)絡(luò)插件這未安裝
root@k8s:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-ghvmj          0/1     Pending   0          14m
kube-system   coredns-6d8c4cb4d-p45mv          0/1     Pending   0          14m
kube-system   etcd-master                      1/1     Running   0          15m
kube-system   kube-apiserver-master            1/1     Running   0          15m
kube-system   kube-controller-manager-master   1/1     Running   0          15m
kube-system   kube-proxy-xswwz                 1/1     Running   0          14m
kube-system   kube-scheduler-master            1/1     Running   0          15m


root@k8s:~# journalctl -f -u kubelet.service
-- Logs begin at Sun 2019-05-05 16:25:08 CST. --
Dec 30 20:44:59 master kubelet[6501]: E1230 16:44:59.669150    6501 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

(2) Install the flannel network plugin on the Master node

執(zhí)行指令安裝flannel插件:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

安裝flannel后,master節(jié)點(diǎn)處于ready狀態(tài)。

flannel安裝:
root@k8s:~# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2021-12-30 20:47:48--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5177 (5.1K) [text/plain]
Saving to: ‘kube-flannel.yml’


kube-flannel.yml                       100%[===========================================================================>]   5.06K  --.-KB/s    in 0s      


2021-12-30 20:47:49 (21.4 MB/s) - ‘kube-flannel.yml’ saved [5177/5177]


root@k8s:~# kubectl apply -f  kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


root@k8s:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-ghvmj          1/1     Running   0          25m
kube-system   coredns-6d8c4cb4d-p45mv          1/1     Running   0          25m
kube-system   etcd-master                      1/1     Running   0          25m
kube-system   kube-apiserver-master            1/1     Running   0          25m
kube-system   kube-controller-manager-master   1/1     Running   0          25m
kube-system   kube-flannel-ds-ql282            1/1     Running   0          66s
kube-system   kube-proxy-xswwz                 1/1     Running   0          25m
kube-system   kube-scheduler-master            1/1     Running   0          25m


root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   26m   v1.23.1

(3)Node節(jié)點(diǎn)加入到集群中

執(zhí)行指令,將node1加入集群

kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c

這個(gè)命令在Master節(jié)點(diǎn)初始化成功的日志中,也可以在Master節(jié)點(diǎn)執(zhí)行命令獲取:

kubeadm token create --print-join-command
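
Bootstrap tokens expire after 24 hours by default, so on an older cluster the token printed by the original kubeadm init may no longer be valid; the command above always mints a fresh token, and the existing ones can be inspected with:

kubeadm token list    # show existing tokens and when they expire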

The log of a successful join:

root@k8s:~# kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1230 20:19:34.532570   26262 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

此時(shí)的節(jié)點(diǎn)信息為:

root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   61m   v1.23.1
node1    Ready    <none>                 13m   v1.23.1


root@k8s:~# kubectl label nodes node1 node-role.kubernetes.io/node=
node/node1 labeled
root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   67m   v1.23.1
node1    Ready    node                   18m   v1.23.1

可能遇到的問(wèn)題:Master節(jié)點(diǎn)的問(wèn)題1,依然會(huì)遇見,采用相同的解決方法。

(3)Node2執(zhí)行與node1相同的操作

root@k8s:/# kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1230 23:22:10.274581   28114 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   7h2m    v1.23.1
node1    Ready    node                   6h13m   v1.23.1
node2    Ready    node                   3m13s   v1.23.1

三、集群相關(guān)信息

1、kubernetes組件部署信息

# kubernetes組件基本上運(yùn)行在POD中
root@k8s:~# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-ghvmj          1/1     Running   0          17h   10.244.0.2   master   <none>           <none>
kube-system   coredns-6d8c4cb4d-p45mv          1/1     Running   0          17h   10.244.0.3   master   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-flannel-ds-8qt6p            1/1     Running   0          16h   30.0.1.160   node1    <none>           <none>
kube-system   kube-flannel-ds-ql282            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-flannel-ds-zkt47            1/1     Running   0          10h   30.0.1.47    node2    <none>           <none>
kube-system   kube-proxy-pb9gn                 1/1     Running   0          10h   30.0.1.47    node2    <none>           <none>
kube-system   kube-proxy-xswwz                 1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-proxy-zdfp5                 1/1     Running   0          16h   30.0.1.160   node1    <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>


# kubelet is installed directly on the host and does not run as a Docker container
root@k8s:~# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Thu 2021-12-30 16:23:24 CST; 17h ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 6501 (kubelet)
    Tasks: 16 (limit: 4702)
   CGroup: /system.slice/kubelet.service
           └─6501 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/li


root@k8s:~# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED        STATUS        PORTS     NAMES
28bfaeeaadf1   a4ca41631cc7                                        "/coredns -conf /etc…"   17 hours ago   Up 17 hours             k8s_coredns_coredns-6d8c4cb4d-p45mv_kube-system_4ce03d3c-1660-4975-8450-408515ec6a02_0
57a535a41123   a4ca41631cc7                                        "/coredns -conf /etc…"   17 hours ago   Up 17 hours             k8s_coredns_coredns-6d8c4cb4d-ghvmj_kube-system_1a67722e-a15f-4bf0-bbd7-e2af542d2621_0
7be45271357a   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours             k8s_POD_coredns-6d8c4cb4d-p45mv_kube-system_4ce03d3c-1660-4975-8450-408515ec6a02_0
79776dc797f4   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours             k8s_POD_coredns-6d8c4cb4d-ghvmj_kube-system_1a67722e-a15f-4bf0-bbd7-e2af542d2621_0
424b5047009f   e6ea68648f0c                                        "/opt/bin/flanneld -…"   17 hours ago   Up 17 hours             k8s_kube-flannel_kube-flannel-ds-ql282_kube-system_9cb2439b-e8f4-422f-a72d-83370e75043e_0
51bea3cfeef7   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours             k8s_POD_kube-flannel-ds-ql282_kube-system_9cb2439b-e8f4-422f-a72d-83370e75043e_0
e6149ade3a29   b46c42588d51                                        "/usr/local/bin/kube…"   18 hours ago   Up 18 hours             k8s_kube-proxy_kube-proxy-xswwz_kube-system_12dac07f-e07e-4eff-becc-7b40a92f3adb_0
3c365b2342a0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-proxy-xswwz_kube-system_12dac07f-e07e-4eff-becc-7b40a92f3adb_0
b60a3b02f427   25f8c7f3da61                                        "etcd --advertise-cl…"   18 hours ago   Up 18 hours             k8s_etcd_etcd-master_kube-system_5d83471f981b1644e30c11cc642c68f7_0
abd1e3377560   b6d7abedde39                                        "kube-apiserver --ad…"   18 hours ago   Up 18 hours             k8s_kube-apiserver_kube-apiserver-master_kube-system_df535ce9e2ccfb931f8e46a9b80a6218_0
df5e2a226999   f51846a4fd28                                        "kube-controller-man…"   18 hours ago   Up 18 hours             k8s_kube-controller-manager_kube-controller-manager-master_kube-system_85ff8159d8c894c53981716f8927f187_0
b45d17ab969f   71d575efe628                                        "kube-scheduler --au…"   18 hours ago   Up 18 hours             k8s_kube-scheduler_kube-scheduler-master_kube-system_77a51208064a0e9b17209ee62638dfcd_0
3cf0d75ad0f0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-apiserver-master_kube-system_df535ce9e2ccfb931f8e46a9b80a6218_0
6b447aa2fd93   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_etcd-master_kube-system_5d83471f981b1644e30c11cc642c68f7_0
f7f9a3cd677f   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-scheduler-master_kube-system_77a51208064a0e9b17209ee62638dfcd_0
20e0b291d166   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-controller-manager-master_kube-system_85ff8159d8c894c53981716f8927f187_0

2、網(wǎng)段:每個(gè)kubernetes node從中分配一個(gè)子網(wǎng)片段。

(1)Master節(jié)點(diǎn)

root@k8s:~# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true

(2)Node1節(jié)點(diǎn)

root@k8s:~# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true

(3)Node2節(jié)點(diǎn)

root@k8s:/# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true
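
The same per-node ranges can also be read from the API server, since flannel derives them from each node's spec.podCIDR (a small jsonpath query; the output below is what you would expect given the subnets above):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# master  10.244.0.0/24
# node1   10.244.1.0/24
# node2   10.244.2.0/24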

3、Kubernetes節(jié)點(diǎn)進(jìn)程

(1)MASTER節(jié)點(diǎn)

root@k8s:~# ps -el | grep kube
4 S     0  6224  6152  0  80   0 - 188636 futex_ ?       00:05:00 kube-scheduler
4 S     0  6275  6196  1  80   0 - 206354 ep_pol ?       00:23:02 kube-controller
4 S     0  6287  6181  5  80   0 - 278080 futex_ ?       01:19:40 kube-apiserver
4 S     0  6501     1  3  80   0 - 487736 futex_ ?       00:46:38 kubelet
4 S     0  6846  6818  0  80   0 - 187044 futex_ ?       00:00:26 kube-proxy

(2)Node節(jié)點(diǎn)

# node1
root@k8s:~# ps -el | grep kube
4 S     0 22869 22845  0  80   0 - 187172 futex_ ?       00:00:23 kube-proxy
4 S     0 26395     1  2  80   0 - 505977 futex_ ?       00:28:10 kubelet
# node2
root@k8s:/# ps -el | grep kube
4 S     0 28227     1  1  80   0 - 487480 futex_ ?       00:17:26 kubelet
4 S     0 28724 28696  0  80   0 - 187044 futex_ ?       00:00:17 kube-proxy
聲明:本文內(nèi)容及配圖由入駐作者撰寫或者入駐合作網(wǎng)站授權(quán)轉(zhuǎn)載。文章觀點(diǎn)僅代表作者本人,不代表電子發(fā)燒友網(wǎng)立場(chǎng)。文章及其配圖僅供工程師學(xué)習(xí)之用,如有內(nèi)容侵權(quán)或者其他違規(guī)問(wèn)題,請(qǐng)聯(lián)系本站處理。 舉報(bào)投訴
  • MASTER
    +關(guān)注

    關(guān)注

    0

    文章

    101

    瀏覽量

    11241
  • init
    +關(guān)注

    關(guān)注

    0

    文章

    16

    瀏覽量

    3418
  • node
    +關(guān)注

    關(guān)注

    0

    文章

    23

    瀏覽量

    5919
  • kubernetes
    +關(guān)注

    關(guān)注

    0

    文章

    223

    瀏覽量

    8675
收藏 人收藏

    評(píng)論

    相關(guān)推薦

    阿里云上Kubernetes集群聯(lián)邦

    摘要: kubernetes集群讓您能夠方便的部署管理運(yùn)維容器化的應(yīng)用。但是實(shí)際情況中經(jīng)常遇到的一些問(wèn)題,就是單個(gè)集群通常無(wú)法跨單個(gè)云廠商的多個(gè)Region,更不用說(shuō)支持跨跨域不同的云
    發(fā)表于 03-12 17:10

    使用Helm 在容器服務(wù)k8s集群一鍵部署wordpress

    摘要: Helm 是啥? 微服務(wù)和容器化給復(fù)雜應(yīng)用部署與管理帶來(lái)了極大的挑戰(zhàn)。Helm是目前Kubernetes服務(wù)編排領(lǐng)域的唯一開源子項(xiàng)目,做為Kubernetes應(yīng)用的一個(gè)包管理工具,可理解
    發(fā)表于 03-29 13:38

    Kubernetes Ingress 高可靠部署最佳實(shí)踐

    摘要: 在Kubernetes集群中,Ingress作為集群流量接入層,Ingress的高可靠性顯得尤為重要,今天我們主要探討如何部署一套高性能高可靠的Ingress接入層。簡(jiǎn)介
    發(fā)表于 04-17 14:35

    阿里云宣布推出Serverless Kubernetes服務(wù) 30秒即可完成應(yīng)用部署

    Serverless形態(tài)。開發(fā)者可在5秒內(nèi)創(chuàng)建集群、30秒部署應(yīng)用上線。用戶無(wú)需管理集群基礎(chǔ)設(shè)施,還可根據(jù)應(yīng)用實(shí)際消耗資源按量付費(fèi),此舉意在進(jìn)一步降低容器技術(shù)的使用門檻,簡(jiǎn)化容器平臺(tái)運(yùn)維的復(fù)雜度。該服
    發(fā)表于 05-03 15:38

    kubernetes集群配置

    基于v1104版本手動(dòng)搭建高可用kubernetes 集群
    發(fā)表于 08-19 08:07

    Kubernetes 從懵圈到熟練:集群服務(wù)的三個(gè)要點(diǎn)和一種實(shí)現(xiàn)

    照進(jìn)現(xiàn)實(shí)前邊兩節(jié),我們看到了,Kubernetes 集群的服務(wù),本質(zhì)上是負(fù)載均衡,即反向代理;同時(shí)我們知道了,在實(shí)際實(shí)現(xiàn)中,這個(gè)反向代理,并不是部署集群某一個(gè)節(jié)點(diǎn)上,而是作為
    發(fā)表于 09-24 15:35

    kubernetes v112二進(jìn)制方式集群部署

    kubernetes v112 二進(jìn)制方式集群部署
    發(fā)表于 05-05 16:30

    redis集群的如何部署

    redis集群部署(偽分布式)
    發(fā)表于 05-29 17:13

    請(qǐng)問(wèn)鴻蒙系統(tǒng)上可以部署kubernetes集群嗎?

    鴻蒙系統(tǒng)上可以部署kubernetes集群
    發(fā)表于 06-08 11:16

    如何部署基于Mesos的Kubernetes集群

    的內(nèi)核。把Kubernetes運(yùn)行在Mesos集群之上,可以和其他的框架共享集群資源,提高集群資源的利用率。 本文是Kubernetes和M
    發(fā)表于 10-09 18:04 ?0次下載
    如何<b class='flag-5'>部署</b>基于Mesos的<b class='flag-5'>Kubernetes</b><b class='flag-5'>集群</b>

    Kubernetes 集群的功能

    Telepresence 是一個(gè)開源工具,可讓您在本地運(yùn)行單個(gè)服務(wù),同時(shí)將該服務(wù)連接到遠(yuǎn)程 Kubernetes 集群。
    的頭像 發(fā)表于 09-05 10:58 ?1014次閱讀

    Kubernetes集群的關(guān)閉與重啟

    在日常對(duì) Kubernetes 集群運(yùn)行維護(hù)的過(guò)程中,您可能需要臨時(shí)的關(guān)閉或者是重啟 Kubernetes 集群對(duì)集群進(jìn)行維護(hù),本文將介紹如
    的頭像 發(fā)表于 11-07 09:50 ?9689次閱讀

    Kubernetes集群部署一個(gè)ChatGPT機(jī)器人

    Robusta 有一個(gè)用于集成的 UI,也有一個(gè)預(yù)配置的 Promethus 系統(tǒng),如果你還沒(méi)有自己的 K8s 集群,并且想嘗試一下這個(gè) ChatGPT 機(jī)器人,你可以使用 Robusta 現(xiàn)有的!
    的頭像 發(fā)表于 03-07 09:33 ?1244次閱讀

    使用Jenkins和單個(gè)模板部署多個(gè)Kubernetes組件

    在持續(xù)集成和部署中,我們通常需要部署多個(gè)實(shí)例或組件到Kubernetes集群中。通過(guò)Jenkins的管道腳本,我們可以自動(dòng)化這個(gè)過(guò)程。在本文中,我將演示如何使用Jenkins Pipe
    的頭像 發(fā)表于 01-02 11:40 ?631次閱讀
    使用Jenkins和單個(gè)模板<b class='flag-5'>部署</b>多個(gè)<b class='flag-5'>Kubernetes</b>組件

    使用Velero備份Kubernetes集群

    Velero 是 heptio 團(tuán)隊(duì)(被 VMWare 收購(gòu))開源的 Kubernetes 集群備份、遷移工具。
    的頭像 發(fā)表于 08-05 15:43 ?287次閱讀
    使用Velero備份<b class='flag-5'>Kubernetes</b><b class='flag-5'>集群</b>