Fixing the kubernetes v1.18.6 "kubectl get cs" 127.0.0.1 connection refused error


After a normal install of kubernetes 1.18.6, kubectl get cs may report the following errors:

[root@k8s-master manifests]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

This happens because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests set the insecure port to 0 by default (--port=0), which disables the HTTP healthz endpoints that kubectl get cs probes. Commenting that flag out in both files fixes it.
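To confirm this is the cause before editing anything, you can grep the two manifests for the flag; the paths below are the kubeadm defaults and may differ in your setup:

```shell
# Show the "--port=0" line (with its line number) in both static-pod
# manifests; kubeadm's default manifest directory is assumed.
grep -n -- '--port=0' \
  /etc/kubernetes/manifests/kube-controller-manager.yaml \
  /etc/kubernetes/manifests/kube-scheduler.yaml
```

If grep prints a match for each file, the fix below applies.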

kube-controller-manager.yaml: comment out the "- --port=0" line (line 27 in the default manifest):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    # - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.1.0.0/16
    - --use-service-account-credentials=true

kube-scheduler.yaml: comment out the "- --port=0" line (line 19 in the default manifest):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    # - --port=0
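Instead of editing the two files by hand, the same change can be applied with sed. This is a sketch assuming the default kubeadm manifest paths; it preserves indentation and keeps a .bak backup of each file:

```shell
# Comment out "- --port=0" in place in both static-pod manifests,
# preserving leading indentation; originals are saved with a .bak suffix.
for f in /etc/kubernetes/manifests/kube-controller-manager.yaml \
         /etc/kubernetes/manifests/kube-scheduler.yaml; do
  sed -i.bak 's/^\( *\)- --port=0 *$/\1# - --port=0/' "$f"
done
```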
Then restart kubelet on all three machines:

[root@k8s-master ]# systemctl restart kubelet.service
Check again, and everything is healthy:

[root@k8s-master manifests]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

Copyright notice: this is an original post by the author, licensed under CC 4.0 BY-SA. Please include the original link and this notice when reposting.

Original link: https://blog.csdn.net/m0_46435788/article/details/107806132