The machines in this setup are underpowered; only the master has a higher spec, just enough to satisfy the preflight checks of kubeadm init. Money is tight, after all!
It all started when I wanted to run Jenkins for some experimentation and learning. The JVM promptly knocked the feeble worker Node flat: the load climbed as high as 29, and I was left dumbfounded...

After several rounds of tuning, and even adding swap to the Node, it still could not cope, so I had to find another approach.
Since the master runs nothing but the basic control-plane services and has never hosted any other Pods, I decided to run Jenkins on the master as a last-ditch effort; if that failed too, I could always tear everything down and start over.
First, add a label to the master:

```
[root@xxxxxx yyyy]# kubectl label nodes xxxxxx.plus7s.com node-business=master
node/xxxxxx.plus7s.com labeled
```
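To confirm the label landed, a label selector can be used (the hostname here is the anonymized one from above):

```shell
# List only the nodes carrying the new label, and print their labels
kubectl get nodes -l node-business=master --show-labels
```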
Then look at the master's description:
```
[root@xxxxxx yyyy]# kubectl describe node xxxxxx.plus7s.com
Name:               xxxxxx.plus7s.com
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=xxxxxx.plus7s.com
                    kubernetes.io/os=linux
                    node-business=master
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ba:00:04:8b:3b:2b"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: w.x.y.z
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 06 Jan 2020 04:52:59 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
```
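That Taints line is why nothing lands on the master by default: kubeadm taints the control-plane node with NoSchedule. There are two ways around it. The approach used below keeps the taint and adds a matching toleration to the Pod; the blunter alternative is to remove the taint altogether (a trailing `-` deletes it), which opens the master up to every Pod in the cluster:

```shell
# Remove the master taint entirely (not used here; shown for contrast only)
kubectl taint nodes xxxxxx.plus7s.com node-role.kubernetes.io/master:NoSchedule-
```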

Next, add the following fields to the deployment.yaml file:

```yaml
nodeSelector:
  node-business: master
tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"
```
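For context, here is a minimal sketch of where those fields sit inside a Deployment. The name, image tag, and resource values are hypothetical placeholders, not the actual manifest; the key point is that both `nodeSelector` and `tolerations` belong under the Pod template's `spec`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins                      # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      nodeSelector:                  # only schedule onto nodes with this label
        node-business: master
      tolerations:                   # allow scheduling despite the master taint
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts   # hypothetical image tag
        resources:
          limits:
            memory: "1Gi"            # illustrative value; cap the JVM's appetite
```

A memory limit is worth setting here, since an uncapped JVM is exactly what flattened the worker Node in the first place.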
Checking the load again afterwards, it had already come back down.
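Without a monitoring stack, the simplest check is reading /proc/loadavg on the node itself (the same 1/5/15-minute averages that `uptime` reports); `kubectl top node` works too if metrics-server is installed:

```shell
# Read the 1/5/15-minute load averages straight from the kernel
cat /proc/loadavg
```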

Reference: https://www.qikqiak.com/post/kubernetes-affinity-scheduler/