Building Kubernetes 1.11.4 on a Flannel Network

The last post, on building K8s 1.10.3, got flak for being lazy: the article embedded another article, which in turn embedded yet another, making it nearly unreadable.

The truth is that a binary K8s build repeats most of the same steps, and version upgrades do too; only a handful of parameters change. I'm not fond of the other build methods: back in 2016 and 2017, when I used kubeadm or minikube, I was constantly bitten by network problems, since you can't casually set up a proxy inside a corporate network. And kubespray (formerly kargo) makes you pre-stage all the required binaries and then fill in its parameterized config line by line, which I don't like either. So, for my own convenience, I put together a one-step shell build script long ago.

Preparing the environment

This build is again based on a Flannel network, with IPVS enabled.

Master nodes should have at least 2 CPUs and 2 GB of RAM, and so should the Nodes.

All nodes run CentOS 7. Configure each node's name and IP, run the entire procedure as root, and disable the SELinux and firewalld services.

# disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# disable firewalld
systemctl stop firewalld && systemctl disable firewalld
# install the IPVS management tool
yum install -y ipvsadm

Node names and IPs

Three master nodes; each acts as both master and node.

Node Name    IP
master1      192.168.85.141
master2      192.168.85.142
master3      192.168.85.143

Set each node's hostname to its corresponding Node Name:

# master1
hostnamectl --static set-hostname master1
# master2
hostnamectl --static set-hostname master2
# master3
hostnamectl --static set-hostname master3

Run the following on every node to add all the node entries to /etc/hosts:

cat <<EOF >>/etc/hosts
192.168.85.141 master1
192.168.85.142 master2
192.168.85.143 master3
EOF
# reboot for the changes to take effect
reboot

Export the environment variables on all nodes:

export master1IP="192.168.85.141"
export master2IP="192.168.85.142"
export master3IP="192.168.85.143"
export hostname=`hostname`
eval tmp_ip=\$${hostname}IP
export hostIP=${tmp_ip}
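# quick sanity check: every node should print its own IP
echo ${hostIP}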

# load the kernel modules needed for IPVS (supported out of the box)
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
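# confirm the modules loaded
lsmod | grep ip_vs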

cat <<EOF >> /etc/sysctl.conf
net.ipv4.tcp_fastopen=3
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
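# the net.bridge.* keys only exist once br_netfilter is loaded; load it and re-apply to be safe
modprobe br_netfilter
sysctl --system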

# create the Kubernetes working directory up front
mkdir -p /etc/kubernetes/ssl/

Installing the runtime binaries

Installing Docker

curl -SL https://get.docker.com | sh -
systemctl enable docker
systemctl start docker

After installing Docker, check the cgroup driver with docker info | grep "Cgroup Driver"; it must match the kubelet flag --cgroup-driver, whose default is cgroupfs.
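For example, on a stock installation the check typically reports the default driver:

docker info | grep "Cgroup Driver"
# Cgroup Driver: cgroupfs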

Installing cfssl

cfssl only needs to be installed on the master nodes, since only the masters issue certificates.

curl -SL https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -SL https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -SL https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*

Installing etcd

curl -SL https://github.com/etcd-io/etcd/releases/download/v3.2.25/etcd-v3.2.25-linux-amd64.tar.gz -o etcd.tgz
tar -zxf etcd.tgz
cd etcd-v3.2.25-linux-amd64
mv etcd* /usr/local/bin/

Installing Kubernetes

curl -SL https://dl.k8s.io/v1.11.4/kubernetes-server-linux-amd64.tar.gz -o kubernetes.tgz
tar -zxf kubernetes.tgz
mv kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kubectl,kube-proxy,kube-scheduler,kubelet} /usr/local/bin

Creating the configuration

Create a temporary working directory for the files used while setting up the cluster: mkdir -p /opt/kube-tmp && cd /opt/kube-tmp

Configuring the base certificates

Use cfssl to create the CA certificate, which simplifies the process.

The base configuration for issuing certificates:

cat <<EOF>/opt/kube-tmp/config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        }, 
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ], 
                "expiry": "87600h"
            }
        }
    }
}
EOF

Write csr.json in the following format:

cat << EOF > /opt/kube-tmp/csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "{master1IP}",
        "{master2IP}",
        "${master3IP}",
        "master1",
        "master2",
        "master3",
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa",
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

"CN": Common Name; kube-apiserver extracts this field from the certificate and uses it as the requesting User Name.
"O": Organization; kube-apiserver extracts this field and uses it as the Group the requesting user belongs to.

Now create the CA:

cfssl gencert -initca csr.json | cfssljson -bare ca
# this generates the csr and pem files
ls
# ca.csr ca-key.pem ca.pem config.json csr.json
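To double-check the CA fields (CN, O, OU, expiry), the certificate can be inspected with the cfssl-certinfo tool installed earlier:

cfssl-certinfo -cert ca.pem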

Copy the certificates into the Kubernetes working directory on every node for later use:

# copy to all nodes
scp /opt/kube-tmp/{ca.csr,ca-key.pem,ca.pem,config.json,csr.json} ${master1IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{ca.csr,ca-key.pem,ca.pem,config.json,csr.json} ${master2IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{ca.csr,ca-key.pem,ca.pem,config.json,csr.json} ${master3IP}:/etc/kubernetes/ssl

Configuring etcd

Generate the base information for the etcd certificate:

# simply copy csr.json to etcd-csr.json and change the CN field to etcd
cat <<EOF >/opt/kube-tmp/etcd-csr.json
{
    "CN": "etcd", 
    "hosts": [
        "127.0.0.1", 
        "{master1IP}",        "{master2IP}", 
        "${master3IP}", 
        "master1", 
        "master2", 
        "master3", 
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa", 
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN", 
            "ST": "Guangdong", 
            "L": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Create the key and certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

ls etcd*
# etcd.csr etcd-csr.json etcd-key.pem etcd.pem
# copy to every node
scp /opt/kube-tmp/{etcd.csr,etcd-key.pem,etcd.pem} ${master1IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{etcd.csr,etcd-key.pem,etcd.pem} ${master2IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{etcd.csr,etcd-key.pem,etcd.pem} ${master3IP}:/etc/kubernetes/ssl

Create the etcd startup unit on every node:

# create the etcd working directory
mkdir -p /var/lib/etcd/

# write the startup unit
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd v3 server
Documentation=https://github.com/coreos/etcd
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
# this directory must be created before etcd runs
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --advertise-client-urls=https://${hostIP}:2379 \\
  --cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --data-dir=/var/lib/etcd \\
  --initial-advertise-peer-urls=https://${hostIP}:2380 \\
  --initial-cluster=master1=https://${master1IP}:2380,master2=https://${master2IP}:2380,master3=https://${master3IP}:2380 \\
  --initial-cluster-state=new \\
  --initial-cluster-token=k8s-etcd-cluster \\
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --listen-client-urls=https://${hostIP}:2379,http://127.0.0.1:2379 \\
  --listen-peer-urls=https://${hostIP}:2380 \\
  --name=${hostname} \\
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem

[Install]
WantedBy=multi-user.target
EOF

Start etcd. At least two members must come up together, otherwise the first will time out waiting for its peers and fail. Once started, check the etcd cluster's members and health:

systemctl enable etcd
systemctl start etcd
systemctl status etcd
# on errors, debug with journalctl -u etcd

# list the cluster members
etcdctl --endpoints=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  member list

# check the cluster health
etcdctl --endpoints=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  cluster-health

Configuring kubectl

kubectl is the client-side tool for issuing commands. When it talks to kube-apiserver on the master, it authenticates with an SSL certificate, so the certificate must be configured first.

Generate the certificate base information:

# copy csr.json to admin-csr.json, change the CN field to admin, and set the "O" and "OU" fields
cat <<EOF> /opt/kube-tmp/admin-csr.json
{
    "CN": "admin", 
    "hosts": [
        "127.0.0.1", 
        "{master1IP}",        "{master2IP}", 
        "${master3IP}", 
        "master1", 
        "master2", 
        "master3", 
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa", 
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN", 
            "ST": "Guangdong", 
            "L": "Shenzhen",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

"O": sets the certificate's Group to system:masters. When this certificate is presented to kube-apiserver, authentication succeeds because it is signed by the CA, and because the group system:masters is pre-authorized, the holder is granted access to all APIs.

# generate the certificate and private key
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

ls admin*
# admin.csr admin-csr.json admin-key.pem admin.pem

cp /opt/kube-tmp/{admin.csr,admin-csr.json,admin-key.pem,admin.pem} /etc/kubernetes/ssl

# generate the kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true --server=https://${hostIP}:6443

kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem

kubectl config set-context kubernetes --cluster=kubernetes --user=admin
# on success, a config file is generated under ~/.kube
kubectl config use-context kubernetes

ls ~/.kube/
# config
# ~/.kube/config can be copied to any machine that needs to run kubectl, into its ~/.kube/ directory
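As a quick sanity check of the admin credentials (it will only succeed once kube-apiserver, configured below, is running): since the certificate's group is system:masters, the answer should be yes for everything.

kubectl auth can-i '*' '*'
# yes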

Configuring kube-apiserver

First generate the certificate base information:

# copy csr.json to kubernetes-csr.json, change the CN field to kubernetes, and add the extra hosts entries
cat <<EOF> /opt/kube-tmp/kubernetes-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "${master1IP}",
        "${master2IP}",
        "${master3IP}",
        "master1",
        "master2",
        "master3",
        "10.254.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Shenzhen",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# on success, the csr and pem certificates are generated
ls kubernetes*
# kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
# copy the certificates to all master nodes
scp /opt/kube-tmp/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${master1IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${master2IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${master3IP}:/etc/kubernetes/ssl

# create the token; it is what kubelet uses to authenticate when it first connects to the apiserver
token=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

cat <<EOF>/opt/kube-tmp/token.csv
${token},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

scp token.csv ${master1IP}:/etc/kubernetes/
scp token.csv ${master2IP}:/etc/kubernetes/
scp token.csv ${master3IP}:/etc/kubernetes/

Because kube-apiserver's AdvancedAuditing went Beta and became enabled by default in Kubernetes 1.8, an audit policy file is required. This follows the format from the official docs:

cat << EOF > audit-policy.yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

scp audit-policy.yaml ${master1IP}:/etc/kubernetes/
scp audit-policy.yaml ${master2IP}:/etc/kubernetes/
scp audit-policy.yaml ${master3IP}:/etc/kubernetes/

Create the kube-apiserver startup unit:

# create the Kubernetes log directory first
mkdir -p /var/log/kubernetes

cat << EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity

ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${hostIP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=10 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kubernetes/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --enable-admission-plugins=DefaultStorageClass,LimitRanger,NamespaceLifecycle,NodeRestriction,ResourceQuota,ServiceAccount \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \\
  --etcd-servers=https://${master1IP}:2379,https://${master2IP}:2379,https://${master3IP}:2379 \\
  --enable-bootstrap-token-auth \\
  --kubelet-https=true \\
  --secure-port=6443 \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --service-node-port-range=80-32000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --token-auth-file=/etc/kubernetes/token.csv

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
# on errors, debug with journalctl -u kube-apiserver

The --admission-control flag was deprecated in Kubernetes 1.10 and replaced by --enable-admission-plugins and --disable-admission-plugins. The --insecure-bind-address flag has also been deprecated, so adding it here is not recommended.

Configuring kube-controller-manager

Create the kube-controller-manager startup unit:

cat << EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
ExecStart=/usr/local/bin/kube-controller-manager \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.244.0.0/16 \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --leader-elect=true \\
  --master=http://${hostIP}:8080 \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem

# service-cluster-ip-range must match the value in kube-apiserver; it is the range Services get their IPs from and must not overlap with cluster-cidr. cluster-cidr must match the flannel configuration; it is the range Pods get their IPs from.

[Install]
WantedBy=multi-user.target
EOF

# start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
# on errors, debug with journalctl -u kube-controller-manager

Configuring kube-scheduler

Create the kube-scheduler startup unit:

cat <<EOF> /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity

ExecStart=/usr/local/bin/kube-scheduler \\
  --master=http://${hostIP}:8080

[Install]
WantedBy=multi-user.target
EOF

# start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
# on errors, debug with journalctl -u kube-scheduler

Verify that the master components are up with kubectl get cs:

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

At this point, the Master nodes are built and working.

Adding Nodes

A Node only needs docker, kube-proxy, and kubelet.

Adding request authentication

To become a Node, a machine must be authenticated by the Master. Here we add a User named kubelet-bootstrap for the Nodes and bind it to a Role.

Bind the Role:

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Create the kubeconfig file; it is the credential a node presents when authenticating to the master:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://${hostIP}:6443 \
  --kubeconfig=bootstrap.kubeconfig

# the token is the one generated when configuring kube-apiserver
kubectl config set-credentials kubelet-bootstrap \
  --token=${token} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default \
  --kubeconfig=bootstrap.kubeconfig
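# optionally inspect the result; the embedded CA, the token credential, and the default context should all be present
kubectl config view --kubeconfig=bootstrap.kubeconfig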

# distribute the kubeconfig file to the nodes
scp bootstrap.kubeconfig ${master1IP}:/etc/kubernetes/
scp bootstrap.kubeconfig ${master2IP}:/etc/kubernetes/
scp bootstrap.kubeconfig ${master3IP}:/etc/kubernetes/

Configuring kubelet

The kubelet startup configuration:

# create the kubelet working directory first
mkdir -p /var/lib/kubelet

cat << EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
WorkingDirectory=/var/lib/kubelet

ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cgroup-driver=cgroupfs \\
  --cluster_dns=10.254.0.2 \\
  --cluster_domain=cluster.local. \\
  --cni-bin-dir=/opt/cni/bin \\
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --fail-swap-on=false \\
  --hostname-override=${hostIP} \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --max-pods=256 \\
  --network-plugin=cni \\
  --pod-infra-container-image=gcr.io/google_containers/pause:3.0 \\
  --serialize-image-pulls=false

# the cluster_dns flag pre-assigns the cluster DNS address; the DNS add-on's configuration must then be set to the same address.
# gcr.io/google_containers/pause:3.0 may be unpullable from mainland China because of the GFW; import it through a relay first if needed.
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
# on errors, debug with journalctl -u kubelet
# configure all node machines the same way

To enable IPVS, the kubelet flag --hairpin-mode must stay at its default promiscuous-bridge mode. IPVS support has been enabled by default since 1.10, so SupportIPVSProxyMode no longer needs to be set explicitly in --feature-gates.

Approving CSRs for nodes

# list the newly added nodes; they are not yet approved and show as Pending
kubectl get csr

# approve all nodes
nodeNames=$(kubectl get csr | awk 'NR>1 {print $1}')
for nodeName in ${nodeNames}
do
  kubectl certificate approve ${nodeName}
done

# depending on the network and machine performance, after a few seconds the following should show the nodes as Ready
kubectl get csr,nodes
# once Ready, the kubeconfig file kubelet.kubeconfig is generated under /etc/kubernetes

Configuring kube-proxy

Create the kube-proxy certificate base information:

# copy the contents of csr.json and change the CN value to system:kube-proxy
cat <<EOF>/opt/kube-tmp/kube-proxy-csr.json
{
    "CN": "system:kube-proxy", 
    "hosts": [
        "127.0.0.1", 
        "{master1IP}",        "{master2IP}", 
        "${master3IP}", 
        "master1", 
        "master2", 
        "master3", 
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa", 
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN", 
            "ST": "Guangdong", 
            "L": "Shenzhen",
            "O": "system:master",
            "OU": "System"
        }
    ]
}
EOF

The CN is set to system:kube-proxy.
kube-apiserver's predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs.
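On a running master, this predefined binding can be inspected directly, for example:

kubectl get clusterrolebinding system:node-proxier -o yaml
# subjects should list kind: User, name: system:kube-proxy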

# generate the certificate; it only needs to be generated once and can be used by every node
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/opt/kube-tmp/config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# on success, the csr file and pem certificates are generated; copy them to the other nodes
ls kube-proxy*
# kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
scp /opt/kube-tmp/{kube-proxy.pem,kube-proxy-key.pem} ${master1IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{kube-proxy.pem,kube-proxy-key.pem} ${master2IP}:/etc/kubernetes/ssl
scp /opt/kube-tmp/{kube-proxy.pem,kube-proxy-key.pem} ${master3IP}:/etc/kubernetes/ssl

Create the kube-proxy kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true --server=https://${hostIP}:6443 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# this step generates kube-proxy.kubeconfig for the kube-proxy service on every node; put it under /etc/kubernetes so the unit files can reference one uniform path
scp kube-proxy.kubeconfig ${master1IP}:/etc/kubernetes/
scp kube-proxy.kubeconfig ${master2IP}:/etc/kubernetes/
scp kube-proxy.kubeconfig ${master3IP}:/etc/kubernetes/

Create the kube-proxy startup unit:

# create the kube-proxy working directory
mkdir -p /var/lib/kube-proxy

cat <<EOF> /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=kubernetes kube-proxy
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --bind-address=${hostIP} \\
  --hostname-override=${hostIP} \\
  --cluster-cidr=10.244.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --masquerade-all \\
  --proxy-mode=ipvs \\
  --ipvs-min-sync-period=30s \\
  --ipvs-scheduler=rr \\
  --ipvs-sync-period=1m

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
# on errors, debug with journalctl -u kube-proxy
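If kube-proxy really is running in IPVS mode, the ipvsadm tool installed during environment prep should list virtual servers for the Service network once Services exist; the output looks roughly like this:

ipvsadm -Ln
# IP Virtual Server version 1.2.1 (size=4096)
# Prot LocalAddress:Port Scheduler Flags
#   -> RemoteAddress:Port   Forward Weight ActiveConn InActConn
# TCP  10.254.0.1:443 rr
#   -> 192.168.85.141:6443  Masq    1      0          0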

Configuring the flannel network

flannel is deployed as a DaemonSet, which is why the kubelet configuration specifies --network-plugin=cni; the CNI plugins must then be installed on every Node. The plugin directory is set by the kubelet flag --cni-bin-dir and defaults to /opt/cni/bin:

The CNI plugins are updated infrequently and have few bugs, so they are very stable, and releases are almost all feature updates; the latest version is used here:

curl -SL https://github.com/containernetworking/plugins/releases/download/v0.7.4/cni-plugins-amd64-v0.7.4.tgz -o cni.tgz
mkdir -p /opt/cni/bin
tar -zxf cni.tgz -C /opt/cni/bin/
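A quick check that the plugins landed where kubelet's --cni-bin-dir expects them; the listing should include plugins similar to these:

ls /opt/cni/bin
# bridge  dhcp  flannel  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sample  tuning  vlan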

Note that the --cluster-cidr configured earlier for kube-controller-manager and kube-proxy is 10.244.0.0/16, which matches flannel's default network segment.
Flannel is created here straight from its default manifest:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
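Watch the flannel pods come up; the manifest labels them app=flannel:

kubectl -n kube-system get pods -l app=flannel -o wide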

If flannel stays stuck in the Init state, check the logs with kubectl -n kube-system logs po/kube-flannel-ds-xxxx -c install-cni. If the error is cp: can't create '/etc/cni/net.d/10-flannel.conflist': Permission denied, then SELinux has not been disabled.

Once flannel is running, kubectl get nodes should show every node as Ready.

Configuring CoreDNS

First download the CoreDNS deployment manifest, then adjust a few settings; the main one is replacing CLUSTER_DNS_IP with the DNS IP specified in the kubelet configuration:

curl -SL -o coredns.yaml https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

sed  -e "s/CLUSTER_DNS_IP/10.254.0.2/g" -e "s/CLUSTER_DOMAIN/cluster.local/g" -e "s/REVERSE_CIDRS/in-addr.arpa ip6.arpa/g" -e "s/FEDERATIONS//g" -e "s/STUBDOMAINS//g" -e "s/UPSTREAMNAMESERVER/\/etc\/resolv.conf/g" -i coredns.yaml

# deploy coredns
kubectl apply -f coredns.yaml

Check that it is running correctly:

kubectl get pods -n kube-system -o wide

If healthy, the output looks like the following, with the IP inside the allocated range:

coredns-6f4dc7fdf7-8nt62   1/1       Running   0          8m        10.244.2.3       192.168.85.143   <none>

Verify that CoreDNS resolution works:

cat <<EOF> busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.26.2
    command: ["sh","-c","while true; do sleep 1; done" ]
EOF

kubectl apply -f busybox.yaml
kubectl exec -it busybox nslookup kubernetes
# on a successful lookup, something like this is printed
# Server:    10.254.0.2
# Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local

# Name:      kubernetes
# Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local

The cluster is now fully deployed.