Creating a Kubernetes 1.6.8 Cluster on a Flannel Network

Preparing the Environment

Masters are recommended to start at 2 CPUs / 2 GB of RAM, and so are Nodes.

Set up each node's name and IP. The whole procedure runs as root, with SELinux and the firewalld service disabled.

Node names and IPs

Three master nodes, each serving as both master and node (a pure Node can be added the same way). All systems run CentOS 7.

Node Name    IP
master1      192.168.85.141
master2      192.168.85.142
master3      192.168.85.143

Set each node's hostname to its corresponding Node Name:

# master1
hostnamectl --static set-hostname master1
# master2
hostnamectl --static set-hostname master2
# master3
hostnamectl --static set-hostname master3

Run the following on every node to add all the node entries to /etc/hosts:

cat << EOF >>/etc/hosts
192.168.85.141 master1
192.168.85.142 master2
192.168.85.143 master3
EOF
# Reboot to make the changes take effect
reboot
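
To confirm the hostname and hosts entries took effect after the reboot (a quick sanity check, not in the original flow):

hostname
getent hosts master1 master2 master3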

Export the following environment variables on all nodes:

export master1IP="192.168.85.141"
export master2IP="192.168.85.142"
export master3IP="192.168.85.143"
export hostname=`hostname`
# Indirect expansion: on master1 this resolves to the value of $master1IP
eval tmp_ip=\$${hostname}IP
export hostIP=$tmp_ip
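
As a quick sanity check of the indirect expansion above (not part of the original flow), on master1 the variables should resolve as follows:

echo ${hostname} # master1
echo ${hostIP}   # 192.168.85.141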

Installing the Runtime Programs

Installing Docker

curl -SL https://get.docker.com | sh -
systemctl enable docker
systemctl start docker

Installing cfssl

cfssl can be installed on the master nodes only, since only masters sign certificates.

curl -SL https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -SL https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -SL https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
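
A quick check that the binaries are usable:

cfssl version
# Should print the version (1.2.0) and revision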

Installing etcd

curl -SL https://github.com/etcd-io/etcd/releases/download/v3.2.5/etcd-v3.2.5-linux-amd64.tar.gz -o etcd.tgz
tar -zxf etcd.tgz
cd etcd-v3.2.5-linux-amd64
mv etcd* /usr/local/bin/
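
Confirm the versions:

etcd --version
etcdctl --version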

Installing Kubernetes

curl -SL https://dl.k8s.io/v1.6.8/kubernetes-server-linux-amd64.tar.gz -o kubernetes.tgz
tar -zxf kubernetes.tgz
mv kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kubectl,kube-proxy,kube-scheduler,kubelet} /usr/local/bin
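
Check the client binary:

kubectl version --client
kube-apiserver --version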

Creating the Configuration

Configuring the base certificates

Use cfssl to create the CA certificate; it simplifies the process.

Generate the certificate signing configuration:

cat << EOF > config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        }, 
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ], 
                "expiry": "87600h"
            }
        }
    }
}
EOF

Write csr.json in the following format:

cat << EOF > csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "{master1IP}",
        "{master2IP}",
        "${master3IP}",
        "master1",
        "master2",
        "master3",
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa",
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

"CN": Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name.
"O": Organization; kube-apiserver extracts this field from the certificate as the Group the requesting user belongs to.

Create the CA:

cfssl gencert -initca csr.json | cfssljson -bare ca
# This generates the csr and pem files
ls
# ca.csr ca-key.pem ca.pem config.json csr.json
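
To see the fields kube-apiserver will later read (CN as the User Name, O as the Group, as explained below), you can inspect any generated certificate with cfssl-certinfo:

cfssl-certinfo -cert ca.pem
# The subject section shows common_name and organization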

Place the certificates in a fixed directory on all nodes for later use, e.g. /etc/kubernetes/ssl/:

mkdir -p /etc/kubernetes/ssl/
# Copy to all nodes
scp /opt/ssl/{ca.csr,ca-key.pem,ca.pem,config.json,csr.json} ${master1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{ca.csr,ca-key.pem,ca.pem,config.json,csr.json} ${master2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{ca.csr,ca-key.pem,ca.pem,config.json,csr.json} ${master3IP}:/etc/kubernetes/ssl

Configuring etcd

Generate the CSR for the etcd certificate:

# Just copy csr.json to etcd-csr.json and change the CN field to etcd
cat << EOF > etcd-csr.json
{
    "CN": "etcd", 
    "hosts": [
        "127.0.0.1", 
        "{master1IP}",        "{master2IP}", 
        "${master3IP}", 
        "master1", 
        "master2", 
        "master3", 
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa", 
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN", 
            "ST": "Guangdong", 
            "L": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Create the key and certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

ls etcd*
# etcd.csr etcd-csr.json etcd-key.pem etcd.pem
# Copy to every node
scp /opt/ssl/{etcd.csr,etcd-key.pem,etcd.pem} ${master1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{etcd.csr,etcd-key.pem,etcd.pem} ${master2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{etcd.csr,etcd-key.pem,etcd.pem} ${master3IP}:/etc/kubernetes/ssl

Create the etcd systemd unit file on every node:

# Create the etcd working directory
mkdir -p /var/lib/etcd/

# Create the unit file
cat << EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd v3 server
Documentation=https://github.com/coreos/etcd
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
# This directory must exist before etcd runs
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name=${hostname} \\
  --cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${hostIP}:2380 \\
  --listen-peer-urls=https://${hostIP}:2380 \\
  --listen-client-urls=https://${hostIP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${hostIP}:2379 \\
  --initial-cluster-token=k8s-etcd-cluster \\
  --initial-cluster=master1=https://${master1IP}:2380,master2=https://${master2IP}:2380,master3=https://${master3IP}:2380 \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd

[Install]
WantedBy=multi-user.target
EOF

Start etcd. At least two members must be started at roughly the same time, otherwise the first one times out waiting for its peers and fails. Once started, check the cluster's membership and health:

systemctl enable etcd
systemctl start etcd
systemctl status etcd
# If anything goes wrong, debug with journalctl -u etcd
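
Because the members have to come up together, it can be convenient to kick them all off from one machine. This is only a sketch and assumes passwordless root SSH between the nodes, which this guide does not otherwise set up:

for ip in ${master1IP} ${master2IP} ${master3IP}; do
  ssh root@${ip} "systemctl enable etcd && systemctl start etcd" &
done
wait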

# Inspect the cluster
etcdctl --endpoints=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  member list

etcdctl --endpoints=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  cluster-health

Configuring kubectl

kubectl is the client-side command; it talks to kube-apiserver on the master, so it needs certificates too. First generate the CSR:

# Just copy csr.json to admin-csr.json, change the CN field to admin, and set the "O" and "OU" fields.
cat << EOF > admin-csr.json
{
    "CN": "admin", 
    "hosts": [
        "127.0.0.1", 
        "{master1IP}",        "{master2IP}", 
        "${master3IP}", 
        "master1", 
        "master2", 
        "master3", 
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa", 
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN", 
            "ST": "Guangdong", 
            "L": "Shenzhen",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

"O": sets the certificate's Group to system:masters. When a request made with this certificate reaches kube-apiserver, it passes authentication because the certificate is signed by the CA, and because its group is the pre-authorized system:masters, it is granted access to all APIs.


# Generate the certificate and private key
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

ls admin*
# admin.csr admin-csr.json admin-key.pem admin.pem

# Copy all the certificates to /etc/kubernetes/ssl on every master
scp /opt/ssl/{admin.csr,admin-csr.json,admin-key.pem,admin.pem} ${master1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{admin.csr,admin-csr.json,admin-key.pem,admin.pem} ${master2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{admin.csr,admin-csr.json,admin-key.pem,admin.pem} ${master3IP}:/etc/kubernetes/ssl

# Generate the kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true --server=https://${hostIP}:6443
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem
kubectl config set-context kubernetes --cluster=kubernetes --user=admin

# On success a config file is generated under ~/.kube
kubectl config use-context kubernetes
ls ~/.kube/
# config
# ~/.kube/config can be copied into ~/.kube/ on any machine that needs to run kubectl
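
To confirm the kubeconfig was written as expected (certificate data is redacted in the output):

kubectl config view
# Should show the kubernetes cluster, the admin user, and current-context kubernetes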

Configuring kube-apiserver

First generate the CSR:

# Just copy csr.json to kubernetes-csr.json, change the CN field to kubernetes, and add the extra hosts entries
cat << EOF > kubernetes-csr.json
{
    "CN": "kubernetes", 
    "hosts": [
        "127.0.0.1", 
        "{master1IP}",        "{master2IP}", 
        "{master3IP}",        "master1",        "master2",        "master3",        "10.254.0.1",        "kubernetes",        "kubernetes.default",        "kubernetes.default.svc",        "kubernetes.default.svc.cluster",        "kubernetes.default.svc.cluster.local"
    ],    "key": {
        "algo": "rsa",        "size": 2048
    },    "names": [
        {
            "C": "CN",            "ST": "Guangdong",            "L": "Shenzhen",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# On success, the csr and pem certificates are generated
ls kubernetes*
# kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
# Copy the certificates to all master nodes
scp /opt/ssl/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${master1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${master2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${master3IP}:/etc/kubernetes/ssl

# Set up the token; it is used to authenticate the kubelet when it first connects to the apiserver (bootstrap)
token=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

cat << EOF > token.csv
${token},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

scp token.csv ${master1IP}:/etc/kubernetes/
scp token.csv ${master2IP}:/etc/kubernetes/
scp token.csv ${master3IP}:/etc/kubernetes/

Create the kube-apiserver unit file:

# Create the Kubernetes log directory first
mkdir -p /var/log/kubernetes

cat << EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity

ExecStart=/usr/local/bin/kube-apiserver \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --advertise-address=${hostIP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=10 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kubernetes/audit.log \\
  --authorization-mode=RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \\
  --etcd-servers=https://${master1IP}:2379,https://${master2IP}:2379,https://${master3IP}:2379 \\
  --experimental-bootstrap-token-auth \\
  --insecure-bind-address=${hostIP} \\
  --kubelet-https=true \\
  --runtime-config rbac.authorization.k8s.io/v1alpha1 \\
  --secure-port=6443 \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --service-node-port-range=80-32000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --token-auth-file=/etc/kubernetes/token.csv

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
# If anything goes wrong, debug with journalctl -u kube-apiserver
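
A quick liveness check against the insecure port configured above; /version needs no authentication there:

curl http://${hostIP}:8080/version
# Should return a JSON object with gitVersion v1.6.8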

Configuring kube-controller-manager

Create the kube-controller-manager unit file:

cat << EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
ExecStart=/usr/local/bin/kube-controller-manager \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.244.0.0/16 \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --leader-elect=true \\
  --master=http://${hostIP}:8080 \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem

# service-cluster-ip-range must match the value set in kube-apiserver; it is the range Services get IPs from and must not overlap with cluster-cidr. cluster-cidr must match the flannel configuration; it is the range Pods get IPs from.

[Install]
WantedBy=multi-user.target
EOF

# Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
# If anything goes wrong, debug with journalctl -u kube-controller-manager
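
With three masters only one controller-manager is active at a time. A hedged way to see which node holds the leader-election lock (the annotation name below is the one used by Kubernetes of this era):

kubectl get endpoints kube-controller-manager -n kube-system -o yaml \
  | grep control-plane.alpha.kubernetes.io/leader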

Configuring kube-scheduler

Create the kube-scheduler unit file:

cat << EOF > /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity

ExecStart=/usr/local/bin/kube-scheduler \\
  --master=http://${hostIP}:8080

[Install]
WantedBy=multi-user.target
EOF

# Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
# If anything goes wrong, debug with journalctl -u kube-scheduler

Verify that the master components are up with kubectl get cs:

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

At this point, success is nearly within reach.

Adding Nodes

A node only needs docker, flannel, kube-proxy, and kubelet.

Adding request authentication

Bind the Role:

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
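
To confirm the binding (a sanity check, not in the original flow):

kubectl get clusterrolebinding kubelet-bootstrap -o yaml
# subjects should list User kubelet-bootstrap; roleRef should be ClusterRole system:node-bootstrapper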

Create the kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://${hostIP}:6443 \
  --kubeconfig=bootstrap.kubeconfig

# The token is the one generated when configuring kube-apiserver
kubectl config set-credentials kubelet-bootstrap \
  --token=${token} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default \
  --kubeconfig=bootstrap.kubeconfig

# The kubeconfig file is used by the node(s)
scp bootstrap.kubeconfig ${master1IP}:/etc/kubernetes/
scp bootstrap.kubeconfig ${master2IP}:/etc/kubernetes/
scp bootstrap.kubeconfig ${master3IP}:/etc/kubernetes/

Configuring kubelet

Create the kubelet unit file:

# Create the kubelet working directory first
mkdir -p /var/lib/kubelet

cat << EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
WorkingDirectory=/var/lib/kubelet

ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster_dns=10.254.0.2 \\
  --cluster_domain=cluster.local. \\
  --cni-bin-dir=/opt/cni/bin \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --hostname-override=${hostIP} \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --network-plugin=cni \\
  --pod-infra-container-image=gcr.io/google_containers/pause:3.0 \\
  --require-kubeconfig \\
  --serialize-image-pulls=false

# cluster_dns must be the pre-allocated DNS address; later, make the kubedns configuration use the same address.
# The gcr.io/google_containers/pause image may be unreachable from mainland China because of the GFW; import it through a relay first if needed.
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
# If anything goes wrong, debug with journalctl -u kubelet
# Configure every node in the same way

The node is now created, but it cannot be used right away; for security, it still has to be approved.

# List the newly added nodes' CSRs; at this point the nodes are not yet approved
kubectl get csr

# Approve all the nodes
nodeNames=$(kubectl get csr | awk 'NR>1 {print $1}')
for nodeName in ${nodeNames}
do
  kubectl certificate approve ${nodeName}
done

# Depending on network and machine speed, after a few seconds the following command should show the nodes as Ready
kubectl get nodes
# Once Ready, the kubeconfig file kubelet.kubeconfig is generated under /etc/kubernetes

Configuring kube-proxy

Create the kube-proxy CSR:

# Just copy the contents of csr.json and change the CN value to system:kube-proxy
cat << EOF > /opt/ssl/kube-proxy-csr.json
{
    "CN": "system:kube-proxy", 
    "hosts": [
        "127.0.0.1", 
        "{master1IP}",        "{master2IP}", 
        "${master3IP}", 
        "master1", 
        "master2", 
        "master3", 
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa", 
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN", 
            "ST": "Guangdong", 
            "L": "Shenzhen",
            "O": "system:master",
            "OU": "System"
        }
    ]
}
EOF

The CN value is set to system:kube-proxy.
kube-apiserver's predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs.
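
The predefined binding can be checked with a quick describe (a sanity check):

kubectl describe clusterrolebinding system:node-proxier
# Role: system:node-proxier; Subjects should include User system:kube-proxy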

# Generate the certificate; it only needs to be generated once and can then be used by every node
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/opt/ssl/config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# On success, the csr file and pem certificates are generated; copy them to the other nodes
ls kube-proxy*
# kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
scp /opt/ssl/{kube-proxy.pem,kube-proxy-key.pem} ${master1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kube-proxy.pem,kube-proxy-key.pem} ${master2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kube-proxy.pem,kube-proxy-key.pem} ${master3IP}:/etc/kubernetes/ssl

Create the kube-proxy kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true --server=https://${hostIP}:6443 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# This generates kube-proxy.kubeconfig for every node's kube-proxy service; place it in the common /etc/kubernetes directory so every node is configured the same way
scp kube-proxy.kubeconfig ${master1IP}:/etc/kubernetes/
scp kube-proxy.kubeconfig ${master2IP}:/etc/kubernetes/
scp kube-proxy.kubeconfig ${master3IP}:/etc/kubernetes/

Create the kube-proxy unit file:

# Create the kube-proxy working directory
mkdir -p /var/lib/kube-proxy

cat << EOF > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=kubernetes kube-proxy
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --bind-address=${hostIP} \\
  --hostname-override=${hostIP} \\
  --cluster-cidr=10.244.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
# If anything goes wrong, debug with journalctl -u kube-proxy
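
kube-proxy defaults to iptables mode in 1.6 and programs NAT rules for Services; a quick, hedged way to confirm it is working is to look for its KUBE-* chains:

iptables -t nat -S | grep KUBE | head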

Configuring the flannel network

Here flannel is deployed as a DaemonSet. The kubelet startup configuration therefore needs --network-plugin=cni, and the cni binaries must be installed on every Node; the plugin directory is set by the kubelet flag --cni-bin-dir, which defaults to /opt/cni/bin:

curl -SL https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz -o cni.tgz
mkdir -p /opt/cni/bin
tar -zxf cni.tgz -C /opt/cni/bin/
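
Listing the directory confirms the plugins were extracted:

ls /opt/cni/bin
# bridge dhcp flannel host-local loopback portmap ptp ...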

Note that the --cluster-cidr configured earlier (in kube-controller-manager and kube-proxy), 10.244.0.0/16, matches flannel's default IP range.
Create flannel directly from its default configuration:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
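
flannel runs as a DaemonSet in kube-system, so each node should soon have one flannel pod Running:

kubectl get ds,pods -n kube-system -o wide | grep flannel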

Configuring kubedns

Download the manifests:

curl -SL https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-cm.yaml -o kubedns-cm.yaml
curl -SL https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-sa.yaml -o kubedns-sa.yaml
curl -SL https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-controller.yaml.base -o kubedns-controller.yaml
curl -SL https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-svc.yaml.base -o kubedns-svc.yaml

# Replace the spec.clusterIP value in kubedns-svc.yaml with the DNS IP set via the kubelet's --cluster_dns
sed -e "s/__PILLAR__DNS__SERVER__/10.254.0.2/g" -i kubedns-svc.yaml
# Replace __PILLAR__DNS__DOMAIN__ in kubedns-controller.yaml with the DNS domain
sed -e "s/__PILLAR__DNS__DOMAIN__/cluster.local/g" -i kubedns-controller.yaml
# Delete __PILLAR__FEDERATIONS__DOMAIN__MAP__ on line 92 of kubedns-controller.yaml
# Apply all the resources
kubectl apply -f .

Verifying that flannel works

kubectl get pods -n kube-system -o wide

On success, the kube-dns pod IP falls inside the specified 10.244.0.0/16 range:

NAME                           READY     STATUS    RESTARTS   AGE       IP               NODE
po/kube-dns-3468831164-l27st   3/3       Running   0          9m        10.244.1.2       192.168.85.142

Verifying that DNS works

cat << EOF > busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.26.2
    command: ["sh","-c","while true; do sleep 1; done" ]
EOF

kubectl apply -f busybox.yaml
kubectl exec -it busybox nslookup kubernetes

# A successful lookup prints something like the following
# Server:    10.254.0.2
# Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local

# Name:      kubernetes
# Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
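
For a fuller end-to-end check, you can resolve a Service you create yourself. The names below (nginx) are only illustrative, not part of the original setup:

# Create a Deployment and expose it as a Service
kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --port=80
# The Service name should resolve to an IP in the 10.254.0.0/16 Service range
kubectl exec -it busybox -- nslookup nginx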

And with that, a working cluster is complete.