Building Kubernetes 1.5.4 on a Flannel Network

Kubernetes is a container-based scheduling and orchestration tool. It originated at Google, and developers around the world now contribute to it. It greatly eases the pain points of deploying distributed services, and I believe this is the direction mainstream service deployment is heading.

Under the hood, Kubernetes currently schedules Docker. I started working with Docker last year, and my impression is that its containers are genuinely more convenient than KVM: the resource allocation and environment isolation KVM offers are already fairly complete in Docker, and its advantage over KVM is the fast creation of tasks and fast reclamation of resources. Kubernetes manages and schedules these tasks and resources on top of that.

The hard part of Kubernetes is that its deployment process is extremely involved, and the network configuration is also tricky. After a full day with it, I have only scratched the surface.

Reference: Kubernetes.io

Preparing the environment

Prepare each node's name and IP, plus the required binaries (docker, etcd, kubernetes, and so on). The whole procedure runs as root, with SELinux and the firewalld service disabled.
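A quick reference for turning both off on CentOS 7 (run on every node; setenforce only switches SELinux to permissive for the current boot, the sed makes the change persist across reboots):

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config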

Node names and IPs

Three nodes, each acting as both master and node, running CentOS 7.

Node Name   IP
node1       192.168.200.21
node2       192.168.200.22
node3       192.168.200.23

Set each node's hostname to its Node Name:

# on node1
hostnamectl --static set-hostname node1
# on node2
hostnamectl --static set-hostname node2
# on node3
hostnamectl --static set-hostname node3

Run the following on all nodes to add every node's entry to /etc/hosts:

cat <<EOF >>/etc/hosts
192.168.200.21 node1
192.168.200.22 node2
192.168.200.23 node3
EOF
# reboot for the changes to take effect
reboot

Export the environment variables on every node:

export node1IP="192.168.200.21"
export node2IP="192.168.200.22"
export node3IP="192.168.200.23"
export hostname=`hostname`
eval tmp_ip=\$${hostname}IP # indirect expansion: resolves to the value of e.g. node1IP
export hostIP=${tmp_ip}
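A quick sanity check that the indirection resolved correctly (output shown for node1):

echo "${hostname} -> ${hostIP}"
# node1 -> 192.168.200.21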

Installing the binaries

Apart from cfssl, which only needs installing on the master, every program below must be installed on all three nodes.

Installing Docker

curl -SL https://raw.githubusercontent.com/docker/docker-install/master/install.sh | sh -
systemctl enable docker
systemctl start docker

Installing cfssl

cfssl can be installed on the master node alone, since only the master signs certificates.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*

Installing etcd

wget https://github.com/etcd-io/etcd/releases/download/v2.3.8/etcd-v2.3.8-linux-amd64.tar.gz -O etcd.tgz
tar -zxf etcd.tgz
cd etcd-v2.3.8-linux-amd64
mv etcd* /usr/local/bin/

Installing flanneld

wget https://github.com/coreos/flannel/releases/download/v0.6.2/flannel-v0.6.2-linux-amd64.tar.gz -O flannel.tgz
tar -zxf flannel.tgz
mv flanneld mk-docker-opts.sh /usr/local/bin

Installing Kubernetes

wget https://dl.k8s.io/v1.5.4/kubernetes-server-linux-amd64.tar.gz -O kubernetes.tgz
tar -zxf kubernetes.tgz
mv kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-dns,kube-proxy,kube-scheduler,kubectl,kubelet} /usr/local/bin

Creating the configuration

Configuring the base certificates

Use cfssl to create the CA certificate; it streamlines the process.

Generate the base certificate info:

mkdir /opt/ssl && cd /opt/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json

Edit config.json and add a kubernetes profile in the following form:

{
    "signing": {
        "default": {
            "expiry": "87600h"
        }, 
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ], 
                "expiry": "87600h"
            }
        }
    }
}

signing: the certificate can be used to sign other certificates
server auth: a client may use a certificate generated under this profile to verify the certificate a server presents
client auth: a server may use a certificate generated under this profile to verify the certificate a client presents

Edit csr.json into the following form:

cat <<EOF>/opt/ssl/csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "{node1IP}",
        "{node2IP}",
        "${node3IP}",
        "node1",
        "node2",
        "node3",
        "kubernetes"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Zhejiang",
            "L": "Wenzhou",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

"CN": Common Name; kube-apiserver extracts this field from the certificate and uses it as the requesting User Name
"O": Organization; kube-apiserver extracts this field from the certificate and uses it as the Group the requesting user belongs to

Create the CA:

cfssl gencert -initca csr.json | cfssljson -bare ca
# this generates the csr and pem files
ls
# ca.csr ca-key.pem ca.pem config.json csr.json
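Optionally inspect the new CA with the cfssl-certinfo tool installed earlier, to confirm the subject and expiry look right:

cfssl-certinfo -cert ca.pem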

Put the certificates into a fixed directory on every node for later use, e.g. /etc/kubernetes/ssl/:

# run on every node
mkdir -p /etc/kubernetes/ssl/
# copy to all nodes
scp /opt/ssl/{ca.csr,ca-key.pem,ca.pem} ${node1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{ca.csr,ca-key.pem,ca.pem} ${node2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{ca.csr,ca-key.pem,ca.pem} ${node3IP}:/etc/kubernetes/ssl

Configuring etcd

Generate the base info for the etcd certificate:

# just copy csr.json to etcd-csr.json and change the CN field to etcd
cat <<EOF >etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "{node1IP}",
        "{node2IP}",
        "${node3IP}",
        "node1",
        "node2",
        "node3",
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa",
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN",
            "ST": "Zhejiang",
            "L": "Wenzhou",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Create the key and certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

ls etcd*
# etcd.csr etcd-csr.json etcd-key.pem etcd.pem
# copy to every node
scp /opt/ssl/{etcd.csr,etcd-key.pem,etcd.pem,config.json,csr.json} ${node1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{etcd.csr,etcd-key.pem,etcd.pem,config.json,csr.json} ${node2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{etcd.csr,etcd-key.pem,etcd.pem,config.json,csr.json} ${node3IP}:/etc/kubernetes/ssl

Create the etcd startup unit on every node:

# create the etcd working directory
mkdir -p /var/lib/etcd/

# write the unit file
cat <<EOF> /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd v2 server
Documentation=https://github.com/coreos/etcd
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
# this directory must exist before etcd starts
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name=${hostname} \\
  --cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${hostIP}:2380 \\
  --listen-peer-urls=https://${hostIP}:2380 \\
  --listen-client-urls=https://${hostIP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${hostIP}:2379 \\
  --initial-cluster-token=k8s-etcd-cluster \\
  --initial-cluster=node1=https://${node1IP}:2380,node2=https://${node2IP}:2380,node3=https://${node3IP}:2380 \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd

[Install]
WantedBy=multi-user.target
EOF

Start etcd. At least two members must be brought up at the same time, otherwise startup fails. Once up, check the cluster's members and health:

systemctl enable etcd
systemctl start etcd
systemctl status etcd
# on errors, troubleshoot with journalctl -u etcd

# inspect the cluster members
etcdctl --endpoint=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  member list

etcdctl --endpoint=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  cluster-health

Configuring Flannel

# Set the flannel network to 10.10.0.0/16 (run once, from any single node)
etcdctl --endpoint=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  set /coreos.com/network/config '{"Network":"10.10.0.0/16","Backend":{"Type":"vxlan"}}'
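# Sanity check: read the key back with the same TLS flags
etcdctl --endpoint=https://${hostIP}:2379 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  get /coreos.com/network/config
# {"Network":"10.10.0.0/16","Backend":{"Type":"vxlan"}}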

# write the flanneld unit file
cat <<EOF>/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
Documentation=https://github.com/coreos/flannel
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity

# --iface must name this host's actual network interface
ExecStart=/usr/local/bin/flanneld --iface=ens32 \\
  --etcd-endpoints=https://${hostIP}:2379 \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem

ExecStartPost=/usr/local/bin/mk-docker-opts.sh \\
  -k DOCKER_NETWORK_OPTIONS \\
  -d /run/flannel/subnet.env

[Install]
WantedBy=multi-user.target
EOF

systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
# on errors, troubleshoot with journalctl -u flanneld

With flannel done, Docker's network needs to be made to talk to flannel, which means editing the docker startup unit.

Edit /usr/lib/systemd/system/docker.service: add EnvironmentFile=-/run/flannel/subnet.env under [Service] (mk-docker-opts.sh writes the variable into that file) and append the parameter $DOCKER_NETWORK_OPTIONS to the ExecStart line, as sketched below.
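A minimal sketch of the edited [Service] section; the ExecStart path and existing flags depend on how Docker was installed, so keep yours and only add the EnvironmentFile line and the $DOCKER_NETWORK_OPTIONS parameter:

[Service]
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

After editing, reload docker: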

systemctl daemon-reload
systemctl restart docker.service
systemctl status docker.service

Configuring kubectl

kubectl is the client-side command; it talks to kube-apiserver on the master when you run commands, so it needs certificates. First generate the base certificate info:

# copy csr.json to admin-csr.json, change the CN field to admin, and set the "O" and "OU" fields
cat <<EOF> admin-csr.json
{
    "CN": "admin",
    "hosts": [
        "127.0.0.1",
        "{node1IP}",
        "{node2IP}",
        "${node3IP}",
        "node1",
        "node2",
        "node3",
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa",
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN",
            "ST": "Zhejiang",
            "L": "Wenzhou",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

"O": sets the certificate's Group to system:masters. When this certificate is used to call kube-apiserver, authentication succeeds because the certificate is signed by the CA, and since the group system:masters is pre-authorized, the request is granted access to all APIs.

# generate the certificate and private key
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

ls admin*
# admin.csr admin-csr.json admin-key.pem admin.pem

# copy all the certificates to /etc/kubernetes/ssl on every master
scp /opt/ssl/{admin.csr,admin-csr.json,admin-key.pem,admin.pem} ${node1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{admin.csr,admin-csr.json,admin-key.pem,admin.pem} ${node2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{admin.csr,admin-csr.json,admin-key.pem,admin.pem} ${node3IP}:/etc/kubernetes/ssl

# generate the kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true --server=https://${hostIP}:6443

kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem

kubectl config set-context kubernetes --cluster=kubernetes --user=admin
# on success a config file is generated under ~/.kube
kubectl config use-context kubernetes

ls ~/.kube/
# config
# ~/.kube/config can be copied to any machine that needs to run kubectl; it must live under ~/.kube/
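The merged configuration can be inspected even without a running apiserver; because --embed-certs was used, the certificate and key data are stored inline (kubectl displays them as REDACTED):

kubectl config view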

Configuring kube-apiserver

First generate the base certificate info:

# copy csr.json to kubernetes-csr.json, change the CN field to kubernetes, and add the extra hosts entries
cat <<EOF> kubernetes-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "{node1IP}",
        "{node2IP}",
        "{node3IP}",
        "node1",
        "node2",
        "node3",
        "10.254.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],    "key": {
        "algo": "rsa",
        "size": 2048
    },    "names": [
        {
            "C": "CN",
            "ST": "Zhejiang",
            "L": "Wenzhou",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# on success the csr and pem certificates are generated
ls kubernetes*
# kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
# copy the certificates to all master nodes
scp /opt/ssl/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${node1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${node2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kubernetes.csr,kubernetes-csr.json,kubernetes-key.pem,kubernetes.pem} ${node3IP}:/etc/kubernetes/ssl

# set up the token; the kubelet uses it to authenticate against the apiserver when it first connects (bootstrap)
token=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

cat <<EOF>token.csv
${token},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

scp token.csv ${node1IP}:/etc/kubernetes/
scp token.csv ${node2IP}:/etc/kubernetes/
scp token.csv ${node3IP}:/etc/kubernetes/

# create the Kubernetes log directory
mkdir -p /var/log/kubernetes

Configure the kube-apiserver startup unit:

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity

ExecStart=/usr/local/bin/kube-apiserver \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --advertise-address=${hostIP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=10 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kubernetes/audit.log \\
  --authorization-mode=RBAC \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \\
  --etcd-servers=https://${node1IP}:2379,https://${node2IP}:2379,https://${node3IP}:2379 \\
  --insecure-bind-address=${hostIP} \\
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --service-node-port-range=80-32000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --token-auth-file=/etc/kubernetes/token.csv

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
# on errors, troubleshoot with journalctl -u kube-apiserver
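Once the unit is active, the insecure port (bound to ${hostIP} by --insecure-bind-address above) should answer a basic health probe:

curl http://${hostIP}:8080/healthz
# ok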

Configuring kube-controller-manager

Create the kube-controller-manager startup unit:

cat << EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
ExecStart=/usr/local/bin/kube-controller-manager \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.10.0.0/16 \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --master=http://${hostIP}:8080 \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem

[Install]
WantedBy=multi-user.target
EOF

# start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
# on errors, troubleshoot with journalctl -u kube-controller-manager

Configuring kube-scheduler

Create the kube-scheduler startup unit:

cat <<EOF>/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity

ExecStart=/usr/local/bin/kube-scheduler \\
  --master=http://${hostIP}:8080

[Install]
WantedBy=multi-user.target
EOF

# start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
# on errors, troubleshoot with journalctl -u kube-scheduler

Use kubectl get cs to verify that the master came up correctly:

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

At this point, you are more or less standing in front of success.

Adding Nodes

A node only needs docker, flannel, kube-proxy, and kubelet.

Adding request authentication

Bind the Role:

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Create the kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://${hostIP}:6443 \
  --kubeconfig=bootstrap.kubeconfig

# the token is the one generated while configuring kube-apiserver
kubectl config set-credentials kubelet-bootstrap \
  --token=${token} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default \
  --kubeconfig=bootstrap.kubeconfig

# the kubeconfig file is used by the node side
scp bootstrap.kubeconfig ${node1IP}:/etc/kubernetes/
scp bootstrap.kubeconfig ${node2IP}:/etc/kubernetes/
scp bootstrap.kubeconfig ${node3IP}:/etc/kubernetes/

Configuring kubelet

Create the kubelet startup unit:

# create the kubelet working directory first
mkdir -p /var/lib/kubelet

cat <<EOF> /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
WorkingDirectory=/var/lib/kubelet

ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster_dns=10.254.0.2 \\
  --cluster_domain=cluster.local. \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --hostname-override=${hostIP} \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --pod-infra-container-image=gcr.io/google_containers/pause:0.8.0 \\
  --require-kubeconfig \\
  --serialize-image-pulls=false

# cluster_dns is the pre-allocated DNS service IP; the kubedns configuration later on must be changed to match it.
# gcr.io/google_containers/pause:0.8.0 may not be pullable from mainland China because of the GFW; import it through a relay first, as sketched after this unit file.
[Install]
WantedBy=multi-user.target
EOF
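The relay import mentioned in the unit file can be as simple as docker save/load through any machine that can reach gcr.io (file name and copy method are up to you):

# on a machine that can reach gcr.io
docker pull gcr.io/google_containers/pause:0.8.0
docker save gcr.io/google_containers/pause:0.8.0 -o pause.tar
# copy pause.tar to every node, then on each node:
docker load -i pause.tar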

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
# on errors, troubleshoot with journalctl -u kubelet
# configure all node machines the same way

The nodes are now created, but they cannot be used right away: for safety, they still have to be authorized.

# list the pending node requests; at this point the nodes are not yet approved
kubectl get csr

# approve every node
nodeNames=$(kubectl get csr | awk 'NR>1{print $1}')
for nodeName in ${nodeNames}
do
  kubectl certificate approve ${nodeName}
done

# depending on network and machine speed, the nodes should show as Ready a few seconds later
kubectl get nodes
# once Ready, the kubeconfig file kubelet.kubeconfig is generated under /etc/kubernetes

Configuring kube-proxy

Create the kube-proxy certificate base info:

# copy the contents of csr.json and change the CN value to system:kube-proxy
cat <<EOF>/opt/ssl/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "hosts": [
        "127.0.0.1",
        "{node1IP}",
        "{node2IP}",
        "${node3IP}",
        "node1",
        "node2",
        "node3",
        "kubernetes"
    ], 
    "key": {
        "algo": "rsa",
        "size": 2048
    }, 
    "names": [
        {
            "C": "CN",
            "ST": "Zhejiang",
            "L": "Wenzhou",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# generate the certificate; it only needs generating once and serves every node
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/opt/ssl/config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# on success the csr file and pem certificates are generated; copy them into /etc/kubernetes/ssl on every node
ls kube-proxy*
# kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
scp /opt/ssl/{kube-proxy-key.pem,kube-proxy.pem} ${node1IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kube-proxy-key.pem,kube-proxy.pem} ${node2IP}:/etc/kubernetes/ssl
scp /opt/ssl/{kube-proxy-key.pem,kube-proxy.pem} ${node3IP}:/etc/kubernetes/ssl

Create the kube-proxy kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true --server=https://${hostIP}:6443 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# this step produces kube-proxy.kubeconfig; place it under the standard /etc/kubernetes directory on every node
scp kube-proxy.kubeconfig ${node1IP}:/etc/kubernetes/
scp kube-proxy.kubeconfig ${node2IP}:/etc/kubernetes/
scp kube-proxy.kubeconfig ${node3IP}:/etc/kubernetes/

Create the kube-proxy startup unit:

# create the kube-proxy working directory
mkdir -p /var/lib/kube-proxy

cat <<EOF> /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=kubernetes kube-proxy
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --bind-address=${hostIP} \\
  --hostname-override=${hostIP} \\
  --cluster-cidr=10.10.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
# on errors, troubleshoot with journalctl -u kube-proxy
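A quick way to confirm kube-proxy is actually programming the node, assuming the default iptables proxy mode:

iptables -t nat -S | grep KUBE | head
# KUBE-SERVICES and related chains should appear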

The cluster is now built. The last step is to provide the cluster's DNS, which completes control of the internal network; here we use Kubernetes' KubeDNS.

Configuring kubedns

Download the configuration files:

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-cm.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-sa.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-controller.yaml.base -O kubedns-controller.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-svc.yaml.base -O kubedns-svc.yaml

# replace the spec.clusterIP value in kubedns-svc.yaml with the DNS IP defined earlier for the kubelet (--cluster_dns)
sed -e "s/__PILLAR__DNS__SERVER__/10.254.0.2/g" -i kubedns-svc.yaml
# replace __PILLAR__DNS__DOMAIN__ in kubedns-controller.yaml with the DNS domain defined earlier
sed -e "s/__PILLAR__DNS__DOMAIN__/cluster.local/g" -i kubedns-controller.yaml

# apply all the resources
kubectl apply -f .
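To check that kubedns came up and resolves cluster names (a sketch; the busybox image must be pullable on the node, and the pod name dns-test is arbitrary):

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
kubectl run -i -t dns-test --image=busybox --restart=Never -- nslookup kubernetes.default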

And with that, a usable cluster has been created.