Introduction
The kube-up script that Kubernetes currently provides for Ubuntu does not support 15.10 or 16.04, the two releases that use systemd as their init system.
This article describes in detail how to manually install and deploy Kubernetes on an Ubuntu 16.04 cluster, running the components directly on the hosts rather than inside Docker.
The manual process can easily be turned into an automated deployment script, and walking through it is also a good way to gain a deeper understanding of the Kubernetes architecture and its functional modules.
Environment
Versions
Ubuntu 16.04, Docker Engine 1.12, etcd 2.3.1, Flannel 0.5.5, Go 1.6.2; Kubernetes is built from source.
Hosts
k8s-master   172.16.203.133
k8s-node01   172.16.203.134
k8s-node02   172.16.203.135
Install Docker
Install the latest Docker Engine (currently 1.12) on every host - https://docs.docker.com/engine/installation/linux/ubuntulinux/
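A minimal sketch of the steps from the linked guide (the repository layout and package names were current at the time of writing and may have changed since):

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates
# add Docker's apt key and the xenial repository, then install the engine
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
  --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" \
  | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-engine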
Deploy the etcd cluster
We will deploy an etcd cluster across the 3 hosts.
Download etcd
On the deployment machine, download etcd:
ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64
for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'mkdir -p $HOME/kube' && scp -r etcd* user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube'; done
Configure the etcd service
On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service as shown below. The IP addresses in the listen/advertise URLs are k8s-master's; on each host, replace them with that host's own address. Also note that each member name in ETCD_INITIAL_CLUSTER must match that member's ETCD_NAME, which is set from the host name.
/opt/config/etcd.conf
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=$(hostname)
ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379
GOMAXPROCS=$(nproc)
EOF
/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target
[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
Then run on each host:
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
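Once etcd is running on all three hosts, the cluster's health can be checked with the bundled etcdctl:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" cluster-health
# expect "cluster is healthy" once all three members are up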
Download Flannel
On the deployment machine, download Flannel:
FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
Build Kubernetes
Build Kubernetes on the deployment machine; Docker Engine (1.12) and Go (1.6.2) must be installed there.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release-skip-tests
tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp
Note
Besides linux/amd64, the build cross-compiles for other platforms by default. To reduce build time, edit hack/lib/golang.sh and comment out every platform other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS, and KUBE_TEST_PLATFORMS, as sketched below.
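For example, the server platform list would end up looking something like this (a sketch; the exact contents of the arrays vary with the Kubernetes version):

# hack/lib/golang.sh
readonly KUBE_SERVER_PLATFORMS=(
  linux/amd64
  # linux/arm    # commented out to skip cross-compilation
)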
Deploy the Kubernetes Master
Copy the binaries
cd /tmp
scp kubernetes/server/bin/kube-apiserver \
kubernetes/server/bin/kube-controller-manager \
kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@172.16.203.133:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@172.16.203.133:~/kube
ssh -t user@172.16.203.133 'sudo mv ~/kube/* /opt/bin/'
Create certificates
On the master host, run the following commands (as root, since they write to /srv/kubernetes) to create a CA and the server certificate:
mkdir -p /srv/kubernetes/
cd /srv/kubernetes
export MASTER_IP=172.16.203.133
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000
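To sanity-check the generated server certificate, inspect its subject and validity period:

openssl x509 -in server.crt -noout -subject -dates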
Configure the kube-apiserver service
We use the following Service and Flannel subnets:
SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16
FLANNEL_NET=192.168.0.0/16
On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd-servers=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
--logtostderr=true \
--allow-privileged=false \
--service-cluster-ip-range=172.18.0.0/16 \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
--service-node-port-range=30000-32767 \
--advertise-address=172.16.203.133 \
--client-ca-file=/srv/kubernetes/ca.crt \
--tls-cert-file=/srv/kubernetes/server.crt \
--tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-controller-manager service
On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--root-ca-file=/srv/kubernetes/ca.crt \
--service-account-private-key-file=/srv/kubernetes/server.key \
--logtostderr=true
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-scheduler service
On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
--logtostderr=true \
--master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the flanneld service
On the master host, create /lib/systemd/system/flanneld.service with the following content:
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service
[Service]
User=root
ExecStart=/opt/bin/flanneld \
--etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
--iface=172.16.203.133 \
--ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start the services
Before starting flanneld, write the Flannel network configuration into etcd (this only needs to be done once, from any host):
/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" mk /coreos.com/network/config \
'{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
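Once the services are up, the apiserver should answer on its insecure port; a quick check:

curl http://127.0.0.1:8080/healthz
# expected output: ok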
Modify the Docker service
Point Docker at the subnet that flanneld allocated, which is recorded in /run/flannel/subnet.env:
source /run/flannel/subnet.env
sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
# remove any existing docker0 bridge so Docker recreates it on the Flannel subnet
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
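After the restart, docker0 should sit inside the Flannel subnet; compare the bridge address with the allocated range:

cat /run/flannel/subnet.env
ip addr show docker0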
Deploy the Kubernetes Nodes
Copy the binaries
cd /tmp
for h in k8s-master k8s-node01 k8s-node02; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube;done
for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done
Configure Flanneld and modify the Docker service
See the corresponding steps in the Master section: configure the flanneld service, start it, and modify the Docker service. Remember to change the --iface address to each node's own IP.
Configure the kubelet service
Create /lib/systemd/system/kubelet.service as follows. Change --hostname-override to each node's own IP; --api-servers always points at the master.
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/opt/bin/kubelet \
--hostname-override=172.16.203.133 \
--api-servers=http://172.16.203.133:8080 \
--logtostderr=true
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Start the service
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
Configure the kube-proxy service
Create /lib/systemd/system/kube-proxy.service as follows. Again, --hostname-override is the node's own IP, and --master points at the master.
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/opt/bin/kube-proxy \
--hostname-override=172.16.203.133 \
--master=http://172.16.203.133:8080 \
--logtostderr=true
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
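A quick way to confirm that both daemons came up on a node:

sudo systemctl status kubelet kube-proxy --no-pager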
Configure and verify Kubernetes
Generate the kubeconfig file
Run the following on the deployment machine:
KUBE_USER=admin
KUBE_PASSWORD=$(python -c 'import string,random; print("".join(random.SystemRandom().choice(string.ascii_letters + string.digits) for _ in range(16)))')
CONTEXT=ubuntu
DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=http://172.16.203.133:8080 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --username=${KUBE_USER} --password=${KUBE_PASSWORD}
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}" --cluster="${CONTEXT}"
Verification
$ kubectl get nodes
NAME STATUS AGE
172.16.203.133 Ready 2h
172.16.203.134 Ready 2h
172.16.203.135 Ready 2h
$ cat <<EOF > nginx.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
$ kubectl create -f nginx.yml
$ kubectl get pods -l run=my-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
my-nginx-1636613490-9ibg1 1/1 Running 0 13m 192.168.31.2 172.16.203.134
my-nginx-1636613490-erx98 1/1 Running 0 13m 192.168.56.3 172.16.203.133
$ kubectl expose deployment/my-nginx
$ kubectl get service my-nginx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx 172.18.28.48 <none> 80/TCP 37s
The nginx service is reachable from all three hosts, via either the pod IPs or the service IP:
$ curl http://172.18.28.48
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Authentication and security
When we generated the kubeconfig file in the last step, we created a username and password, but they were never enabled on the apiserver (that would require the --basic-auth-file flag). In other words, anyone who can reach 172.16.203.133:8080 can operate the cluster. For an internal system with proper access rules in place, this may be acceptable.
To strengthen security, certificate authentication can be enabled in one of two ways: authenticate both the minions and external clients to the master, or authenticate only the clients.
The generation and configuration of minion certificates is covered at http://kubernetes.io/docs/getting-started-guides/scratch/#security-models and in the relevant parts of http://kubernetes.io/docs/getting-started-guides/ubuntu-calico/.
Here we look at enabling certificate authentication only between clients and the master. This is still reasonably secure: the minions and the master usually sit in the same data center, so access to HTTP port 8080 can be restricted to the data-center network, while outside clients must present a certificate and talk to the apiserver over HTTPS. A possible firewall rule is sketched below.
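For example, access to the insecure port could be limited to the local subnet with a rule like this (a sketch, assuming the data-center network is 172.16.203.0/24):

# drop traffic to the insecure apiserver port from outside the local subnet
sudo iptables -A INPUT -p tcp --dport 8080 ! -s 172.16.203.0/24 -j DROP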
Create the client certificate
On the master host, run the following commands:
cd /srv/kubernetes
export CLIENT_IP=172.16.203.1
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
Copy client.crt and client.key to the deployment machine, then run the following commands to generate the kubeconfig file:
DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
CONTEXT=ubuntu
KUBE_CERT=client.crt
KUBE_KEY=client.key
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=https://172.16.203.133:6443 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --client-certificate=${KUBE_CERT} --client-key=${KUBE_KEY} --embed-certs=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}" --cluster="${CONTEXT}"
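If everything is in place, kubectl now talks to the apiserver over HTTPS on port 6443, for example:

KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl get nodes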