Setting up a Kubernetes cluster on Ubuntu 16.04
Published: 2019-06-28


Introduction

The kube-up scripts that Kubernetes currently provides for Ubuntu do not support 15.10 or 16.04, the releases that use systemd as the init system.

This post walks through manually installing and deploying Kubernetes on an Ubuntu 16.04 cluster, without running the components inside Docker.

The manual steps are easy to turn into an automated deployment script, and working through them is also a good way to understand the Kubernetes architecture and its individual components.

Environment

Versions

Component    Version
etcd         2.3.1
Flannel      0.5.5
Kubernetes   1.3.4

Hosts

Host         IP               OS
k8s-master   172.16.203.133   Ubuntu 16.04
k8s-node01   172.16.203.134   Ubuntu 16.04
k8s-node02   172.16.203.135   Ubuntu 16.04

Install Docker

Install the latest Docker Engine (currently 1.12) on every host: https://docs.docker.com/engine/installation/linux/ubuntulinux/
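For reference, the linked guide at the time boiled down to roughly the following (a sketch; apt.dockerproject.org and this key were the official apt source for Docker 1.12 on xenial):

# add the Docker apt repository and install docker-engine
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
    --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update && sudo apt-get install -y docker-engine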

Deploy the etcd cluster

We will deploy a three-node etcd cluster across the three hosts.

Download etcd

On the deployment machine, download etcd:

ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64
for h in k8s-master k8s-node01 k8s-node02; do
  ssh user@$h 'mkdir -p $HOME/kube' && scp -r etcd* user@$h:~/kube
done
for h in k8s-master k8s-node01 k8s-node02; do
  ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'
done

Configure the etcd service

On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service (adjust the IP addresses and the etcd node name for each host).

/opt/config/etcd.conf

sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
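A sketch of the file's content for k8s-master, assuming etcd 2.3's standard ETCD_* environment variables (the service unit below loads this file via EnvironmentFile; change ETCD_NAME and the IP addresses on each host):

sudo tee /opt/config/etcd.conf <<EOF
ETCD_NAME=k8s-master
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379,http://127.0.0.1:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=k8s-etcd-cluster
EOF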

 /lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Then run the following on each host:

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
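To confirm the three members actually formed a cluster, etcdctl (shipped in the same tarball) can be run on any host:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" cluster-health
/opt/bin/etcdctl member list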

Download Flannel

FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp

Build Kubernetes

Build Kubernetes on the deployment machine. This requires Docker Engine (1.12) and Go (1.6.2).

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release-skip-tests
tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp

Note

By default the build also cross-compiles for platforms other than linux/amd64. To cut build time, edit hack/lib/golang.sh and comment out every platform except linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS, and KUBE_TEST_PLATFORMS.
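For example (a sketch; the exact platform lists vary between releases):

# hack/lib/golang.sh
readonly KUBE_SERVER_PLATFORMS=(
  linux/amd64
  # linux/arm
  # linux/arm64
)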

Deploy the Kubernetes master

Copy the binaries

cd /tmp
scp kubernetes/server/bin/kube-apiserver \
    kubernetes/server/bin/kube-controller-manager \
    kubernetes/server/bin/kube-scheduler \
    kubernetes/server/bin/kubelet \
    kubernetes/server/bin/kube-proxy user@172.16.203.133:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@172.16.203.133:~/kube
ssh -t user@172.16.203.133 'sudo mv ~/kube/* /opt/bin/'

Create certificates

On the master, run the following commands to create the certificates:

mkdir -p /srv/kubernetes/
cd /srv/kubernetes
export MASTER_IP=172.16.203.133
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000
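A quick sanity check of the generated certificates:

openssl verify -CAfile ca.crt server.crt           # expect: server.crt: OK
openssl x509 -in server.crt -noout -subject -dates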

Configure the kube-apiserver service

We use the following Service and Flannel address ranges:

SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16

FLANNEL_NET=192.168.0.0/16

On the master, create /lib/systemd/system/kube-apiserver.service with the following content:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=172.16.203.133 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-controller-manager service

On the master, create /lib/systemd/system/kube-controller-manager.service with the following content:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-scheduler service

On the master, create /lib/systemd/system/kube-scheduler.service with the following content:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the flanneld service

On the master, create /lib/systemd/system/flanneld.service with the following content:

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
  --iface=172.16.203.133 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the services

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
  mk /coreos.com/network/config '{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
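With everything up, the apiserver should answer on its insecure port. These checks use plain curl, since kubectl is only configured in a later step:

curl http://127.0.0.1:8080/healthz    # should print: ok
curl http://127.0.0.1:8080/version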

Modify the Docker service

source /run/flannel/subnet.env
sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
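After the restart, docker0 should sit inside the subnet that Flannel leased to this host:

source /run/flannel/subnet.env
echo ${FLANNEL_SUBNET}      # e.g. 192.168.56.1/24
ip addr show docker0        # its address should fall inside FLANNEL_SUBNET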

Deploy the Kubernetes nodes

Copy the binaries

cd /tmp
for h in k8s-master k8s-node01 k8s-node02; do
  scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube
done
for h in k8s-master k8s-node01 k8s-node02; do
  scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube
done
for h in k8s-master k8s-node01 k8s-node02; do
  ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'
done

Configure flanneld and modify the Docker service

Refer to the corresponding steps in the master section: configure the flanneld service, start flanneld, and modify the Docker service. Remember to change the --iface address to each node's own IP, as shown below.
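For example, on k8s-node01 only the --iface value in flanneld.service differs from the master's unit:

ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
  --iface=172.16.203.134 \
  --ip-masq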

Configure the kubelet service

/lib/systemd/system/kubelet.service (change the IP addresses to each node's own address):

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.133 \
  --api-servers=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet

Configure the kube-proxy service

/lib/systemd/system/kube-proxy.service (change the IP addresses to each node's own address):

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \
  --hostname-override=172.16.203.133 \
  --master=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy

Configure and verify Kubernetes

Generate the kubeconfig file

Run the following on the deployment machine:

KUBE_USER=admin
KUBE_PASSWORD=$(python -c 'import string,random; print("".join(random.SystemRandom().choice(string.ascii_letters + string.digits) for _ in range(16)))')
DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
CONTEXT=ubuntu
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=http://172.16.203.133:8080 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --username=${KUBE_USER} --password=${KUBE_PASSWORD}
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}" --cluster="${CONTEXT}"
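To inspect the result and confirm the new context is active:

/tmp/kubernetes/server/bin/kubectl config view
/tmp/kubernetes/server/bin/kubectl cluster-info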

Verify

$ kubectl get nodes
NAME             STATUS    AGE
172.16.203.133   Ready     2h
172.16.203.134   Ready     2h
172.16.203.135   Ready     2h

$ cat <<EOF > nginx.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

$ kubectl create -f nginx.yml

$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP             NODE
my-nginx-1636613490-9ibg1   1/1       Running   0          13m       192.168.31.2   172.16.203.134
my-nginx-1636613490-erx98   1/1       Running   0          13m       192.168.56.3   172.16.203.133

$ kubectl expose deployment/my-nginx

$ kubectl get service my-nginx
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-nginx   172.18.28.48   <none>        80/TCP    37s

Accessing the pod IPs or the service IP from any of the three hosts reaches the nginx service:

$ curl http://172.18.28.48
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Authentication and security

When we generated the kubeconfig in the last step, we created a username and password, but we never enabled them on the apiserver (that would require the --basic-auth-file flag). In other words, anyone who can reach 172.16.203.133:8080 can operate the cluster. For an internal system with properly configured access rules, this may be acceptable.
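For reference, enabling the password would look roughly like this (a sketch; the CSV format expected by --basic-auth-file is password,user,uid, and the file path here is our own choice):

# on the master, assuming KUBE_USER/KUBE_PASSWORD from the kubeconfig step
echo "${KUBE_PASSWORD},${KUBE_USER},admin" | sudo tee /srv/kubernetes/basic_auth.csv
# then add --basic-auth-file=/srv/kubernetes/basic_auth.csv to the kube-apiserver ExecStart and restart:
sudo systemctl daemon-reload && sudo systemctl restart kube-apiserver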

To improve security, you can enable certificate authentication in one of two ways: authenticate both the minions and the clients to the master, or authenticate only the clients to the master.

For generating and configuring the minion certificates, see http://kubernetes.io/docs/getting-started-guides/scratch/#security-models and the relevant sections of http://kubernetes.io/docs/getting-started-guides/ubuntu-calico/.

Here we look at enabling certificate authentication between clients and the master only. This approach is still reasonably safe: the minions and the master usually sit in the same data center, so access to HTTP port 8080 can be restricted to the data center, while outside clients can only reach the apiserver over HTTPS with a certificate.
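One way to implement that restriction, as a sketch (assuming the cluster hosts live in 172.16.203.0/24; adapt to your own firewall setup):

sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8080 -s 172.16.203.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8080 -j DROP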

Create the client certificate

Run the following commands on the master:

cd /srv/kubernetes
export CLIENT_IP=172.16.203.1
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000

Copy client.crt and client.key to the deployment machine, then run the following to generate the kubeconfig file:

DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
CONTEXT=ubuntu
KUBE_CERT=client.crt
KUBE_KEY=client.key
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=https://172.16.203.133:6443 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --client-certificate=${KUBE_CERT} --client-key=${KUBE_KEY} --embed-certs=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}" --cluster="${CONTEXT}"
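A quick way to confirm the certificate path works, assuming the apiserver's default secure port 6443:

curl --cacert ca.crt --cert client.crt --key client.key https://172.16.203.133:6443/api/v1/nodes
kubectl get nodes    # now talks to https://172.16.203.133:6443 using the embedded client certs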

Deploy the add-on components

Deploy DNS

DNS_SERVER_IP="172.18.8.8"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
KUBE_APISERVER_URL=http://172.16.203.133:8080
cat <<EOF > skydns.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: $DNS_REPLICAS
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=$DNS_DOMAIN.
        - --dns-port=10053
        - --kube-master-url=$KUBE_APISERVER_URL
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF
kubectl create -f skydns.yml

Then edit kubelet.service on each node, adding --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local, and restart kubelet.
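A sketch of verifying DNS from inside the cluster (kubectl run flags vary across these old releases, so adjust as needed):

kubectl run busybox --image=busybox --restart=Never --command -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local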

Deploy the Dashboard

cat <<'EOF' > kube-dashboard.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://172.16.203.133:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF
kubectl create -f kube-dashboard.yml
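To check the deployment (the service's ClusterIP is assigned at creation time, so the address below is a placeholder):

kubectl --namespace=kube-system get pods,svc -l app=kubernetes-dashboard
curl http://<dashboard-cluster-ip>/    # from any cluster host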

