
Manually Deploying Kubernetes on an Ubuntu 16.04 Cluster

At the moment, the kube-up scripts that Kubernetes provides for Ubuntu do not support 15.10 or 16.04, the two releases that use systemd as the init system.

This article walks through, in detail, how to manually install and deploy Kubernetes on an Ubuntu 16.04 cluster without running the components in Docker (i.e., as native binaries).

Environment

Component versions

etcd        2.3.1
Flannel     0.5.5
Kubernetes  1.5.1

Host information

Host         IP             OS
k8s-master   172.12.24.36   Ubuntu 16.04
k8s-node01   172.12.24.37   Ubuntu 16.04
k8s-node02   172.12.24.38   Ubuntu 16.04
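
The commands below address the machines by hostname, so each host needs to resolve the others. If DNS does not already provide this, entries matching the table above can be added to /etc/hosts on every machine:

172.12.24.36 k8s-master
172.12.24.37 k8s-node01
172.12.24.38 k8s-node02
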
Install Docker

sudo apt-get install docker.io

Install Go

sudo apt-get install golang-go
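
A quick sanity check that both tools are installed (only the versions reported by the packages above are assumed):

docker --version

go version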

Deploy the etcd cluster

We will set up an etcd cluster across the three machines.

1. Download etcd on the deployment machine

ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}

ETCD="etcd-v${ETCD_VERSION}-linux-amd64"

curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz

2. Extract etcd into /tmp

tar xzf etcd.tar.gz -C /tmp

3. Enter the extracted directory

cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64

4. Distribute etcd to the cluster machines

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h mkdir -p '$HOME/kube' && scp -r etcd* user@$h:~/kube; done

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'; done

Notes:

1. With this way of setting up the cluster, the login user must be the same on every machine; the user in user@$h is the login user on the current machine.

2. sudo must be passwordless, otherwise the commands above fail partway through. (A quick check is shown after these notes.)

To switch to passwordless sudo:

sudo chmod u+w /etc/sudoers

sudo vim /etc/sudoers

Change this line in the sudoers file

%sudo ALL=(ALL:ALL) ALL

to

%sudo ALL=(ALL:ALL) NOPASSWD:ALL

3. These distribution steps only need to be run on the master node; the other nodes are set up from it.
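
As a minimal check, assuming the same user and host names as above, the following loop confirms that passwordless sudo works on every machine:

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'sudo -n true && echo "passwordless sudo OK on $(hostname)"'; done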

Configure the etcd service

On each host, create the files /opt/config/etcd.conf and /lib/systemd/system/etcd.service (be sure to adjust the IP addresses for each host; an example for k8s-node01 is shown after the first file).

1. Create the /opt/config/etcd.conf file

sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
sudo cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=$(hostname)
ETCD_INITIAL_CLUSTER=k8s-master=http://172.12.24.36:2380,k8s-node01=http://172.12.24.37:2380,k8s-node02=http://172.12.24.38:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.12.24.36:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.12.24.36:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.12.24.36:2379
ETCD_LISTEN_CLIENT_URLS=http://172.12.24.36:2379
GOMAXPROCS=$(nproc)
EOF
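
The listen/advertise URLs above are for k8s-master; on each of the other hosts they must use that host's own address. For example, on k8s-node01 (172.12.24.37, per the host table) those four lines become:

ETCD_LISTEN_PEER_URLS=http://172.12.24.37:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.12.24.37:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.12.24.37:2379
ETCD_LISTEN_CLIENT_URLS=http://172.12.24.37:2379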

2. Create the /lib/systemd/system/etcd.service file

sudo cat <<EOF | sudo tee /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
EOF

Reload systemd and start the etcd service on every machine

sudo systemctl daemon-reload

sudo systemctl enable etcd

sudo systemctl start etcd
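
Once etcd is running on all three machines, cluster health can be checked with etcdctl (etcd v2 syntax):

/opt/bin/etcdctl --endpoints="http://172.12.24.36:2379,http://172.12.24.37:2379,http://172.12.24.38:2379" cluster-health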

Download Flannel

1. Set the version variable

FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}

2. Download and rename the tarball

curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz

3. Extract it into /tmp

tar xzf flannel.tar.gz -C /tmp

Build Kubernetes

1. Clone the kubernetes source with Git; note which branch (tag) to clone

git clone -b v1.5.1 https://github.com/kubernetes/kubernetes.git

2. Enter the source directory

cd kubernetes

3. Build and package the source

make release-skip-tests

4. Extract the packaged server tarball into /tmp

tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp

Note

Besides linux/amd64, the build cross-compiles for other platforms by default. To reduce build time, edit hack/lib/golang.sh and comment out every platform other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS and KUBE_TEST_PLATFORMS (a sketch of the edit follows).
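
As a sketch of that edit (the exact platform entries in hack/lib/golang.sh differ between Kubernetes releases), the server platform list would end up looking roughly like:

readonly KUBE_SERVER_PLATFORMS=(
  linux/amd64
  # linux/arm
  # linux/arm64
)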

Deploy the K8s Master

1. Change into /tmp

cd /tmp

2. Copy the Kubernetes binaries from /tmp to the master

scp kubernetes/server/bin/kube-apiserver kubernetes/server/bin/kube-controller-manager kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@172.12.24.36:~/kube

3. Copy the flanneld binary from /tmp to the master

scp flannel-${FLANNEL_VERSION}/flanneld user@172.12.24.36:~/kube

4. SSH to the server and move the files into place

ssh -t user@172.12.24.36 'sudo mv ~/kube/* /opt/bin/'

Note: user is the login user on the current machine.

Create certificates

On the master host, run the following commands to create the certificates

1. Create the directory

sudo mkdir -p /srv/kubernetes/

2. Change into the directory

cd /srv/kubernetes

3. Set the environment variable

export MASTER_IP=172.12.24.36

4. Generate the certificates

sudo openssl genrsa -out ca.key 2048

sudo openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt

sudo openssl genrsa -out server.key 2048

sudo openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr

sudo openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000
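
To confirm the server certificate was issued for the master address, a quick check:

sudo openssl x509 -in /srv/kubernetes/server.crt -noout -subject -dates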

Configure the kube-apiserver service

We use the following Service and Flannel address ranges:

SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16

FLANNEL_NET=192.168.0.0/16

On the master host, create the /lib/systemd/system/kube-apiserver.service file

sudo vim /lib/systemd/system/kube-apiserver.service

with the following content

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://172.12.24.36:2379,http://172.12.24.37:2379,http://172.12.24.38:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=172.12.24.36 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-controller-manager service

On the master host, create the /lib/systemd/system/kube-controller-manager.service file

sudo vim /lib/systemd/system/kube-controller-manager.service

with the following content

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-scheduler service

On the master host, create the /lib/systemd/system/kube-scheduler.service file

sudo vim /lib/systemd/system/kube-scheduler.service

with the following content

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the flanneld service

On the master host, create the /lib/systemd/system/flanneld.service file

sudo vim /lib/systemd/system/flanneld.service

with the following content (an [Install] section is added so that the later "systemctl enable flanneld" works)

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.12.24.36:2379,http://172.12.24.37:2379,http://172.12.24.38:2379" \
  --iface=172.12.24.36 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the services

Before starting flanneld, write the Flannel network configuration into etcd (the endpoints must match the etcd cluster set up earlier):

/opt/bin/etcdctl --endpoints="http://172.12.24.36:2379,http://172.12.24.37:2379,http://172.12.24.38:2379" mk /coreos.com/network/config '{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'

sudo systemctl daemon-reload

sudo systemctl enable kube-apiserver

sudo systemctl enable kube-controller-manager

sudo systemctl enable kube-scheduler

sudo systemctl enable flanneld

sudo systemctl start kube-apiserver

sudo systemctl start kube-controller-manager

sudo systemctl start kube-scheduler

sudo systemctl start flanneld
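
With everything started, the control plane can be checked from the master. A minimal sketch, assuming the kubectl binary from kubernetes/server/bin has been placed on the PATH:

curl http://127.0.0.1:8080/healthz

kubectl -s http://127.0.0.1:8080 get componentstatuses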

Modify the Docker service

source /run/flannel/subnet.env

sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service

rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi

sudo systemctl daemon-reload

sudo systemctl enable docker

sudo systemctl restart docker
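
After the restart, docker0 should have an address inside the Flannel subnet assigned to this host; a quick check:

cat /run/flannel/subnet.env

ip addr show docker0 | grep inet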

Deploy the K8s Nodes

Copy the binaries

cd /tmp

for h in k8s-master k8s-node01 k8s-node02; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done

for h in k8s-master k8s-node01 k8s-node02; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube; done

for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done

Configure Flanneld and modify the Docker service

See the corresponding steps in the Master section: configure the flanneld service, start it, and modify the Docker service. Be sure to change the --iface address to each node's own IP.

Configure the kubelet service

Create /lib/systemd/system/kubelet.service (adjust the IP addresses for each node)

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=172.12.24.37 \
  --api-servers=http://172.12.24.36:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload

sudo systemctl enable kubelet

sudo systemctl start kubelet

Configure the kube-proxy service

Create /lib/systemd/system/kube-proxy.service (adjust the IP addresses for each node)

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \
  --hostname-override=172.12.24.37 \
  --master=http://172.12.24.36:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload

sudo systemctl enable kube-proxy

sudo systemctl start kube-proxy
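
From the master (or any machine with kubectl), the nodes should now register and report Ready; a quick check:

kubectl -s http://172.12.24.36:8080 get nodes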

Deploy add-on components

Deploy DNS

DNS_SERVER_IP="172.18.12.12"

DNS_DOMAIN="cluster.local"

DNS_REPLICAS=1

KUBE_APISERVER_URL=http://172.12.24.36:8080

cat <<EOF > skydns.yml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: $DNS_REPLICAS
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
          # command = "/kube-dns"
          - --domain=$DNS_DOMAIN.
          - --dns-port=10053
          - --kube-master-url=$KUBE_APISERVER_URL
        ports:
          - containerPort: 10053
            name: dns-local
            protocol: UDP
          - containerPort: 10053
            name: dns-tcp-local
            protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
          - --cache-size=1000
          - --no-resolv
          - --server=127.0.0.1#10053
        ports:
          - containerPort: 53
            name: dns
            protocol: UDP
          - containerPort: 53
            name: dns-tcp
            protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
          - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
          - -port=8080
          - -quiet
        ports:
          - containerPort: 8080
            protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF

kubectl create -f skydns.yml

Then modify kubelet.service on each node, adding --cluster-dns=172.18.12.12 (the DNS_SERVER_IP set above) and --cluster-domain=cluster.local, and restart the kubelet (sketched below).
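
For illustration, the ExecStart block in kubelet.service on k8s-node01 would look like this after the change, followed by the restart:

ExecStart=/opt/bin/kubelet \
  --hostname-override=172.12.24.37 \
  --api-servers=http://172.12.24.36:8080 \
  --cluster-dns=172.18.12.12 \
  --cluster-domain=cluster.local \
  --logtostderr=true

sudo systemctl daemon-reload

sudo systemctl restart kubelet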

Deploy the Dashboard

cat <<'EOF' > kube-dashboard.yml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          - --apiserver-host=http://172.12.24.36:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF

kubectl create -f kube-dashboard.yml
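
To confirm the Dashboard is running, a quick check (the /ui path is the API server's redirect to the dashboard service; it may differ between versions):

kubectl -s http://172.12.24.36:8080 get pods -n kube-system -l app=kubernetes-dashboard

curl -I http://172.12.24.36:8080/ui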