Up and running with Kubernetes.io and Raspberry Pis
2017-01-11 17:32
Reposted from: http://kubecloud.io/getting-up-and-running-with-kubernetes-io/
The following post is based on a guide written by Arjen Wassink. Arjen demonstrated the Kubernetes cluster on Raspberry Pis at Devoxx 2015 together with Ray Tsang. The talk about Kubernetes and the demonstration of the Raspberry Pi cluster is embedded below.
Prerequisites
You need the following for this guide:
- A couple of Raspberry Pis, including power supplies, SD cards, etc.
- The HypriotOS image installed (see the Running Docker on your Raspberry Pi guide).
Installing Kubernetes on the master node
First, we need to set up the master node. To get started, we need to download Arjen Wassink's great install scripts. Thank you for the hard work! Get them by running the following command:
$ curl -L -o k8s-on-rpi.zip https://github.com/awassink/k8s-on-rpi/archive/master.zip
Next, run the following command to update the package lists from the repositories:
$ apt-get update
For unpacking the zip file, we need to get unzip:
$ apt-get install unzip
Unzip the downloaded file:
$ unzip k8s-on-rpi.zip
The last thing we need to do is run the install script for the master node. Type in the following command and press enter. Be aware that this can take a while.
$ ./k8s-on-rpi-master/install-k8s-master.sh
The install script will install five services:
- docker-bootstrap.service
- k8s-etcd.service
- k8s-flannel.service
- docker.service
- k8s-master.service
Now we need to verify that everything went as expected. Run the following command; you should see two Docker daemons running, as shown in the output below:

$ ps -ef|grep docker
root 2097 1 49 10:10 ? 00:16:24 /usr/bin/docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --storage-driver=overlay --storage-opt dm.basesize=10G --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap
root 2464 1 30 10:29 ? 00:04:02 /usr/bin/docker -d -bip=10.0.82.1/24 -mtu=1472 -H fd:// -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hypriot
root 2551 2464 0 10:29 ? 00:00:01 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3376 -container-ip 10.0.82.3 -container-port 3376
root 2557 2464 0 10:29 ? 00:00:07 /swarm manage --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server.pem --tlskey=/etc/docker/server-key.pem -H tcp://0.0.0.0:3376 --strategy spread token://6102302A23718A7353E035CBF88A957D
root 2673 1 0 10:34 ? 00:00:01 docker run --name=k8s-master --net=host --pid=host --privileged -v /sys:/sys:ro -v /var/run:/var/run:rw -v /:/rootfs:ro -v /dev:/dev -v /var/lib/docker/:/var/lib/docker:rw -v /var/lib/kubelet/:/var/lib/kubelet:rw gcr.io/google_containers/hyperkube-arm:v1.1.2 /hyperkube kubelet --v=2 --address=0.0.0.0 --enable-server --allow-privileged=true --pod_infra_container_image=gcr.io/google_containers/pause-arm:2.0 --api-servers=http://localhost:8080 --hostname-override=127.0.0.1 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --containerized --config=/etc/kubernetes/manifests-multi
root 3150 3073 0 10:42 pts/0 00:00:00 grep docker
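If you want to script this check, here is a minimal sketch of our own (not part of the original guide) that pulls the daemon endpoints out of a saved ps listing. The capture file /tmp/ps_output.txt and its two trimmed lines are hypothetical, mirroring the output above:

```shell
#!/usr/bin/env bash
# Hypothetical capture of the two daemon lines from `ps -ef | grep docker`,
# trimmed to the arguments we care about.
cat > /tmp/ps_output.txt <<'EOF'
root 2097 1 49 10:10 ? 00:16:24 /usr/bin/docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid
root 2464 1 30 10:29 ? 00:04:02 /usr/bin/docker -d -H fd:// -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
EOF

# Extract every "-H <endpoint>" argument; with both daemons present we
# should see the bootstrap socket as well as the regular docker.sock.
grep -o '\-H [^ ]*' /tmp/ps_output.txt
```

Seeing both unix:///var/run/docker-bootstrap.sock and unix:///var/run/docker.sock in the result confirms the bootstrap and main daemons are each listening on their own socket.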
Next, we have to be sure that flanneld and etcd are up and running. The output should look something like the following:

$ docker -H unix:///var/run/docker-bootstrap.sock ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS          PORTS   NAMES
c672d66e50d2   andrewpsuedonym/etcd:2.1.1   "/bin/etcd --addr=127"   10 minutes ago   Up 10 minutes           k8s-etcd
11849faccb41   andrewpsuedonym/flanneld     "flanneld --etcd-endp"   14 minutes ago   Up 14 minutes           k8s-flannel
Lastly, we need to check that the hyperkube kubelet, apiserver, scheduler, controller-manager, and proxy are running. To do this, type in the following:

$ docker ps
CONTAINER ID   IMAGE                                           COMMAND                  CREATED          STATUS          PORTS   NAMES
e36ac4216c56   gcr.io/google_containers/hyperkube-arm:v1.1.2   "/hyperkube controlle"   10 minutes ago   Up 10 minutes           k8s_controller-manager.7042038a_k8s-master-127.0.0.1_default_43160049df5e3b1c5ec7bcf23d4b97d0_edb76cf2
e491a0a5cf40   gcr.io/google_containers/hyperkube-arm:v1.1.2   "/hyperkube apiserver"   10 minutes ago   Up 10 minutes           k8s_apiserver.f4ad1bfa_k8s-master-127.0.0.1_default_43160049df5e3b1c5ec7bcf23d4b97d0_cf8cf205
cec2f49600e2   gcr.io/google_containers/hyperkube-arm:v1.1.2   "/hyperkube scheduler"   10 minutes ago   Up 10 minutes           k8s_scheduler.d905fc61_k8s-master-127.0.0.1_default_43160049df5e3b1c5ec7bcf23d4b97d0_1020815f
161dffd94cac   gcr.io/google_containers/pause-arm:2.0          "/pause"                 10 minutes ago   Up 10 minutes           k8s_POD.d853e10f_k8s-master-127.0.0.1_default_43160049df5e3b1c5ec7bcf23d4b97d0_7d39c0d1
4ea725efd73c   gcr.io/google_containers/hyperkube-arm:v1.1.2   "/hyperkube proxy --m"   11 minutes ago   Up 11 minutes           k8s-master-proxy
f4a7330da6f7   gcr.io/google_containers/hyperkube-arm:v1.1.2   "/hyperkube kubelet -"   11 minutes ago   Up 11 minutes           k8s-master
Deploying the first pod
Now we are ready to deploy our first pod. First, we need to grab the command-line tool for accessing the Kubernetes cluster. Type in the following command:
$ curl -fsSL -o /usr/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/arm/kubectl
To be able to run the command-line tool, make /usr/bin/kubectl executable:
$ chmod 755 /usr/bin/kubectl
To see the available nodes, use the following command:
$ kubectl get nodes
Let's try to run a simple pod, namely hypriot/rpi-busybox-httpd, which is just a simple web server displaying a static page. Run the pod as follows:
$ kubectl run busybox --image=hypriot/rpi-busybox-httpd
Now we can check that the pod is running by entering the command below:

$ kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   NODE
busybox-v12rw          1/1     Running   0          15m   127.0.0.1
k8s-master-127.0.0.1   3/3     Running   1          28m   127.0.0.1
Now we have a pod running, but only locally. Expose the pod to the outside by running the expose command. The --port option specifies the external port through which our pod will be accessible. Remember to update the external IP to your master's IP.
$ kubectl expose rc busybox --port=90 --target-port=80 --external-ip=<ip-address-master-node>
To check that the pod is now exposed, we can run kubectl get svc, which displays the running services and the ports at which they are accessible:

$ kubectl get svc
NAME         CLUSTER_IP   EXTERNAL_IP    PORT(S)   SELECTOR      AGE
busybox      10.0.0.242   192.168.1.21   90/TCP    run=busybox   15m
kubernetes   10.0.0.1     <none>         443/TCP   <none>        29m
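If you prefer to script against this output rather than read it by eye, a small sketch of our own (not from the original guide; /tmp/svc.txt is a hypothetical capture mirroring the output above) can assemble the service URL with awk:

```shell
#!/usr/bin/env bash
# Hypothetical capture of `kubectl get svc`, mirroring the listing above.
cat > /tmp/svc.txt <<'EOF'
NAME         CLUSTER_IP   EXTERNAL_IP    PORT(S)   SELECTOR      AGE
busybox      10.0.0.242   192.168.1.21   90/TCP    run=busybox   15m
kubernetes   10.0.0.1     <none>         443/TCP   <none>        29m
EOF

# Pick the busybox row, take its EXTERNAL_IP (field 3), and strip the
# protocol suffix from PORT(S) (field 4) to build the URL to visit.
url=$(awk '$1 == "busybox" { split($4, p, "/"); print "http://" $3 ":" p[1] }' /tmp/svc.txt)
echo "$url"   # http://192.168.1.21:90
```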
Go to the IP address you specified (in our case http://192.168.1.21:90) and check that everything is running!
![](http://kubecloud.io/content/images/2016/01/xSk-rmbillede-2016-01-28-kl--13-54-44.png.pagespeed.ic.o8bNiOSiW8.png)
You can also verify this through the command line:

$ curl http://10.0.0.242:90
<html>
<head><title>Pi armed with Docker by Hypriot</title>
<body style="width: 100%; background-color: black;">
  <div id="main" style="margin: 100px auto 0 auto; width: 800px;">
    <img src="pi_armed_with_docker.jpg" alt="pi armed with docker" style="width: 800px">
  </div>
</body>
</html>
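While the pod is still pulling its image, the first curl may fail. A tiny retry helper (our own addition, not part of the original guide; the URL in the example comment is the hypothetical one from above) can poll until the server answers:

```shell
#!/usr/bin/env bash
# wait_for TRIES CMD... : run CMD repeatedly until it succeeds or TRIES
# attempts are used up. Returns 0 on success, 1 on timeout.
# Example use while the pod starts:  wait_for 10 curl -fs http://192.168.1.21:90
wait_for() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0   # command succeeded, stop polling
    sleep 1            # back off briefly before the next attempt
  done
  return 1
}
```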
Setting up worker nodes
Now that our master node is up and running, we continue by setting up our worker nodes. On a new node, execute the following commands:
$ curl -L -o k8s-on-rpi.zip https://github.com/awassink/k8s-on-rpi/archive/master.zip
$ apt-get update
$ apt-get install unzip
$ unzip k8s-on-rpi.zip
$ mkdir /etc/kubernetes
$ cp k8s-on-rpi-master/rootfs/etc/kubernetes/k8s.conf /etc/kubernetes/k8s.conf
IMPORTANT: Change the IP address in /etc/kubernetes/k8s.conf to match the master node's IP address before running the following command.
$ ./k8s-on-rpi-master/install-k8s-worker.sh
The install script sets up everything needed for a worker node. This involves installing four services, quite similar to what the master install script was doing. The biggest difference is that the etcd service is not running, and the kubelet service is configured as a worker node.
To verify that all nodes are registered correctly, run the following:

$ kubectl get nodes
NAME           LABELS                                STATUS   AGE
127.0.0.1      kubernetes.io/hostname=127.0.0.1      Ready    3h
192.168.1.22   kubernetes.io/hostname=192.168.1.22   Ready    2h
192.168.1.23   kubernetes.io/hostname=192.168.1.23   Ready    2h
192.168.1.24   kubernetes.io/hostname=192.168.1.24   Ready    2h
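To script this registration check, a hedged sketch of our own (the capture file /tmp/nodes.txt is hypothetical, mirroring the output above) can count the Ready nodes with awk; with one master and three workers, we expect 4:

```shell
#!/usr/bin/env bash
# Hypothetical capture of `kubectl get nodes`, mirroring the listing above.
cat > /tmp/nodes.txt <<'EOF'
NAME           LABELS                                STATUS   AGE
127.0.0.1      kubernetes.io/hostname=127.0.0.1      Ready    3h
192.168.1.22   kubernetes.io/hostname=192.168.1.22   Ready    2h
192.168.1.23   kubernetes.io/hostname=192.168.1.23   Ready    2h
192.168.1.24   kubernetes.io/hostname=192.168.1.24   Ready    2h
EOF

# Count rows whose STATUS column (field 3) reads Ready; the header line
# has "STATUS" there, so it is excluded automatically.
awk '$3 == "Ready"' /tmp/nodes.txt | wc -l   # 4
```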
Scaling the pod
The last thing we will go through in this post is how to scale a pod. We have four Raspberry Pis in our cluster, but you can choose whatever number you please:
$ kubectl scale --replicas=4 rc/busybox
And lastly, we can check that we have four busyboxes running in our cluster:

$ kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   NODE
busybox-2oc8z          1/1     Running   1          2h    192.168.1.22
busybox-82efy          1/1     Running   0          2h    192.168.1.24
busybox-gw797          1/1     Running   0          2h    192.168.1.23
busybox-v12rw          1/1     Running   1          3h    127.0.0.1
k8s-master-127.0.0.1   3/3     Running   5          3h    127.0.0.1
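As a scripted variant of the same check (our own addition; /tmp/pods.txt is a hypothetical capture mirroring the output above), you can count the Running busybox pods:

```shell
#!/usr/bin/env bash
# Hypothetical capture of `kubectl get pods -o wide` after scaling.
cat > /tmp/pods.txt <<'EOF'
NAME                   READY   STATUS    RESTARTS   AGE   NODE
busybox-2oc8z          1/1     Running   1          2h    192.168.1.22
busybox-82efy          1/1     Running   0          2h    192.168.1.24
busybox-gw797          1/1     Running   0          2h    192.168.1.23
busybox-v12rw          1/1     Running   1          3h    127.0.0.1
k8s-master-127.0.0.1   3/3     Running   5          3h    127.0.0.1
EOF

# Count pods created by the busybox replication controller that are
# Running; the k8s-master pod is not matched by the busybox- prefix.
grep -c '^busybox-.*Running' /tmp/pods.txt   # 4
```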
Now you have a Kubernetes cluster running with some worker nodes. Stay tuned for a guide to getting up and running with the Kubernetes dashboard.
For more information about the installation procedure for Kubernetes, please check out the Getting Started guide and the original guide by Arjen Wassink.