Using Calico with Docker
2016-02-01 19:14
1. Creating CoreOS VMs with Vagrant (requires Vagrant, VirtualBox, and the vagrant-scp plugin)
The Vagrantfile is as follows:

# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'fileutils'
require 'open-uri'
require 'tempfile'
require 'yaml'

Vagrant.require_version ">= 1.6.0"

$vm_num = 2
$vm_memory = 1024
#$shared_folders = {'./binary' => '/kubernetes'}
$shared_folders = {}

CONFIG = File.expand_path("config.rb")
if File.exist?(CONFIG)
  require CONFIG
end

def vmIP(num)
  return "172.12.7.#{num+50}"
end

vmIPs = [*1..$vm_num].map{ |i| vmIP(i) }

Vagrant.configure("2") do |config|
  # always use Vagrant's insecure key
  config.ssh.insert_key = false
  config.vm.box = "coreos-alpha-928.0.0"

  config.vm.provider :virtualbox do |vb|
    vb.cpus = 1
    vb.gui = false
  end

  (1..$vm_num).each do |i|
    config.vm.define vm_name = "calico%d" % i do |host|
      host.vm.hostname = vm_name
      host.vm.provider :virtualbox do |vb|
        vb.memory = $vm_memory
      end
      host.vm.network :private_network, ip: vmIP(i)
    end
  end
end
Set config.vm.box appropriately; this brings up two CoreOS VM instances with the IP addresses 172.12.7.51 and 172.12.7.52.
Run vagrant up in the Vagrantfile's directory to start both VMs.
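The address assignment follows directly from the Vagrantfile's vmIP helper, which maps VM number i to 172.12.7.(i+50). A quick sketch reproducing that mapping for the two VMs:

```shell
# Reproduce the Vagrantfile's vmIP(i) = "172.12.7.#{i+50}" mapping
# for the two VMs defined by $vm_num = 2.
for i in 1 2; do
  echo "calico$i -> 172.12.7.$((i + 50))"
done
# prints:
# calico1 -> 172.12.7.51
# calico2 -> 172.12.7.52
```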
2. Installing Calico
To join the two VMs' etcd daemons into a cluster, create /etc/systemd/system/etcd2.service on each host with the following content.
etcd2.service on 172.12.7.51:
[Unit]
Description=etcd
Conflicts=etcd.service

[Service]
User=etcd
PermissionsStartOnly=true
Environment=ETCD_DATA_DIR=/var/lib/etcd2
Environment=ETCD_NAME=%m
ExecStart=/bin/etcd2 --name=infra0 \
  --initial-advertise-peer-urls=http://172.12.7.51:2380 \
  --listen-peer-urls=http://172.12.7.51:2380 \
  --listen-client-urls=http://172.12.7.51:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=http://172.12.7.51:2379 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster=infra0=http://172.12.7.51:2380,infra1=http://172.12.7.52:2380 \
  --initial-cluster-state=new
Restart=always
RestartSec=10s
LimitNOFILE=40000
etcd2.service on 172.12.7.52:
[Unit]
Description=etcd
Conflicts=etcd.service

[Service]
User=etcd
PermissionsStartOnly=true
Environment=ETCD_DATA_DIR=/var/lib/etcd2
Environment=ETCD_NAME=%m
ExecStart=/bin/etcd2 --name=infra1 \
  --initial-advertise-peer-urls=http://172.12.7.52:2380 \
  --listen-peer-urls=http://172.12.7.52:2380 \
  --listen-client-urls=http://172.12.7.52:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=http://172.12.7.52:2379 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster=infra0=http://172.12.7.51:2380,infra1=http://172.12.7.52:2380 \
  --initial-cluster-state=new
Restart=always
RestartSec=10s
LimitNOFILE=40000
Next, create /etc/systemd/system/docker.service with the following content.
docker.service on 172.12.7.51:
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=docker.socket early-docker.target network.target etcd2.service
Requires=docker.socket early-docker.target etcd2.service

[Service]
EnvironmentFile=-/run/flannel_docker_opts.env
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
ExecStart=/usr/lib/coreos/dockerd daemon --cluster-store=etcd://127.0.0.1:2379 --host=fd:// $DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ

[Install]
WantedBy=multi-user.target
The docker.service on 172.12.7.52 is identical to the one above: both hosts point --cluster-store at their local etcd member on 127.0.0.1:2379.
On both VMs, as root, run the following commands in order to start the etcd2 and Docker services:

systemctl daemon-reload
systemctl start etcd2.service
systemctl start docker.service
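Before moving on, it is worth confirming that the two etcd members actually formed a cluster. A minimal check, assuming the etcd v2-era etcdctl that ships with CoreOS (the guard makes the script a harmless no-op on any other machine):

```shell
# Verify the two-node etcd2 cluster (run on either VM).
if command -v etcdctl >/dev/null 2>&1; then
  etcdctl cluster-health   # both members should report "healthy"
  etcdctl member list      # should list infra0 and infra1
else
  echo "etcdctl not found - run this on one of the CoreOS VMs"
fi
```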
Create the /opt/bin directory:

mkdir -p /opt/bin && cd /opt/bin

Download the calicoctl client tool and make it executable:

wget http://www.projectcalico.org/latest/calicoctl
chmod +x calicoctl
The environment is now ready.
3. Starting the Calico service
On each of the two VMs, run the following command (it pulls the calico/node and calico/node-libnetwork containers):

calicoctl node --libnetwork
Once the images have been pulled, docker ps shows the following:
core@calico2 ~ $ docker ps
CONTAINER ID   IMAGE                           COMMAND               CREATED          STATUS          PORTS   NAMES
ab7f0b4889c6   calico/node-libnetwork:v0.7.0   "./start.sh"          46 minutes ago   Up 46 minutes           calico-libnetwork
5a2f980698db   calico/node:v0.15.0             "/sbin/start_runit"   46 minutes ago   Up 46 minutes           calico-node
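With calico/node running on both hosts, the BGP session between them can also be checked. A sketch assuming this calicoctl generation's status subcommand (the exact output format varies by release; the guard keeps the script inert off the VMs):

```shell
# Check calico-node health and the BGP peering between the two hosts.
if command -v calicoctl >/dev/null 2>&1; then
  sudo calicoctl status   # the peer host should show BGP state "Established"
else
  echo "calicoctl not found - run this on one of the CoreOS VMs"
fi
```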
Creating the networks
For guidance on choosing an IPAM driver, see the "Select the IPAM driver" section of https://github.com/projectcalico/calico-containers/blob/master/docs/calico-with-docker/docker-network-plugin/README.md
Create an address pool:

calicoctl pool add 192.168.0.0/16
Create three networks:

docker network create --driver calico --ipam-driver calico net1
docker network create --driver calico --ipam-driver calico net2
docker network create --driver calico --ipam-driver calico net3
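Because Docker was started with --cluster-store pointing at etcd, networks created on one host should also be visible on the other. A quick sanity check (guarded so it only does real work on the VMs):

```shell
# Calico networks are stored in etcd via the cluster store, so
# `docker network ls` on the *other* VM should also list net1-net3.
if command -v docker >/dev/null 2>&1; then
  docker network ls
else
  echo "docker not found - run this on one of the CoreOS VMs"
fi
```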
On 172.12.7.51, create three containers on net1 and net2:

docker run --net net1 --name workload-A -tid busybox
docker run --net net2 --name workload-B -tid busybox
docker run --net net1 --name workload-C -tid busybox
On 172.12.7.52, create two containers on net3 and net1:

docker run --net net3 --name workload-D -tid busybox
docker run --net net1 --name workload-E -tid busybox
In theory, workload-A, workload-C and workload-E (all on net1) can reach one another, while no other pair can communicate.
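This expectation follows from Calico's default libnetwork behaviour: each Docker network maps to a Calico profile, and only endpoints sharing a profile may talk. The rule can be sketched as a tiny shell helper (purely illustrative; `reach` is a hypothetical name, not a real command):

```shell
# reach NET_A NET_B -> "reachable" if the two containers share a
# network (and therefore a Calico profile), "isolated" otherwise.
reach() {
  if [ "$1" = "$2" ]; then echo reachable; else echo isolated; fi
}

reach net1 net1   # workload-A <-> workload-C / workload-E: prints "reachable"
reach net1 net2   # workload-A <-> workload-B: prints "isolated"
reach net1 net3   # workload-A <-> workload-D: prints "isolated"
```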
4. Verification
Test the network between workload-A and workload-C; on 172.12.7.51 run:

core@calico1 ~ $ docker exec workload-A ping -c 4 workload-C.net1
PING workload-C.net1 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: seq=0 ttl=63 time=0.069 ms
64 bytes from 192.168.0.2: seq=1 ttl=63 time=0.090 ms
64 bytes from 192.168.0.2: seq=2 ttl=63 time=0.077 ms
64 bytes from 192.168.0.2: seq=3 ttl=63 time=0.062 ms

--- workload-C.net1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.062/0.074/0.090 ms
core@calico1 ~ $ docker exec workload-A ping -c 4 workload-E.net1
PING workload-E.net1 (192.168.0.65): 56 data bytes
64 bytes from 192.168.0.65: seq=0 ttl=62 time=0.894 ms
64 bytes from 192.168.0.65: seq=1 ttl=62 time=0.757 ms
64 bytes from 192.168.0.65: seq=2 ttl=62 time=0.758 ms
64 bytes from 192.168.0.65: seq=3 ttl=62 time=0.764 ms

--- workload-E.net1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.757/0.793/0.894 ms
These tests confirm the expectation for workload-A, workload-C and workload-E. If you are curious, you can also test between workload-C and workload-E; the result is the same.
Test the network between workload-A and workload-B:

core@calico1 ~ $ docker exec workload-A ping -c 4 `docker inspect --format "{{ .NetworkSettings.Networks.net2.IPAddress }}" workload-B`
PING 192.168.0.1 (192.168.0.1): 56 data bytes

--- 192.168.0.1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
Test the network between workload-A and workload-D. First get workload-D's IP address (192.168.0.64) by running the following on 172.12.7.52:

core@calico2 ~ $ docker inspect --format "{{ .NetworkSettings.Networks.net3.IPAddress }}" workload-D
192.168.0.64
Then, on 172.12.7.51:

core@calico1 ~ $ docker exec workload-A ping -c 4 192.168.0.64
PING 192.168.0.64 (192.168.0.64): 56 data bytes

--- 192.168.0.64 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
This failure is also as expected: workload-A and workload-D are on different networks.
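To tear the test environment down afterwards, something along these lines should work. This is a sketch: the container and network names match the ones created above, and the guard keeps it inert off the VMs.

```shell
# Remove the test containers, networks and address pool.
# Run on the host that owns each container (workload-D/E live on .52).
if command -v docker >/dev/null 2>&1; then
  docker rm -f workload-A workload-B workload-C   # on 172.12.7.51
  docker network rm net1 net2 net3
  calicoctl pool remove 192.168.0.0/16
else
  echo "docker not found - run this on one of the CoreOS VMs"
fi
```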