Cloud in Action: Migrate OpenStack from Linux Bridge to Open vSwitch
2017-11-18 23:23
薛国锋 xueguofeng2011@gmail.com
Open vSwitch supports most of the features you would find on a physical switch, including advanced features such as RSTP, VXLAN, OpenFlow, and multiple VLANs on a single bridge. Today I am going to migrate my OpenStack lab environment from the Linux Bridge agent to the Open vSwitch agent, which also makes future integration with an SDN controller – OpenDaylight – possible. We will make the configuration adjustments on top of the lab environment from last time: http://8493144.blog.51cto.com/8483144/1977139
We will just build a minimum POC for the purpose of learning about OpenStack and Open vSwitch, not a production installation:
1) The controller node runs all the services – Dashboard, Networking, Compute, Image and Identity – while the compute nodes only run Nova-compute and Neutron-OpenvSwitch-Agent.
2) The management and data networks share eth0 in this environment, which means the management traffic and the VxLAN traffic among VMs are mixed.
3) All tenant traffic goes from the compute nodes to the controller node first through VxLAN tunnels, and then on to the DC GW via its vRouter.
https://docs.openstack.org/newton/networking-guide/deploy-ovs-selfservice.html
https://docs.openstack.org/ocata/networking-guide/deploy-ovs-provider.html#deploy-ovs-provider
The migration steps on each node (controller, compute1 and compute2):

// On all three nodes: remove all instances, vRouters, floating IPs, self-service and provider networks via the dashboard

// On all three nodes: stop neutron-linuxbridge-agent
sudo service neutron-linuxbridge-agent stop

// On all three nodes: remove neutron-linuxbridge-agent and its configuration and data files
sudo apt-get remove neutron-linuxbridge-agent
sudo apt-get purge neutron-linuxbridge-agent

// On all three nodes: install neutron-openvswitch-agent
sudo apt-get update
sudo apt-get install neutron-openvswitch-agent

// On the controller node: create the provider bridge
sudo ovs-vsctl add-br br-provider
sudo ovs-vsctl add-port br-provider eth1
// If you want to launch VMs on the provider network directly from the compute nodes, br-provider is needed there as well; otherwise these two commands can be skipped on compute1 and compute2.

// On the controller node:
sudo gedit /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:ipcc2014@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

// On compute1 and compute2:
sudo gedit /etc/neutron/neutron.conf
[DEFAULT]
#core_plugin = ml2
transport_url = rabbit://openstack:ipcc2014@controller
auth_strategy = keystone

// On the controller node: switch the ML2 mechanism driver from linuxbridge to openvswitch
sudo gedit /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
#mechanism_drivers = linuxbridge,l2population
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider
[ml2_type_vxlan]
vni_ranges = 1:1000

// On all three nodes (local_ip is 10.0.0.11 on the controller, 10.0.0.31 on compute1 and 10.0.0.32 on compute2; bridge_mappings is commented out on the compute nodes):
sudo gedit /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.11
[agent]
tunnel_types = vxlan
l2_population = True
[securitygroup]
firewall_driver = iptables_hybrid
// bridge_mappings connects br-int to br-provider; without this setting, you cannot launch VMs on the provider network from the compute nodes.

// On the controller node:
sudo gedit /etc/neutron/l3_agent.ini
[DEFAULT]
#interface_driver = linuxbridge
interface_driver = openvswitch
external_network_bridge =

sudo gedit /etc/neutron/dhcp_agent.ini
[DEFAULT]
#interface_driver = linuxbridge
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
force_metadata = True

sudo gedit /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = ipcc2014

// On the controller node: upgrade the database
sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

// Reboot all three nodes
reboot
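After the reboot, it is worth checking that the new agents came up cleanly before recreating any networks. A minimal verification sketch (these commands need the live lab environment and the admin credential file from this lab, so the exact output will vary):

```shell
# Source the admin credentials (file name from this lab)
. admin-openrc

# The openvswitch agents should now be listed as alive on all three nodes,
# alongside the stale linux-bridge entries that still have to be deleted
openstack network agent list

# On each node, the agent should have created the integration and tunnel bridges
sudo ovs-vsctl show    # expect br-int and br-tun (plus br-provider on the controller)
```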
Delete the Linux bridge agents in the database:
neutron agent-delete 8c69e233-75d4-4ded-bcce-81c48193f18a
neutron agent-delete 94e62fbc-f6a8-4dc6-8870-11fb362869f1
neutron agent-delete d0b66ca5-aba8-4e81-9c30-dbe79d6d6f94
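The three UUIDs above are specific to this lab. A more general way to remove whatever Linux bridge agents are left is to filter the agent list by type and delete each match (a sketch only, assuming the `--agent-type` filter available in the openstack client of this era):

```shell
# Delete every remaining Linux bridge agent, whatever its UUID
for id in $(openstack network agent list --agent-type linux-bridge -f value -c ID); do
    neutron agent-delete "$id"
done
```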
Create the provider and self-service networks:
. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat xgf_provider
openstack subnet create --network xgf_provider --allocation-pool start=192.168.100.200,end=192.168.100.220 --dns-nameserver 10.0.1.1 --gateway 192.168.100.111 --subnet-range 192.168.100.0/24 xgf_sub_provider

. demo-openrc
openstack network create xgf_selfservice_1
openstack subnet create --network xgf_selfservice_1 --dns-nameserver 10.0.1.1 --gateway 192.168.101.111 --subnet-range 192.168.101.0/24 xgf_sub_selfservice_1
openstack router create demo_router
neutron router-interface-add demo_router xgf_sub_selfservice_1
neutron router-gateway-set demo_router xgf_provider

. admin-openrc
openstack network create xgf_selfservice_2
openstack subnet create --network xgf_selfservice_2 --dns-nameserver 10.0.1.1 --gateway 192.168.102.111 --subnet-range 192.168.102.0/24 xgf_sub_selfservice_2
openstack router create admin_router
neutron router-interface-add admin_router xgf_sub_selfservice_2
neutron router-gateway-set admin_router xgf_provider
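A quick sanity check after creating the networks (the names are the ones created above; this needs the live environment):

```shell
. admin-openrc
# Both self-service networks, the provider network and the two routers should exist
openstack network list
openstack router list
# The router gateways and interfaces should hold addresses from the three subnets
neutron router-port-list demo_router
neutron router-port-list admin_router
```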
Launch 4 VMs and check OVS:
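The VMs can be launched from the dashboard; from the CLI the step looks roughly like the sketch below. The flavor and image names here are assumptions, not part of this lab - use whatever your Glance store actually holds:

```shell
. demo-openrc
# Launch two VMs on the first self-service network (flavor/image names are hypothetical)
openstack server create --flavor m1.nano --image cirros --network xgf_selfservice_1 xgf_vm1
openstack server create --flavor m1.nano --image cirros --network xgf_selfservice_1 xgf_vm2

# On a compute node: each VM shows up as a qvo... port on br-int,
# and the VxLAN tunnels to the other nodes hang off br-tun
sudo ovs-vsctl show
sudo ovs-ofctl dump-flows br-tun
```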