
...

Before you create VXLANs with MidoNet, make sure you have the following components:

  • A switch (layer 2 gateway) with a Tomahawk, Trident II+, or Trident II chipset running Cumulus Linux
  • OVSDB server (ovsdb-server), included in Cumulus Linux
  • VTEPd (ovs-vtepd), included in Cumulus Linux; it supports VLAN-aware bridges

...

Before you start configuring the MidoNet tunnel zones and VTEP binding, and connecting virtual ports to the VXLAN, you need to complete the bootstrap process on each switch to which you plan to build VXLAN tunnels. This creates the VTEP gateway and initializes the OVS database server. You only need to do the bootstrapping once, before you begin the MidoNet integration.

...

Before you start bootstrapping the integration, you need to enable the openvswitch-vtep package; it is disabled by default in Cumulus Linux.
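A minimal sketch of starting the package's daemons, assuming the service shares the package name (openvswitch-vtep); depending on your Cumulus Linux release, you may first need to mark the package as enabled in its /etc/default file:

Code Block
# Sketch: the service name is assumed to match the package name openvswitch-vtep
cumulus@switch:~$ sudo service openvswitch-vtep start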

...

  • Switch name: The name of the switch that is the VTEP gateway.
  • Tunnel IP address: The datapath IP address of the VTEP.
  • Management IP address: The IP address of the management interface on the switch.
For example:
Code Block
cumulus@switch:~$ sudo vtep-bootstrap sw11 10.111.1.1 10.50.20.21 --no_encryption
Executed: 
 define physical switch
 ().
Executed: 
 define local tunnel IP address on the switch
 ().
Executed: 
 define management IP address on the switch
 ().
Executed: 
 restart a service
 (Killing ovs-vtepd (28170).
Killing ovsdb-server (28146).
Starting ovsdb-server.
Starting ovs-vtepd.).
Note

Because MidoNet does not have a controller, you need to use a dummy IP address (for example, 1.1.1.1) for the controller parameter in the bootstrap script. After the script completes, delete the VTEP manager, as it is not needed and will otherwise fill the logs with inconsequential error messages:

Code Block
cumulus@switch:~$ sudo vtep-ctl del-manager

Manually Bootstrapping

If you do not use the bootstrap script, you must initialize the OVS database instance manually and create the VTEP.

Perform the following commands in order (see the automated bootstrapping example above for values):

...

At this point, the switch is ready to connect to MidoNet. The rest of the configuration is performed in the MidoNet Manager GUI or using the MidoNet API.

...

  1. Click Tunnel Zones in the menu on the left side.
  2. Click Add.
  3. Give the tunnel zone a Name and select "VTEP" for the Type.
  4. Click Save.

Adding Hosts to a Tunnel Zone

After you create the tunnel zone, click its name to view the hosts table.

...

The tunnel zone is a construct used to define the VXLAN source address used for the tunnel. The address of this host is used as the source of the VXLAN encapsulation, and traffic transits into the routing domain from this point. Therefore, the host must have layer 3 reachability to the Cumulus Linux switch tunnel IP.
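For example, a quick reachability check from a compute host, using the tunnel IP from the bootstrap example above (10.111.1.1); substitute your own addresses:

Code Block
# The host must be able to reach the switch tunnel IP over layer 3
root@os-compute1:~# ping -c 3 10.111.1.1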

...

The new VTEP appears in the list below. MidoNet then initiates a connection between the OpenStack Controller and the Cumulus Linux switch. If the OVS client successfully connects to the OVSDB server, the VTEP entry displays the switch name and VXLAN tunnel IP address, which you specified during the bootstrapping process.
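If the entry does not populate, a quick way to confirm that the controller can reach the OVSDB server is to test the management IP and port from the bootstrap example (a sketch, assuming netcat is installed):

Code Block
# Checks TCP reachability to the OVSDB server port set up during bootstrapping
root@os-controller:~# nc -zv 10.50.20.21 6632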

...

  1. Click Add.
  2. In the Port Name list, select the port on the Cumulus Linux switch that you are using to connect to the VXLAN segment.
  3. Specify the VLAN ID (enter 0 for untagged).
  4. In the Bridge list, select the MidoNet bridge that the instances (VMs) are using in OpenStack.
  5. Click Save.

You should see the port binding displayed in the binding table under the VTEP.  

After the port is bound, a VXLAN bridge interface is automatically configured, and it includes the VTEP interface and the bound port. The OpenStack instances (VMs) are now able to ping the hosts connected to the bound port on the Cumulus Linux switch. The Troubleshooting section below demonstrates how to verify the VXLAN data and control planes.

...

root@os-controller:~# midonet-cli
midonet>

From the MidoNet CLI, the commands explained in this section perform the same operations depicted in the previous section with the MidoNet Manager GUI.

  1. Create a tunnel zone with a name and type vtep:

    Code Block
    midonet> tunnel-zone create name sw12 type vtep
    tzone1
  2. The tunnel zone is a construct used to define the VXLAN source address used for the tunnel. The address of this host is used as the source of the VXLAN encapsulation, and traffic transits into the routing domain from this point. Therefore, the host must have layer 3 reachability to the Cumulus Linux switch tunnel IP.
    • First, obtain the list of available hosts connected to the Neutron network and the MidoNet bridge.
    • Next, list all the interfaces.
    • Finally, add a host entry to the tunnel zone ID returned in the previous step and specify which interface address to use.

      Code Block
      midonet> list host
      host host0 name os-compute1 alive true
      host host1 name os-network alive true 
      midonet> host host0 list interface
      iface midonet host_id host0 status 0 addresses [] mac 02:4b:38:92:dd:ce mtu 1500 type Virtual endpoint DATAPATH
      iface lo host_id host0 status 3 addresses [u'127.0.0.1', u'169.254.169.254', u'0:0:0:0:0:0:0:1'] mac 00:00:00:00:00:00 mtu 65536 type Virtual endpoint LOCALHOST
      iface virbr0 host_id host0 status 1 addresses [u'192.168.122.1'] mac 22:6e:63:90:1f:69 mtu 1500 type Virtual endpoint UNKNOWN
      iface tap7cfcf84c-26 host_id host0 status 3 addresses [u'fe80:0:0:0:e822:94ff:fee2:d41b'] mac ea:22:94:e2:d4:1b mtu 65000 type Virtual endpoint DATAPATH
      iface eth1 host_id host0 status 3 addresses [u'10.111.0.182', u'fe80:0:0:0:5054:ff:fe85:acd6'] mac 52:54:00:85:ac:d6 mtu 1500 type Physical endpoint PHYSICAL
      iface tapfd4abcea-df host_id host0 status 3 addresses [u'fe80:0:0:0:14b3:45ff:fe94:5b07'] mac 16:b3:45:94:5b:07 mtu 65000 type Virtual endpoint DATAPATH
      iface eth0 host_id host0 status 3 addresses [u'10.50.21.182', u'fe80:0:0:0:5054:ff:feef:c5dc'] mac 52:54:00:ef:c5:dc mtu 1500 type Physical endpoint PHYSICAL
      midonet> tunnel-zone tzone0 add member host host0 address 10.111.0.182
      zone tzone0 host host0 address 10.111.0.182
    Repeat this procedure for each OpenStack host connected to the Neutron network and the MidoNet bridge.
  3. Create a VTEP and assign it to the tunnel zone ID returned in the previous step. The management IP address (the destination address for the VXLAN, or remote VTEP) and the port must be the same ones you configured in the vtep-bootstrap script or the manual bootstrapping:

    Code Block
    midonet> vtep add management-ip 10.50.20.22 management-port 6632 tunnel-zone tzone0
    name sw12 description sw12 management-ip 10.50.20.22 management-port 6632 tunnel-zone tzone0 connection-state CONNECTED

    In this step, MidoNet initiates a connection between the OpenStack Controller and the Cumulus Linux switch. If the OVS client successfully connects to the OVSDB server, the returned values show the name and description matching the switch-name parameter specified in the bootstrap process.

    Note

    Verify that the connection-state is CONNECTED. If ERROR is returned, you must debug. Typically this only fails if the management-ip or management-port settings are incorrect.

  4. The VTEP binding uses the information provided to MidoNet by the OVSDB server, which supplies a list of ports that the hardware VTEP can use for layer 2 attachment. This binding virtually connects the physical interface to the overlay switch and joins it to the Neutron bridged network.

    First, get the UUID of the Neutron network behind the MidoNet bridge:

    Code Block
    midonet> list bridge
    bridge bridge0 name internal state up
    bridge bridge1 name internal2 state up
    midonet> show bridge bridge1 id
    6c9826da-6655-4fe3-a826-4dcba6477d2d

    Next, create the VTEP binding, using the UUID and the switch port being bound to the VTEP on the remote end. If there is no VLAN ID, set vlan to 0:

    Code Block
    midonet> vtep name sw12 binding add network-id 6c9826da-6655-4fe3-a826-4dcba6477d2d physical-port swp11s0 vlan 0
    management-ip 10.50.20.22 physical-port swp11s0 vlan 0 network-id 6c9826da-6655-4fe3-a826-4dcba6477d2d

At this point, the VTEP is connected and the layer 2 overlay is operational. From the OpenStack instance (VM), you can ping a physical server connected to the port bound to the hardware switch VTEP.
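As a quick sanity check from inside the instance, ping the physical host attached to the bound switch port (a sketch using the example addresses that appear in the tcpdump output in the Troubleshooting section; substitute your own):

Code Block
# Run inside the OpenStack instance (VM)
$ ping -c 3 10.111.102.2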

...

In this solution, the control plane consists of the connection between the OpenStack Controller and each Cumulus Linux switch running the ovsdb-server and vtepd daemons.
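To confirm the switch side of the control plane, verify that both daemons are running (a minimal sketch):

Code Block
# Both ovsdb-server and ovs-vtepd must be running for the control session to establish
cumulus@switch:~$ ps ax | grep -E 'ovsdb-server|ovs-vtepd' | grep -v grep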

...

If the connection fails, verify IP reachability from the host to the switch. If that succeeds, it is likely that the bootstrap process did not set up port 6632. Redo the bootstrapping procedures above.
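On the switch, you can also confirm that the OVSDB server is listening on the expected port (a sketch; use ss instead of netstat if you prefer):

Code Block
# The bootstrap process should leave ovsdb-server listening on TCP port 6632
cumulus@switch:~$ sudo netstat -plnt | grep 6632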

...

After creating the VTEP in MidoNet and adding an interface binding, you should see the br-vxln and vxln interfaces on the switch. Verify that the VXLAN bridge and VTEP interface are created and UP:

...

Next, look at the bridging table for the VTEP and the forwarding entries. The bound interface and the VTEP are listed along with the MAC addresses of those interfaces. When the hosts attached to the bound port send data, those MACs are learned and entered into the bridging table, as well as into the OVSDB.

...

If you have verified that the control plane is correct and you still cannot get data between the OpenStack instances and the physical nodes on the switch, there might be something wrong with the data plane. The data plane is the VXLAN-encapsulated path between the switch and one of the OpenStack nodes running the midolman service; this is typically a compute node, but can include the MidoNet gateway nodes. If the OpenStack instances can ping the tenant router address but cannot ping the physical device connected to the switch (or vice versa), then something is wrong in the data plane.

...

First, there must be IP reachability between the encapsulating node and the address you bootstrapped as the tunnel IP on the switch. Verify the OpenStack host can ping the tunnel IP. If this does not work, check the routing design and fix the layer 3 problem first.

...

If the instance (VM) cannot ping the physical server, or the reply is not returning, look at the packets on the OpenStack node. Initiate a ping from the OpenStack instance, then use tcpdump to see the VXLAN data. This example displays a successful tcpdump:

Code Block
root@os-compute1:~# tcpdump -i eth1 -l -nnn -vvv -X -e port 4789
52:54:00:85:ac:d6 > 00:e0:ec:26:50:36, ethertype IPv4 (0x0800), length 148: (tos 0x0, ttl 255, id 7583, offset 0, flags [none], proto UDP (17), length 134)
 10.111.0.182.41568 > 10.111.1.2.4789: [no cksum] VXLAN, flags [I] (0x08), vni 10008
fa:16:3e:14:04:2e > 64:ae:0c:32:f1:41, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 64058, offset 0, flags [DF], proto ICMP (1), length 84)
 10.111.102.104 > 10.111.102.2: ICMP echo request, id 15873, seq 0, length 64
 0x0000: 4500 0086 1d9f 0000 ff11 8732 0a6f 00b6 E..........2.o..
 0x0010: 0a6f 0102 a260 12b5 0072 0000 0800 0000 .o...`...r......
 0x0020: 0027 1800 64ae 0c32 f141 fa16 3e14 042e .'..d..2.A..>...
 0x0030: 0800 4500 0054 fa3a 4000 4001 5f26 0a6f ..E..T.:@.@._&.o
 0x0040: 6668 0a6f 6602 0800 f9de 3e01 0000 4233 fh.of.....>...B3
 0x0050: 7dec 0000 0000 0000 0000 0000 0000 0000 }...............
 0x0060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
 0x0070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
 0x0080: 0000 0000 0000 ......
00:e0:ec:26:50:36 > 52:54:00:85:ac:d6, ethertype IPv4 (0x0800), length 148: (tos 0x0, ttl 62, id 2689, offset 0, flags [none], proto UDP (17), length 134)
 10.111.1.2.63385 > 10.111.0.182.4789: [no cksum] VXLAN, flags [I] (0x08), vni 10008
64:ae:0c:32:f1:41 > fa:16:3e:14:04:2e, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 255, id 64058, offset 0, flags [DF], proto ICMP (1), length 84)
 10.111.102.2 > 10.111.102.104: ICMP echo reply, id 15873, seq 0, length 64
 0x0000: 4500 0086 0a81 0000 3e11 5b51 0a6f 0102 E.......>.[Q.o..
 0x0010: 0a6f 00b6 f799 12b5 0072 0000 0800 0000 .o.......r......
 0x0020: 0027 1800 fa16 3e14 042e 64ae 0c32 f141 .'....>...d..2.A
 0x0030: 0800 4500 0054 fa3a 4000 ff01 a025 0a6f ..E..T.:@....%.o
 0x0040: 6602 0a6f 6668 0000 01df 3e01 0000 4233 f..ofh....>...B3
 0x0050: 7dec 0000 0000 0000 0000 0000 0000 0000 }...............
 0x0060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
 0x0070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
 0x0080: 0000 0000 0000 ......

...

These commands show you the information installed in the OVSDB. This database is structured using the physical switch ID, with one or more logical switch IDs associated with it. The bootstrap process creates the physical switch, and MidoNet creates the logical switch after the control session is established.
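To inspect that structure directly on the switch, vtep-ctl can list the physical and logical switches (a sketch; this may overlap with the commands shown above):

Code Block
# Physical switch entry created by the bootstrap process
cumulus@switch:~$ sudo vtep-ctl list-ps
# Logical switch entry created by MidoNet after the control session is established
cumulus@switch:~$ sudo vtep-ctl list-ls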

...

These commands show the MAC addresses learned from the connected port bound to the logical switch, or the MAC addresses advertised from MidoNet. The unknown-dst entries are installed to satisfy the Ethernet flooding of unknown unicast and are important for learning.
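The per-logical-switch MAC tables can also be queried with vtep-ctl (a sketch; the logical switch name is a placeholder, use the name returned by vtep-ctl list-ls):

Code Block
# MACs learned locally on the bound port
cumulus@switch:~$ sudo vtep-ctl list-local-macs <logical-switch-name>
# MACs advertised by MidoNet for the overlay
cumulus@switch:~$ sudo vtep-ctl list-remote-macs <logical-switch-name>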

...

The ovsdb-client dump command produces a lot of output, but it shows all of the information and tables used in communication between the OVS client and server.
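For reference, one way to produce such a dump on the switch (a sketch, assuming ovsdb-server is reachable over TCP on port 6632 as set up during bootstrapping; hardware_vtep is the standard database name for the VTEP schema):

Code Block
# Dumps every table in the hardware_vtep database
cumulus@switch:~$ sudo ovsdb-client dump tcp:127.0.0.1:6632 hardware_vtep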

...