Saturday, May 28, 2016

OVN OpenStack

Install the ovn-central service, which sets up the OVN northbound and southbound databases with the correct schemas.

dpkg -i openvswitch-common_2.5.90-1_amd64.deb
dpkg -i openvswitch-switch_2.5.90-1_amd64.deb
dpkg -i ovn-common_2.5.90-1_amd64.deb
dpkg -i ovn-central_2.5.90-1_amd64.deb

OVN database files are placed in the /etc/openvswitch directory by default. The packages above also install Open vSwitch (ovs-vswitchd) and its default database, so there will be three OVS database files in total (a quick check follows the list):

  1. conf.db
  2. ovnnb_db.db
  3. ovnsb_db.db
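
A quick way to confirm the files listed above are in place (assuming the default /etc/openvswitch location):

    ls /etc/openvswitch/
    # conf.db, ovnnb_db.db and ovnsb_db.db should be among the files shown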

Once ovn-central is installed, an ovsdb-server instance will be running for each of ovnnb_db and ovnsb_db, and the service also runs ovn-northd. Use the following command to start or stop the service:

     service ovn-central start/stop
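
A simple way to confirm the service actually came up is to look for its processes:

     ps -ef | egrep 'ovsdb-server|ovn-northd'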

The above service should produce four processes like the following (the two identical ovn-northd entries are the monitor parent and its worker child, a side effect of the --monitor flag):

ovsdb-server --detach -vconsole:off --log-file=/var/log/openvswitch/ovsdb-server-nb.log --remote=punix:/var/run/openvswitch/ovnnb_db.sock --remote=ptcp:6641:0.0.0.0 --pidfile=/var/run/openvswitch/ovnnb_db.pid --unixctl=ovnnb_db.ctl /etc/openvswitch/ovnnb_db.db

ovsdb-server --detach -vconsole:off --log-file=/var/log/openvswitch/ovsdb-server-sb.log --remote=punix:/var/run/openvswitch/ovnsb_db.sock --remote=ptcp:6642:0.0.0.0 --pidfile=/var/run/openvswitch/ovnsb_db.pid --unixctl=ovnsb_db.ctl /etc/openvswitch/ovnsb_db.db

ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/openvswitch/ovnnb_db.sock --ovnsb-db=unix:/var/run/openvswitch/ovnsb_db.sock --no-chdir --log-file=/var/log/openvswitch/ovn-northd.log --pidfile=/var/run/openvswitch/ovn-northd.pid --detach --monitor

ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/openvswitch/ovnnb_db.sock --ovnsb-db=unix:/var/run/openvswitch/ovnsb_db.sock --no-chdir --log-file=/var/log/openvswitch/ovn-northd.log --pidfile=/var/run/openvswitch/ovn-northd.pid --detach --monitor


Use the following commands to list the databases served on the different ports:

       1. ovsdb-client list-dbs tcp:127.0.0.1:6641
       2. ovsdb-client list-dbs tcp:127.0.0.1:6642
       3. ovsdb-client list-dbs

Command #1 above shows the OVN_Northbound database, and command #2 shows the OVN_Southbound database. Command #3 connects to the default local ovsdb-server socket and shows the Open_vSwitch database backed by conf.db.

To list each database's schema, try the following command:

    ovsdb-client get-schema tcp:127.0.0.1:6641 | python -m json.tool


Install the ovn-host service on each compute node and configure it to point at the central node.
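
A rough sketch of that step, assuming the same 2.5.90 packages as above; the central node address, the local tunnel IP and the geneve encapsulation type are placeholders/assumptions to adjust for your environment:

    dpkg -i openvswitch-common_2.5.90-1_amd64.deb
    dpkg -i openvswitch-switch_2.5.90-1_amd64.deb
    dpkg -i ovn-common_2.5.90-1_amd64.deb
    dpkg -i ovn-host_2.5.90-1_amd64.deb

    # tell ovn-controller where the southbound database lives (port 6642 on the central node)
    ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:<central-node-ip>:6642"
    ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
    ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=<this-node-tunnel-ip>

    service ovn-host restart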


Useful commands:

ovs-ofctl show br-int
ovs-ofctl -O OpenFlow13 dump-flows br-int 

Friday, May 13, 2016

Dynamic Typing vs. Static Typing & Strongly Typed vs. Weakly Typed

In a dynamically typed language, a variable is simply a value bound to a name; the value has a type -- like "integer" or "string" or "list" -- but the variable itself doesn't. You could have a variable which, right now, holds a number, and later assign a string to it if you need it to change.

In a statically typed language, the variable itself has a type; if you have a variable that's an integer, you won't be able to assign any other type of value to it later. Some statically typed languages require you to write out the types of all your variables, while others will deduce many of them for you automatically. A statically typed language can catch some errors in your program before it even runs, by analyzing the types of your variables and the ways they're being used. A dynamically typed language can't necessarily do this, but since type errors are only a small fraction of all the things that might go wrong in a program, you'll generally be writing unit tests for your code either way; as a result, programmers in dynamic languages rely on their test suites to catch these and all other errors, rather than on a dedicated type-checking compiler.

In a strongly typed language, you are simply not allowed to do anything that's incompatible with the type of data you're working with. For example, you can do 3 + 5 + 7 and get the result 15, because numbers can be added; similarly, you can do 'Hello' + 'And' + 'Goodbye' and get the result "HelloAndGoodbye", because strings support concatenation. But in a strongly typed language you can't do 'Hello' + 5 + 'Goodbye', because there's no defined way to "add" strings and numbers to each other. In a weakly typed language, the compiler or interpreter can perform behind-the-scenes conversions to make this kind of operation work; for example, a weakly typed language might give you the string "Hello5Goodbye" as the result of 'Hello' + 5 + 'Goodbye'. The advantage of a strongly typed language is that you can trust what's going on: if you do something wrong, your program will raise a type error telling you where you went wrong, and you don't have to memorize a lot of arcane type-conversion rules or debug a situation where your variables have been silently converted without your knowledge.

Python is a strongly and dynamically typed programming language.
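
A tiny Python example shows both properties at once -- the same name can be rebound to a value of a different type (dynamic typing), but mixing incompatible types raises an error instead of being silently converted (strong typing):

    x = 5              # x currently holds an integer
    x = "hello"        # fine: the name is simply rebound to a string (dynamic typing)

    print(3 + 5 + 7)                      # 15 -- numbers add
    print("Hello" + "And" + "Goodbye")    # HelloAndGoodbye -- strings concatenate

    try:
        print("Hello" + 5 + "Goodbye")    # strong typing: no implicit str/int conversion
    except TypeError as err:
        print("type error:", err)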

Some useful networking-related commands

Send an arp command:
 
   arping 10.30.0.132
 
View arp cache:
 
   arp -n 
 
Display routing table using any of the following commands:
 
  ip route show
  route -n
  netstat -rn 
 
 
Show a route for a destination IP address:
 
  ip route get 10.0.2.14
 
Show traffic on a nic for a particular protocol:
 
  tcpdump -i eth0 -v arp    (arping)
  tcpdump -i eth0 -v icmp   (ping)
  tcpdump -i eth0 -v 


Capture packets with IP address:
 
   tcpdump -n -i eth0

More about type drivers

The following content is copied from this link:

https://ask.openstack.org/en/question/51388/whats-the-difference-between-flat-gre-and-vlan-neutron-network-types/

I copy it here for my own convenience. Credit goes to the original author.


A local network is a network that can only be realized on a single host. This is only used in proof-of-concept or development environments, because just about any other OpenStack environment will have multiple compute hosts and/or a separate network host.

A flat network is a network that does not provide any segmentation options. A traditional L2 ethernet network is a "flat" network. Any servers attached to this network are able to see the same broadcast traffic and can contact each other without requiring a router. Flat networks are often used to attach Nova servers to an existing L2 network (this is called a "provider network").

A vlan network is one that uses VLANs for segmentation. When you create a new network in Neutron, it will be assigned a VLAN ID from the range you have configured in your Neutron configuration. Using vlan networks requires that any switches in your environment are configured to trunk the corresponding VLANs.

gre and vxlan networks are very similar. They are both "overlay" networks that work by encapsulating network traffic. Like vlan networks, each network you create receives a unique tunnel id. Unlike vlan networks, an overlay network does not require that you synchronize your OpenStack configuration with your L2 switch configuration. <added by tong>VLAN requires the switches to be aware of the VLAN IDs and configuration, whereas GRE tunnels do not care about the physical switches at all, since the traffic runs over the IP layer, point to point.</added by tong>
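
To make that concrete, this is roughly how the network type shows up when creating provider networks with the neutron CLI; the names, the physical network label physnet1 and the segmentation IDs are made-up examples, and the types must be enabled in your plugin/ML2 configuration:

    neutron net-create flat-net --provider:network_type flat --provider:physical_network physnet1
    neutron net-create vlan-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 101
    neutron net-create tunnel-net --provider:network_type gre --provider:segmentation_id 1001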


Some additional comments to add to what larsks answered - In a flat network, everyone shares the same network segment. For example, say 2 tenants are sharing the cluster, and this segment is 10.4.128.0/20 - VM1 from tenant 1 might get assigned 10.4.128.3, VM1 from tenant 2 might get 10.4.128.4, and so on. This means that tenant 1 can see the traffic from tenant 2. Not a good thing in most cases.

In a VLAN network, tenants are separated because each is assigned to a VLAN. With the Open vSwitch plugin (or ML2 with the OVS driver), OVS allocates an internal VLAN in the virtual switches for each tenant. If you mix in a hardware plugin like the Cisco Nexus plugin, it will be asked to allocate VLANs as well. These VLANs provide separation amongst the tenants (as VLANs are designed to do). It also means that tenants can specify the same subnet and overlap in that subnet range - VM1 from tenant 1 can get assigned IP 10.4.128.3 and VM1 from tenant 2 can also get 10.4.128.3, without conflict. This makes life easier for administrators, because they don't have to worry about tenants that want the same subnet and address allocations; the VLANs keep them separate.

GRE segmentation (and VXLAN) also provides separation among tenants, and also allows overlapping subnets and IP ranges. It does this by encapsulating tenant traffic in tunnels. Say your tenant has VMs running on compute nodes A, B, and C. Neutron (along with OVS) will build a fully connected mesh of tunnels between all of these machines, and create a tunnel bridge on each node that is used to direct traffic from VMs into and out of these tunnels. If a VM on machine A wants to send packets to a VM on machine B, machine A will encapsulate the IP packets coming out of the VM using a segmentation ID that is generated for the tenant by OpenStack, and the receiving machine (B) will decapsulate the packets and route them to the destination VM using the addressing information in the ethernet frame.

GRE and VXLAN scale better than VLAN, and while VLAN-based networking probably has its applications (you might be integrating with an infrastructure that is VLAN-based to begin with), I have found GRE/VXLAN-based OVS setups to be easier to deploy and debug than VLAN-based setups (one reason is that you can use a dumb switch to connect all the physical hosts). So my feeling is that you want to start there if you have a deployment scenario that involves multiple tenants and you want to allow for overlapping network segments and IP address ranges in your tenants.
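
As a small illustration of the overlap point, the following would succeed when run with each tenant's own credentials (network name hypothetical, CIDR taken from the example above):

    # tenant 1
    neutron net-create private
    neutron subnet-create private 10.4.128.0/20

    # tenant 2 -- the same CIDR is accepted because the networks are isolated by VLAN or tunnel ID
    neutron net-create private
    neutron subnet-create private 10.4.128.0/20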

Wednesday, May 4, 2016

Neutron type driver vs mechanism driver

The following content was copied from this link:

http://aqorn.com/understanding-openstack-neutron-ml2-plugin/

I copy it here for my own convenience. Credit goes to the original author.

ML2 can really be broken down into two camps working together: types and mechanisms. Types typically refer to the type of network being implemented (notice nova-network isn't an option). Mechanisms, on the other hand, generally refer to how that network type should be implemented.
It's important to understand, within the greater context of this plugin, that ML2 separates network types from the vendor-specific mechanisms used to access those networks. Network types and mechanisms are handled as modular (swappable) drivers, and this supplants legacy/monolithic plugins such as openvswitch and linuxbridge.

However, ML2 still supports existing Layer 2 agents via an RPC interface. Some examples include openvswitch, linuxbridge and Hyper-V.

The original post includes a diagram (ml2-arch1) that helps visualize the ML2 plugin structure.

Generally speaking, the ML2 type driver maintains the type-specific state, provides tenant network allocation, validates provider network attributes and models network segments by using provider attributes. Some ML2-supported network types include GRE, VLAN, VxLAN, Local and Flat.

In the other camp, the ML2 mechanism driver handles the CRUD of L2 resources on vendor products such as Cisco Nexus, Arista, LinuxBridge and Open vSwitch.

Multiple mechanism drivers can access the same network simultaneously, with three types of models (a sample ML2 configuration follows the list):
  1. agent-based:  Linuxbridge, OVS
  2. controller-based: OpenDaylight
  3. ToR switch-based: Cisco Nexus, Arista, Mellanox
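
As an illustration only, here is a minimal ml2_conf.ini sketch tying the two camps together (the file is typically /etc/neutron/plugins/ml2/ml2_conf.ini, and the driver choices and ranges below are example values, not recommendations):

    [ml2]
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    [ml2_type_flat]
    flat_networks = physnet1

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:199

    [ml2_type_vxlan]
    vni_ranges = 1000:1999
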
More information can be found here:
https://wiki.openstack.org/wiki/Neutron/ML2

ElasticSearch Commands and Scripts


  • Figure out the health of clusters
    • http://192.168.1.90:9200/_cluster/health?pretty=true
  • Query nodes stats:
    • GET http://192.168.1.90:9200/_nodes/stats/fs,indices?pretty
  • Index shards
    • http://192.168.1.90:9200/leap/_search_shards 
  • List all indexes:
    • http://9.30.217.20:9200/_cat/indices?v
  • List all the indices and their types:
    • http://9.30.217.20:9200/_mapping?pretty=true
  • List documents from an index and type:
    • http://9.30.217.20:9200/<index_name>/<doc_type>/_search
  • Create an index with 3 shards only (a curl equivalent is shown after this list):
      POST http://192.168.1.90:9200/leap
      {
          "settings" : {
              "index" : {
                  "number_of_shards" : 3,
                  "number_of_replicas" : 1
              }
          }
      }
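
The same requests can be issued from the shell with curl; for example, the index-creation call above (same host and index name as used in this environment):

      curl -XPOST 'http://192.168.1.90:9200/leap' -d '
      {
          "settings" : {
              "index" : {
                  "number_of_shards" : 3,
                  "number_of_replicas" : 1
              }
          }
      }'

      # and a simple GET, e.g. cluster health
      curl 'http://192.168.1.90:9200/_cluster/health?pretty=true'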