Monday, June 27, 2016

Connect two OVS bridges

Follow these steps to connect two OVS bridges:

1. Create two devices as a veth pair:

     ip link add dev "int-to-prov" type veth peer name "prov-to-int"

2. Add the int-to-prov port to br-int, set its type to patch, and set its peer, using either of the
    following two methods:


     ovs-vsctl add-port br-int int-to-prov
     ovs-vsctl set interface int-to-prov type=patch
     ovs-vsctl set interface int-to-prov options:peer=prov-to-int

    or

    ovs-vsctl add-port br-int int-to-prov -- set interface int-to-prov type=patch options:peer=prov-to-int


3. Add the other port to the second bridge:

    ovs-vsctl add-port br-provider prov-to-int -- set interface prov-to-int type=patch options:peer=int-to-prov

4. Now bring both ports up:

    ip link set dev prov-to-int up
    ip link set dev int-to-prov up
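
To verify the connection (a quick sketch; the bridge and port names are the ones from the steps above):

     ovs-vsctl show
     ovs-vsctl get interface int-to-prov options:peer    # should print "prov-to-int"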

Saturday, June 25, 2016

OpenFlow Table Basics

Each OpenFlow flow table entry is made up of the following three parts:

Header Field, Action, Counters

Header Field:
     Each flow table entry is made up of six components, which define the matching rules and
     other basic behavior for the corresponding flow.

     Match Fields:  Used to select packets that match the values in the fields
             Ingress Port, Ethernet Source, Ethernet Destination, Ethernet Type, VLAN ID,
             VLAN Priority, IP Source, IP Destination, IP Protocol, IP ToS Bits,
             TCP/UDP Source Ports, TCP/UDP Destination Ports


     Priority: Relative priority of table entries.
     Counters: Updated for matching packets
     Instructions:  Actions to be taken if a match occurs
     Timeouts: Maximum amount of idle time before a flow is expired by the switch
     Cookie: Opaque data value chosen by the controller


Action:
     Each flow entry is associated with zero or more actions that dictate how the device handles
     matching packets. The OpenFlow specification defines actions as either required or optional;
     vendors are not obliged to implement the optional ones.

     Forward:   required actions are ALL, Controller, Local, Table, In-Port;
                optional actions are Normal, Flood
     Drop: required
     Modify Field: optional
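
For instance, with Open vSwitch these map onto ovs-ofctl flows (a hedged sketch; the bridge name br-int and the destination address are placeholders):

     # drop matching packets (the required Drop behavior)
     ovs-ofctl add-flow br-int "priority=100,ip,nw_dst=10.0.0.5,actions=drop"

     # hand matching packets to the switch's normal L2 processing (the optional Normal action)
     ovs-ofctl add-flow br-int "priority=10,actions=normal"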


Counters:
     Counters are maintained per table, per flow, per port, and per queue. There is a set of required
     counters that all implementations must support, plus additional optional counters.
     Here are the various counters:

     Table: Active entries, Packet Lookup, Packet Matches

     Flow: Received Packets, Received Bytes, Duration (seconds), Duration(nanoseconds)

     Port: Received Packets, Transmitted Packets, Received Bytes, Transmitted Bytes,
              Receive Drops, Transmit Drops, Receive Errors, Receive Frame Alignment Errors,
              Receive Overrun Errors, Receive CRC Errors, Collisions

     Queue: Transmit Packets, Transmit Bytes, Transmit Overrun Errors
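
With Open vSwitch, these counters can be inspected directly (a sketch, assuming a bridge named br-int):

     ovs-ofctl dump-flows br-int     # per-flow counters: n_packets, n_bytes, duration
     ovs-ofctl dump-ports br-int     # per-port counters: rx/tx packets, bytes, drops, errors
     ovs-ofctl dump-tables br-int    # per-table counters: active entries, lookups, matches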
   
Examples:

     match=(eth.src[40]), action=(drop;)

A packet whose source MAC has the broadcast/multicast bit set (eth.src[40]) gets dropped.

     match=(vlan.present), action=(drop;)

Anything that still carries a VLAN tag gets dropped.
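
These examples are written in OVN's logical flow syntax; to list such flows from the southbound database (a sketch, assuming a recent ovn-sbctl):

     ovn-sbctl lflow-list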

Friday, June 24, 2016

Neutron metadata agent

The Neutron metadata agent listens at the hard-coded address 169.254.169.254 (more a de facto standard than a requirement, since Amazon also uses it) in the namespace of the tenant network. So if the metadata agent is running correctly, the address is pingable from the namespace, and VMs should also be able to ping that address.
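
A quick check (a sketch; qdhcp-<network-id> is a placeholder, find the real namespace name with "ip netns list"):

      ip netns list
      ip netns exec qdhcp-<network-id> ping -c 1 169.254.169.254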

Types of Linux Users

Linux has four types of accounts:
  • Root or Super user
  • System
  • Normal
  • Network
Use the following commands to check user types:

      #id
      #id nova

Add a user without a home directory (on Ubuntu, useradd by default won't create a home directory):

     #useradd neutron
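
If you do want the home directory created, pass -m, then verify the new account with id (a quick sketch):

     useradd -m neutron
     id neutron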

What happens when an OVN network and its subnet get created

  1. A logical switch will be created; use the command "ovn-nbctl show" to verify.
  2. A number of ports will be added automatically; these ports are for the DHCP agent instances. They normally occupy the first few IPs in the subnet. For example, if the subnet is 172.16.31.0/24, then most likely 172.16.31.2 and 172.16.31.3 will be used for the two DHCP agents if neutron is configured with two DHCP agents per network, while 172.16.31.1 will most likely be reserved for the gateway if the default is used. The neutron config parameter that controls how many DHCP agents a network gets is "dhcp_agents_per_network".
  3. A namespace will be created on all compute nodes; the server that runs ovn-northd will not have the namespace unless that node is also a compute node.
  4. On the compute node where the DHCP agent runs, there should be a new dnsmasq instance which allocates the IPs for the subnet.
  5. There should be one Port_Binding entry on each chassis for the new network; use "ovn-sbctl show" to verify. This is probably due to the DHCP agent port.
  6. Dumping the Port_Binding table with "ovsdb-client dump tcp:127.0.0.1:6642" (where the southbound database is located) should show that each port uses a different tunnel key.
  7. The br-int bridge should have a port/interface for the DHCP server; use the "ovs-vsctl show" command to verify. Here is an example:
         Bridge br-int
            fail_mode: secure
            Port "tap97ef22d4-c7"
                Interface "tap97ef22d4-c7"
                    type: internal

  8. Anything more?
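
As a quick recap of the verification commands mentioned in the list above (run each on the relevant node):

     ovn-nbctl show      # logical switch and its DHCP ports
     ip netns list       # namespaces on the compute nodes
     ovs-vsctl show      # tap interface for the DHCP server on br-int
     ovn-sbctl show      # Port_Binding entries per chassis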

File permissions

To view the files owned by the group "test" and the user "luser", use the find command:
 
1. Find all the groups available on your system:


    cat /etc/group |cut -d: -f1


2. To find the groups that the current user belongs to:


    groups
    luser test adm cdrom sudo dip plugdev lpadmin sambashare


3. To find the groups that the user luser belongs to:


    groups luser 
    luser : test luser adm cdrom sudo dip plugdev lpadmin sambashare


4. Now, to see the files owned by group "test" in a particular path or folder, try:


    find /home -group test

    find /etc -group root
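
Similarly, to see the files owned by a particular user, use -user:

    find /home -user luser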

5. To see all the users in the system:

    compgen -u

6. To see all the groups in the system:

    compgen -g
 

Wednesday, June 22, 2016

Linux kernel related procedures

How to list all installed linux kernels:

    dpkg --list | grep linux-image

How to install linux kernel 3.19

   
    apt-get install linux-image-generic-lts-vivid linux-headers-generic-lts-vivid
 
How to remove older kernels
 
    dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)
 
    then use the output from the above command:
 
    sudo dpkg --purge <<linux-image-4.2.0-15-generic>> 
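
To pull out just the package names for the purge command, one convenience (assuming the standard dpkg -l layout where the package name is the second field):

    dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r) | awk '{print $2}'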

Monday, June 20, 2016

What happens when a VM gets created using OVN

When a new VM gets created successfully, the following happens: 
  1. A port entry gets created; the port should have a MAC and an IP address. Use the "ovn-nbctl show" command to verify.
  2. A Port_Binding entry gets created; use "ovn-sbctl show" and "ovsdb-client dump" against the southbound database to verify.
  3. A tap port gets created; use the "ovs-vsctl show" command to verify on the compute node where the VM is hosted. The tap port should be part of the br-int bridge.
  4. From the namespace, one should be able to ping the IP address of the new VM. Regardless of which compute node the ping originates from, it should succeed (see the sketch after this list).
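
A minimal sketch of that namespace ping (the namespace name and the VM IP are placeholders):

      ip netns exec qdhcp-<network-id> ping -c 1 <vm-ip>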

When a network gets created:
  1. A logical switch will be created with a name like "neutron-<neutron network id>".
  2. In that switch, the first few ports such as .2 and .3 will be used for DHCP server IPs. If there are two DHCP servers per network, then .2 and .3 will be used; IP address .1 is reserved for the gateway, which is used when the network gets hooked up to another network.
  3. Traffic between the provider network bridge and br-int is isolated via the tunnel key (id).


Normally the northbound database is at port 6641; dump the database content using this command:
       ovsdb-client dump tcp:127.0.0.1:6641

Normally the southbound database is at port 6642; dump the database content using this command:
       ovsdb-client dump tcp:127.0.0.1:6642

Thursday, June 16, 2016

Remove comments and empty line from a text file

cat <file_name> | grep -v "^#" | grep -v '^$'

This command removes empty lines and any line that starts with #.
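
The same result with a single grep (an equivalent one-liner):

    grep -Ev '^#|^$' <file_name>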

Wednesday, June 15, 2016

ElasticSearch document routing (or allocation)

When you index a document, it is stored on a single primary shard. How does Elasticsearch know which shard a document belongs to? When we create a new document, how does it know whether it should store that document on shard 1 or shard 2?

The process can’t be random, since we may need to retrieve the document in the future. In fact, it is determined by a simple formula:
 
    shard = hash(routing) % number_of_primary_shards

The routing value is an arbitrary string, which defaults to the document’s _id but can also be set to a custom value. This routing string is passed through a hashing function to generate a number, which is divided by the number of primary shards in the index to return the remainder. The remainder will always be in the range 0 to number_of_primary_shards - 1, and gives us the number of the shard where a particular document lives.
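
A toy illustration of the arithmetic (the hash value 2303 is made up; real values come from hashing the routing string):

     # with 5 primary shards, a routing value hashing to 2303 lands on shard 3
     echo $(( 2303 % 5 ))    # prints 3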

This is the reason why the number of primary shards should not change after an index has been created: changing it changes the result of the routing formula, so previously indexed documents would no longer be found on the shards where they live.

This is how the destination shard of a document within an index is decided. Separately, there is a mechanism for determining where each primary shard and its replica shards are allocated across the cluster.

Document routing plus primary and replica shard allocation together make up the complete ElasticSearch allocation picture, which in my view is a bit confusing to grasp when first learning ElasticSearch.

Friday, June 10, 2016

ElasticSearch Settings

ElasticSearch (ES) settings live in the package org.elasticsearch.common.settings.

See the following code:

    import java.io.IOException;
    import java.nio.file.FileSystems;
    import java.nio.file.Path;

    import org.elasticsearch.common.settings.Settings;

    // load a settings file (yml, json, or properties) given as the first argument
    Path path = FileSystems.getDefault().getPath(args[0]);
    Settings.Builder builder = Settings.builder();
    try {
        builder = builder.loadFromPath(path);
    } catch (IOException e) {
        e.printStackTrace();
    }

    Settings newsettings = builder.build();
    System.out.println(newsettings.get("path.data"));

The entire node settings can be built using code similar to the above: get a settings builder, then pass in the configuration file. According to the setting loaders defined in the package org.elasticsearch.common.settings.loader, the configuration file can be in any of three formats: yml, json, or simple name-value pairs. The format is determined entirely by the file extension, so you can have files like myconf.json, myconf.yml, or myconf.properties. Use loadFromPath to load the file; that method returns a builder object. Then call the builder's build method to get a Settings object. From that point on, you can call any of its methods to get the actual configuration values.

Settings is really just a SortedMap of strings, defined like this SortedMap<String, String>.

ClusterSettings:

There is also an abstract class named AbstractScopedSettings. This class represents settings per index and per cluster. The scope of the class indicates whether the settings are for an index or a node; the value of the scope can be Filtered, Dynamic, Deprecated, NodeScope, or IndexScope. AbstractScopedSettings is basically a collection of Setting instances. It is extended by the final class ClusterSettings, which defines the built-in cluster settings by creating a Collections.unmodifiableSet of many Setting instances. Here are a few examples:

client.transport.nodes_sampler_interval
client.transport.ping_timeout
cluster.routing.allocation.balance.index
cluster.routing.allocation.balance.shard
cluster.routing.allocation.balance.threshold

Notes: The cluster setting configurations are also defined in the same file. There are no separate configuration files for cluster settings.

Tuesday, June 7, 2016

How to import elasticsearch project into Eclipse

ElasticSearch uses Gradle for its build, so importing the entire project into Eclipse is not obvious. Follow these steps:

1. Clone the project by doing this:
        git clone https://github.com/elastic/elasticsearch
2. Change to the root directory, then run the following command to generate eclipse .project and .classpath files:

        export GRADLE_HOME=/Users/tongli/gradle-2.13
        export PATH=$PATH:$GRADLE_HOME/bin
        gradle clean eclipseProject eclipseClasspath

 3. From Eclipse, File->Import->General->Existing Projects into Workspace, click Next, specify the root location, select "Search for nested projects", and click Finish.


Use the following command to deal with ElasticSearch build:

gradle tasks --all
 
lists all tasks, and the dependencies for each task.

gradle clean build
removes the previous build output and runs a full build.

Or reference the following page:

https://docs.gradle.org/current/userguide/eclipse_plugin.html

Friday, June 3, 2016

Deal with Disk using parted

Remove partition table:

      dd if=/dev/zero of=/dev/<<sdx>> bs=512 count=1

Replace the <<sdx>> with a real disk, such as sdb, sdc, etc.

-----------------------------------------------------------------------------------------------------------
Create partition table: Using sdb as an example

    parted /dev/sdb
    mklabel gpt

which creates a GPT partition table.

-----------------------------------------------------------------------------------------------------------
Create a partition from the beginning sector with an exact size in GB:

    parted /dev/sdb
    unit GB
    mkpart logical ext4 0% 40

These commands will create a 40 GB partition from the beginning sector, flagged for the ext4
file system (the file system itself is created with mkfs, below).

-----------------------------------------------------------------------------------------------------------
Create an ext4 file system on a partition:

     mkfs.ext4 /dev/sdb1

Give partitions names (these are commands run inside parted):

     name 1 space1
     name 2 space2
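
To verify the resulting layout (assuming the same disk):

     parted /dev/sdb print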