Tuesday, December 13, 2016

cron automation

Using cron to set up automated log file removal or other periodic tasks in a Linux environment is very useful. Here are the steps to automate that.

1. Run the following command to export the current crontab:
 
crontab -u john -l > john-cron-backup.txt

2. Edit the exported john-cron-backup.txt file to your needs.

3. Run the following command to restore the crontab from the edited file so the changes take effect:
    
crontab -u john john-cron-backup.txt
 
Here is an example backup file: 

MAILTO=""
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# m h  dom mon dow   command
0 1 * * 1 rm -r -f /var/log/**/*.gz
0 1 * * 1 rm -r -f /var/log/**/*.log.1
0 1 * * 1 rm -r -f /var/log/*.gz
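One caveat with the entries above: the `**` recursive glob is only expanded recursively by bash with the globstar option; under SHELL=/bin/sh it behaves like a plain `*`. A more portable sketch of the same cleanup uses find, demonstrated here on a throwaway directory instead of /var/log:

```shell
# find recurses on its own, so no recursive globbing is needed.
# /tmp/logdemo stands in for /var/log in this sketch.
mkdir -p /tmp/logdemo/nested
touch /tmp/logdemo/old.gz /tmp/logdemo/nested/old.log.1 /tmp/logdemo/current.log
find /tmp/logdemo -type f \( -name '*.gz' -o -name '*.log.1' \) -delete
ls -R /tmp/logdemo
```

For the real cron entries, the find command line would simply replace the rm commands, with /var/log as the starting path.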
 
    
 

Tuesday, December 6, 2016

How to backup and restore alive Ubuntu server

Backing up and restoring a live Ubuntu system is useful for developers who maintain and install systems all the time. The following procedure should help when your system uses an LVM file system.

Back up the live Ubuntu system by following the steps below:

Use the df -h command to figure out which logical volume the root file system is on. Also look in the /dev directory for the logical volumes. In the example below, the logical volume that holds the entire system is /dev/vg00/vg00-lv01.

Only do this once when the system is clean:
1. Create a logical volume from a volume group (vg01) which will store the backup:
           lvcreate -L 40G -n space vg01
2. Create the file system on the new logical volume:
           mkfs -t ext4 /dev/vg01/space
3. Create snapshot of the root logical volume:
           lvcreate --size 6G -s -n cleansystem /dev/vg00/vg00-lv01
4. Create a directory to mount the logical volume:
           mkdir /space
           mount /dev/vg01/space /space
           mkdir -p /space/snap /space/backup
5. Mount snapshot and save everything
          mount /dev/vg00/cleansystem /space/snap
          cd /space/snap
          tar -pczvf /space/backup/cleansystem.tar.gz *
          umount /space/snap
          lvremove /dev/vg00/cleansystem

After these steps, a tar.gz file named cleansystem.tar.gz should be produced in the /space/backup directory. This is the file that should be kept for the restore later.

Restore the Ubuntu system by following the steps below:

The following steps recover the system from the saved tar.gz in the /space/backup directory, assuming that the logical volume which contains the backup tar.gz file has been mounted on /space.

lvcreate --size 20G -s -n resetpoint /dev/vg00/vg00-lv01

mount /dev/vg00/resetpoint /space/snap

cd /space/snap

rm -r -f *

tar -xvf /space/backup/cleansystem.tar.gz

umount /space/snap

lvconvert --merge /dev/vg00/resetpoint

Monday, November 28, 2016

crontab gotchas

When you use crontab -e to add new tasks, chances are you will find that your scripts or commands do not seem to run. The reason is most likely that the path was not set up right: cron jobs do not inherit your login environment variables, so commands which normally reside in /bin or /sbin may not be found. The easy solution is to add the lines below at the top of the crontab file when you edit it with the following command:

    crontab -e

MAILTO=""
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin

The empty MAILTO makes sure that cron won't try to send out an email with the command output.
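A quick way to reproduce the problem outside of cron is to run a command with a stripped-down environment; env -i approximates cron's empty environment (this is a sketch, not cron itself):

```shell
# env -i starts from an empty environment, roughly what cron provides;
# supplying the PATH line from the crontab header makes command lookup work.
env -i PATH=/sbin:/bin:/usr/sbin:/usr/bin /bin/sh -c 'command -v df'
```

If a script works here but not in your crontab, check for other environment variables the script expects.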

Wednesday, November 16, 2016

Useful networking commands

  
1. Add a default gateway:
 
route add default gw {IP-ADDRESS} {INTERFACE-NAME}
 
for example:
  route add default gw 9.30.217.1 br-ex
 
 
2. Find the default gateway:
 
route -n
 
The destination with 0.0.0.0 should indicate the default gateway and its interface.
 
3. How to assign ip, broadcast and netmask to an interface:
 
ifconfig eth0 172.16.25.125 netmask 255.255.255.224 broadcast 172.16.25.63
 
4. Assign ip, broadcast and netmask using ip command:
 
ip address add {IP_CIDR} brd + dev {INTERFACE-NAME} 
 
for example: 
 
    ip addr add 192.168.1.50/24 brd + dev eth0 
 
5. Add a default gateway using IP command:
 
ip route add default via {GATEWAY_IP_ADDRESS} dev {INTERFACE-NAME}

for example:

   ip route add default via 9.30.217.1 dev br-ex
 

Thursday, November 10, 2016

Excellent OVN related information

Dustin Spinhirne has written these excellent OVN blogs; the link below is kept here to be referenced often.

http://blog.spinhirne.com/p/blog-series.html#introToOVN

Few network related commands

To see how many bridges exist in your system:

      ovs-vsctl list-br

To verify connectivity among hosts:

     netstat -antp | grep destination-IP
   
     For example, to check connectivity to the host at IP 10.0.2.25, use this command:
  
  netstat -antp | grep 10.0.2.25
 
  Here are the results:
 
tcp        0      0 10.0.2.26:60536         10.0.2.25:5672          ESTABLISHED 6977/python
tcp        0      0 10.0.2.26:60526         10.0.2.25:5672          ESTABLISHED 31103/python
tcp        0      0 10.0.2.26:34444         10.0.2.25:6642          ESTABLISHED 6717/ovn-controller
tcp        0      0 10.0.2.26:60542         10.0.2.25:5672          ESTABLISHED 6977/python
tcp        0      0 10.0.2.26:60538         10.0.2.25:5672          ESTABLISHED 6977/python
tcp        0      0 10.0.2.26:39282         10.0.2.25:3306          ESTABLISHED 31103/python
tcp        0      0 10.0.2.26:60528         10.0.2.25:5672          ESTABLISHED 31103/python
 
5672 is the default port for RabbitMQ
6642 is the port for the OVN southbound DB
3306 is the default port for MySQL
 
 

Monday, October 3, 2016

OpenStack Neutron metadata agent conversation

On Oct. 3rd, I had a conversation regarding the dependency between the l3 agent and the metadata agent. I installed the neutron l3 agent and, to my surprise, found that the metadata agent was also installed. Here is what happened on the neutron IRC channel:

tongli: hi, I recently installed openstack neutron l3 agent on a compute node, to my surprise, the apt-get install neutron-l3-agent also installed neutron-metadata-agent.
13:36 tongli: which is not something that I expected.
13:36 tongli: can someone tell me if that is a bug or that is intended?
13:36 tongli: this is mitaka release.
kevinbenton: tongli: yeah, that's expected. The L3 agent will handle proxying metadata requests to Nova
13:42 kevinbenton: tongli: (by passing them to the metadata agent)
13:43 tongli: @kevinbenton, thanks for your answer. So it won't make much sense to have l3 agent on a node without metadata agent?
13:44 kevinbenton: tongli: no, unless you don't want to use metadata and set enable_metadata_proxy to false in the l3 agent
13:45 kevinbenton: tongli: you could also force the DHCP agent to always do metadata for you with the force_metadata option if you don't want the l3 agent to do it.

Wednesday, September 28, 2016

OpenStack Neutron Network Notes

1. When a network and its subnet get created with DHCP enabled (Linuxbridge used as the example):

    A Linux namespace will be created. The namespace name should start with qdhcp-<network-id>, and the namespace should contain a tap device whose name starts with ns-<tap device name>. That device will bear an IP address of x.x.x.2 (by default); it is where the DHCP server will run. The other end of the tap device will be in the default namespace, with a name that simply starts with tap<tap device name>.

    A Linux bridge will be created. The bridge will be named brq<first 11 characters of the network id>. The bridge will have the tap<tap device name> device for the DHCP server and the vxlan device which connects this bridge with the bridges on other compute nodes when VMs get created on the network.

2. When a router gets created without an external gateway or a connection to a tenant network, nothing happens other than a record being written to the neutron database. No actual network construct gets created.

3. When a router gets external gateway set:

    A Linux namespace will be created; the namespace name will be qrouter-<router-id>.
    A tap device will be created in that namespace. The tap device name starts with qg-<first 11 characters of the port id>; this is the port that bears the floating IP address (so here we consume one floating IP); qg means quantum gateway. The other end of this tap device will be in the default namespace, named tap<first 11 characters of the port id>. This device bears no IP address since it is plugged into the bridge which takes all the traffic to the public (provider) network.

4. When a router gets hooked up with a tenant network (add interface from a tenant sub network to router):

     A pair of tap devices will be created. One end will be placed in the qrouter namespace created in step 3. The name of that tap device will be qr-<first 11 characters of the port id>; qr means quantum router. This device will normally take the first IP of the subnet, which is usually the .1 address. The other end of the pair is named tap<first 11 characters of the port id>; this tap device lives in the default namespace and is placed in the bridge which represents the network created in step 1.

Use the following command to show tap device pairs:

ip -d link show

Tuesday, September 27, 2016

Change open file limits on Ubuntu

Two things have to be done to increase the number of open files on an Ubuntu system:

1. Change /etc/security/limits.conf file, add the following two lines in the file:
 
   * soft nofile 4096
   * hard nofile 4096


2. Change /etc/pam.d/common-session* file, add the following line in the file:

      session required        pam_limits.so

3. (Optional) If you will be accessing the node via secure shell (ssh), you should also edit /etc/ssh/sshd_config and uncomment the following line:

    #UseLogin no

        and set its value to yes as shown here:
 
    UseLogin yes

 
 
To see the open file limits of a process, do the following:
 
   cat /proc/__process_id__/limits 
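For example, the current shell can inspect its own limits ($$ expands to its own process id):

```shell
# The "Max open files" row shows soft and hard limits as two numeric columns.
grep 'Max open files' /proc/$$/limits
# The shell's own view of the soft limit:
ulimit -n
```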

Show Linux system hard disks and file systems

The following command will show system-wide block devices and file systems:
 
    lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
 
 

Thursday, September 22, 2016

Certificates

For self-signed certificates, the public half of the certificate can be obtained by using a browser (such as Firefox) to export the certificate. The exported certificate should be in PEM form. You can then use the following command to convert it to a .crt file.

   openssl x509 -outform der -in your-cert.pem -out your-cert.crt
 
 
To allow system-wide acceptance of the certificate, do the following:

   1. Copy the PEM file to /etc/ssl/certs

   2. Run the following command to compute the hash used as the file name:
      
         openssl x509 -noout -hash -in _PEM_FILE_

   3. The above command will produce a hash; then create a link in /etc/ssl/certs:

         ln -s _PEM_FILE_ <theHash>.0
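A runnable sketch of steps 2 and 3 using a throwaway self-signed certificate; the file names and the /tmp location are examples only (the real link would live in /etc/ssl/certs):

```shell
# Generate a throwaway self-signed certificate, compute its subject hash,
# and create the <hash>.0 symlink next to it.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.example" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 2>/dev/null
hash=$(openssl x509 -noout -hash -in /tmp/demo-cert.pem)
ln -sf /tmp/demo-cert.pem "/tmp/${hash}.0"
echo "$hash"
```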
 

Thursday, August 18, 2016

Enable OpenStack sending log to a remote rsyslog server


On OpenStack side:

Security rules: open the following ports. These rules are required when all your nodes run in an OpenStack cloud.

   9200 (ElasticSearch)
   9300 (ElasticSearch transport between nodes)
   9400 (for syslog) to collect via logstash
   5601 (Kibana)
   22, 80 and icmp

Configure rsyslog for remote logging:
Create a file in /etc/rsyslog.d, named something like 60-openstack.conf.
Put the following content in that file:

local6.*               @10.0.50.9:9400

NOTE: the IP address must be the remote rsyslog server's IP. When working with ElasticSearch, that IP should be the IP address of the logstash server. The port 9400 should match the UDP port set in the /etc/logstash/conf.d/logstash.conf file. Using a port lower than 1024 will require special permission. After making these changes, restart the service like this:

service rsyslog restart

Configure the OpenStack components to use the new log facility:

Change component files such as nova.conf and neutron.conf to use syslog like the following:

[DEFAULT]
debug = False
use_syslog = True
syslog_log_facility = LOG_LOCAL6

You can use LOG_LOCAL0 to LOG_LOCAL7 as long as the facility points to the remote logging server.

After making these changes, restart the components



This procedure uses ElasticSearch/logstash as the rsyslog server. Even when no particular filter is set up, you will still be able to use Kibana to chart log data. Follow these steps to produce a pie chart:

1. Use the logstash-* index
2. Click on Visualize button at the top of the kibana screen
3. Click on Pie Chart
4. Select from new search
5. Select split slices
6. Select terms from the aggregation  drop down box
7. Select syslog_program.raw from the field drop down box, leave others alone
8. Click on the run button at the options bar, a chart should be displayed

Wednesday, August 17, 2016

How to install the docker client

apt-get update
apt-get -y install docker.io
ln -sf /usr/bin/docker.io /usr/local/bin/docker 

Tuesday, August 9, 2016

OpenStack weird errors

This is the post I will use to record some of the strange errors I have encountered.
   1. When image files get moved to a different location, glance will fail to provide the image for a VM, and the instance somehow will get multiple fixed IP addresses. This is a very strange problem. Going into the database, correcting the image_location field values, and restarting the glance services resolves this error.
   2. There is also a problem when instance_path is not writable by the libvirt-qemu user: for some reason, each VM will get two private IPs. Make sure that the libvirt-qemu user can write to the instance_path which is defined in the nova.conf file.
     

Monday, August 8, 2016

Setup Ansible to use openstack cloud module


Follow these steps to install ansible and shade to use openstack cloud modules.
 
sudo apt-add-repository ppa:ansible/ansible -y
sudo apt-get update && sudo apt-get install ansible -y 
sudo apt-get install python-dev python-pip libssl-dev libffi-dev -y
sudo pip install six shade --upgrade

Friday, July 22, 2016

TUN TAP VETH Devices

The following content is copied from the original post below, referenced here purely for my own convenience. All credit for creating the post goes to the original author. Thanks.

http://www.naturalborncoder.com/virtualization/2014/10/17/understanding-tun-tap-interfaces/

TUN Interfaces

TUN devices work at the IP level or layer three level of the network stack and are usually point-to-point connections. A typical use for a TUN device is establishing VPN connections since it gives the VPN software a chance to encrypt the data before it gets put on the wire. Since a TUN device works at layer three it can only accept IP packets and in some cases only IPv4. If you need to run any other protocol over a TUN device you’re out of luck. Additionally because TUN devices work at layer three they can’t be used in bridges and don’t typically support broadcasting.

List all network devices in the system

Go to directory /sys/class/net. This directory should contain all the physical and virtual network devices.

TAP Interfaces

TAP devices, in contrast, work at the Ethernet level or layer two and therefore behave very much like a real network adaptor. Since they are running at layer two they can transport any layer three protocol and aren’t limited to point-to-point connections. TAP devices can be part of a bridge and are commonly used in virtualization systems to provide virtual network adaptors to multiple guest machines. Since TAP devices work at layer two they will forward broadcast traffic which normally makes them a poor choice for VPN connections as the VPN link is typically much narrower than a LAN network (and usually more expensive).

Managing Virtual Interfaces

It really couldn’t be simpler to create a virtual interface:
ip tuntap add name tap0 mode tap
ip link show
The above command creates a new TAP interface called tap0 and then shows some information about  the device. You will probably notice that after creation the tap0 device reports that it is in the down state. This is by design and it will come up only when something binds it (see here). The output of the show command will look something like this:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
 link/ether 08:00:27:4a:5e:e1 brd ff:ff:ff:ff:ff:ff
3: tap0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 500
 link/ether 36:2b:9d:5c:92:78 brd ff:ff:ff:ff:ff:ff
To remove a TUN/TAP interface just replace “add” in the creation command with “del”. Note that you have to specify the mode when deleting, presumably you can create both a tun and a tap interface with the same name.

Creating Veth Pairs

A pair of connected interfaces, commonly known as a veth pair, can be created to act as virtual wiring. Essentially what you are creating is a virtual equivalent of a patch cable. What goes in one end comes out the other. The command to create a veth pair is a little more complicated than some:
ip link add ep1 type veth peer name ep2
 This will create a pair of linked interfaces called ep1 and ep2 (ep for Ethernet pair, you probably want to choose more descriptive names). When working with OpenStack, especially on a single box install, it’s common to use veth pairs to link together the internal bridges. It is also possible to add IP addresses to the interfaces, for example:

ip addr add 10.0.0.10 dev ep1
ip addr add 10.0.0.11 dev ep2
Now you can use ip address show to check the assignment of IP addresses which will output something like this:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether 08:00:27:4a:5e:e1 brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.141/24 brd 192.168.1.255 scope global eth0
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:fe4a:5ee1/64 scope link
 valid_lft forever preferred_lft forever
4: ep2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
 link/ether fa:d3:ce:c3:da:ad brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.11/32 scope global ep2
 valid_lft forever preferred_lft forever
5: ep1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
 link/ether e6:80:a3:19:2c:10 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.10/32 scope global ep1
 valid_lft forever preferred_lft forever
Using a couple of parameters on the ping command shows us the veth pair working:
ping -I 10.0.0.10 -c1 10.0.0.11
PING 10.0.0.11 (10.0.0.11) from 10.0.0.10 : 56(84) bytes of data.
64 bytes from 10.0.0.11: icmp_seq=1 ttl=64 time=0.036 ms
--- 10.0.0.11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
The -I parameter specifies the interface that should be used for the ping. In this case the 10.0.0.10 interface was chosen, which is paired with 10.0.0.11, and as you can see the ping is there and back in a flash. Attempting to ping anything external fails since the veth pair is essentially just a patch cable (although pinging eth0 works for some reason).

Thursday, July 14, 2016

How to install vagrant and vagrant-managed-server plugin

1. Download latest vagrant deb package and install it:

   https://www.vagrantup.com/downloads.html
   dpkg -i vagrant_1.8.4_x86_64.deb 

2. Install dependencies:

   apt-get -qqy install ruby-dev make

3. Install the vagrant-managed-servers plugin

   vagrant plugin install vagrant-managed-servers

Monday, June 27, 2016

Connect two ovs bridges

Follow these steps to connect two OVS bridges:

1. Create two devices using veth type:

     ip link add dev "int-to-prov" type veth peer name "prov-to-int"

2. Add the int-to-prov port to br-int, then set the type to patch and the peer, using either of the following
    two methods:


     ovs-vsctl add-port br-int int-to-prov
     ovs-vsctl set interface int-to-prov type=patch
     ovs-vsctl set interface int-to-prov options:peer=prov-to-int

    or

    ovs-vsctl add-port br-int int-to-prov -- set interface int-to-prov type=patch options:peer=prov-to-int


3. Add the other port to the second bridge:

    ovs-vsctl add-port br-provider prov-to-int -- set interface prov-to-int type=patch options:peer=int-to-prov

4. Now bring both ports up:

    ip link set dev prov-to-int up
    ip link set dev int-to-prov up

Saturday, June 25, 2016

OpenFlow Table Basics

An OpenFlow flow table entry has the following three parts:

Header Field, Action, Counters

Header Field:
     Each flow table header entry is made up of six components, which define the matching rules and
     other basic rules for the corresponding flow.

     Match Fields:  Used to select packets that match the values in the fields
              Ingress Port, Ethernet Source, Ethernet Destination, Ethernet Type, VLAN ID,
              VLAN Priority, IP Source, IP Destination, IP Protocol, IP ToS Bits,
              TCP/UDP Source Ports, TCP/UDP Destination Ports


     Priority: Relative priority of table entries.
     Counters: Updated for matching packets
     Instructions:  Actions to be taken if a match occurs
     Timeouts: Maximum amount of idle time before a flow is expired by the switch
     Cookie: Opaque data value chosen by the controller


Action:
     Each flow entry is associated with zero or more actions that dictate how the device handles
     matching packets. Actions in OpenFlow specification are defined as required and optional.
     Optional actions are not required to be implemented by vendors as such.

     Forward:   required actions are ALL, Controller, Local, Table, In-Port
                       optional actions are Normal, Flood
     Drop: required
     Modify Field: optional


Counters:
     Counters are maintained per-table, per-flow, per-port and per queue. There are a set of required
     counters that all implementations should support, and there are additional optional counters.
     Here are various counters:

     Table: Active entries, Packet Lookup, Packet Matches

     Flow: Received Packets, Received Bytes, Duration (seconds), Duration(nanoseconds)

     Port: Received Packets, Transmitted Packets, Received Bytes, Transmitted Bytes,
              Receive Drops, Transmit Drops, Receive Errors, Receive Frame Alignment Errors,
              Receive Overrun Errors, Receive CRC Errors, Collisions

     Queue: Transmit Packets, Transmit Bytes, Transmit Overrun Errors
   
Examples:

     match=(eth.src[40]), action=(drop;)

A packet with a broadcast/multicast source MAC gets dropped.

     match=(vlan.present), action=(drop;)

Anything with a logical VLAN tag gets dropped.

Friday, June 24, 2016

Neutron metadata agent

The Neutron metadata agent runs at the hard-coded address 169.254.169.254 (more like a standard, since Amazon also uses it) in the namespace of the tenant network. So if the metadata agent runs correctly, the address is pingable from the namespace, and VMs should also be able to ping that address.

Types of Linux Users

Linux has four types of accounts:
  • Root or Super user
  • System
  • Normal
  • Network
Use the following commands to check user types:

      #id
      #id nova

Add a user without a home directory (on Ubuntu, adding a user this way by default won't create a home directory):

     #useradd neutron

What happens when an OVN network and its subnet get created

  1. A logical switch will be created; use the command "ovn-nbctl show" to verify.
  2. A number of ports will be automatically added; these ports are for the DHCP agent instances. They normally occupy the first few IPs in the subnet: for example, if the subnet is 172.16.31.0/24, then most likely 172.16.31.2 and 172.16.31.3 will be used for the two DHCP agents if neutron is configured to have two DHCP agents per network, while 172.16.31.1 will most likely be reserved for the gateway if the default is used. The neutron config parameter that controls how many DHCP agents serve a network is "dhcp_agents_per_network".
  3. A namespace will be created on all compute nodes; the server that runs ovn-northd will not have the namespace unless that node is also a compute node.
  4. On the compute node where the DHCP agent runs, there should be a new dnsmasq instance which allocates the IPs for the subnet.
  5. There should be one Port_Binding entry on each chassis for the new network. Use "ovn-sbctl show" to verify; this is probably due to the DHCP agent port.
  6. Dumping the Port_Binding table with "ovsdb-client dump tcp:127.0.0.1:6642" (where the southbound database is located) should show that each port uses a different tunnel key.
  7. br-int bridge should have a port/interface for the dhcp server. Use "ovs-vsctl show" command to verify, here is an example:
  8.     Bridge br-int
            fail_mode: secure
            Port "tap97ef22d4-c7"
                Interface "tap97ef22d4-c7"
                    type: internal
    
  9. Anything more? 

File permissions

To view the files owned by the group "test" and the user "luser", use the find command.
 
1. Find all the groups available on your system:


    cat /etc/group |cut -d: -f1


2. For finding the groups that the current user belongs to:


    groups
    luser test adm cdrom sudo dip plugdev lpadmin sambashare


3. For finding groups user luser belongs to:


    groups luser 
    luser : test luser adm cdrom sudo dip plugdev lpadmin sambashare


4. Now, to see the files owned by group "test" in a particular path or folder, try:


    find /home -group test

    find /etc -group root

5. To see all the users in the system:

    compgen -u

6. To see all the groups in the system:

    compgen -g
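A self-contained illustration of step 4, using a temporary directory and the caller's own primary group (names here are illustrative):

```shell
# Files created in a non-setgid directory belong to the creator's primary
# group, which id -gn prints; find -group then matches them.
mkdir -p /tmp/groupdemo
touch /tmp/groupdemo/owned.txt
find /tmp/groupdemo -type f -group "$(id -gn)"
```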
 

Wednesday, June 22, 2016

Linux kernel related procedures

How to list all installed linux kernels:

    dpkg --list | grep linux-image

How to install linux kernel 3.19

   
    apt-get install linux-image-generic-lts-vivid linux-headers-generic-lts-vivid
 
How to remove older kernels
 
    dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)
 
    then use the output from the above command:
 
    sudo dpkg --purge <<linux-image-4.2.0-15-generic>> 

Monday, June 20, 2016

What happens when a vm gets created by using OVN

When a new VM gets created successfully, the following happens: 
  1. A port entry gets created; the port should have a MAC and an IP address. Use the "ovn-nbctl show" command to verify.
  2. A Port_Binding entry gets created; use "ovn-sbctl show" and "ovsdb-client dump" against the southbound database to verify.
  3. A tap port gets created; use the "ovs-vsctl show" command to verify on the compute node where the VM is hosted. The tap port should be part of the br-int bridge.
  4. From the namespace, one should be able to ping the IP address of the new VM. Regardless of which compute node the ping happens on, the ping should be successful.

When a network gets created:
  1. A logical switch will be created with a name like this "neutron-<neutron network id>" 
  2. In that switch, the first few ports such as .2 and .3 will be used for the DHCP server IPs. If there are two DHCP servers per network, then .2 and .3 will be used; the .1 address will be reserved for the gateway, which is used when the network gets hooked up with another network.
  3. Traffic between the provider network bridge and br-int is isolated via tunnel keys (ids).


Normally the northbound database is at port 6641; dump its content using this command:
       ovsdb-client dump tcp:127.0.0.1:6641

Normally the southbound database is at port 6642; dump its content using this command:
       ovsdb-client dump tcp:127.0.0.1:6642

Thursday, June 16, 2016

Remove comments and empty line from a text file

cat <file_name> | grep -v "^#" | grep -v '^$'

This command will remove empty lines and any line that starts with #.
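The same result can be had with a single grep and no cat, shown here on a throwaway file:

```shell
# -E enables alternation; -v inverts the match, dropping comment and
# empty lines in one pass.
printf '# a comment\n\nreal content\n' > /tmp/striptest.txt
grep -Ev '^#|^$' /tmp/striptest.txt
# prints: real content
```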

Wednesday, June 15, 2016

ElasticSearch document routing (or allocation)

When you index a document, it is stored on a single primary shard. How does Elasticsearch know which shard a document belongs to? When we create a new document, how does it know whether it should store that document on shard 1 or shard 2?

The process can’t be random, since we may need to retrieve the document in the future. In fact, it is determined by a simple formula:
 
    shard = hash(routing) % number_of_primary_shards

The routing value is an arbitrary string, which defaults to the document’s _id but can also be set to a custom value. This routing string is passed through a hashing function to generate a number, which is divided by the number of primary shards in the index to return the remainder. The remainder will always be in the range 0 to number_of_primary_shards - 1, and gives us the number of the shard where a particular document lives.
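The modulo step can be illustrated in shell. Note this is an illustration only: Elasticsearch actually hashes the routing value with murmur3; cksum merely stands in here to produce a number:

```shell
# Hash the routing value (the document _id by default) and take the
# remainder modulo the number of primary shards; the result is always
# in the range 0 .. number_of_primary_shards - 1.
routing="my-document-id"
n_primary_shards=5
hash=$(printf '%s' "$routing" | cksum | cut -d ' ' -f 1)
echo $(( hash % n_primary_shards ))
```

The same routing value always lands on the same shard, which is exactly why the shard count cannot change later.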

This is the reason why the number of primary shards should not change after an index has been created, since changing the number of primary shards will cause all sorts of problems.

This is how it is decided where a document goes within an index. There is also a separate mechanism for determining where a primary shard should be allocated and where the replica shards should be allocated across the cluster.

Document routing plus primary and replica shard allocation make up the complete ElasticSearch allocation story, which in my view is a bit confusing to grasp when first learning ElasticSearch.

Friday, June 10, 2016

ElasticSearch Settings

ElasticSearch (ES) settings start with package org.elasticsearch.common.settings.

See the following code:

    // Imports needed by this snippet:
    // import java.io.IOException;
    // import java.nio.file.FileSystems;
    // import java.nio.file.Path;
    // import org.elasticsearch.common.settings.Settings;
    Settings t_settings = Settings.EMPTY;
    Path path = FileSystems.getDefault().getPath(args[0]);
    Settings.Builder builder = t_settings.builder();
    try {
        builder = builder.loadFromPath(path);
    } catch (IOException e) {
        e.printStackTrace();
    } 

    Settings newsettings = builder.build();
    System.out.println(newsettings.get("path.data"));

The entire node settings can be built simply using code similar to the above: get an empty settings object, then pass in the configuration file. According to the settings loaders defined in package org.elasticsearch.common.settings.loader, the configuration file can be in any of three formats: yml, json, or simple name/value pairs. The type of the file depends entirely on the file extension, so you can have files like myconf.json, myconf.yml, or myconf.properties. Use loadFromPath to load the file; that method returns a builder object. Then call the builder's build method to get a Settings object. From that point on, you can call any of its methods to get the actual configuration values.

Settings is really just a SortedMap of strings, defined like this SortedMap<String, String>.

ClusterSettings:

There is also an abstract class named AbstractScopedSettings. This class represents the settings per index and per cluster. The scope of the class indicates whether the settings are for an index or a node. The value of the scope can be Filtered, Dynamic, Deprecated, NodeScope or IndexScope. The AbstractScopedSettings class is basically a collection of Setting instances. AbstractScopedSettings is extended by the final class ClusterSettings, which defines the built-in cluster settings by creating a Collections.unmodifiableSet with many Setting instances. Here are a few examples:

client.transport.nodes_sampler_interval
client.transport.ping_timeout
cluster.routing.allocation.balance.index
cluster.routing.allocation.balance.shard
cluster.routing.allocation.balance.threshold

Note: the cluster setting configurations are also defined in the same file. There are no separate configuration files for cluster settings.

Tuesday, June 7, 2016

How to import elasticsearch project into Eclipse

ElasticSearch uses Gradle for its build, so importing the entire project into Eclipse is not very obvious. To do that, follow these steps:

1. Clone the project by doing this:
        git clone https://github.com/elastic/elasticsearch
2. Change to the root directory, then run the following command to generate eclipse .project and .classpath files:

        export GRADLE_HOME=/Users/tongli/gradle-2.13
        export PATH=$PATH:$GRADLE_HOME/bin
        gradle clean eclipseProject eclipseClasspath

3. From Eclipse, File->Import->General->Existing Projects into Workspace, click Next, specify the root location, select "Search for nested projects", and click Finish.


Use the following commands to work with the Elasticsearch build:

gradle tasks --all
 
lists all tasks, and the dependencies for each task.

gradle clean build
cleans and rebuilds the entire project.

Or reference the following page:

https://docs.gradle.org/current/userguide/eclipse_plugin.html

Friday, June 3, 2016

Deal with Disk using parted

Remove partition table:

      dd if=/dev/zero of=/dev/<<sdx>> bs=512 count=1

Replace <<sdx>> with a real disk, such as sdb, sdc, etc.

-----------------------------------------------------------------------------------------------------------
Create partition table: Using sdb as an example

    parted /dev/sdb
    mklabel gpt

which creates a GPT partition table.

-----------------------------------------------------------------------------------------------------------
Create a partition from the beginning sector with an exact size in GB:

    parted /dev/sdb
    unit GB
    mkpart logical ext4 0% 40

These commands create a 40 GB partition starting at the beginning of the disk, with ext4 as the file
system type hint (the file system itself is created with mkfs below).

-----------------------------------------------------------------------------------------------------------
Create an ext4 file system on a partition:

     mkfs.ext4 /dev/sdb1

Give partitions names (inside the parted prompt):

     name 1 space1
     name 2 space2

Saturday, May 28, 2016

OVN OpenStack

Install the ovn-central service, which installs the OVN northbound and southbound databases with the correct schemas.

dpkg -i openvswitch-common_2.5.90-1_amd64.deb
dpkg -i openvswitch-switch_2.5.90-1_amd64.deb
dpkg -i ovn-common_2.5.90-1_amd64.deb
dpkg -i ovn-central_2.5.90-1_amd64.deb

OVN database files will be in the /etc/openvswitch directory by default. The above also installs Open vSwitch (ovs-vswitchd) and its default database. There will be a total of three OVS database files:

  1. conf.db
  2. ovnnb_db.db
  3. ovnsb_db.db

Once ovn-central is installed, there will be ovsdb-server processes running for ovnnb_db and ovnsb_db. This service also runs ovn-northd. Use the following command to start/stop the service:

     service ovn-central start/stop

The above service should produce four processes like the following (ovn-northd appears twice because the --monitor option runs a supervising copy):

ovsdb-server --detach -vconsole:off --log-file=/var/log/openvswitch/ovsdb-server-nb.log --remote=punix:/var/run/openvswitch/ovnnb_db.sock --remote=ptcp:6641:0.0.0.0 --pidfile=/var/run/openvswitch/ovnnb_db.pid --unixctl=ovnnb_db.ctl /etc/openvswitch/ovnnb_db.db

ovsdb-server --detach -vconsole:off --log-file=/var/log/openvswitch/ovsdb-server-sb.log --remote=punix:/var/run/openvswitch/ovnsb_db.sock --remote=ptcp:6642:0.0.0.0 --pidfile=/var/run/openvswitch/ovnsb_db.pid --unixctl=ovnsb_db.ctl /etc/openvswitch/ovnsb_db.db

ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/openvswitch/ovnnb_db.sock --ovnsb-db=unix:/var/run/openvswitch/ovnsb_db.sock --no-chdir --log-file=/var/log/openvswitch/ovn-northd.log --pidfile=/var/run/openvswitch/ovn-northd.pid --detach --monitor

ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/openvswitch/ovnnb_db.sock --ovnsb-db=unix:/var/run/openvswitch/ovnsb_db.sock --no-chdir --log-file=/var/log/openvswitch/ovn-northd.log --pidfile=/var/run/openvswitch/ovn-northd.pid --detach --monitor


Use the following commands to list the databases running at the different ports:

       1. ovsdb-client list-dbs tcp:127.0.0.1:6641
       2. ovsdb-client list-dbs tcp:127.0.0.1:6642
       3. ovsdb-client list-dbs

Command #1 above shows the OVN_Northbound database, command #2 above shows the OVN_Southbound database. Command #3 above shows the database running at the default port.

To list each database schema, try the following command

    ovsdb-client get-schema tcp:127.0.0.1:6641 | python -m json.tool


Install the ovn-host service onto each compute node and configure it


Useful commands:

ovs-ofctl show br-int
ovs-ofctl -O OpenFlow13 dump-flows br-int 

Friday, May 13, 2016

Dynamic Typing vs Static Typing & Strongly typed vs. Weakly typed

In a dynamically typed language, a variable is simply a value bound to a name; the value has a type -- like "integer" or "string" or "list" -- but the variable itself doesn't. You could have a variable which, right now, holds a number, and later assign a string to it if you need it to change.

In a statically typed language, the variable itself has a type; if you have a variable that's an integer, you won't be able to assign any other type of value to it later. Some statically typed languages require you to write out the types of all your variables, while others will deduce many of them for you automatically. A statically typed language can catch some errors in your program before it even runs, by analyzing the types of your variables and the ways they're being used. A dynamically typed language can't necessarily do this, but generally you'll be writing unit tests for your code either way (since type errors are a small fraction of all the things that might go wrong in a program); as a result, programmers in dynamic languages rely on their test suites to catch these and all other errors, rather than using a dedicated type-checking compiler.

In a strongly typed language, you are simply not allowed to do anything that's incompatible with the type of data you're working with. For example, in a weakly typed language you can typically do 3 + 5 + 7 and get the result 15, because numbers can be added; similarly, you can often do 'Hello' + 'And' + 'Goodbye' and get the result "HelloAndGoodbye", because strings support concatenation. But in a strongly typed language you can't do 'Hello' + 5 + 'Goodbye', because there's no defined way to "add" strings and numbers to each other. In a weakly typed language, the compiler or interpreter can perform behind-the-scenes conversions to make these types of operations work; for example, a weakly typed language might give you the string "Hello5Goodbye" as a result for 'Hello' + 5 + 'Goodbye'. The advantage to a strongly typed language is that you can trust what's going on: if you do something wrong, your program will generate a type error telling you where you went wrong, and you don't have to memorize a lot of arcane type-conversion rules or try to debug a situation where your variables have been silently changed without your knowledge.

Python is a strongly and dynamically typed programming language.
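A quick Python illustration of both properties:

```python
# Dynamic typing: the name x can be rebound to values of different types.
x = 42          # x holds an integer
x = "hello"     # now x holds a string; no error

# Strong typing: incompatible types are not silently converted.
try:
    result = "Hello" + 5 + "Goodbye"
except TypeError:
    result = "type error"  # Python refuses to "add" str and int

print(result)  # type error
```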

Some nice useful networking related commands

Send an arp command:
 
   arping 10.30.0.132
 
View arp cache:
 
   arp -n 
 
Display routing table using any of the following commands:
 
  ip route show
  route -n
  netstat -rn 
 
 
Show a route for a destination IP address:
 
  ip route get 10.0.2.14
 
Show traffic on a nic for a particular protocol:
 
  tcpdump -i eth0 -v arp    (arping)
  tcpdump -i eth0 -v icmp   (ping)
  tcpdump -i eth0 -v 


Capture packets with IP address:
 
   tcpdump -n -i eth0

More about type driver

The following content is copied from this link:

https://ask.openstack.org/en/question/51388/whats-the-difference-between-flat-gre-and-vlan-neutron-network-types/

I copy it here for my own convenience. Credit goes to the original author.


A local network is a network that can only be realized on a single host. This is only used in proof-of-concept or development environments, because just about any other OpenStack environment will have multiple compute hosts and/or a separate network host.

A flat network is a network that does not provide any segmentation options. A traditional L2 ethernet network is a "flat" network. Any servers attached to this network are able to see the same broadcast traffic and can contact each other without requiring a router. flat networks are often used to attach Nova servers to an existing L2 network (this is called a "provider network").

A vlan network is one that uses VLANs for segmentation. When you create a new network in Neutron, it will be assigned a VLAN ID from the range you have configured in your Neutron configuration. Using vlan networks requires that any switches in your environment are configured to trunk the corresponding VLANs.

gre and vxlan networks are very similar. They are both "overlay" networks that work by encapsulating network traffic. Like vlan networks, each network you create receives a unique tunnel id. Unlike vlan networks, an overlay network does not require that you synchronize your OpenStack configuration with your L2 switch configuration. <added by tong>VLAN actually requires the switches to be smart enough to know the VLAN IDs and configuration, whereas GRE tunnels do not care about your physical network switches, since the traffic travels point to point over the IP layer.</added by tong>


Some additional comments to add to what larsks answered - In a flat network, everyone shares the same network segment. For example, say 2 tenants are sharing the cluster, and this segment is 10.4.128.0/20 - VM1 from tenant 1 might get assigned 10.4.128.3, VM1 from tenant 2 might get 10.4.128.4, and so on. This means that tenant 1 can see the traffic from tenant 2. Not a good thing in most cases.

In a VLAN network, tenants are separated because each is assigned to a VLAN. In the Open vSwitch plugin (or ML2 with the OVS driver), OVS allocates an internal VLAN in the virtual switches for each tenant. If you mix in a hardware plugin like the Cisco Nexus plugin, it will be asked to allocate VLANs as well. These VLANs provide separation amongst the tenants (as VLANs are designed to do). It also means that tenants can specify the same subnet and overlap in that subnet range - VM1 from tenant 1 can get assigned IP 10.4.128.3 and VM1 from tenant 2 can also get 10.4.128.3, without conflict. This makes life easier for administrators because they don't have to worry about tenants that want the same subnet and address allocations, because the VLANs keep them separate.

GRE segmentation (and VXLAN) also provides separation among tenants, and also allows overlapping subnets and IP ranges. It does this by encapsulating tenant traffic in tunnels. Say your tenant has VMs running on compute nodes A, B, and C. Neutron (along with OVS) will build a fully connected mesh of tunnels between all of these machines, and create a tunnel bridge on each of these nodes that is used to direct traffic from VMs into and out of these tunnels. If a VM on machine A wants to send packets to a VM on machine B, machine A will encapsulate the IP packets coming out of the VM using a segmentation ID that is generated for the tenant by OpenStack, and the receiving machine (B) will decapsulate the packets and route them to the destination VM using the addressing information in the ethernet frame.

GRE and VXLAN scale better than VLAN, and while VLAN based networking probably has its applications (you might be integrating with an infrastructure that is VLAN-based to begin with), I have found GRE/VXLAN based OVS setups to be easier to deploy and debug than VLAN based setups (one reason is you can use a dumb switch to connect all the physical hosts), and so my feeling is you want to start there if you have a deployment scenario that involves multiple tenants and you want to allow for overlapping network segments and IP address ranges in your tenants.

Wednesday, May 4, 2016

Neutron type driver vs mechanism driver

The following content was copied and pasted from this link:

http://aqorn.com/understanding-openstack-neutron-ml2-plugin/

I copy it here for my own convenience. Credit goes to the original author.

ML2 can really be broken down into two camps working together: types and mechanisms. Types typically refer to the type of network being implemented (notice nova-network isn’t an option). Mechanisms on the other hand generally refer to how that network type should be implemented.
It’s important to understand, within the greater context of this plugin, that ML2 separates network types from vendor-specific mechanisms to access those networks. Network types and mechanisms are handled as modular drivers (swappable) and supplant legacy/monolithic plugins such as openvswitch and linuxbridge.

ML2 still supports existing Layer 2 agents via an RPC interface, however. Some examples include openvswitch, linuxbridge and Hyper-V.

A diagram in the original post (ml2-arch1) helps visualize the ML2 plugin structure.

Generally speaking, the ML2 type driver maintains the type-specific state, provides tenant network allocation, validates provider network attributes and models network segments by using provider attributes. Some ML2-supported network types include GRE, VLAN, VxLAN, Local and Flat.

In the other camp, the ML2 mechanism driver calls on the CRUD of L2 resources on vendor products such as Cisco Nexus, Arista, LinuxBridge and Open vSwitch.

Multiple mechanism drivers can access the same network simultaneously with three types of models :
  1. agent-based:  Linuxbridge, OVS
  2. controller-based: OpenDaylight
  3. ToR switch-based: Cisco Nexus, Arista, Mellanox
More information can be found here:
https://wiki.openstack.org/wiki/Neutron/ML2

ElasticSearch Command and Scripts


  • Figure out the health of clusters
    • http://192.168.1.90:9200/_cluster/health?pretty=true
  • Query nodes stats:
    • GET http://192.168.1.90:9200/_nodes/stats/fs,indices?pretty
  • Index shards
    • http://192.168.1.90:9200/leap/_search_shards 
  • List all indexes:
    • http://9.30.217.20:9200/_cat/indices?v
  • List all the index and types:
    • http://9.30.217.20:9200/_mapping?pretty=true
  • List documents from an index and type:
    • http://9.30.217.20:9200/<index_name>/<doc_type>/_search
  • Create an index with 3 shards only:
      POST http://192.168.1.90:9200/leap
      {
          "settings" : {
              "index" : {
                  "number_of_shards" : 3,
                  "number_of_replicas" : 1
              }
          }
      }
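The POST body above can also be composed programmatically. The following Python sketch only builds and serializes the request body; it does not contact a cluster, and the helper name create_index_body is illustrative:

```python
import json

def create_index_body(shards, replicas):
    """Build the settings body for creating an index with the given
    number of primary shards and replicas (matching the POST above)."""
    return {
        "settings": {
            "index": {
                "number_of_shards": shards,
                "number_of_replicas": replicas,
            }
        }
    }

body = json.dumps(create_index_body(3, 1), indent=4)
print(body)
```

Sending this body with POST to http://<host>:9200/leap (the example index name above) would create the index with 3 shards and 1 replica.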
      

       

Friday, April 22, 2016

Useful procedures on Ubuntu

Enable root ssh login on Ubuntu 14.04

Edit /etc/ssh/sshd_config to make sure it contains the following:

     PermitRootLogin yes
 
If you want to allow password login, then add the following line as well:
 
     PasswordAuthentication yes
 
For root login to work, a password must be set for the root user.
 
Restart the ssh service
 
     service ssh restart 

Change em1 to eth0

sudo apt-get remove biosdevname
sudo update-initramfs -u 
 
On Ubuntu 15.10 do the following:
edit /etc/default/grub and change
 
   GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0" 
 
run this command:
   grub-mkconfig -o /boot/grub/grub.cfg
 
On Ubuntu 16.04 do the following:
edit /etc/default/grub and change
 
   GRUB_CMDLINE_LINUX="net.ifnames=0"
run this command: 
 
   grub-mkconfig -o /boot/grub/grub.cfg 
 
reboot the system 

Kill processes in Ubuntu by name

kill $(pgrep someprogram)

killall -v someprogram

pkill someprogram

kill `ps -ef | grep someprogram | grep -v grep | awk '{print $2}'`


 

Sunday, April 17, 2016

How to setup Pydio on iOS devices

Pydio is a free app for mobile devices to access FTP accounts; it is available for both iOS and Android. Once you install the app on your mobile device, follow the steps below to configure it. The example below assumes that your account is called "churchdocs" and the FTP address is www.yourorg.com.

    1. Click on the settings icon at the top left.
    2. Click on the + sign at the top right to add a new ftp account:
    3. Input the following information for each field:
        Server address: https://webftp.dreamhost.com
        User name: churchdocs
        User password: <<the password>>
        Label: RCCCDocs <<this should be something meaningful to you>>
        Trust SSL Certificate: On
        Legacy Server (< 4.0): Off
        WebFTP Setup: On
        FTP Address: www.yourorg.com
        FTP Port: 21
        FTP Secure: Off
        FTP Active: Off
    4. Once you input all the above information, click on Save button at the top right.
    5. Now you should have an entry on the startup page. From this point on, you can just click on that entry to see the contents of the account. On mobile devices, you can view documents and also upload them.

Saturday, April 16, 2016

Add service daemon to Ubuntu

  1. Create a script in /etc/init.d, say it is named awesome_script.sh
  2. Make the script executable: chmod +x awesome_script.sh
  3. Make the daemon start automatically when the system starts:
    • sudo update-rc.d awesome_script.sh defaults
  4. Remove the daemon:
    • sudo update-rc.d -f awesome_script.sh remove

Useful Linux Commands


  1. lsusb - list all usb ports
  2. lsmod - list loaded modules
  3. modprobe <module name> - load a module
  4. modprobe -r <module name> - unload a module
  5. delete all the files in the directory or subdirectories that match the pattern
    1. rm -- **/*.o
      or 
      find . -type f -name '*.o' -delete