Tuesday, January 30, 2018

Fabric transaction workflow

The basic flow of fabric transactions:

  1. Users of the blockchain send a request for a transaction to one or a couple of peers. This is called a transaction proposal.
  2. Peers simulate the transaction using the chaincode, the parameters and the state of the ledger, and return a read-write set (the delta of changes resulting from the simulation). This is called an endorsement response and is signed with the peer's individual crypto key. The endorsement responses are sent back to the users by the peers.
  3. The users then send those endorsement responses to the ordering service, where they are validated for authenticity, spending problems and against different policies. This is called transaction validation. If the result is OK, the read-write set is sent to the peers to update the ledger and the database.

So you can send a transaction proposal to peers in orgA and orgB, get their endorsement responses and decide whether you want to send them to the orderer. If you execute the transaction proposal and get the endorsement results but stop there (do not send them to the orderer), no changes will be made to the ledger.

The endorsement results contain identities (the peers and their organizations), so you can create logic to push the results to the orderer or to stop the transaction, and you can identify who the initiator of the request is, etc.
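
As a rough illustration of the difference between stopping after endorsement and submitting to the orderer, here is a minimal sketch with the peer CLI (the channel, chaincode and orderer names are placeholders); to inspect the individual endorsement responses and decide programmatically, you would use one of the SDKs instead:
# query: sends a transaction proposal and returns the simulation result,
# but never contacts the orderer, so the ledger is not changed
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
# invoke: sends the proposal, collects the endorsement response and then
# submits it to the orderer, so the ledger is updated
peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'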

Also, in version 1.1 there is a more elegant solution for this - system chaincode plugins. If I am not wrong, there will be an option to plug in your custom system chaincode (plugins) at runtime (written in Go), and those plugins will have low-level access to the data and the flow, so you will be able to execute custom logic in different parts of the process.

Tuesday, January 23, 2018

Use TLS for Jenkins and Gerrit

* Jenkins


If you installed Jenkins by using apt install, there should be a configuration file at /etc/default/jenkins. You can simply change the last line to the following to enable TLS:
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=-1 --httpsPort=8080"
Notice that httpPort is set to -1 to disable HTTP access, and httpsPort is set to 8080 so that access to port 8080 will have to use HTTPS.
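
To confirm the change took effect, a quick check (assuming the service name jenkins from the apt package; -k lets curl accept an untrusted certificate):
sudo service jenkins restart
curl -kI https://localhost:8080/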

* Gerrit


Assume Gerrit's root install directory is /home/ubuntu/review; if your location is different, you will need to replace that with the correct directory.

1. Create a keystore by using the following command:

cd /home/ubuntu/review
mkdir keys; cd keys
keytool -keystore store -alias jetty -genkey -keyalg RSA
chmod 600 store
2. Edit the /home/ubuntu/review/etc/gerrit.config file to make the following changes:
[gerrit]
    canonicalWebUrl = https://192.168.56.30:9090/
[httpd]
    listenUrl = https://192.168.56.30:9090/
    sslKeyStore = /home/ubuntu/review/keys/store
3. Edit the /home/ubuntu/review/etc/secure.config file to add the keystore password:
[httpd]
    sslKeyPassword = <YOUR_KEYSTORE_PW>
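
To pick up the changes, restart Gerrit and confirm HTTPS answers on the new URL (paths follow the /home/ubuntu/review assumption above; -k accepts the self-signed certificate):
/home/ubuntu/review/bin/gerrit.sh restart
curl -kI https://192.168.56.30:9090/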

* Deal with self-signed certificates when using git.

git config --global http.sslverify false
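
If you prefer not to turn verification off globally, a narrower sketch (the repository name below is only a placeholder):
# only for the current repository (run inside the repository)
git config http.sslVerify false
# or just for a single command, e.g. the initial clone
git -c http.sslVerify=false clone https://192.168.56.30:9090/myrepo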

Rename git remote

git remote rename origin gerrit

Gerrit with LDAP

* Create a file named users.ldif with the following content:

#===============================
dn: cn=Tong Li,dc=fabric,dc=com
objectclass: inetOrgPerson
cn: Tong Li
sn: Li
uid: tongli
userpassword: fabric1234
mail: tong.li@fabric.com
description: sweet guy

#===============================
dn: cn=John Lee,dc=fabric,dc=com
objectclass: inetOrgPerson
cn: John Lee
sn: Lee
uid: johnlee
userpassword: fabric1234
mail: john.lee@fabric.com
description: mad guy

#===============================
dn: cn=Job Builder,dc=fabric,dc=com
objectclass: inetOrgPerson
cn: Job Builder
sn: Builder
uid: jobbuilder
userpassword: fabric1234
mail: job.builder@fabric.com
description: dumb guy
#===============================

Notice that the IP address 192.168.56.30 used in the commands below needs to be replaced with your machine's IP address.

* Start the LDAP container with the predefined users:
docker run --name ldap \
-v /home/ubuntu/users.ldif:/container/service/slapd/assets/config/bootstrap/ldif/50-bootstrap.ldif \
--restart unless-stopped -p 389:389 -p 636:636 -e LDAP_ORGANISATION="Fabric Build" \
-e LDAP_DOMAIN="fabric.com" \
-e LDAP_ADMIN_PASSWORD="fabric1234" -d osixia/openldap:1.1.11 --copy-service

* Create a directory called gerrit_volume and start the Gerrit container:
docker run --name gerrit --restart unless-stopped \
-v /home/ubuntu/gerrit_volume:/var/gerrit/review_site \
-p 9090:8080 -p 29418:29418 -e WEBURL=http://192.168.56.30:9090 \
-e AUTH_TYPE=LDAP -e LDAP_SERVER=ldap://192.168.56.30 \
-e LDAP_ACCOUNTBASE=dc=fabric,dc=com \
-e LDAP_USERNAME=cn=admin,dc=fabric,dc=com \
-e LDAP_PASSWORD=fabric1234 \
-d openfrontier/gerrit


* The following steps are for verification purposes only; since the users will be loaded automatically, there is no need to add them again:
ldapadd -x -D "cn=admin,dc=fabric,dc=com" -f users.ldif -w fabric1234
ldappasswd -x -D "cn=admin,dc=fabric,dc=com" -w fabric1234 -s welcome123 "cn=Tong Li,dc=fabric,dc=com"
docker exec ldap ldapsearch -x -H ldap://localhost -b dc=fabric,dc=com -D "cn=admin,dc=fabric,dc=com" -w fabric1234

Thursday, January 18, 2018

K8s notes

After Docker gets installed, it is better to allow the regular user to run docker commands.
sudo gpasswd -a $USER docker
The above command adds the current user to the docker group, which grants docker permission; log out and log back in for the group change to take effect.
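
Once you are logged back in, a quick way to confirm the permission works (pulls a tiny test image):
docker run --rm hello-world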

This is the command to pull a docker image from gcr.io:
docker pull gcr.io/google-containers/hyperkube:v1.9.1
The above command will pull the hyperkube image version v1.9.1 onto your docker environment.

K8S release web site:
https://github.com/kubernetes/kubernetes/releases

It seems that k8s has recommended the following:

run as native services:
  docker, kubelet, and kube-proxy

run as containers:
  etcd, kube-apiserver, kube-controller-manager, and kube-scheduler

hyperkube is a super image which contains the kubelet, kube-proxy, kube-apiserver, kube-controller-manager and kube-scheduler binaries.

You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, we recommend that you run these as containers, so you need the hyperkube image.

Things needed for setting up k8s

etcd:
gcr.io/google-containers/etcd:2.2.1 or
quay.io/coreos/etcd:v2.2.1 
 
k8s:     
gcr.io/google-containers/hyperkube:v1.9.1
BIN_SITE=https://storage.googleapis.com/kubernetes-release/release
$BIN_SITE/v1.9.1/bin/linux/amd64/kubelet
$BIN_SITE/v1.9.1/bin/linux/amd64/kube-proxy
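
A minimal sketch of pulling those pieces onto a node, assuming curl and docker are already installed (run as root or adjust the install path):
# download the node binaries listed above and make them executable
BIN_SITE=https://storage.googleapis.com/kubernetes-release/release
for b in kubelet kube-proxy; do
    curl -L -o /usr/local/bin/$b $BIN_SITE/v1.9.1/bin/linux/amd64/$b
    chmod +x /usr/local/bin/$b
done
# pull the images that will run as containers
docker pull gcr.io/google-containers/etcd:2.2.1
docker pull gcr.io/google-containers/hyperkube:v1.9.1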

Monday, January 15, 2018

cloud-config script to make the hostname resolvable in an OpenStack cloud

#cloud-config
runcmd:
 # pick up the instance's non-loopback IPv4 address
 - addr=$(ip -4 -o addr|grep -v '127.0.0.1'|awk '{print $4}'|cut -d '/' -f 1)
 # map that address to the hostname so the machine can resolve its own name
 - echo $addr `hostname` >> /etc/hosts
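
For reference, a hedged example of passing this file when booting an instance with the OpenStack CLI; the image, flavor and network names are placeholders:
openstack server create --image ubuntu-16.04 --flavor m1.small --network private --user-data cloud-config.yaml my-vm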

Saturday, January 13, 2018

How to set up a docker swarm-like env using weave net

The procedure to set up a docker swarm-like environment on two nodes using weave net:

Create two VirtualBox Ubuntu VMs with the following settings:
1. Host-only network and NAT network
2. Each machine resolves the two hosts to their NAT IP addresses
3. Two machines can ping each other using host name
Install docker on each machine:
sudo apt-get install docker.io
Download weave net and make it executable on each machine:
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
On the first machine (u1710-1), run the following commands:
weave launch
eval $(weave env)
On the second machine (u1710-2), run the following commands:
weave launch u1710-1
eval $(weave env)
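
At this point you can verify that the two weave routers have found each other using weave's built-in status commands:
weave status
weave status connections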

Use the following procedure to test that containers running on the two machines can talk to each other:

1. Start a container on the first machine by running the following command:
docker run --name box1 -it --rm busybox
2. Start a container on the second machine by running the following command, then ping first container:
docker run --name box2 -it --rm busybox
# ping -c2 box1
If one container can ping the other, then you have successfully set up weave net. The containers that you created should be able to ping the outside world as well.


Note:
1. If you restart your machine, you will need to run eval $(weave env) again before you start any new container; this is to make sure that your DOCKER_HOST environment variable gets set up correctly.
2. The procedure has also been tested on aws and openstack clouds. Using the free tier VMs, you will be able to do the same as you do on VirtualBox. If you do this in your cloud environment, the security group will need to include rules to open TCP 6783, UDP 6783/6784 and TCP 6782 (monitoring). Of course, it will also be helpful to allow ICMP traffic and ssh (22) so that you can easily test your connectivity.

Thursday, January 4, 2018

Fabric Events


The first thing to note is that nothing happens until a transaction fails or is committed into a block.

If you are listening to a peer for transaction notifications, you will get a success or failure event for the transaction when one of the following occurs:
  * If the transaction fails during endorsement (i.e. returns an error code), then there will be no read / write set and no transaction ID, and the error will be propagated directly to applications listening to the endorser.
  * If the transaction fails during commit (a.k.a. late failure, MVCC error) because of key version issues, then the block will contain the transaction but without the failure information, so you must be listening for transaction errors to catch these. The world state changes will not have been applied, and you will get the error when the block is written (I believe) to the channel on the peer to which you are listening.
  * If the transaction is successful and committed to a block, then you will receive a success event when the block has been written into the channel on the peer to which you are listening.
If you are listening for block events, then you will get the event when a block has been written to the channel on the peer to which you are listening.

Chaincode events (a.k.a. contract emitted events) are sent when the block has been written to the channel on the peer to which you are listening. I.e. only when the block is cast in stone do the events come flying out.

If you examine the block format, you will find arrays in there for the transactions, and in each transaction all emitted events are shown (you can emit more than one).
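
To poke at that block format yourself, here is a hedged sketch (it assumes a configured peer CLI and a configtxlator build that has the proto_decode subcommand; the channel and orderer names are placeholders):
# pull the newest block of the channel and decode it to JSON; the decoded
# block shows the transaction array and any chaincode events they emitted
peer channel fetch newest latest.block -c mychannel -o orderer.example.com:7050
configtxlator proto_decode --type common.Block --input latest.block --output latest.json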