Friday, November 9, 2018

Getting all running jobs from jenkins

http://hfrdrestsrv.rtp.raleigh.ibm.com:8080/computer/api/xml?tree=computer[oneOffExecutors[currentExecutable[url]]]&xpath=//url&wrapper=build

Sunday, October 21, 2018

mac os x disk mount

Catalina now has restrictions for various applications. You will have to do the following first.

  1. go to System Preferences->Security&Privacy->Privacy->Full Disk Access
  2. unlock settings (click on lock icon in the left bottom corner and enter password)
  3. click on + button and select iTerm.app application
  4. restart iTerm.app and try the mount process again

For some reason, I can now only use sudo to mount a disk

sudo mount -t hfs /dev/disk2s2 /Volumes/tongbackup

I have to create the mount point (/Volumes/tongbackup) first

I did not have to do that before.

For msdos fat file system, do the following.

First Unmount:

sudo diskutil unmount /dev/disk3s1

Then Mount again:

This mounts the msdos file type:
sudo mount -w -t msdos /dev/disk3s1 /Volumes/$NAME

or

This mounts the exfat file type:

sudo mount -w -t exfat /dev/disk3s1 /Volumes/$NAME
 
Then you can write to it. 

Monday, October 15, 2018

Jenkins config file provider content

Jenkins has a plugin which provides configuration files to various jobs. The config files managed by this plugin are saved in the jenkins_home/org.jenkinsci.plugins.configfiles.GlobalConfigFiles.xml file.

That file is an XML file with content similar to the following:

<org.jenkinsci.plugins.configfiles.GlobalConfigFiles
  plugin="config-file-provider@3.3">
  <configs class="sorted-set">
    <comparator class="org.jenkinsci.plugins.configfiles.GlobalConfigFiles$1" />
    <org.jenkinsci.plugins.configfiles.custom.CustomConfig>
      <id>bxproduction</id>
      <name>bxproduction</name>
      <comment></comment>
      <content>
         some content
      </content>
      <providerid>org.jenkinsci.plugins.configfiles.custom.CustomConfig</providerid>
    </org.jenkinsci.plugins.configfiles.custom.CustomConfig>
  </configs>
</org.jenkinsci.plugins.configfiles.GlobalConfigFiles>

Thursday, October 11, 2018

Useful jenkins urls


API token
c2ed2dc5e57d6f8bb4be378cbc0abf73

Get the job queue
http://9.42.40.122:8080/queue/api/json?pretty=true

Get the crumb:
crumbvar=$(curl -u tongli:c2ed2dc5e57d6f8bb4be378cbc0abf73 http://9.42.40.122:8080/crumbIssuer/api/json)

Show the crumb:
echo $crumbvar

Trigger a build:
curl -vvv -H "Jenkins-Crumb:afa4c106b6a34ff9a8caa1ae9ce3f4d2" -X POST \
http://tongli:c2ed2dc5e57d6f8bb4be378cbc0abf73@9.42.40.122:8080/job/testBuild/buildWithParameters?jobCommand=start


The response header:
< HTTP/1.1 201 Created
< Date: Mon, 09 Apr 2018 14:02:29 GMT
< X-Content-Type-Options: nosniff
< Location: http://9.42.40.122:8080/queue/item/216/
< Content-Length: 0
< Server: Jetty(9.4.z-SNAPSHOT)
<
* Connection #0 to host 9.42.40.122 left intact
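Putting the two calls above together, the crumb can be extracted from the crumbIssuer response and reused in the trigger request. This is a sketch: the response below is a canned example of the crumbIssuer JSON (a real run would capture the curl output instead), and the extraction uses sed rather than jq.

```shell
# Canned crumbIssuer response; a real run would do:
#   resp=$(curl -u user:token http://9.42.40.122:8080/crumbIssuer/api/json)
resp='{"_class":"hudson.security.csrf.DefaultCrumbIssuer","crumb":"afa4c106b6a34ff9a8caa1ae9ce3f4d2","crumbRequestField":"Jenkins-Crumb"}'

# Pull the crumb value out of the JSON
crumb=$(printf '%s' "$resp" | sed -n 's/.*"crumb":"\([^"]*\)".*/\1/p')
echo "$crumb"

# Then trigger the build with the extracted crumb:
# curl -H "Jenkins-Crumb:$crumb" -X POST \
#   http://user:token@9.42.40.122:8080/job/testBuild/buildWithParameters?jobCommand=start
```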


Get all the builds of a job:
http://9.42.40.122:8080/job/testBuild/api/json?pretty=true

Get build-specific info for build testBuild #17
http://9.42.40.122:8080/job/testBuild/17/api/json?pretty=true

Get only desired info for a job build
http://9.42.40.122:8080/job/testBuild/17/api/json?pretty=true&tree=result,id,queueId,url
http://9.42.40.122:8080/job/testBuild/17/api/xml?pretty=true&tree=result,id,queueId,url


Get the build number by using queue id:
This call queries the job named testBuild for the build with queue id 218:
http://9.42.40.122:8080/job/testBuild/api/xml?tree=builds[id,number,result,queueId]&xpath=//build[queueId=218]

http://9.42.40.122:8080/job/testBuild/api/json?tree=builds[id,number,result,queueId]


The pattern is tree=keyname[field1,field2,subkeyname[subfield1]] and so on.
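As a sketch, that pattern can be wrapped in a small shell helper that assembles the API URL from a base URL, a job name, and a tree filter. The helper name jenkins_api_url is hypothetical, not part of jenkins:

```shell
# Hypothetical helper: build the JSON API URL for a job
# from a base URL, job name, and tree filter.
jenkins_api_url() {
  base=$1; job=$2; tree=$3
  printf '%s/job/%s/api/json?pretty=true&tree=%s\n' "$base" "$job" "$tree"
}

jenkins_api_url http://9.42.40.122:8080 testBuild 'builds[id,number,result,queueId]'
```

which prints the same URL as the json query above.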


@litong01 "username":"bmxbcv1@us.ibm.com",
"password":"m072017j",
"org":"bcstarterplantest1@gmail.com",
"space":"dev",
"apiendpoint":"https://api.stage1.ng.bluemix.net",
"serviceInstanceName": "tmp-instance",
"service":"ibm-blockchain-5-staging",
"servicePlan":"ibm-blockchain-plan-v1-starter-staging",

Wednesday, September 5, 2018

Using cello ansible agent to setup fabric network

docker run -it --rm -v ~/test/vars:/opt/agent/vars hyperledger/cello-ansible-agent ansible-playbook -e "mode=apply env=fabricspec deploy_type=k8s" setupfabric.yml

Thursday, July 26, 2018

Script to start prometheus and grafana using docker

#!/bin/bash

# Start prometheus server
docker run -d -p 9090:9090 --restart unless-stopped \
  -v /home/ubuntu/graph/prometheus:/etc/prometheus \
  -v /home/ubuntu/graph/prometheus:/prometheus \
  --user $(id -u) --name prometheus prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml

# Start prometheus node exporter which collects node resource metrics
docker run -d -p 9100:9100 --restart unless-stopped \
  --user $(id -u) --name node-exporter prom/node-exporter

# Start grafana
docker run -d -p 3000:3000 --restart unless-stopped \
  -v /home/ubuntu/graph/grafana:/var/lib/grafana \
  --user $(id -u) --name grafana grafana/grafana


Assume that these directories exist:

    /home/ubuntu/graph/prometheus
    /home/ubuntu/graph/grafana

Using graphite seems to be the better option since it allows posting data to it.
Here is how to start it.

docker run -d   --name graphite   -p 8080:80   -p 2003:2003   sitespeedio/graphite:1.1.3

then access the web UI at 8080.

Run the following command to post some random data.

while true; do echo "fabric.tong.diceroll;nn=$((RANDOM%30));john=100 $((RANDOM % 100)) -1" | nc -q0 192.168.56.32 2003;sleep 3; done

The above command posts a metric named fabric.tong.diceroll with two tags, one named nn and one named john.
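The metric line follows the graphite plaintext protocol: "path;tag1=v1;tag2=v2 value timestamp", where a timestamp of -1 means "now". A deterministic sketch of how the line above is assembled, with hardcoded values in place of $RANDOM:

```shell
metric="fabric.tong.diceroll"
nn=7      # in the loop above this is $((RANDOM % 30))
value=42  # in the loop above this is $((RANDOM % 100))

# graphite plaintext protocol: <path;tag=val;...> <value> <timestamp>
line="$metric;nn=$nn;john=100 $value -1"
echo "$line"
# a real run would pipe this into:  nc -q0 192.168.56.32 2003
```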

VirtualBox host-only network stops working

Once in a while, the VirtualBox host-only network will stop working. The host will not be able to ping the guest system, and even removing and recreating the host-only network won't help. What really helped me was to completely shut down my MacBook and do a cold restart; then the network started working.

The route table of the host will look like this if the routing is working.

Use this command to show the current routing related to the VB host-only network
netstat -nr | grep 192.168

192.168.56         link#17            UC              3        0 vboxnet
192.168.56.1       a:0:27:0:0:0       UHLWIi          1        6     lo0
192.168.56.3       8:0:27:15:45:66    UHLWI           0        5 vboxnet   1171
192.168.56.255     ff:ff:ff:ff:ff:ff  UHLWbI          0        3 vboxnet

When it was not working, netstat -nr showed no entries like the above.

Regardless of whether the host network is working or not, from the host you won't be able to ping the gateway of the host-only network; for example, ping 192.168.56.1 will fail.

If you do not want to cold restart the machine, do the following.

sudo route delete 192.168.56.0/24

Then delete host only network and recreate host only network, then restart VM, it will work.

For some reason, the mac host may not show any route to network 192.168.56.0/24. When that happens, use the following procedure to get it back.

1. Delete all host-only networks from virtualbox
2. Completely shut down the mac host, then reboot the host
3. Recreate the host-only network.

At this point, mac host should show at least the following (using netstat -nr command):

192.168.56         link#17            UC              2        0 vboxnet

After you start a VM which uses host-only network, you should see the following:

192.168.56         link#17            UC              2        0 vboxnet
192.168.56.255     ff:ff:ff:ff:ff:ff  UHLWbI          0        1 vboxnet

If you actually ping the guest VM with IP 192.168.56.32, and the ping is successful, then you should see the following:

192.168.56         link#17            UC              2        0 vboxnet
192.168.56.32      8:0:27:8f:d9:5e    UHLWIi          1        3 vboxnet   1195
192.168.56.255     ff:ff:ff:ff:ff:ff  UHLWbI          0        1 vboxnet

Monday, July 23, 2018

Fabric sdk-go usage

The flow to use go-sdk to create a new channel.

1. Get configuration from a configuration file using config.FromFile(path_to_config_file); this returns a ConfigProvider type.
2. Use the ConfigProvider function and fabsdk.Option to create an sdk.
        sdk := fabsdk.New(configOpt, sdkOpts...)
3. Use sdk.Context to create a client context.
        clientContext := sdk.Context(fabsdk.WithUser(orgAdmin),
            fabsdk.WithOrg(ordererOrgName))
4. Use resmgmt to create a new resource management client.
        resMgmtClient, err := resmgmt.New(clientContext)
5. Use the orgName and the sdk context to create an mspClient.
        mspClient, err := mspclient.New(sdk.Context(),
            mspclient.WithOrg(orgName))
6. Get the admin identity using the orgAdmin string.
        adminIdentity, err := mspClient.GetSigningIdentity(orgAdmin)
7. Build a resmgmt.SaveChannelRequest.
        req := resmgmt.SaveChannelRequest{ChannelID: channelID,
            ChannelConfigPath: integration.GetChannelConfigPath(channelID + ".tx"),
            SigningIdentities: []msp.SigningIdentity{adminIdentity}}
8. Save the channel.
        txID, err := resMgmtClient.SaveChannel(req,
            resmgmt.WithRetry(retry.DefaultResMgmtOpts),
            resmgmt.WithOrdererEndpoint("orderer.example.com"))
9. Make sure things went well.
        require.Nil(t, err, "error should be nil")
        require.NotEmpty(t, txID, "transaction ID should be populated")


Friday, July 20, 2018

health vital info collection

We could create a blockchain system which allows users to send data to a blockchain while de-identifying user-specific information such as name, location, etc. For example:

1. The name or id will be replaced with something that prevents identifying a person. The ID may be replaced with a value which can be used to correlate data sets but cannot be used to identify a user.
2. Age can be mapped to an age group.
3. Location, optional.
4. Sex, optional.
5. Race, optional.

The information posted onto the network, once confirmed, will receive points. Transactions get posted to the network once a day or every 6 hours. Each transaction should contain a number of data points; for example, if vitals are measured every 10 minutes and a post is done every 6 hours, then each transaction should contain 36 data points.

The data can be used to track population in particular areas during a disaster. The data can be correlated with other statistics to predict the population.

Thursday, June 14, 2018

Hyperledger Fabric operation

Fabric operation sequence

1. Create channel
2. Peer join channel
3. Install Chaincode
4. Instantiate Chaincode

Create channel

1. an orderer endpoint and its ca certificate
2. channel name
3. channel transaction file
4. tls flag
5. timeout value
6. peer command, set as environment variable
7. peer information

Example:
peer channel create -o {{ cliorderer.name }}:7050 -c firstchannel
  -f /etc/hyperledger/allorgs/keyfiles/firstchannel.tx
  --tls true --timeout $TIMEOUT
  --cafile msp/tlscacerts/tlsca.{{ cliorderer.org }}-cert.pem

Instantiate chaincode (Per channel)

1. an orderer endpoint and its ca certificate
2. channel name
3. chaincode name and chaincode must be placed at the right location
4. tls flag
5. timeout value
6. version number
7. argument
8. peer information, set as environment variable
9. endorsement policy
peer chaincode instantiate -o {{ cliorderer.name }}:7050 --tls true
  --cafile msp/tlscacerts/tlsca.{{ cliorderer.org }}-cert.pem
  -C firstchannel -n firstchaincode -v 1.0
  -c '{"Args":["init","a", "100", "b","200"]}'
  -P "AND ('{{ orgmembers }}.member')"

Join channel (Per peer)

1. channel block
2. peer information, set as environment variable
peer channel join -b firstchannel.block

Install chaincode (Per peer)

1. chaincode name and version
2. chaincode path in GOPATH src directory, for example: $GOPATH/src/chaincode
3. peer information, set as environment variable
peer chaincode install -n firstchaincode -v 1.0 -p chaincode

Monday, June 11, 2018

k8s issues

1. pods dns search and communication
2. persistent volume capabilities
     so many different drivers, very confusing.
     Policies are very confusing and tied too much to the underlying storage. Recycle, Retain: what are the differences and what exactly do they mean?
3. pod allocation policies
4. Getting started with k8s is not a walk in the park. gke (Google), aks (Azure), and CKS (Cloud Kubernetes Service) (IBM) provide a fairly easy process to start a k8s cluster; however, eks (Amazon) requires a user to create a role to authorize cluster creation, and the process of creating that role is not very obvious. EKS also asks you to add worker nodes as a separate step.
5. Getting the kubeconfig file is where things get dramatically different.
     1. IBM offers a link to download it.
     2. Google and Microsoft require their own client tools to get it: gcloud and az.
     3. Amazon offers instructions to copy/paste to accomplish that.
6. Creating a persistent volume claim is very different across providers.
7. Dashboard access ranges from easy to somewhat weird approaches. For example, IBM provides a simple link to the standard k8s dashboard, while Azure provides a command to start a proxy server which the user then accesses for the dashboard. I think this is a rather strange way of offering dashboard access.
8. Provisioning takes a long time.
9. docker-in-docker issue: endpoint unknown; using a daemon set to create a dind container for the endpoint.

alibaba kubernetes service.

A PV can only be used by one PVC, and the policy has to be set to Recycle for an NFS persistent volume.
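A sketch of what that looks like in manifest form; the NFS server address, path, names, and sizes below are placeholders, not values from these notes:

```yaml
# Hypothetical NFS PV/PVC pair illustrating the one-PVC-per-PV pairing
# and the Recycle reclaim policy mentioned above.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.168.56.106
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```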

Thursday, May 31, 2018

How to get kubeconfig from google k8s cluster

1. Run the command shown when clicking on the connect button. This will create the file ~/.kube/config. However, this file cannot be used directly by kubectl.
2. Run the following command:
   kubectl get pods
   The above command somehow changes the ~/.kube/config file, adding the access-token and expiry-key. After that, the file ~/.kube/config can be used.


The above will be using the access token in the kubeconfig.

You could also use user name and password.

edit the ~/.kube/config file, add the following in the users section of the file

- name: tong
  user:
    password: xxxxxx
    username: xxxxx

The user name and password should come from k8s cluster endpoint show credentials button.
Then in the contexts -> context -> user, use user tong. This will also work.

Thursday, May 17, 2018

What to do if only want to submit some of the changed files

Assume that you have changed file1, file2, file3 and file4, but you only want to commit file1 and file2. Follow this procedure.

1. Use git add to add individual files to be committed.
     git add file1
     git add file2

2. Then commit the changes.
    git commit -s

3. Then stash all the changes.
    git stash

4. Now git checkout the other files which have been changed, to make the workspace clean.
    git checkout -- file3
    git checkout -- file4

5. Now your workspace should be clean. git review the commit.
    git review

6. Simply do git stash pop to get the changes to file3 and file4 back.
    git stash pop
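The whole flow can be rehearsed in a throwaway repository. This is a sketch: file1..file4 are placeholder names, and the git review step is left out (run it while the stash is held, before git stash pop).

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name tester
git config user.email tester@example.com

# baseline commit with four files
for f in file1 file2 file3 file4; do echo base > "$f"; done
git add .
git commit -q -m "base"

# modify all four, but stage and commit only file1 and file2
for f in file1 file2 file3 file4; do echo change > "$f"; done
git add file1 file2
git commit -q -s -m "partial commit"

git stash -q              # stash the remaining file3/file4 changes
git status --porcelain    # prints nothing: workspace is clean
git stash pop -q          # bring the file3/file4 changes back
git status --porcelain    # shows file3 and file4 as modified again
```

(Note that after git stash the workspace is already clean, so the checkouts in step 4 are belt and braces.)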


Use nfs server docker container to set up nfs server

1. Start up a server:

docker run -d --name nfs --privileged --net=host  \
    -v /home/ubuntu/logs:/nfsshare -e SHARED_DIRECTORY=/nfsshare \
    itsthenetwork/nfs-server-alpine:latest

2. Now to mount that share, a client using ubuntu will need to have nfs-common installed.

sudo apt install -y nfs-common

now mount the share to a directory named test

sudo mount -v 192.168.56.106:/ test

now unmount the share.

sudo umount test

These mounted directories are owned by root and can only be changed by root users.

Wednesday, May 9, 2018

Script to run a job every 5 minutes

#!/bin/bash
cd /home/ibmadmin/hl/src/hfrd
if git diff-index --quiet HEAD --; then
    # No changes
    echo 'no changes'
else
    # We have some changes, rebuild the  api server docker image
    git pull
    make api-docker
    docker restart hfrdserver
    # Sync to the mirror git server
    eval $(ssh-agent -s)
    ssh-add /home/ibmadmin/.ssh/fd
    cd /home/ibmadmin/gitserver/hfrd.git
    git fetch
    git push --mirror ssh://git@9.42.91.228:2222/git-server/repos/hfrd.git
    kill -9 $SSH_AGENT_PID
fi

cd /home/ibmadmin/gitserver/synch
at now + 5 minute -f thesync.sh > next.log 2>&1

The at command schedules a job
atq queries the queue
atrm removes a scheduled job

Tuesday, May 1, 2018

Start jenkins using container

The procedure to setup a jenkins server using docker container

1. Create the following directory, assume you are at your home directory:
$ mkdir -p ~/jenkins/jenkins_home

2. Run the following command to start up the git server:
docker run -d -p 9090:8080 -p 50000:50000 \
  --restart unless-stopped \
  -v ~/jenkins/jenkins_home:/var/jenkins_home \
  --name jenkins jenkins/jenkins:lts
Now your server is running at the port 9090. Use the following url to access its UI.
    http://<hostip>:9090

The first time you access it, you will have to find the initial admin password and do the setup. All the jenkins files will be at ~/jenkins/jenkins_home

You can then add that host as a jenkins slave to start running your jobs.

Monday, April 30, 2018

Set up my own git server.

The procedure to setup a git server

1. Create a few directories
$ mkdir -p ~/gitserver/repos ~/gitserver/keys
2. Create an empty project named myrepo
$ cd ~/gitserver/repos
$ git init --bare myrepo.git
3. Add some public keys to the keys directory ~/gitserver/keys by simply copying pub key files into that directory.

4. Run the following command to start up the git server:
docker run -d -p 2222:22 \
  --restart unless-stopped \
  -v /home/ibmadmin/gitserver/repos:/git-server/repos \
  -v /home/ibmadmin/gitserver/keys:/git-server/keys \
  --name gitserver jkarlos/git-server-docker
5. Now access the project at this url:
ssh://git@9.42.91.228:2222/git-server/repos/myrepo.git
Notice that the user for this git server is called "git".

The procedure to setup a mirror repo of another repo. In the following example, mirror.git is the mirror of the original.git repo.

Do the following initially:
git clone --mirror git@example.com/original.git
cd original.git
git push --mirror git@example.com/mirror.git 
Do not delete original.git; keep that directory. When there is an update, make sure that the private key matching the public key used for the git server has been added to ssh-agent, then simply do the following:
cd original.git
git fetch
git push --mirror git@example.com/mirror.git

Start a gerrit server using docker container

docker run -d -p 9090:8080 -p 29418:29418 \
  --restart unless-stopped \
  -v /home/ibmadmin/gerrit/gerrit_vol:/var/gerrit/review_site \
  -e AUTH_TYPE=DEVELOPMENT_BECOME_ANY_ACCOUNT \
  --name gerrit openfrontier/gerrit

see openfrontier/gerrit for more starting parameters.

Tuesday, April 24, 2018

Make virtualbox VM using host dns

Run the following command on the host where VirtualBox runs to make changes to a VM.

VBoxManage modifyvm "NewJenkins" --natdnshostresolver1 on

This has to run when the VM is shut down. The name of the VM in the above case is 'NewJenkins'.

Friday, April 20, 2018

what is the dns doing

with the following deployment file:

apiVersion: v1
kind: Service
metadata:
  name: tongpeer
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port is needed.
    port: 1234
    targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: tongpeer
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: tongpeer
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    name: busybox
 
When this gets deployed onto default namespace, 
 
The service name becomes tongpeer.default.svc.fabricnet, which can be looked up with nslookup:
 
nslookup tongpeer
nslookup tongpeer.default.svc.fabricnet
 
 
The two busybox pods get names like the following:
 
busybox-1.tongpeer.default.svc.fabricnet
busybox-2.tongpeer.default.svc.fabricnet 

nslookup will work with the full name for pods.


If the namespace is called tongnet, then the names will be

tongpeer
tongpeer.tongnet.svc.fabricnet

busybox-1.tongpeer.tongnet.svc.fabricnet
busybox-2.tongpeer.tongnet.svc.fabricnet


Need to figure out why there is svc.fabricnet at the end; presumably that is the cluster DNS domain (which defaults to cluster.local), configured here as fabricnet.

Thursday, April 5, 2018

Run docker inside a docker container but produce containers side by side.

docker run -v /var/run/docker.sock:/var/run/docker.sock ...
 
For example:
 
    docker run -v /var/run/docker.sock:/var/run/docker.sock \
       -ti docker 
 
or start the docker and use it from a different instance
 
    docker run --privileged --name some-docker -d docker:dind 

    docker exec -it some-docker /bin/sh

Sunday, April 1, 2018

Setting up nexus3 with docker and raw repo with ldap and tls enabled

1. Create a directory to hold all nexus3 data and keystore:
mkdir -p /home/ubuntu/nexus3/etc/ssl

2. Create a keystore by running the following commands:
cd /home/ubuntu/nexus3/etc/ssl
keytool -genkeypair -keystore keystore.jks -storepass password \
-alias fabric.com -keyalg RSA -keysize 2048 -validity 5000 \
-keypass password -dname 'CN=*.fabric.com, dc=fabric, dc=com, C=US' \
-ext 'SAN=DNS:fabric.com,IP:192.168.56.30' -ext "BC=ca:true"

   This creates a file named keystore.jks. If you use another name, for some reason nexus3 won't work, which is very strange.

3. Make the user 200:200 to own the directory:
sudo chown -R 200:200 /home/ubuntu/nexus3 
4. Run the following command to start it.
docker run -d -p 8081:8081 -p 8443:8443 --restart always \
--name nexus -v /home/ubuntu/nexus3:/nexus-data \
-v /home/ubuntu/nexus3/etc/ssl:/opt/sonatype/nexus/etc/ssl \
sonatype/nexus3

   This command starts up the nexus3 container, hooking it up with the right keystore location and data location.

5. Configure nexus3 to use ldap by logging in to https://192.168.56.30:8081

      Administration -> Security -> LDAP
      Connection:
      LDAP server address:      ldap://192.168.56.30:389
      Search base:                     dc=fabric,dc=com
      Authentication method:   Simple Authentication
      Username or DN:            cn=admin,dc=fabric,dc=com
      Password:                        fabric123

      User and group:
      Base DN:        empty string
      Are users located in structures below the user base DN?  off
      User filter:      empty string
      User ID attribute:        uid
      Real name attribute:      cn
      Email attribute:       mail
      Password attribute:      userpassword
      Map LDAP groups as roles:    off

     Administration -> Security -> Realms
     add LDAP Realm to the left box

6. Raw repository:

    Create a raw hosted repository,
    Create a role so that people in the role can operate the raw repository
        Administration -> Security -> Roles,  Click on Create role button, Nexus role
        basically add nx-repository-view-raw-<repo-name>-* to the left box.
    Map users to that role.
        Administration -> Security -> Users, Source: LDAP, search for all the users
        basically associate each user to the role created in above step.
    Then use the following command to upload a file to the repository

curl -u user1:fabric123 -k --upload-file users.ldif \
https://192.168.56.30:8081/repository/fabricImages
   The above command uploads the file users.ldif to the raw (hosted) repository named fabricImages
curl -u user1:fabric123 -k -T users.ldif \
https://192.168.56.30:8081/repository/fabricImages/testfolder/users.ldif
   The above command uploads a file and creates the testfolder directory at the same time.

7. Docker repository:

   Create a docker hosted repository, set the https port to 8443, force basic authentication,
   and allow redeploy. Since this uses a self-signed certificate, any docker client
   that wants to access it will need to put the server certificate in

/etc/docker/certs.d/<server>:<port>/ca.crt file
   the server has to be the server name or IP address.
   the port in this case is 8443

   To get the server certificate file, run the following command:
keytool -printcert --sslserver 192.168.56.30:8443 -rfc

Friday, March 30, 2018

Develop a new blockchain app based on hyperledger fabric


The following process is all based on the carauction business network.
1. add a new auctioneer, id A100
2. add two members, id M100 and M200, with some balance
3. add a vehicle for assets, id V100, make it owner M100
4. add a vehicle listing for assets, id L100 to sell vehicle V100

5. then submit transactions. one is offer, one is closebidding.

6. check that all members' balances have changed.


The auctioneer is not involved, so we need to change that to make it more realistic.

Go to Define to change the model file. Add the following:

Change the Auctioneer to have a balance:
o Double balance
Change the listing to have an agent:
 --> Auctioneer auctioneer

So that auctioneer can have some money and each listing can have an agent.

Go to script file and change the function CloseBidding.

Commissions to auctioneer will be 5% of the sell price:
commission = highestOffer.bidPrice * 0.05
listing.auctioneer.balance += commission

seller will lose the commission from the sell price:
seller.balance += highestOffer.bidPrice - commission;

Now make sure these changes are made permanent onto the chain:
const auctioneerRegistry = await  getParticipantRegistry('org.acme.vehicle.auction.Auctioneer');

await auctioneerRegistry.updateAll([listing.auctioneer]);

After all these changes, click on the update to update the entire network.

Then do the same test to see balance changes from each member and auctioneer.

Wednesday, March 28, 2018

pass an array to ansible

Here is an example of how to pass a list of strings to ansible:
  ansible-playbook -i run/runhosts -e "mode=apply env=vb1st" \
  --extra-vars='{"peerorgs":["orga","orgb"]}' setupfabric.yml \
  --tags="composersetup"
Notice that the peerorgs is set to a list of two strings.
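The quoting matters: the --extra-vars value is a JSON document, so it is wrapped in single quotes to keep the shell from eating the double quotes. A minimal sketch:

```shell
# The JSON passed to --extra-vars above, kept intact by single quotes
PEERORGS='{"peerorgs":["orga","orgb"]}'
echo "$PEERORGS"
# ansible-playbook ... --extra-vars="$PEERORGS" setupfabric.yml
```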

The above command also deploys another business network onto the fabric network. Before you run it, you need to make sure that the business network archive file exists in directory /opt/gopath/vb1st/fabric/run/keyfiles.

Friday, March 16, 2018

Deploying a hyperledger Composer blockchain business network to fabric

1. Create a connection profile per org according to the connection profile format.

2. Create a business network cards for each org using the following command:
composer card create
   -p connection_profile_for_the_org.json 
   -u PeerAdmin
   -c admin_cert
   -k admin_private_key
   -r PeerAdmin -r ChannelAdmin 
   -f network_card_output_file.card
network_card_output_file.card is the name used for the card file; provide any name you like

3. Import each org business card to Composer playground using the following command:
composer card import
   -f network_card_output_file.card
   --name name_of_business_card 
name_of_business_card is the name of the business card; it can be the same as the card file name

4. Installing Composer runtime onto the Hyperledger Fabric peer node for each org:
composer runtime install
   -c name_of_business_card
   -n business_network_archive_file.bna
business_network_archive_file is the business network archive file, which should have been exported.

5. Define an endorsement policy for the network:

assume that the name of the policy file is endorsement-policy.json

6. Start business network:

composer network start
   -c name_of_business_card
   -a trade-network.bna 
   -o endorsementPolicyFile=endorsement-policy.json 
   -A admin@org1 -C admin@org1_cert.pem 
   -A admin@org2 -C admin@org2_cert.pem 
   ...
Start composer playground, mount two things:

docker run -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  -v /home/ubuntu/.composer:/home/composer/.composer \
  --name composer-playground --publish 8080:8080 \
  --detach hyperledger/composer-playground:next


Create the card: 

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next card create \
  -p /opt/gopath/vb1st/fabric/run/keyfiles/orga/connection.json \
  -c /opt/gopath/vb1st/fabric/run/keyfiles/orga/users/Admin@orga/msp/admincerts/Admin@orga-cert.pem \
  -k /opt/gopath/vb1st/fabric/run/keyfiles/orga/users/Admin@orga/msp/keystore/admin_private.key \
  -r PeerAdmin -r ChannelAdmin \
  -u PeerAdmin@orga \
  -f /opt/gopath/vb1st/fabric/run/keyfiles/orga/orga_firstnetwork.card

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next card create \
  -p /opt/gopath/vb1st/fabric/run/keyfiles/orgb/connection.json \
  -c /opt/gopath/vb1st/fabric/run/keyfiles/orgb/users/Admin@orgb/msp/admincerts/Admin@orgb-cert.pem \
  -k /opt/gopath/vb1st/fabric/run/keyfiles/orgb/users/Admin@orgb/msp/keystore/admin_private.key \
  -r PeerAdmin -r ChannelAdmin \
  -u PeerAdmin@orgb \
  -f /opt/gopath/vb1st/fabric/run/keyfiles/orgb/orgb_firstnetwork.card


Import the card:

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
   -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
   hyperledger/composer-cli:next card import \
   -f /opt/gopath/vb1st/fabric/run/keyfiles/orga/orga_firstnetwork.card \
   --name PeerAdmin@orga

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
   -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
   hyperledger/composer-cli:next card import \
   -f /opt/gopath/vb1st/fabric/run/keyfiles/orgb/orgb_firstnetwork.card \
   --name PeerAdmin@orgb


Runtime install

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next runtime install \
  -c PeerAdmin@orga -n trade-network

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next runtime install \
  -c PeerAdmin@orgb -n trade-network

Request identity:

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next identity request \
  -c PeerAdmin@orga -u admin -s adminpw -d /home/composer/.composer/orgaAdmin

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next identity request \
  -c PeerAdmin@orgb -u admin -s adminpw -d /home/composer/.composer/orgbAdmin


Start the network:

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next network start \
  -c PeerAdmin@orga \
  -a /opt/gopath/vb1st/fabric/run/keyfiles/trade-network.bna \
  -A orgaAdmin \
  -C /home/composer/.composer/orgaAdmin/admin-pub.pem \
  -A orgbAdmin \
  -C /home/composer/.composer/orgbAdmin/admin-pub.pem

Thursday, March 15, 2018

Ubuntu 17.10 Static IP Configuration

The following is an example of how to configure a VirtualBox VM with NAT and Host-Only network cards. The first card is NAT, the second card is Host-Only.


# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: yes
    enp0s8:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.56.30/24]
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]

Tuesday, March 13, 2018

Install tiller using helm

once helm is downloaded, run the following command to make sure everything is correct.


helm init --override "spec.template.spec.containers[0].env[2].name"="KUBERNETES_MASTER" --override "spec.template.spec.containers[0].env[2].value"="192.168.56.101:8080"

This sets up the tiller container with an env like this:

    containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        - name: KUBERNETES_MASTER
          value: "192.168.56.101:8080"
        image: gcr.io/kubernetes-helm/tiller:v2.8.2

Without the KUBERNETES_MASTER override, the Tiller container won't be able to find the k8s API server.
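A quick way to confirm Tiller came up with the override and can reach the API server (assuming kubectl is pointed at the same cluster):

```shell
# The tiller deployment carries the labels app=helm,name=tiller.
kubectl -n kube-system get pods -l app=helm,name=tiller
# helm prints both client and server versions once Tiller is reachable.
helm version
```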

Thursday, March 8, 2018

Docker ipvlan/macvlan setup

1. Run the following step (not sure if it is really needed):

   sudo iptables -P FORWARD ACCEPT

2. Make sure that each network card is in Promiscuous mode
3. The docker service needs to be configured with the following flag:
      --experimental=true
   This is because the ipvlan driver is still experimental
4. Create a network using ipvlan:

docker network create -d ipvlan --subnet=192.168.0.0/24 --ip-range=192.168.0.208/28 --gateway=192.168.0.1 -o ipvlan_mode=l2 -o parent=enp0s3 ipvlan70

This assumes that the entire network is 192.168.0.0/24, the address range for this node is 192.168.0.208/28, and the gateway is the network's gateway. The ipvlan mode is l2, the parent is one of the machine's network cards (enp0s3), and the new network is named ipvlan70.

If everything is working, then you can create a container like this to test its connectivity:

docker run --net=ipvlan70 -it --name ipvlan70_2 --rm alpine /bin/sh

This container should be able to access the internet and other containers on the same network, but not the host IP; that isolation is by design.
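If host-to-container traffic is needed anyway, a commonly used workaround is to give the host its own ipvlan interface on the same parent. A sketch assuming iproute2; the address 192.168.0.223 is an assumption (any spare address outside the container range works):

```shell
# Add an ipvlan sub-interface on the host, sharing the parent NIC.
sudo ip link add link enp0s3 name ipvl0 type ipvlan mode l2
sudo ip addr add 192.168.0.223/32 dev ipvl0
sudo ip link set ipvl0 up
# Route the container range through it so host <-> container traffic flows.
sudo ip route add 192.168.0.208/28 dev ipvl0
```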



The following command sets up a macvlan network in a similar way:

docker network create -d macvlan --subnet=192.168.0.0/24 --ip-range=192.168.0.208/28 -o macvlan_mode=bridge -o parent=enp0s3.70 macvlan70
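The parent here (enp0s3.70) is an 802.1Q VLAN sub-interface, which must exist before docker can use it. A sketch of creating it with iproute2, assuming VLAN id 70 (inferred from the interface name):

```shell
# Create the VLAN 70 sub-interface on top of enp0s3 and bring it up.
sudo ip link add link enp0s3 name enp0s3.70 type vlan id 70
sudo ip link set enp0s3.70 up
```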

Wednesday, February 28, 2018

weave net with docker compose

To set up weave net on 3 nodes, run one of the following on each node:

weave launch 2nd_node_ip 3rd_node_ip    # on the 1st node
weave launch 1st_node_ip 3rd_node_ip    # on the 2nd node
weave launch 1st_node_ip 2nd_node_ip    # on the 3rd node

Once the weave net is up and running, point docker at the weave proxy:

eval $(weave env)

To restore the original env:

eval $(weave env --restore)

To use docker compose to launch a container, it is very important to have the following two things ready:

1. eval $(weave env)
2. make sure that the hostname is like this: mycontainer1.weave.local
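The two points above can be sketched as a minimal compose file; the service name, image, and command are placeholders:

```shell
# Hypothetical docker-compose.yml; the .weave.local hostname is what lets
# weaveDNS register the container.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  mycontainer1:
    image: busybox
    hostname: mycontainer1.weave.local
    command: sleep 3600
EOF
# eval $(weave env)      # run first so DOCKER_HOST points at the weave proxy
# docker-compose up -d   # then launch through that proxy
```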


useful commands

weave status
weave status connections
weave stop
weave reset --force

Tuesday, January 30, 2018

Fabric transaction workflow

The basic flow of fabric transactions:

  1. Users of the blockchain send a request for a transaction to one or a couple of peers. This is called a transaction proposal.
  2. Peers simulate the transaction using the chaincode, the parameters, and the state of the ledger, and return a read-write set (the delta of changes resulting from the simulation). This is called the endorsement response and is signed with each peer's individual crypto key. The endorsement responses are sent back to the users by the peers.
  3. The users then send those endorsement responses to the ordering service, where they are validated for authenticity, double-spending, and compliance with the various policies; this is called transaction validation. If the result is OK, the read-write set is sent to the peers to update the ledger and the database.
So you can send a transaction proposal to peers in orgA and orgB, get their endorsement responses, and decide whether you want to send them to the orderer. If you execute a transaction proposal and get endorsement results but stop there (do not send them to the orderer), no changes will be made to the ledger.

The endorsement results contain identities (the peers and their organizations), so you can create logic to push the results to the orderer or to stop the transaction, and you can identify who the initiator of the request is, etc.

Also, in version 1.1 there is a more elegant solution for this: system chaincode plugins. If I am not wrong, there will be an option to plug your custom system chaincode (plugins) in at runtime (using golang), and those plugins will have low-level access to the data and the flow, so you will be able to execute custom logic in different parts of the process.

Tuesday, January 23, 2018

Use tls for Jenkins and gerrit

* Jenkins


If you installed Jenkins using apt install, there should be a configuration file /etc/default/jenkins; simply change its last line to the following to enable TLS.
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=-1 --httpsPort=8080"
Notice that httpPort is set to -1 to disable HTTP access, and httpsPort is set to 8080 so that access to port 8080 must use HTTPS.
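To serve a real certificate rather than the self-generated one, Jenkins's embedded Winstone server also accepts a keystore on the same line; a sketch where the keystore path and password are placeholders:

```shell
# /etc/default/jenkins fragment; adjust the keystore path and password.
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=-1 --httpsPort=8080 \
  --httpsKeyStore=/var/lib/jenkins/jenkins.jks \
  --httpsKeyStorePassword=changeit"
```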

* Gerrit


Assume gerrit's root install directory is /home/ubuntu/review; if your location is different, replace it with the correct directory.

1. Create a keystore by using the following command:

cd /home/ubuntu/review
mkdir keys; cd keys
keytool -keystore store -alias jetty -genkey -keyalg RSA
chmod 600 store
2. Change /home/ubuntu/review/etc/gerrit.config file, to make the following changes:
[gerrit]
    canonicalWebUrl = https://192.168.56.30:9090/
[httpd]
    listenUrl = https://192.168.56.30:9090/
    sslKeyStore = /home/ubuntu/review/keys/store
3. Change /home/ubuntu/review/etc/secure.config file, to add keystore password
[httpd]
    sslKeyPassword = <YOUR_KEYSTORE_PW>

* Deal with self-signed certificates when using git.

git config --global http.sslverify false
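Disabling verification everywhere is blunt; two narrower alternatives, where the certificate path is a placeholder and the URL-scoped setting limits the exception to the gerrit host:

```shell
# Trust the exported self-signed certificate instead of skipping checks.
git config --global http.sslCAInfo /home/ubuntu/gerrit-cert.pem
# Or turn verification off only for the gerrit URL, not globally.
git config --global http."https://192.168.56.30:9090/".sslVerify false
```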

Rename git remote

git remote rename origin gerrit

Gerrit with LDAP

* create a file named users.ldif with the following content

#===============================
dn: cn=Tong Li,dc=fabric,dc=com
objectclass: inetOrgPerson
cn: Tong Li
sn: Li
uid: tongli
userpassword: fabric1234
mail: tong.li@fabric.com
description: sweet guy

#===============================
dn: cn=John Lee,dc=fabric,dc=com
objectclass: inetOrgPerson
cn: John Lee
sn: Lee
uid: johnlee
userpassword: fabric1234
mail: john.lee@fabric.com
description: mad guy

#===============================
dn: cn=Job Builder,dc=fabric,dc=com
objectclass: inetOrgPerson
cn: Job Builder
sn: Builder
uid: jobbuilder
userpassword: fabric1234
mail: job.builder@fabric.com
description: dumb guy
#===============================

Notice that the IP addresses in the docker commands below need to be replaced with your machine's IP address.

* Start ldap container with the predefined users:
docker run --name ldap \
-v /home/ubuntu/users.ldif:/container/service/slapd/assets/config/bootstrap/ldif/50-bootstrap.ldif \
--restart unless-stopped -p 389:389 -p 636:636 -e LDAP_ORGANISATION="Fabric Build" \
-e LDAP_DOMAIN="fabric.com" \
-e LDAP_ADMIN_PASSWORD="fabric1234" -d osixia/openldap:1.1.11 --copy-service

* Create a directory called gerrit_volume and start the gerrit container:
docker run --name gerrit --restart unless-stopped \
-v /home/ubuntu/gerrit_volume:/var/gerrit/review_site \
-p 9090:8080 -p 29418:29418 -e WEBURL=http://192.168.56.30:9090 \
-e AUTH_TYPE=LDAP -e LDAP_SERVER=ldap://192.168.56.30 \
-e LDAP_ACCOUNTBASE=dc=fabric,dc=com \
-e LDAP_USERNAME=cn=admin,dc=fabric,dc=com \
-e LDAP_PASSWORD=fabric1234 \
-d openfrontier/gerrit


* These steps are for verification purposes only; since the users are loaded automatically, there is no need to add them manually:
ldapadd -x -D "cn=admin,dc=fabric,dc=com" -f users.ldif -w fabric1234
ldappasswd -s welcome123 -W -D "cn=Tong Li,dc=fabric,dc=com" -x "uid=admin,dc=fabric,dc=com" -w fabric1234
docker exec ldap ldapsearch -x -H ldap://localhost -b dc=fabric,dc=com -D "cn=admin,dc=fabric,dc=com" -w fabric1234

Thursday, January 18, 2018

K8s notes

After docker is installed, it is better to allow the regular user to run docker commands.
sudo gpasswd -a $USER docker
The above command adds the current user to the docker group; log out and back in for the change to take effect.
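Group membership is only read at login, so here is a quick check of whether it is active in the current session (uses only coreutils):

```shell
# 'id -nG' lists the current session's groups; docker only shows up
# after logging out and back in (or running 'newgrp docker').
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active"
else
  echo "re-login (or 'newgrp docker') to pick up the group"
fi
```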

This is the command to pull a docker image from gcr.io:
docker pull gcr.io/google-containers/hyperkube:v1.9.1
The above command will pull the hyperkube image version v1.9.1 onto your docker environment.

K8S release web site:
https://github.com/kubernetes/kubernetes/releases

It seems that k8s has recommended the following:

run as native services:
  docker, kubelet, and kube-proxy

run as containers:
  etcd, kube-apiserver, kube-controller-manager, and kube-scheduler

hyperkube is a super image which contains kubelet, kube-proxy, kube-controller-manager and kube-scheduler.

You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, we recommend that you run these as containers, so you need the hyperkube image.

Things needed for setting up k8s

etcd:
gcr.io/google-containers/etcd:2.2.1 or
quay.io/coreos/etcd:v2.2.1 
 
k8s:     
gcr.io/google-containers/hyperkube:v1.9.1
BIN_SITE=https://storage.googleapis.com/kubernetes-release/release/
$BIN_SITE/v1.9.1/bin/linux/amd64/kubelet
$BIN_SITE/v1.9.1/bin/linux/amd64/kube-proxy

Monday, January 15, 2018

cloud-config script to map the hostname to its IP in /etc/hosts on an openstack cloud

#cloud-config
runcmd:
 - addr=$(ip -4 -o addr|grep -v '127.0.0.1'|awk '{print $4}'|cut -d '/' -f 1)
 - echo $addr `hostname` >> /etc/hosts
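The address-extraction pipeline in runcmd can be sanity-checked offline against canned ip -4 -o addr output; the sample interfaces and addresses below are made up:

```shell
# Two fake lines in 'ip -4 -o addr' format; field 4 is the CIDR address.
sample='1: lo    inet 127.0.0.1/8 scope host lo
2: eth0    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0'
addr=$(printf '%s\n' "$sample" | grep -v '127.0.0.1' | awk '{print $4}' | cut -d '/' -f 1)
echo "$addr"   # prints 10.0.0.5
```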

Saturday, January 13, 2018

How to set up a docker swarm-like env using weave net

The procedure to set up a docker swarm-like environment on two nodes using weave net:

Create two VirtualBox Ubuntu VMs with the following settings:
1. Host-only network and NAT network
2. Each machine resolves the two hosts to their NAT IP addresses
3. Two machines can ping each other using host name
Install docker on each machine:
sudo apt-get install docker.io
Download weave net and make it executable on each machine:
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
On the first machine (u1710-1), run the following commands:
weave launch
eval $(weave env)
On the second machine (u1710-2), run the following commands:
weave launch u1710-1
eval $(weave env)

Use the following procedure to test that containers running on the two machines can talk to each other:

1. Start a container on the first machine by running the following command:
docker run --name box1 -it --rm busybox
2. Start a container on the second machine by running the following command, then ping the first container:
docker run --name box2 -it --rm busybox
# ping -c2 box1
If one container can ping the other, then you have successfully set up weave net. The containers you created should be able to reach the outside world as well.


Note:
1. If you restart your machine, you will always need to run eval $(weave env) before starting any new container; this makes sure that your DOCKER_HOST environment variable gets set up correctly.
2. The procedure has also been tested on aws and openstack clouds. Using the free-tier VMs, you will be able to do the same as on VirtualBox. If you do this in your cloud environment, the security group will need rules that open TCP 6783, UDP 6783/6784, and TCP 6782 (monitoring). Of course, it also helps to allow ICMP traffic and ssh (22) so that you can easily test your connectivity.

Thursday, January 4, 2018

Fabric Events


The first thing to note is that nothing happens until a transaction fails or is committed into a block.

If you are listening to a peer for transaction notifications, you will get a success or failure event for the transaction when the following occur:
      * If the transaction fails during endorsement (i.e. returns an error code), then there will be no read/write set and no transaction ID, and the error will be propagated directly to applications listening to the endorser.
      * If the transaction fails during commit (a.k.a. a late failure, an MVCC error) because of key version issues, then the block will contain the transaction but not the failure information, so you must be listening for transaction errors to catch these. The world state changes will not have been applied, and you will get the error when the block is written (I believe) to the channel on the peer to which you are listening.
      * If the transaction is successful and committed to a block, then you will receive a success event when the block has been written into the channel on the peer to which you are listening.
If you are listening for block events, then you will get the event when a block has been written to the channel on the peer to which you are listening.

Chaincode events (a.k.a. contract emitted events) are sent when the block has been written to the channel on the peer to which you are listening. I.e. only when the block is cast in stone do the events come flying out.

If you examine the block format, you will find arrays in there for the transactions, and in each transaction all emitted events are shown (you can emit more than one).