Sunday, April 1, 2018

Setting up nexus3 with docker: raw repo with LDAP and TLS enabled

1. Create a directory to hold all nexus3 data and keystore:
mkdir -p /home/ubuntu/nexus3/etc/ssl

2. Create a keystore by running the following commands:
cd /home/ubuntu/nexus3/etc/ssl
keytool -genkeypair -keystore keystore.jks -storepass password \
-alias fabric.com -keyalg RSA -keysize 2048 -validity 5000 \
-keypass password -dname 'CN=*.fabric.com, dc=fabric, dc=com, C=US' \
-ext 'SAN=DNS:fabric.com,IP:192.168.56.30' -ext "BC=ca:true"

   This creates a file named keystore.jks. Nexus3 expects this exact file name; for whatever reason, other names do not work.

3. Make user 200:200 (the nexus user inside the container) own the directory:
sudo chown -R 200:200 /home/ubuntu/nexus3 
4. Run the following command to start it.
docker run -d -p 8081:8081 -p 8443:8443 --restart always \
--name nexus -v /home/ubuntu/nexus3:/nexus-data \
-v /home/ubuntu/nexus3/etc/ssl:/opt/sonatype/nexus/etc/ssl \
sonatype/nexus3

   This command starts the nexus3 container with the data directory and keystore mounted in the locations the image expects. You can follow startup progress with docker logs -f nexus.

5. Configure nexus3 to use LDAP by logging in at https://192.168.56.30:8081

      Administration -> Security -> LDAP
      Connection:
      LDAP server address:      ldap://192.168.56.30:389
      Search base:                     dc=fabric,dc=com
      Authentication method:   Simple Authentication
      Username or DN:            cn=admin,dc=fabric,dc=com
      Password:                        fabric123

      User and group:
      Base DN:        empty string
      Are users located in structures below the user base DN?  off
      User filter:      empty string
      User ID attribute:        uid
      Real name attribute:      cn
      Email attribute:       mail
      Password attribute:      userpassword
      Map LDAP groups as roles:    off

     Administration -> Security -> Realms
     add LDAP Realm to the left box

6. Raw repository:

    Create a raw hosted repository.
    Create a role so that people in the role can operate the raw repository:
        Administration -> Security -> Roles, click on the Create role button, choose Nexus role,
        and add nx-repository-view-raw-<repo-name>-* to the left box.
    Map users to that role:
        Administration -> Security -> Users, Source: LDAP, search for all the users,
        and associate each user with the role created in the step above.
    Then use the following commands to upload files to the repository:

curl -u user1:fabric123 -k --upload-file users.ldif \
https://192.168.56.30:8081/repository/fabricImages
   The above command uploads the file users.ldif to the raw (hosted) repository named fabricImages.
curl -u user1:fabric123 -k -T users.ldif \
https://192.168.56.30:8081/repository/fabricImages/testfolder/users.ldif
   The above command uploads a file and creates the testfolder directory at the same time.
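The upload URL always follows the same https://<host>/repository/<repo>/<path> pattern, so a small helper keeps the curl lines readable (raw_url is my own name, not a Nexus convention):

```shell
# Build the URL of a file in a raw hosted repository.
raw_url() {
  printf 'https://%s/repository/%s/%s\n' "$1" "$2" "$3"
}

raw_url 192.168.56.30:8081 fabricImages testfolder/users.ldif

# Against the live server (self-signed cert, hence -k), e.g. to fetch the
# file back and confirm the upload worked:
#   curl -u user1:fabric123 -k "$(raw_url 192.168.56.30:8081 fabricImages testfolder/users.ldif)"
```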

7. Docker repository:

   Create a docker hosted repository, set the HTTPS port to 8443, and enable
   Force basic authentication and Allow redeploy. Since the server uses a
   self-signed certificate, any docker client that wants to access it needs
   the server certificate placed in

/etc/docker/certs.d/<server>:<port>/ca.crt
   where <server> is the server name or IP address,
   and <port> in this case is 8443.

   To get the server certificate file, run the following command:
keytool -printcert -sslserver 192.168.56.30:8443 -rfc
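Putting the pieces together, here is a sketch of installing the certificate on a docker client host. docker_ca_path is a hypothetical helper of mine; the privileged/network commands are shown as comments because they only make sense on the client host:

```shell
# Docker looks for a per-registry CA certificate at this fixed path.
docker_ca_path() {
  printf '/etc/docker/certs.d/%s/ca.crt\n' "$1"
}

docker_ca_path 192.168.56.30:8443

# On the docker client host:
#   sudo mkdir -p "$(dirname "$(docker_ca_path 192.168.56.30:8443)")"
#   keytool -printcert -sslserver 192.168.56.30:8443 -rfc \
#     | sudo tee "$(docker_ca_path 192.168.56.30:8443)" >/dev/null
#   docker login 192.168.56.30:8443 -u user1
```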

Friday, March 30, 2018

Develop a new blockchain app based on Hyperledger Fabric


The following process is all based on the carauction business network.

1. add a new auctioneer, id A100
2. add two members, id M100 and M200, with some balance
3. add a vehicle asset, id V100, owned by M100
4. add a vehicle listing asset, id L100, to sell vehicle V100

5. then submit transactions: one Offer and one CloseBidding.

6. check that all members' balances have changed.


The auctioneer is not involved in the money flow, so we need to change that to make it more realistic.

Go to Define to change the model file, and add the following.

Change the Auctioneer to have a balance:
o Double balance
Change the listing to have an agent:
 --> Auctioneer auctioneer

so that the auctioneer can hold money and each listing can have an agent.

Go to the script file and change the CloseBidding function.

The commission to the auctioneer will be 5% of the sale price:
const commission = highestOffer.bidPrice * 0.05;
listing.auctioneer.balance += commission;

The seller will lose the commission from the sale price:
seller.balance += highestOffer.bidPrice - commission;

Now make sure these changes are made permanent on the chain:
const auctioneerRegistry = await getParticipantRegistry('org.acme.vehicle.auction.Auctioneer');

await auctioneerRegistry.updateAll([listing.auctioneer]);

After all these changes, click on Update to update the entire network.

Then run the same test to see the balance changes for each member and the auctioneer.

Wednesday, March 28, 2018

pass an array to ansible

Here is an example of how to pass a list of strings to ansible:
  ansible-playbook -i run/runhosts -e "mode=apply env=vb1st" \
  --extra-vars='{"peerorgs":["orga","orgb"]}' setupfabric.yml \
  --tags="composersetup"
Notice that peerorgs is set to a list of two strings.

The above command also deploys another business network onto the fabric network. Before you run it, make sure the business network archive file exists in the directory /opt/gopath/vb1st/fabric/run/keyfiles.
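Inside the playbook the list behaves like any other variable; a hypothetical task that loops over it would look like this:

```yaml
# peerorgs comes from --extra-vars on the command line
- name: show each peer org
  debug:
    msg: "setting up {{ item }}"
  with_items: "{{ peerorgs }}"
```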

Friday, March 16, 2018

Deploying a Hyperledger Composer blockchain business network to Fabric

1. Create a connection profile per org according to the connection profile format.

2. Create a business network card for each org using the following command:
composer card create
   -p connection_profile_for_the_org.json 
   -u PeerAdmin
   -c admin_cert
   -k admin_private_key
   -r PeerAdmin -r ChannelAdmin 
   -f network_card_output_file.card
network_card_output_file.card is the name used for the card file; provide any name you like

3. Import each org business card to Composer playground using the following command:
composer card import
   -f network_card_output_file.card
   --name name_of_business_card 
name_of_business_card is the name of the business card; it can be the same as the card file name

4. Install the Composer runtime onto the Hyperledger Fabric peer node for each org:
composer runtime install
   -c name_of_business_card
   -n business_network_archive_file.bna
business_network_archive_file.bna is the business network archive file, which should have been exported beforehand

5. Define an endorsement policy for the network:

assume that the name of the policy file is endorsement-policy.json

6. Start the business network:

composer network start
   -c name_of_business_card
   -a trade-network.bna 
   -o endorsementPolicyFile=endorsement-policy.json 
   -A admin@org1 -C admin@org1_cert.pem 
   -A admin@org2 -C admin@org2_cert.pem 
   ...
Start Composer playground, mounting two directories:

docker run -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  -v /home/ubuntu/.composer:/home/composer/.composer \
  --name composer-playground --publish 8080:8080 \
  --detach hyperledger/composer-playground:next


Create the card: 

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next card create \
  -p /opt/gopath/vb1st/fabric/run/keyfiles/orga/connection.json \
  -c /opt/gopath/vb1st/fabric/run/keyfiles/orga/users/Admin@orga/msp/admincerts/Admin@orga-cert.pem \
  -k /opt/gopath/vb1st/fabric/run/keyfiles/orga/users/Admin@orga/msp/keystore/admin_private.key \
  -r PeerAdmin -r ChannelAdmin \
  -u PeerAdmin@orga \
  -f /opt/gopath/vb1st/fabric/run/keyfiles/orga/orga_firstnetwork.card

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next card create \
  -p /opt/gopath/vb1st/fabric/run/keyfiles/orgb/connection.json \
  -c /opt/gopath/vb1st/fabric/run/keyfiles/orgb/users/Admin@orgb/msp/admincerts/Admin@orgb-cert.pem \
  -k /opt/gopath/vb1st/fabric/run/keyfiles/orgb/users/Admin@orgb/msp/keystore/admin_private.key \
  -r PeerAdmin -r ChannelAdmin \
  -u PeerAdmin@orgb \
  -f /opt/gopath/vb1st/fabric/run/keyfiles/orgb/orgb_firstnetwork.card


Import the card:

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
   -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
   hyperledger/composer-cli:next card import \
   -f /opt/gopath/vb1st/fabric/run/keyfiles/orga/orga_firstnetwork.card \
   --name PeerAdmin@orga

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
   -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
   hyperledger/composer-cli:next card import \
   -f /opt/gopath/vb1st/fabric/run/keyfiles/orgb/orgb_firstnetwork.card \
   --name PeerAdmin@orgb


Runtime install:

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next runtime install \
  -c PeerAdmin@orga -n trade-network

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next runtime install \
  -c PeerAdmin@orgb -n trade-network

Request identity:

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next identity request \
  -c PeerAdmin@orga -u admin -s adminpw -d /home/composer/.composer/orgaAdmin

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next identity request \
  -c PeerAdmin@orgb -u admin -s adminpw -d /home/composer/.composer/orgbAdmin


Start the network:

docker run -v /home/ubuntu/.composer:/home/composer/.composer \
  -v /opt/gopath/vb1st/fabric:/opt/gopath/vb1st/fabric \
  hyperledger/composer-cli:next network start \
  -c PeerAdmin@orga \
  -a /opt/gopath/vb1st/fabric/run/keyfiles/trade-network.bna \
  -A orgaAdmin \
  -C /home/composer/.composer/orgaAdmin/admin-pub.pem \
  -A orgbAdmin \
  -C /home/composer/.composer/orgbAdmin/admin-pub.pem

Thursday, March 15, 2018

Ubuntu 17.10 Static IP Configuration

The following is an example of how to configure a VirtualBox VM with NAT and Host-Only network cards; the first card is NAT, the second is Host-Only. The configuration goes in a netplan file under /etc/netplan/ and is applied with sudo netplan apply.


# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: yes
    enp0s8:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.56.30/24]
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]

Tuesday, March 13, 2018

Install tiller using helm

Once helm is downloaded, run the following command to install tiller and point it at the right API server.


helm init --override "spec.template.spec.containers[0].env[2].name"="KUBERNETES_MASTER" --override "spec.template.spec.containers[0].env[2].value"="192.168.56.101:8080"

This sets up the tiller container with an env section like this:

    containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        - name: KUBERNETES_MASTER
          value: "192.168.56.101:8080"
        image: gcr.io/kubernetes-helm/tiller:v2.8.2

Without adding the KUBERNETES_MASTER entry, the tiller container won't be able to find the k8s API server.

Thursday, March 8, 2018

Docker ipvlan/macvlan setup

1. Run the following step (not sure if it is really needed):

   sudo iptables -P FORWARD ACCEPT

2. Make sure that each network card is in promiscuous mode.
3. The Docker service needs to be configured with the following:
      --experimental=true
    This is needed because ipvlan is still experimental.
4. Create a network using ipvlan:

docker network create -d ipvlan --subnet=192.168.0.0/24 --ip-range=192.168.0.208/28 --gateway=192.168.0.1 -o ipvlan_mode=l2 -o parent=enp0s3 ipvlan70

This assumes that the entire network is on 192.168.0.0/24, the IP range for this node is 192.168.0.208/28, and the gateway is the network's gateway. The ipvlan mode is l2, the parent is one of the machine's network interfaces (enp0s3), and the new network is named ipvlan70.

If everything is working, then you can create a container like this to test its connectivity:

docker run --net=ipvlan70 -it --name ipvlan70_2 --rm alpine /bin/sh

This container should be able to reach the internet and other containers on the same network, but not the host IP; that is by design.



The following procedure sets up macvlan; note that the parent here is a VLAN sub-interface (enp0s3.70) rather than the physical interface:

docker network create -d macvlan --subnet=192.168.0.0/24 --ip-range=192.168.0.208/28 -o macvlan_mode=bridge -o parent=enp0s3.70 macvlan70

A test container can be started the same way as in the ipvlan example, using --net=macvlan70.