Thursday, July 20, 2017

How to check if ZooKeeper and Kafka are running correctly


Check on ZooKeeper by connecting with telnet and issuing the stat four-letter command:
telnet <ipaddress> <port>
stat
For example:
telnet 172.16.21.3 2181
Trying 172.16.21.3...
Connected to 172.16.21.3.
Escape character is '^]'.
stat
Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Clients:
 /172.16.21.4:58476[1](queued=0,recved=321,sent=327)
 /172.16.38.0:55630[1](queued=0,recved=245,sent=245)
 /172.16.39.0:38124[1](queued=0,recved=240,sent=240)
 /172.16.21.1:39190[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/14
Received: 807
Sent: 812
Connections: 4
Outstanding: 0
Zxid: 0x100000033
Mode: leader
Node count: 31
Connection closed by foreign host.
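For scripting this check, the Mode line can be pulled out of the reply; a minimal sketch, using the sample reply above so the parsing runs without a live server (the nc command in the comment is one way to capture a real reply):

```shell
#!/bin/sh
# Parse a ZooKeeper 'stat' reply and report the server mode.
# Against a live server you could capture the reply with something like:
#   reply=$(echo stat | nc 172.16.21.3 2181)
# Here the sample reply from above is reused so the parsing runs standalone.
reply='Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Latency min/avg/max: 0/0/14
Received: 807
Sent: 812
Connections: 4
Mode: leader
Node count: 31'

# Extract the value after "Mode: " (leader, follower, or standalone)
mode=$(printf '%s\n' "$reply" | awk -F': ' '/^Mode:/ {print $2}')
echo "mode=$mode"
```

A healthy ensemble shows exactly one leader; anything else means the election has not settled.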


To check that the Kafka brokers all actually registered, do the following:
1. docker exec -it zookeeper1st bash
2. /zookeeper-3.4.9/bin/zkCli.sh
3. At the zkCli prompt, run: ls /brokers/ids

WatchedEvent state:SyncConnected type:None path:null
[1, 2, 3]
or
1. docker exec -it kafka3rd bash
2. ./kafka-topics.sh --list --zookeeper zookeeper1st:2181
3. ./kafka-topics.sh --describe --zookeeper zookeeper1st:2181
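The broker list returned by ls /brokers/ids can be counted mechanically; a small sketch using the sample output above ([1, 2, 3]) so it runs without a live ZooKeeper:

```shell
#!/bin/sh
# Count broker ids in a zkCli 'ls /brokers/ids' reply such as "[1, 2, 3]".
# In a live setup the reply would come from zkCli.sh; the sample output
# from above is used here so the counting runs standalone.
ids='[1, 2, 3]'
# Strip brackets and spaces, then count comma-separated fields
count=$(printf '%s' "$ids" | tr -d '[] ' | awk -F',' '{print NF}')
echo "registered brokers: $count"
```

If the count is lower than the number of Kafka containers you started, the missing brokers never registered with ZooKeeper.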

Wednesday, July 19, 2017

Something about an orderer joining the party

tongli 11:28 PM
@jimthematrix so there is no way at all to add a user or an orderer or a peer?

jimthematrix 11:31 PM  
@tongli not with the cryptogen tool right now. but you can use the resulting ca certs and key to initialize a fabric-ca server to issue additional certs for user/orderer/peer identities, or use a tool like openssl to do the same
@CarlXK Correct, that is what you need to do if you want to support scaling out

tongli 11:35 PM
@jimthematrix right, I guess the missing piece is: after the CA gets you what is needed, how do you make a new peer join an existing channel? can we do that? and how do you make an orderer join?

jimthematrix 11:52 PM
adding a new peer of an existing org to a channel is pretty straightforward: you get the latest channel config from the orderer and send that to the peer. this doesn't require modifying the channel. If you want to add a whole new org to the channel, then you first have to follow a process to update the channel config with the orderer, then send the updated channel config to the new peers of the new org
i actually don't know what is involved in adding new orderers to an existing network. it's some combination of starting the new orderer node with the genesis block, and updating the consortium definition in the system channel. for details you'd have to ask @jyellick

jyellick 11:59 PM
> you get the latest channel config from the orderer and send that to the peer.
This actually isn't true. The peer only supports joining through the genesis block.

jyellick 12:01 AM
> i actually don't know what is involved in adding new orderers to an existing network.
Generally, simply start the orderer with the same genesis block that the other orderers were started with. The orderer will catch up from the Kafka broker logs. Then, once the orderer is up to date, submit a reconfiguration transaction on any channels you wish to use the new orderer, updating the set of orderer addresses.

chenxuan 5:07 AM
@baohua how is /etc/hyperledger/fabric on the peer node determined?

baohua 8:23 AM
Oh, it can be specified through configuration: $FABRIC_CFG_PATH

chenxuan 8:41 AM
When I run make docker, I see FABRIC_CFG_PATH specified in there.
Is this environment variable baked into the image?


baohua 9:35 AM
if it is set in the dockerfile, then it is.

tongli 1:21 PM
@jyellick thanks for your explanation on how the orderer joins the party. That actually makes a lot of sense to me.
👍 1 
@jyellick jason, what if the orderer comes from a different org which was never part of the genesis block when it was created?
When the genesis block gets created, it uses the Orderer profile, which I assume takes in the organizations that the orderers belong to.
When a new orderer from a new org wants to jump in, the genesis block would not have any idea about the new org, right?

jyellick 1:39 PM
For now, you would still bootstrap the new orderer with the old genesis block. And the new orderer would play the chain forward until it got to the current state.
This approach has many drawbacks, and it is a planned feature in the future to allow the orderer to be bootstrapped from a later config block (and to generally allow data pruning)
But for v1, the only option is to start with the true genesis block.
As an alternative, you may copy the ledger from an already current orderer, and use that as the seed for a new orderer, this might be preferable in some devops scenarios.

tongli 1:59 PM
@jyellick thanks, but I do not think I am clear on how authentication is done for the new orderer. I mean, how does everybody already in the party know this new guy and consider the new orderer legit? How is the authentication done? Or does it not really matter?

jyellick 2:02 PM
The Kafka orderers do not speak directly to each other. They only interact via Kafka. So, if Kafka authorizes the new orderer (generally because of TLS), then this new orderer will be able to participate in ordering. Peers also authenticate via TLS, but additionally, when receiving a block, they verify that it has been signed by one of the ordering orgs per the BlockValidation policy. By default, this policy allows anyone from the ordering orgs to sign the blocks. Adding a new orderer org would extend this policy to allow this new org to sign blocks.

tongli 2:04 PM
Excellent. Thanks so much!

Wednesday, June 21, 2017

Fabric certificates

Each organization needs the following components:

1. ca
2. msp
3. orderers or peers
4. users

        The ca needs to have:
              1. private key
              2. certificate

        The msp needs:
              1. admin certificate
              2. the sign cert is the same as the CA certificate

        Each user needs: msp and tls
            for msp:
              1. keystore private key - need to generate
            for tls:
              1. tls server.key - need to generate
              2. tls server.crt - need to sign with the CA certificate

        Each peer needs: msp and tls
            for msp:
              1.  keystore private key - need to generate
              2.  sign certificate - need to generate with ca certificate
            for tls:
              1. tls server.key - need to generate
              2. tls server.crt - need to generate with ca certificate

        Each orderer needs: msp and tls
            for msp:
              1. keystore private key - need to generate
              2. sign certificate - need to generate with the ca sign certificate
            for tls:
              1. tls server.key - need to generate
              2. tls server.crt - need to sign with the ca certificate

The process to create all the certificates:
1. Create the CA private key and certificate
2. Create a private key as the admin user keystore key, then sign it with the CA
   certificate to create the admin certificate
3. For either an orderer or a peer, create a private key as the msp keystore private key, then sign
   it with the CA certificate to create the peer or orderer certificate
4. Whether it is a user, peer or orderer, each will also need tls keys. Create a private key, then sign
   it with the CA certificate to create the user, peer or orderer tls certificate.

Fabric appears to use the PKCS#8 format rather than the traditional EC format, so use the following command to convert:

openssl pkcs8 -topk8 -nocrypt -in tradfile.pem -out p8file.pem
 
 
Here is an example. 
 
 
1. Generate a CA private key
 
  openssl ecparam -genkey -name prime256v1 -noout -out ca.key
 
2. Convert that key to pkcs8 format (optional; if you skip it, use ca.key instead of ca.sk in the commands below)
 
  openssl pkcs8 -topk8 -nocrypt -in ca.key -out ca.sk
 
3. Create certificate for CA

openssl req -x509 -new -SHA256 -nodes -key ca.sk -days 1000 \
   -out ca.crt -subj "/C=US/ST=NC/L=Cary/O=orga/CN=ca.orga"
 
4. Generate a private key for a server or user and convert to pkcs8 format
 
  openssl ecparam -genkey -name prime256v1 -noout -out server.key
  openssl pkcs8 -topk8 -nocrypt -in server.key -out server.sk (optional)
 
5. Create a certificate signing request (CSR)

  openssl req -new -SHA256 -key server.sk -nodes -out server.csr \
    -subj "/C=US/ST=NC/L=Cary/O=orga/CN=peer1.orga"
 
6. Once generated, you can view the full details of the CSR:

   openssl req -in server.csr -noout -text 
 
7. Now sign the certificate using the CA keys:
 
   openssl x509 -req -SHA256 -days 1000 -in server.csr -CA ca.crt \
    -CAkey ca.sk -CAcreateserial -out server.crt
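Steps 1 through 7 can be run end to end and checked with openssl verify; a sketch that repeats the commands above in a temporary directory (the subject fields are the same illustrative ones used above):

```shell
#!/bin/sh
# End-to-end sketch of steps 1-7 in a temporary directory, finishing with
# 'openssl verify' to confirm that server.crt chains back to ca.crt.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# 1-3: CA key, pkcs8 conversion, self-signed CA certificate
openssl ecparam -genkey -name prime256v1 -noout -out ca.key
openssl pkcs8 -topk8 -nocrypt -in ca.key -out ca.sk
openssl req -x509 -new -SHA256 -nodes -key ca.sk -days 1000 \
  -out ca.crt -subj "/C=US/ST=NC/L=Cary/O=orga/CN=ca.orga"

# 4-5: server key, pkcs8 conversion, and CSR
openssl ecparam -genkey -name prime256v1 -noout -out server.key
openssl pkcs8 -topk8 -nocrypt -in server.key -out server.sk
openssl req -new -SHA256 -key server.sk -nodes -out server.csr \
  -subj "/C=US/ST=NC/L=Cary/O=orga/CN=peer1.orga"

# 7: sign with the CA keys, then verify the resulting chain
openssl x509 -req -SHA256 -days 1000 -in server.csr -CA ca.crt \
  -CAkey ca.sk -CAcreateserial -out server.crt
result=$(openssl verify -CAfile ca.crt server.crt)
echo "$result"
```

If everything lines up, the last command prints server.crt: OK; any other output means the signing step used the wrong CA material.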

Friday, June 16, 2017

Change Ubuntu machine search domain

 For Ubuntu 16.04, add the following line to the file /etc/resolvconf/resolv.conf.d/base

search xxxxx

where xxxxx is the search domain to be added to resolv.conf.

After making that change, you will need to restart the networking service.

systemctl restart networking


The following procedure does not work for Ubuntu 16.04.
You will need to edit this file with your favorite editor:
 
sudo nano /etc/dhcp/dhclient.conf

Once in the file, you should see a commented line with the word supersede in it:
#supersede domain-name "...."
Uncomment that line, replace the word supersede with append, then add the domain names you wish to search (follow the example below and leave a space after the first "):
append domain-name " ubuntu.com ubuntu.net test.ubuntu.com";
 
If you just want to use a completely different search domain, then keep the
word supersede and only replace the domain names.
 
supersede domain-name " fabric.com whatever.io";
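The supersede-to-append edit can also be done with sed; a sketch against a throwaway sample file (the file content here is made up) so the expression can be tried without touching the real /etc/dhcp/dhclient.conf:

```shell
#!/bin/sh
# Rewrite the commented supersede line into an append line, on a sample
# file rather than the real /etc/dhcp/dhclient.conf.
f=$(mktemp)
cat > "$f" <<'EOF'
#supersede domain-name "example.org";
EOF

# Uncomment the line, swap supersede for append, and set the search domains
# (GNU sed; BSD sed would need -i '')
sed -i 's/^#supersede domain-name ".*";/append domain-name " ubuntu.com ubuntu.net test.ubuntu.com";/' "$f"
cat "$f"
```

Running the same sed against a backup copy of the real file first is a cheap way to confirm the pattern matches before editing in place.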

Monday, June 12, 2017

Creating Self-Signed ECDSA SSL Certificate using OpenSSL (Copy from another post)

Before generating a private key, you’ll need to decide which elliptic curve to use. To list the supported curves run:

openssl ecparam -list_curves

The list is quite long, and unless you know what you're doing you'll be better off choosing one of the sect* or secp* curves. For this tutorial I chose secp521r1 (a curve over a 521-bit prime).
Generating the certificate is done in two steps.
1. Create the private key:
openssl ecparam -name secp521r1 -genkey \
  -param_enc explicit -out private-key.pem
2. Create the self-signed X509 certificate:
openssl req -new -x509 -key private-key.pem \
  -out server.pem -days 730
The newly created server.pem and private-key.pem are the certificate and the private key, respectively. The -param_enc explicit tells openssl to embed the full parameters of the curve in the key, as opposed to just its name. This allows clients that are not aware of the specific curve name to work with it, at the cost of slightly increasing the size of the key (and the certificate).

You can examine the key and the certificate using
openssl ecparam -in private-key.pem -text -noout
openssl x509 -in server.pem -text -noout
Most web servers expect the private key to be chained to the certificate in the same file, so run:
cat private-key.pem server.pem > server-private.pem
And install server-private.pem as your certificate. If you don’t concatenate the private key to the certificate, at least Lighttpd will complain with the following error:
SSL: Private key does not match
the certificate public key, reason: error:0906D06C:PEM
routines:PEM_read_bio:no start line
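To confirm that the key and certificate produced above actually belong together (the mismatch Lighttpd complains about), compare the public key derived from the private key with the one embedded in the certificate; a sketch (the -subj value is only there to avoid interactive prompts):

```shell
#!/bin/sh
# Generate the pair as in the post, then check that the public key inside
# the certificate matches the one derived from the private key.
set -e
workdir=$(mktemp -d)
cd "$workdir"

openssl ecparam -name secp521r1 -genkey \
  -param_enc explicit -out private-key.pem
openssl req -new -x509 -key private-key.pem \
  -out server.pem -days 730 -subj "/CN=example"

# Public key derived from the private key vs. the one in the certificate
pub_from_key=$(openssl ec -in private-key.pem -pubout 2>/dev/null)
pub_from_crt=$(openssl x509 -in server.pem -pubkey -noout)
[ "$pub_from_key" = "$pub_from_crt" ] && echo "key and certificate match"
```

If the two PEM blocks differ, concatenating them into server-private.pem will reproduce exactly the "Private key does not match the certificate public key" error quoted above.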

Thursday, June 1, 2017

How to install Gerrit

1. Install OpenJDK (or Oracle Java) if you do not have it already
apt install openjdk-8-jre-headless
2. Create a directory named review
mkdir -p ~/review
3. Download the gerrit war file; assume the war you are getting is gerrit-2.14.6.war
wget https://www.gerritcodereview.com/download/gerrit-2.14.6.war
4. Run the following command from the directory where the gerrit war file is:
java -jar gerrit-2.14.6.war init -d ~/review

During this process, you can use most of the default values. For authentication, if this is for development, you can use the value DEVELOPMENT_BECOME_ANY_ACCOUNT. Otherwise, choose an appropriate means such as OpenID or LDAP. To trigger Jenkins builds, there is no need to install the events-log plugin unless you would like Jenkins to handle missed events.

5. Set up gerrit services
Execute the following commands to copy gerrit service file to the /lib/systemd/system directory:
sudo cp ~/review/bin/gerrit.service /lib/systemd/system
Edit the /lib/systemd/system/gerrit.service file. Make sure that the StandardInput=socket line is REMOVED:
# StandardInput=socket
Without removing the above line, the service will wait for a socket, which we do not really need.

Make sure WorkingDirectory and Environment are set to the right values for GERRIT_HOME and JAVA_HOME in the [Service] section. For the examples above, the lines should look like the following:
Environment=GERRIT_HOME=/home/ubuntu/review
Environment=JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre
ExecStart=/usr/bin/java -Xmx1024m -jar \
  ${GERRIT_HOME}/bin/gerrit.war daemon -d ${GERRIT_HOME}
6. Start the gerrit services
sudo systemctl start gerrit.service 

To allow remote administration, add the following to the review/etc/gerrit.config file, then restart the service:
[plugins]
        allowRemoteAdmin = true
The events-log.jar download location: https://gerrit-ci.gerritforge.com/job/plugin-events-log-bazel-stable-2.14/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/events-log/events-log.jar

For git-review you may need to do this:
git remote rename origin gerrit
If you would like to use an existing gerrit.config file, save the following content in the review/etc/gerrit.config file, then run the java command.
[gerrit]
 basePath = git
 serverId = edb71efd-32fe-48ae-a18b-b287e9f825a5
 canonicalWebUrl = http://192.168.56.30:9090/
[database]
 type = h2
 database = /home/ubuntu/review/db/ReviewDB
[index]
 type = LUCENE
[auth]
 type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[plugins]
        allowRemoteAdmin = true
[receive]
 enableSignedPush = false
[sendemail]
 smtpServer = localhost
[container]
 user = ubuntu
 javaHome = /usr/lib/jvm/java-8-oracle/jre
[sshd]
 listenAddress = *:29418
[httpd]
 listenUrl = http://*:9090/
[cache]
 directory = cache

How does gerrit trigger work on Jenkins

The gerrit trigger works like this:
  1. It connects to the gerrit server using ssh and uses the gerrit stream-events command
  2. It then watches this stream as the data comes in
  3. It will try to match the events to triggers that have been defined in your projects
Common issues:
  1. Jenkins user has improper ssh credentials
  2. Jenkins user does not have the stream-events rights
  3. The user public key was not added to gerrit
How to check:
  1. Login as jenkins user, assume username is admin
  2. ssh -p 29418 admin@gerrit_server gerrit stream-events
  3. Push a commit to the server and you should see events on your stream
Problems:
  1. ssh connection failed? set up your ssh key pair
  2. No streaming right? Go to the All-Projects->Access and under Global Capabilities add Stream Events to the Non-Interactive Users group
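Once the stream works, each event arrives as one JSON object per line; a sketch that pulls the event type out of a captured line (the event body here is illustrative, not taken from a real server):

```shell
#!/bin/sh
# Extract the "type" field from a gerrit stream-events line.
# A real line would come from:
#   ssh -p 29418 admin@gerrit_server gerrit stream-events
event='{"type":"patchset-created","change":{"project":"demo","branch":"master"}}'
type=$(printf '%s' "$event" | sed -n 's/.*"type":"\([^"]*\)".*/\1/p')
echo "$type"
```

Seeing patchset-created events here for your own pushes is a quick confirmation that Jenkins' gerrit trigger has everything it needs.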

Gerrit access rights

  1. Create the profile through the Gerrit web interface for your Jenkins user, and set up an SSH key for that user.
  2. Gerrit web interface > Admin > Groups > Non-Interactive Users > Add your jenkins user.
  3. Admin > Projects > ... > Access > Edit
    • Reference: refs/*
      • Read: ALLOW for Non-Interactive Users
    • Reference: refs/heads/*
      • Label Code-Review: -1, +1 for Non-Interactive Users
      • Label Verified: -1, +1 for Non-Interactive Users
IMPORTANT: On Gerrit 2.7+, you also need to grant "Stream Events" capability. Without this, the plugin will not work, will try to connect to Gerrit repeatedly, and will eventually cause OutOfMemoryError on Gerrit.
  1. Gerrit web interface > People > Create New Group : "Event Streaming Users". Add your jenkins user.
  2. Admin > Projects > All-Projects > Access > Edit
    • Global Capabilities
      • Stream Events: ALLOW for Event Streaming Users