Monday, November 25, 2019

Change OpenShift SecurityContextConstraints

Use this command to change the SecurityContextConstraints so that Cello can run on OpenShift:

oc edit scc restricted
 
Once in the editor, change the settings as shown below:
 
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups:
- system:authenticated
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: restricted denies access to all host features and requires
      pods to be run with a UID, and SELinux context that are allocated to the namespace.  This
      is the most restrictive SCC and it is used by default for authenticated users.
  creationTimestamp: 2019-11-25T16:01:36Z
  name: restricted
  resourceVersion: "103032"
  selfLink: /apis/security.openshift.io/v1/securitycontextconstraints/restricted
  uid: dbbb5df1-0f9c-11ea-842f-9a1f5595b3c8
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret 
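
If you prefer not to edit interactively, the same kind of change can be applied with oc patch. This is only a sketch; the two fields here are examples, so adjust them to match the values shown above:

  oc patch scc restricted --type=merge \
    -p '{"allowHostPorts":true,"allowPrivilegedContainer":true}'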


Thursday, November 7, 2019

Add new org to a channel

The process of adding a new org to a channel is long, so I will break it into three parts:

1. Add the org to an existing channel's configuration.
2. Join the new org's peer to the channel.
3. Upgrade the chaincode and update the endorsement policy.

Add an org to an existing channel

Prepare the new org

Start from just configtx.yaml and org3-crypto.yaml, run cryptogen and configtxgen to produce crypto materials and org3.json file:

cryptogen generate --config ./org3-crypto.yaml
configtxgen -printOrg Org3MSP > ../channel-artifacts/org3.json

Prepare the existing channel. (Notice I am omitting most of the conversion steps between the common block protobuf and JSON; those steps are absolutely necessary, and a sketch of the decode step is shown right after the fetch command below.)

Retrieve the channel configuration:

     peer channel fetch config config_block.pb
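
The omitted decode step, which produces the allConfig.json used in the next command, would look roughly like this (the file name is just the one used below):

     configtxlator proto_decode --input config_block.pb --type common.Block | jq . > allConfig.json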

Get the channel config element using jq:

     jq .data.data[0].payload.data.config allConfig.json > config.json

Add the new org to the channel configuration (the config.json file):
   
  jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups": {"Org3MSP":.[1]}}}}}' \
     config.json org3.json > modified_config.json

Now that we have both the original and the modified configuration JSON files, we need to compute the difference. To be able to do that, we first have to encode both into protobuf binary format:

  configtxlator proto_encode --input config.json --type common.Config --output config.pb
  configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb


Then we can calculate the update protobuf binary:

  configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb \
    --updated modified_config.pb --output org3_update.pb
 
Again, we need to convert this into JSON format so that we can create the update envelope:
 
  configtxlator proto_decode --input org3_update.pb \
    --type common.ConfigUpdate | jq . > org3_update.json

Now we need to create the update envelope JSON file:
 
  echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":'$(cat org3_update.json)'}}}' | jq . > org3_update_in_envelope.json
 
Now we need to create the protobuf binary.
 
  configtxlator proto_encode --input org3_update_in_envelope.json \
  --type common.Envelope --output org3_update_in_envelope.pb  

We finally have the channel update request content (the protobuf binary file);
we need to sign it and eventually submit it:
 
  peer channel signconfigtx -f org3_update_in_envelope.pb 
 
This basically gathers one signature. We need to collect more and then submit. The good news
is that peer channel update also attaches a signature, so if we use another org's
credentials to submit the channel update, that org's signature will be included as well.
 
  peer channel update -f org3_update_in_envelope.pb -c $CHANNEL_NAME \
    -o orderer.example.com:7050 --tls --cafile $ORDERER_CA 

At this point, the channel officially includes the new organization. But the peers in that
organization are not part of the channel yet; they still need to run the peer channel join
command to join it.

Join the peer in the new org to the channel

Use the peer channel fetch command to retrieve the channel's genesis block (block 0):

  peer channel fetch 0 mychannel.block -o orderer.example.com:7050 \
   -c $CHANNEL_NAME --tls --cafile $ORDERER_CA
 
Now use the peer channel join command to join the peer, but make sure all the environment variables
are set up to use the new org peer's certs, the new org's CA certs, etc.:
 
  peer channel join -b mychannel.block

Chaincode upgrade and endorsement policies

Use the normal chaincode install and chaincode upgrade commands to change the endorsement policy to include the new organization, as sketched below.
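
A rough sketch of those two commands, based on the fabric-samples example chaincode; the chaincode name, path, version, init args and policy are all placeholders to adapt:

  # install the new chaincode version on the peers (including the new org's peer)
  peer chaincode install -n mycc -v 2.0 -p github.com/chaincode/chaincode_example02/go/

  # upgrade with an endorsement policy that now includes Org3MSP
  peer chaincode upgrade -o orderer.example.com:7050 --tls --cafile $ORDERER_CA \
    -C $CHANNEL_NAME -n mycc -v 2.0 -c '{"Args":["init","a","90","b","210"]}' \
    -P "OR ('Org1MSP.peer','Org2MSP.peer','Org3MSP.peer')"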

Monday, November 4, 2019

What is needed to stand up a peer or orderer node in fabric

To stand up a fabric peer or orderer node, the following must be provided:

  • msp directory which contains at least the following:
    1. admincerts: admin certs, the actual admin user, can be shared across all the peers in an org.
    2. cacerts: ca certificate
    3. keystore: the node signing key, that is the private key
    4. signcert: the node x509 certificate, the public key
  • tls directory which contains the following if tls is enabled:
    1. ca.crt
    2. server.crt
    3. server.key

So basically, you will need to enroll 3 times:

1. Enroll an admin user, so that you get the admin cert
2. Enroll the node, so that you get the node's keystore (private key) and signcert (public certificate)
3. Enroll for the node's TLS certs; this is only required if TLS is enabled for the node.

Once these things are available, you can start a node; peer and orderer use the same kind of cert materials. Remember to mount the msp directory to /etc/hyperledger/fabric/msp and the tls directory to /etc/hyperledger/fabric/tls; with the default settings, the peer and orderer will be able to find the right certs to start. A sketch of the mounts is shown below.
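
For example, starting a peer container with those two mounts might look roughly like this (image tag, container name and local directory names are all assumptions):

  docker run -d --name peer0.org1.example.com \
    -v $(pwd)/peer0-msp:/etc/hyperledger/fabric/msp \
    -v $(pwd)/peer0-tls:/etc/hyperledger/fabric/tls \
    hyperledger/fabric-peer:1.4 peer node start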

The above steps apply when you have a CA server available.

If you simply want to bootstrap a network using cryptogen, you basically create a crypto-config.yaml file and then run the cryptogen command to generate the necessary certs.

Make sure that your file is named crypto-config.yaml, then run the following command:

cryptogen generate --output="a directory name" --config=configfile.yaml

Strange behavior of ca enroll command

There is a really strange behavior: enroll always uses the current requesting user name as the common name, so the generated certificate always carries that user name as its common name. The only way I found to overcome this problem was to do the following:

1. enroll a new userid
2. register that user, then simply delete the entire directory which holds the new user's certs
3. then enroll again using the new user's id and password

The right method to overcome the strange behavior

1. make sure that your CA_CFG_PATH=$(pwd)
2. enroll the CA admin user first
3. register the new user
4. then enroll the new user using the new user's id and password (see the sketch below).
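
A minimal sketch of that register-then-enroll flow, reusing the CA host and admin credentials from the fabric-ca example further down this page, plus made-up user names and passwords:

  export FABRIC_CA_CLIENT_HOME=$(pwd)/ca-admin
  fabric-ca-client enroll -u https://tongli:tonglipw@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem
  fabric-ca-client register --id.name user1 --id.secret user1pw --id.type client \
    -u https://u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem
  # enroll as the new user into its own home so its certificate carries user1 as the common name
  FABRIC_CA_CLIENT_HOME=$(pwd)/user1 fabric-ca-client enroll \
    -u https://user1:user1pw@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem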

Sunday, November 3, 2019

configtx.yaml

Profiles defined in the configtx.yaml file are the starting point; they pull in the other sections of configtx.yaml.
There are two different types of profiles: one is used for the genesis block, which bootstraps the orderer nodes, and the other is used for the channel creation transaction. What configtxgen generates depends on which profile is used and which flag is specified on the command line.

For example, the -profile and -outputBlock flags are used together to pick a profile and generate a genesis block,
whereas -profile with -outputCreateChannelTx is used to pick a profile and generate a channel creation tx.


Profiles for the genesis block should always contain the Orderer and Consortiums elements. Profiles for just a channel should contain only the Consortium and Application sections. Notice that for the genesis profile the element is Consortiums (plural), while for a channel it is Consortium, a small but important difference.
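
For example, with the profile names from fabric-samples (substitute your own profile and channel names):

  # genesis block for bootstrapping the orderer (profile containing Orderer and Consortiums)
  configtxgen -profile TwoOrgsOrdererGenesis -channelID sys-channel -outputBlock ./channel-artifacts/genesis.block

  # channel creation tx (profile containing Consortium and Application)
  configtxgen -profile TwoOrgsChannel -channelID mychannel -outputCreateChannelTx ./channel-artifacts/channel.tx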

Hyperledger Fabric ACLs

# This section provides defaults for policies for various resources
# in the system. These "resources" could be functions on system chaincodes
# (e.g., "GetBlockByNumber" on the "qscc" system chaincode) or other resources
# (e.g., who can receive Block events). This section does NOT specify the resource's
# definition or API, but just the ACL policy for it.
#
# Users can override these defaults with their own policy mapping by defining the
# mapping under ACLs in their channel definition
#---Lifecycle System Chaincode (lscc) function to policy mapping for access control---#
# ACL policy for lscc's "getid" function
lscc/ChaincodeExists: /Channel/Application/Readers
# ACL policy for lscc's "getdepspec" function
lscc/GetDeploymentSpec: /Channel/Application/Readers
# ACL policy for lscc's "getccdata" function
lscc/GetChaincodeData: /Channel/Application/Readers
# ACL Policy for lscc's "getchaincodes" function
lscc/GetInstantiatedChaincodes: /Channel/Application/Readers
#---Query System Chaincode (qscc) function to policy mapping for access control---#
# ACL policy for qscc's "GetChainInfo" function
qscc/GetChainInfo: /Channel/Application/Readers
# ACL policy for qscc's "GetBlockByNumber" function
qscc/GetBlockByNumber: /Channel/Application/Readers
# ACL policy for qscc's "GetBlockByHash" function
qscc/GetBlockByHash: /Channel/Application/Readers
# ACL policy for qscc's "GetTransactionByID" function
qscc/GetTransactionByID: /Channel/Application/Readers
# ACL policy for qscc's "GetBlockByTxID" function
qscc/GetBlockByTxID: /Channel/Application/Readers
#---Configuration System Chaincode (cscc) function to policy mapping for access control---#
# ACL policy for cscc's "GetConfigBlock" function
cscc/GetConfigBlock: /Channel/Application/Readers
# ACL policy for cscc's "GetConfigTree" function
cscc/GetConfigTree: /Channel/Application/Readers
# ACL policy for cscc's "SimulateConfigTreeUpdate" function
cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers
#---Miscellaneous peer function to policy mapping for access control---#
# ACL policy for invoking chaincodes on peer
peer/Propose: /Channel/Application/Writers
# ACL policy for chaincode to chaincode invocation
peer/ChaincodeToChaincode: /Channel/Application/Readers
#---Events resource to policy mapping for access control###---#
# ACL policy for sending block events
event/Block: /Channel/Application/Readers
# ACL policy for sending filtered block events
event/FilteredBlock: /Channel/Application/Readers

Fabric channel configuration structure again

Each channel configuration is a nasty nested json file.


The top level has three elements: data, header and metadata.

The header element contains data_hash, number and previous_hash; this is basically the block info: where the block sits (number), the block hash and the previous block's hash.

The metadata element for the system channel is essentially empty. It has a metadata member which in turn is a list.

The data element contains yet another element named data, which contains payload and signature.

What is important is really the payload which contains again data and header just like the very top structure.

In the payload.header element, we can see the channel_header and signature_header.

The data.config element is where everything is defined; channel_group is the starting point of the channel configuration. For the detailed structure from this point on, please refer to this blog.

config.channel_group is the start of the system channel, which is the very first channel created on a given ordering cluster.
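
To poke at this structure yourself, decoding the block and walking it with jq works well; the file names here are just examples:

  configtxlator proto_decode --input config_block.pb --type common.Block | jq . > block.json

  # the two elements under config: channel_group and sequence
  jq '.data.data[0].payload.data.config | keys' block.json

  # the named ConfigGroups under channel_group (e.g. Application, Consortiums, Orderer)
  jq '.data.data[0].payload.data.config.channel_group.groups | keys' block.json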

Hyperledger Fabric Policy

Hyperledger Fabric Policy in the configtx.yaml file uses a shorthand notation, which normally looks like a file path, like this:

Channel/Application/Writers

That notation basically indicates the policy defined in

Channel/Groups/Application/Policies/Writers

Notice that the Groups and Policies elements in the path are omitted.


Notice the 1st mod_policy in the chart is "/Channel/Orderer/Admins", which actually refers to the channel_group/groups/Orderer/policies/Admins element. It refers to that element because it starts with an absolute path. The 2nd mod_policy actually points to the same role, but it uses a relative path.

Though the 2nd and 3rd mod_policy both use only the relative path "Admins", they actually point to different roles in the policies. The 2nd, as indicated above, points to the 1st Admins role, but the 3rd points to the 2nd Admins role. So if an absolute path were used for the 2nd mod_policy, it would have been "/Channel/Orderer/Admins"; likewise, if an absolute path were used for the 3rd mod_policy, it would have been "/Channel/Admins"; notice it has no Orderer in it.

Do not mix up policy and policies. The policy element is always under a defined role such as Admins, Readers or Writers; these roles are always directly under a policies element. A policies element in turn is always under a groups.<Something> element; in our example above, you can see policies elements under groups.Consortiums, groups.Orderer and config.channel_group. The policies element really just defines the roles, while the policy element defines a specific rule. A policy has a type, whereas policies just holds a list of roles. Policy type 1 is SignaturePolicy, type 2 is the msp, and type 3 is ImplicitMetaPolicy.

mod_policy indicates which role can make modifications to the element itself. Policies very much control the fabric configuration, so in nearly all cases this means who can change the channel configuration and who can read it. The channel in this case can be either the system channel or an application channel. Who can create proposals, commit transactions and so on is controlled by the ACLs.

Policy defaults:

The configtxgen tool creates default policies as follows:


/Channel/Readers : ImplicitMetaPolicy for ANY of /Channel/*/Readers
/Channel/Writers : ImplicitMetaPolicy for ANY of /Channel/*/Writers
/Channel/Admins  : ImplicitMetaPolicy for MAJORITY of /Channel/*/Admins

/Channel/Application/Readers : ImplicitMetaPolicy for ANY of /Channel/Application/*/Readers
/Channel/Application/Writers : ImplicitMetaPolicy for ANY of /Channel/Application/*/Writers
/Channel/Application/Admins  : ImplicitMetaPolicy for MAJORITY of /Channel/Application/*/Admins

/Channel/Orderer/Readers : ImplicitMetaPolicy for ANY of /Channel/Orderer/*/Readers
/Channel/Orderer/Writers : ImplicitMetaPolicy for ANY of /Channel/Orderer/*/Writers
/Channel/Orderer/Admins  : ImplicitMetaPolicy for MAJORITY of /Channel/Orderer/*/Admins

# Here * represents either Orderer, or Application, and this is repeated for each org
/Channel/*/Org/Readers : SignaturePolicy for 1 of MSP Principal Org Member
/Channel/*/Org/Writers : SignaturePolicy for 1 of MSP Principal Org Member
/Channel/*/Org/Admins  : SignaturePolicy for 1 of MSP Principal Org Admin



See the following chart for examples of ImplicitMetaPolicy and SignaturePolicy.

The principal_classification can either be ROLE or IDENTITY. However, more commonly the ROLE type is used, as it allows the principal to match many different certs issued by the MSP's CA. The role matches MSPRole defined as MEMBER, ADMIN, CLIENT, PEER.

Member matches any certificate issued by the MSP. Admin matches certificates enumerated as admin in the MSP definition. Client (Peer) matches certificates that carry the client (peer) Organizational Unit.

In the case of IDENTITY, the principal field is set to the bytes of a certificate literal. That is, the principal field should just contain the base64 encoded certificate (the very long string) for that identity.

The sub_policy in the ImplicitMetaPolicy names a policy by its value; in the example above it is Admins, so any policy named Admins at that level or below will be used. If the rule is ALL, then all the policies named Admins at that level or below will be evaluated.
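
To see a concrete example of each policy type, the two jq queries below pull an ImplicitMetaPolicy and a SignaturePolicy out of the block.json decoded earlier (Org1MSP is just an example org name):

  # an ImplicitMetaPolicy (type 3): a rule (ANY/ALL/MAJORITY) plus a sub_policy name
  jq '.data.data[0].payload.data.config.channel_group.groups.Application.policies.Admins.policy' block.json

  # a SignaturePolicy (type 1), defined at the org level
  jq '.data.data[0].payload.data.config.channel_group.groups.Application.groups.Org1MSP.policies.Admins.policy' block.json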

Thursday, October 24, 2019

Dealing with GitHub pull requests

Once you have forked the upstream repo and perhaps opened a pull request, the upstream repo will most likely keep moving forward with new commits. To work on new fixes, it is important to sync your own repo with the upstream. Here is the process to do that.


git fetch upstream
git checkout master
git merge upstream/master
git push origin master

The commands above basically do the following things:

1. fetch the latest from upstream (the upstream remote should be defined in .git/config)
2. switch to your local master branch
3. then merge the upstream and your own master branch
4. eventually push everything in the local master branch to your own remote repo.

Thursday, October 17, 2019

Fabric channel configuration structure and its meanings

The fabric configuration is a hierarchical structure. It is basically a nested ConfigGroup. If you look at the exported genesis block, it starts at data.data[0].payload.data.config. (Parent elements are not shown in the picture.)

The fabric configuration config contains nested ConfigGroups and an element called sequence. The sequence element must increment every time a change is committed to the configuration. The root ConfigGroup is named channel_group, as shown in the picture.

The channel_group element, like any other ConfigGroup, always contains 5 elements: groups, mod_policy, policies, values and version.

The groups element, regardless of where it is, is a map of ConfigGroups; you can think of it as a list of named ConfigGroups. For example, the top level groups element, which belongs to channel_group, basically contains 3 ConfigGroups named Application, Consortiums and Orderer. Each of these 3 things is a ConfigGroup.

The mod_policy is used to govern the required signatures to modify that element. For groups, modification is adding or removing elements (or changing the mod_policy). For values and policies, modification is changing the value and policy fields respectively (or changing the mod_policy). Each element's mod_policy is evaluated in the context of the current level of the config.

The policies element is a map of ConfigPolicy, each of which contains mod_policy, policy and version.

The values element is a map of ConfigValue, which is basically a named key/value pair and is specific to each element.

The version element tracks the number of changes to the config group.

The official fabric doc on the topic can be found here

I feel that policies really just define groups of people; only when policies are combined with ACLs do things start to become a bit clearer. A traditional application ACL only requires one actor to act on a resource, but a blockchain normally requires a group of actors (thus the policies), so a policy has to define both who and how many. That is the difference compared to a traditional ACL.

Wednesday, October 16, 2019

Findings of the fabric cryptogen

Hyperledger fabric cryptogen is a utility to generate a set of certificates and private keys. It takes a yaml file, normally named crypto-config.yaml, and produces a set of files. Here is the structure for an organization.


Each ca should have its cert and private key.

Each org (peer or orderer org) should have its admin user and other users.

Each node (peer or orderer node) should have its signcert and private key (file in keystore directory).

Directories such as admincerts, cacerts and tlscacerts under each peer or orderer node msp directory contain admin cert, ca cert and tls ca certs.

A user or a node (peer or orderer) should also have its tls directory, which is parallel to the msp directory. The tls directory contains ca.crt, server.crt and server.key.


Starting in version 1.4.3, cryptogen no longer places the admin cert in the admincerts folder under any msp directory. Even though the directory still gets created, it remains empty.

Thursday, August 22, 2019

Hyperledger Fabric msp structure

The following is a directory and file structure created by default by the peer.
The focus here is the msp directory. Here are the observations:

1. admincert.pem and peer.pem (the signcerts) are the same file.
2. The config.yaml file contains mostly hardcoded settings.
3. key.pem is the private key for admincert.pem



Here are more detailed structures in terms of the various certificates for an org and the node in the org.

The following chart shows the various certs for an organization. There should be ca, tlsca and msp directories. ca and tlsca should each consist of a cert and its private key. The msp directory should contain the admin cert, ca cert and tls ca cert. The ca cert and tls ca cert under msp should be the same as those in ca and tlsca. All these certs are organized this way just to make sure that the msp directory contains the necessary files which can be distributed. The ca and tlsca directories also contain the private keys, which should not be distributed. Also notice that ca/ca.ordererorg-cert.pem is the same file as msp/cacerts/ca.ordererorg-cert.pem (green boxed), and tlsca/tlsca.ordererorg-cert.pem is the same file as msp/tlscacerts/tlsca.orderorg-cert.pem (red boxed).


The following chart shows the various certs for a node within an organization.

1. There are two top directories, msp and tls.
2. msp contains materials which a node (orderer or peer) msp configuration should point to. This directory also contains the organization's ca certs.
3. signcerts is the signing cert for the node; the keystore directory contains the private key for the signing cert.
4. tls contains the tls cert and key; tls/ca.crt is the same file as the organization's tlsca cert. Notice that the tlsca file is contained in two different directories even though it is the same file (red boxed).

Anything labeled ca will be the same as the organization's certificates.

Wednesday, August 14, 2019

fabric-ca working flow

Once a Fabric CA is set up, with the initial admin and password set to tongli and tonglipw, the admin can then do the following:


1. Enroll a new user:

fabric-ca-client enroll --id.name admin2 -u https://tongli:tonglipw@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem

Notice that the cert pem file at the end has to be the ca certificate

2. Once a user is enrolled, the admin can register the user, which will provide a password for the user:

fabric-ca-client register --id.name admin2  --id.attrs 'hf.Revoker=true,admin=true:ecert'   -u https://tongli:tonglipw@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem

3. You can also add an affiliation by doing the following:

   a) enroll a new user:
fabric-ca-client enroll --id.name admin -u https://tongli:tonglipw@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem

   b) register the new user
fabric-ca-client register --id.name admin  --id.attrs 'hf.Revoker=true,admin=true:ecert' -u https://tongli:tonglipw@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem

   c) now add the new affiliation
fabric-ca-client affiliation add org1 -u https://admin:qxuPwzKYVFAn@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem

   d) nested affiliations just need to use dots, for example:
fabric-ca-client affiliation add org1.department1.department1 -u https://admin:qxuPwzKYVFAn@u1804:7054 --tls.certfiles $(pwd)/cakeys/ca.org1-cert.pem


One can enroll many ids; the difference between user, peer and orderer is how these ids get registered. When registering the id, you will need to specify a type, for example:

export FABRIC_CA_CLIENT_HOME=$HOME/fabric-ca/clients/admin
fabric-ca-client register --id.name client1 --id.type client \
  --id.affiliation bu1.department1.Team1 
 
The created user signcert should be named <id>@<org name>-cert.pem.
Otherwise, the Go SDK cannot find the certificate and access will fail.

Thursday, August 1, 2019

Customize VSCode colors

Click on settings (the wheel icon at the bottom left), which will open up the Settings on the right hand side, then select Workbench->Appearance, find the link that says Edit in settings.json, paste the following, then save.


   "workbench.colorCustomizations": {
        "sideBar.background": "#424d66",
        "sideBar.foreground": "#ffffff",
        "sideBar.dropBackground": "#c0aeae",
        "list.hoverForeground": "#ffffff",
        "list.hoverBackground": "#2825df",
        "gitDecoration.modifiedResourceForeground": "#ffffff",
        "gitDecoration.untrackedResourceForeground": "#ffffff",
        "gitDecoration.addedResourceForeground": "#ffffff",
        "list.errorForeground": "#ffffff",
        "list.inactiveSelectionBackground": "#ff2200",
        "list.inactiveSelectionForeground": "#ffffff",
        "list.activeSelectionForeground": "#ffffff",
        "list.activeSelectionBackground": "#ff2200"
    }


If you would like to change the color scheme globally, then you will need to do the following:
 
Click on settings, select User -> Workbench -> Appearance, then find
Color Customizations section on the right, click on Edit in settings.json, then add the
same section above in the file. 

Tuesday, July 30, 2019

What are Digital Signatures

The actual signing probably depends on what kind of certificate it is. This is a useful read.
A digital certificate consists of three things:
  • A public key.
  • Certificate information. ("Identity" information about the user, such as name, user ID, and so on.)
  • One or more digital signatures.
Typically the "one of more digital signatures" part is done by listing an set of encrypted hashes of the certificate. So when you want to sign a certificate, you would compute the hash of the certificate, encrypt it using your private signing key, and add it to the list of digital signatures.
 So in a sense, that the certificate is a production of a private key applied to a bunch of information. So whoever receive that certificate will be able to see that person's public key, identity information, then will be able to use the public key to digest the digital signature to make sure that the hash come out of the decrypted digital signature match the certificate part of identity part of the information. so that you know this is real.
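
The same mechanics can be tried by hand with openssl, signing a file's hash with a private key and verifying it with the matching public key (file names are placeholders):

  # sign: hash the data and encrypt the hash with the private key
  openssl dgst -sha256 -sign private.key -out data.sig data.txt

  # verify: recompute the hash and check it against the decrypted signature
  openssl dgst -sha256 -verify public.pem -signature data.sig data.txt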

Thursday, July 18, 2019

How to create cds file

Assume that your chaincode is in a directory named node, and use the following command to generate the cds file.

1. cd to the parent directory of the node directory
2. Run the following command:

docker run --rm --name peer --entrypoint peer -v $(pwd):/mychaincode hyperledger/fabric-peer chaincode package -n mycc -p /mychaincode/node -l node -v 1.0.0 /mychaincode/test.cds

This creates a chaincode package file named test.cds in the current directory, with chaincode name mycc and version 1.0.0.

Friday, July 12, 2019

Setup dev environment for k8s operator

This is the process to set up a k8s operator dev environment:

1. Install docker, kubectl, golang and mercurial etc. using the method from each product
2. Setup git config if you have not:
    git config --global user.email "your email address here"
    git config --global user.name "Tong Li"  
3. Add the following to the end of your .profile:

    export PATH=$PATH:/usr/local/go/bin
    export GOROOT=/usr/local/go
    export GOBIN=$GOROOT/bin
    export GOPATH=~/hl
    export GO111MODULE=on
    The above assumes that your golang is installed into the /usr/local/go directory and you have
    a directory in your home named hl

4. To be able to create an operator, you will need to first create a directory under $GOPATH, then run the operator creation command:

   mkdir $GOPATH/myproject
   cd $GOPATH/myproject
   operator-sdk new myfirst

Thursday, June 20, 2019

Commands to check if the cert and private key match

openssl pkey -in privateKey.key -pubout -outform pem | sha256sum
openssl x509 -in certificate.crt -pubkey -noout -outform pem | sha256sum

Friday, June 7, 2019

System performance measure tool

vmstat -w 3 20

This will sample the system every 3 seconds and show 20 data points

Here is what each column means:

Procs
    r: The number of processes waiting for run time.
    b: The number of processes in uninterruptible sleep.
Memory
    swpd: the amount of virtual memory used.
    free: the amount of idle memory.
    buff: the amount of memory used as buffers.
    cache: the amount of memory used as cache.
    inact: the amount of inactive memory. (-a option)
    active: the amount of active memory. (-a option)
Swap
    si: Amount of memory swapped in from disk (/s).
    so: Amount of memory swapped to disk (/s).
IO
    bi: Blocks received from a block device (blocks/s).
    bo: Blocks sent to a block device (blocks/s).
System
    in: The number of interrupts per second, including the clock.
    cs: The number of context switches per second.
CPU
    These are percentages of total CPU time.
    us: Time spent running non-kernel code. (user time, including nice time)
    sy: Time spent running kernel code. (system time)
    id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
    wa: Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
    st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
 
We can use 
 
fdisk -l
 
to list all the disks in the system
 
Then use the following command to see block size
 
dumpe2fs /dev/sda1 | fgrep -e 'Block size' 

Sunday, April 14, 2019

The steps to create ingress in k8s

1. Create a namespace and service account
2. Create the nginx server secret, basically a pair of crt and key files, using the following command:
   
openssl req -newkey rsa:2048 -nodes -keyout nginx.key -x509 -days 365 -out nginx.crt
 
3. Create k8s configmap for customizing nginx which can include sub_filter directives etc.
4. k8s rbac to allow the service account to do things
5. Deploy ingress controller using either daemon set or deployment
6. Use either NodePort service or LoadBalancer to allow access to the daemon set.

The above steps are really just there to make sure that access to the services goes through the nginx ingress controller.

The next few steps are to deploy the actual application.

1. Deploy your actual application using pods, a ReplicaSet, or whatever you prefer.
2. Create an Ingress resource which maps a path to each app. This resource also carries the TLS and basic authentication settings, like the following:


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test 
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: TheNameOfK8sSecret
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
  
spec:
  tls:
    - hosts:
      - foo.bar.com
      # This assumes tls-secret exists and the SSL 
      # certificate contains a CN for foo.bar.com
      secretName: tls-secret
  rules:
    - host: foo.bar.com
      http:
        paths:
        - path: /
          backend:
            # This assumes http-svc exists and routes to healthy endpoints
            serviceName: http-svc
            servicePort: 80
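
The two secrets referenced above have to exist before the Ingress works; a sketch of creating them, with the secret names matching the manifest above and an arbitrary user name:

  # basic auth secret (the nginx ingress controller expects the file to be named "auth")
  htpasswd -c auth foo
  kubectl create secret generic TheNameOfK8sSecret --from-file=auth

  # tls secret, reusing the nginx.crt/nginx.key generated earlier
  kubectl create secret tls tls-secret --cert=nginx.crt --key=nginx.key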

Wednesday, February 20, 2019

What information is contained in a certificate?

A certificate is normally issued to an individual or a company by a CA. In cryptography, a public key certificate, also known as a digital certificate or identity certificate, is an electronic document used to prove the ownership of a public key. It contains the following information.

openssl x509 -in tlsca.org2msp-cert.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            df:c6:71:a4:bb:41:1f:73:83:ed:d5:95:93:24:2f:f6
    Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=US, ST=California, L=San Francisco, O=org2msp, CN=tlsca.org2msp
        Validity
            Not Before: Feb 20 17:20:00 2019 GMT
            Not After : Feb 17 17:20:00 2029 GMT
        Subject: C=US, ST=California, L=San Francisco, O=org2msp, CN=tlsca.org2msp
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:02:ea:14:c2:52:0d:02:10:02:c1:6e:41:8e:b7:
                    33:0e:73:4b:1f:9d:8a:b3:d0:90:41:2d:4f:49:4f:
                    ee:cf:20:05:d4:e6:26:99:d4:d4:90:1c:71:02:bc:
                    1f:30:15:b1:b2:d4:b2:49:d5:9f:7b:f8:20:15:e6:
                    cc:ae:75:05:12
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign, CRL Sign
            X509v3 Extended Key Usage:
                TLS Web Client Authentication, TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                1A:23:57:FF:C1:BC:12:26:EA:94:44:2A:35:E6:A6:AA:9A:58:26:B1:03:52:04:44:10:DA:54:AA:08:2D:D5:5D
    Signature Algorithm: ecdsa-with-SHA256
         30:44:02:20:68:f1:1c:b3:25:ac:a8:99:31:f1:a9:c5:ce:51:
         c6:cc:90:2f:06:1e:d0:8c:51:e3:1c:f6:30:3d:dd:59:49:8e:
         02:20:1b:88:49:b2:ce:c8:1e:30:52:d1:25:a7:7a:47:ff:a4:
         03:1b:8d:e5:48:4e:6a:e9:2d:eb:07:36:d3:b5:c0:d4


Thursday, February 14, 2019

Install perf on fabric container

1. apt install linux-tools-generic
2. apt install linux-tools-4.4.0-141-generic

dstat
apt install dstat

dstat -cd --disk-util --disk-tps

apt install atop ioping

iotop
lsblk

ioping /dev/xvdc

Sunday, February 10, 2019

Resize VirtualBox Hard disk

After your virtual machine runs for a while, you may find that the originally allocated virtual hard disk runs out of space. You may not always want to recreate the VM, since you may have things in the VM that you do not want to destroy. Here is the process to resize the hard disk without destroying what is already in the VM.

1. Use VBoxManage modifyhd command:

   VBoxManage modifyhd NGINX.vdi --resize 30000
 
  The parameter for --resize is in MB. 30000 is 30GB. 40000 is 40GB.
2. If your VM has snapshots, you will have to do the exact same command for each snapshot vdi file. Without doing this, you will not be able to do the next step.

3. Mount the gparted ISO onto your VM and then boot up the VM from it.
4. Use gparted to resize your disk, then reboot your VM. At this point your VM will have the resized disk.