Thursday, December 22, 2022

How to practice a musical instrument

The following process uses freely available tools to prepare a piece of music for practice.


1. Generate music XML file.

Use MuseScore to create a piece of music and save it as a MusicXML file (either compressed or uncompressed).

2. Create a free Soundslice account and upload the generated MusicXML file. Doing this step from a computer is much easier.

3. Soundslice is a web-based application; there is no native app for mobile devices such as Android or iOS tablets and phones. Go to www.soundslice.com in a browser, then use the share button to add a bookmark to your home screen; it will work much like a native app.

4. Pick any uploaded piece (slice) to play. You can even mute the sound, so that the app simply scrolls through your music while you follow along to practice.


Note:

One can also produce a MusicXML file from an image of sheet music by doing the following:

1. Screen-capture an image of the sheet music.

2. Convert that image into a PDF file using an image viewer.

3. Use MuseScore's PDF-to-MusicXML import tool to convert it to a MuseScore file.

4. Edit/correct any conversion issues, then export to a MusicXML file.

Friday, December 2, 2022

Machine learning terms

 Graph:

Graphs are data structures that describe relationships and interactions between entities in complex systems. In general, a graph contains a collection of entities called nodes and a collection of interactions between pairs of nodes called edges.


Shape:

The number of elements in each dimension of a tensor. The shape is represented as a list of integers. For example, the following two-dimensional tensor has a shape of [3,4]:

[[5, 7, 6, 4],
 [2, 9, 4, 8],
 [3, 6, 5, 1]]

TensorFlow uses row-major (C-style) format to represent the order of dimensions, which is why the shape in TensorFlow is [3,4] rather than [4,3]. In other words, in a two-dimensional TensorFlow Tensor, the shape is [number of rows, number of columns].

Gradient:

The vector of partial derivatives with respect to all of the independent variables. In machine learning, the gradient is the vector of partial derivatives of the model function. The gradient points in the direction of steepest ascent.


Tuesday, November 22, 2022

Allow ssh to access a remote linux system

On the machine from which you want to access the remote system, generate an SSH key pair, normally the files id_rsa and id_rsa.pub. Then append the contents of the id_rsa.pub file to the remote system's ~/.ssh/authorized_keys file; after that you should be able to access the system.
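The steps above can be sketched as follows. To keep the sketch self-contained, the "remote" authorized_keys here is a local demo file; on a real remote host you would append to ~/.ssh/authorized_keys there (ssh-copy-id automates exactly that).

```shell
# Generate a key pair and install the public half. The temp directory
# stands in for the real remote ~/.ssh/authorized_keys.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$tmp/id_ed25519"
cat "$tmp/id_ed25519.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"
grep -c ssh-ed25519 "$tmp/authorized_keys"
```

On the real remote host, also make sure ~/.ssh is mode 700 and authorized_keys is mode 600, otherwise sshd may silently ignore the file.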

Wednesday, November 16, 2022

Istio ambient mesh ztunnel implementations

 

# The results come from ztunnel pod
# iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A PREROUTING -j LOG --log-prefix "nat pre [ztunnel-ntkqj] "
-A INPUT -j LOG --log-prefix "nat inp [ztunnel-ntkqj] "
-A OUTPUT -j LOG --log-prefix "nat out [ztunnel-ntkqj] "
-A OUTPUT -p tcp -m tcp --dport 15088 -j REDIRECT --to-ports 15008
-A POSTROUTING -j LOG --log-prefix "nat post [ztunnel-ntkqj] "
# iptables -S -t mangle
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A PREROUTING -j LOG --log-prefix "mangle pre [ztunnel-ntkqj] "
-A PREROUTING -i pistioin -p tcp -m tcp --dport 15008 -j TPROXY --on-port 15008 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
-A PREROUTING -i pistioout -p tcp -j TPROXY --on-port 15001 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
-A PREROUTING -i pistioin -p tcp -j TPROXY --on-port 15006 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
-A PREROUTING ! -d 10.30.0.5/32 -i eth0 -p tcp -j MARK --set-xmark 0x4d3/0xfff
-A INPUT -j LOG --log-prefix "mangle inp [ztunnel-ntkqj] "
-A FORWARD -j LOG --log-prefix "mangle fw [ztunnel-ntkqj] "
-A OUTPUT -j LOG --log-prefix "mangle out [ztunnel-ntkqj] "
-A POSTROUTING -j LOG --log-prefix "mangle post [ztunnel-ntkqj] "



===
-A PREROUTING -i pistioin -p tcp -m tcp --dport 15008 -j TPROXY --on-port 15008 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
Note: take every tcp packet targeting port 15008, deliver it to 127.0.0.1:15008, and mark the packet with 0x400/0xfff.
Port 15008 is the Istio HBONE mTLS tunnel port.

-A PREROUTING -i pistioout -p tcp -j TPROXY --on-port 15001 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
Note: take every tcp packet arriving on pistioout, deliver it to 127.0.0.1:15001, and mark the packet with 0x400/0xfff.
Port 15001 is the Envoy outbound port.

-A PREROUTING -i pistioin -p tcp -j TPROXY --on-port 15006 --on-ip 127.0.0.1 --tproxy-mark 0x400/0xfff
Note: take every remaining tcp packet arriving on pistioin, deliver it to 127.0.0.1:15006, and mark the packet with 0x400/0xfff.
Port 15006 is the Envoy inbound port.

Monday, November 14, 2022

FIB vs RIB

 

The forwarding information base (FIB) is the actual information that a routing/switching device uses to choose the interface a given packet will use for egress. For example, the FIB might be programmed such that a packet bound for a destination in 192.168.1.0/24 should be sent out of physical port ethernet1/2. There may actually be multiple FIBs on a device for unicast forwarding vs. multicast RPF checking, or for different protocols (IP vs. MPLS vs. IPv6), but the basic function is the same: selection criteria (usually the destination) mapping to an output interface/encapsulation. Individual FIBs may also be partitioned to achieve concurrent independent forwarding tables (i.e. VRFs).

Each FIB is programmed by one or more routing information bases (RIBs). A RIB is a collection of routing information learned via static definition or a dynamic routing protocol. The algorithms used within various RIBs vary; for example, the means by which BGP and OSPF determine potential best paths differ quite a bit. The means by which multiple RIBs are programmed into a common set of FIBs in a box varies by implementation, but this is where concepts like administrative distance are used (e.g., when identical paths are learned via eBGP and OSPF, the eBGP route is usually preferred for FIB injection). Again, RIBs may also be partitioned to allow for multiple VRFs, etc.

Saturday, November 12, 2022

Allow two docker networks to communicate with each other

In some cases, it is useful to have containers running on two different docker bridge networks communicate with each other. The easiest approach is to remove the docker-created isolation rules, so that containers on different bridged docker networks won't have their packets dropped by iptables rules. Another way is to add FORWARD rules so that their packets will be accepted. Here is an example: assume there are two bridged docker networks, b1 172.19.0.0/16 and b2 172.20.0.0/16. By default, the containers running on these two separate networks are isolated (on purpose). With the following two iptables rules, the containers can communicate with each other.


iptables -I FORWARD -s 172.19.0.0/16 -d 172.20.0.0/16 -j ACCEPT

iptables -I FORWARD -s 172.20.0.0/16 -d 172.19.0.0/16 -j ACCEPT

How to solve the issue that VirtualBox cloned VMs have same dhcp IP address

In some cases, you may want to clone a VirtualBox VM for testing purposes. But after the new VM is created, you may find that the IP address assigned (via the host-only network) to the cloned VM is the same as the original VM's, which of course causes issues. One way to resolve this is to use a static IP; but since you are often using DHCP, in that case assign a different MAC address to the cloned VM, then run the following commands:

sudo dhclient -r enp0s3

sudo dhclient enp0s3

Notice that enp0s3 is the network interface name of your VM; it may be something else, such as eth0 or eth1.

Friday, November 4, 2022

Rust ownership, the shining new thing?

 

Ownership Rules

First, let’s take a look at the ownership rules. Keep these rules in mind as we work through the examples that illustrate them:

  • Each value in Rust has an owner.
  • There can only be one owner at a time.
  • When the owner goes out of scope, the value will be dropped.

Monday, September 26, 2022

How to check in linux which ports are used?

Use one of the following commands to check:

sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo ss -tulpn | grep LISTEN
sudo lsof -i:22 ## see a specific port such as 22 ##
sudo nmap -sTU -O IP-address-Here

Thursday, September 22, 2022

headless in IT context

The word "headless" is often used with many other IT terms, such as service, system, or device. It can be a bit confusing what it really means; if the industry had used "faceless" instead, it might have been easier to understand. Basically, when "headless" qualifies another IT term, it means there is no user interface associated with it. For example, a headless service is more like an API service: a REST service, a backend service, etc. It runs somewhere you do not actually see, unless some user interface, such as client software, requests access to it. Similarly, a headless system is something like a Linux system running without even a command-line user interface. Other examples are headless IoT devices: USB memory sticks, drones, and wirelessly connected headless devices in general.


Friday, August 26, 2022

Istio agent iptable command

When an Istio workload starts up, the first thing to run is the proxy-init container, which is the init container for the workload. This container uses the istio/proxyv2 image, which executes the iptables command of the istio-agent.


What the iptables command actually does is the following, as captured in its log. For example, here is the helloworld workload with Istio sidecar injection. Both the app port and the service port are 5000. The init container is named istio-init.


Create new Istio chains

-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT

Add routing rules.
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT

Monday, July 25, 2022

Istio useful commands

These commands are useful when running a multi-primary, multi-network setup.

1. istioctl proxy-config endpoints

istioctl pc endpoints --context kind-cluster1  helloworld-v1-7b8f6db47c-rw88t.sample


2. istioctl dashboard envoy

 istioctl dashboard envoy helloworld-v2-77f7f4cc45-mrgwv.sample --address=192.168.56.32 --browser=false


3. istioctl dashboard controlz (only against istiod instance)

istioctl dashboard controlz  istiod-7849bf7c66-6prbj.istio-system --address=192.168.56.32 --browser=false


Force ubuntu system to sync date and time for a system

just run the following commands to force the system to sync the date and time. 

sudo timedatectl set-ntp true
sudo timedatectl set-ntp false
sudo timedatectl set-ntp true
After running these three commands, the system will sync to the correct date and time.


Ubuntu VirtualBox virtual machines can have their system date and time go out of sync when the host machine goes to sleep; once the host wakes up, the virtual machine's time can be off by quite a bit. Run the above commands to bring the system date and time back to the present.

Friday, July 8, 2022

downstream and upstream in computer networking

In the server-client paradigm, the terms "upstream" and "downstream" are used only with respect to the server. So basically, "upstream" means data coming to the server (or me), and "downstream" means data leaving the server (or me).

In a network, the terms upstream and downstream are used with respect to the position of the devices.

For instance, consider the flow of packets between three kinds of switches: access, distribution, and core. The flow of packets from the access switches to the distribution and core switches is upstream, whereas from the core switches to the access switches it is downstream. With respect to the distribution switches, the flow of packets to the core switches is upstream traffic while the flow to the access switches is downstream traffic.

So, to make this a bit easier: for a given device (app or program), if data is coming to the device, it is upstream; if data is leaving the device, it is downstream.

 

Thursday, May 19, 2022

How to run Istiod in debug mode outside of k8s

Debugging a Kubernetes controller by repeatedly adding fmt.Println statements to your code is painful. Using VS Code to run the controller's main.go in debug mode outside of the Kubernetes cluster is a workable solution.

This article will talk about how to do this using istiod as an example.

1. Load your Istio project into VS code.

2. Setup a debug profile (configuration) as follows:

{
    "name": "Controller",
    "type": "go",
    "request": "launch",
    "mode": "debug",
    "env": {
        "REVISION": "default",
        "JWT_POLICY": "third-party-jwt",
        "PILOT_CERT_PROVIDER": "istiod",
        "POD_NAME": "tongli",
        "POD_NAMESPACE": "istio-system",
        "SERVICE_ACCOUNT": "",
        "KUBECONFIG": "/home/ubuntu/.kube/config",
        "ENABLE_LEGACY_FSGROUP_INJECTION": "false",
        "ROOT_CA_DIR": "/tmp/work",
        "PILOT_TRACE_SAMPLING": "1",
        "PILOT_ENABLE_PROTOCOL_SNIFFING_FOR_OUTBOUND": "true",
        "PILOT_ENABLE_PROTOCOL_SNIFFING_FOR_INBOUND": "true",
        "ISTIOD_ADDR": "istiod.istio-system.svc:15012",
        "PILOT_ENABLE_ANALYSIS": "false",
        "CLUSTER_ID": "tongli"
    },
    "args": [
        "discovery",
        "--monitoringAddr=:15014",
        "--log_output_level=default:info",
        "--domain=cluster.local",
        "--keepaliveMaxServerConnectionAge=30m"
    ],
    "program": "${workspaceFolder}/pilot/cmd/pilot-discovery/main.go"
},

3. Now create a kubernetes cluster and make sure that the kube config file is in the right place, corresponding to the environment variables above.

4. For istiod to work, you will also need to set variables like ROOT_CA_DIR to a directory that VS Code has access to.

5. Now create a namespace called istio-system in the kubernetes cluster, which istiod needs in order to start up.

6. Set a few breakpoints in code paths your controller will run, then start debugging by choosing the Controller profile in VS Code. If you now send some requests to your controller, it should stop at one of the breakpoints.


In the multi-cluster deployment case, CA certs should be created beforehand so that istiod won't create its own secrets. The best approach is probably to create the istio-ca-secret in the istio-system namespace (or whatever namespace istiod runs in) before starting the debugging process.

Tuesday, May 17, 2022

Istio virtual service and destination rules

The Istio Gateway service is a load balancer that enables HTTP(S) traffic to your cluster. It sits at the entry of the service mesh and listens for external connections, allowing external traffic into the mesh.

It will have details like 

i) Hostnames that will route to services. 

ii) Serve certificates for those hostnames. 

iii) Port details.

The call from the gateway load balancer will be intercepted by the Istio "Gateway" pod, which is an Envoy proxy (yes, you are reading that correctly; Envoy acts as a sidecar as well as the gateway pod). The call from the gateway will then be redirected to the destination service based on the "VirtualService" routing configuration.

VirtualServices configure how traffic flows to a service by defining a set of traffic rules for routing Envoy traffic inside the service mesh. The traffic rules define the criteria to match before the rules are applied to a call. That is, an Istio virtual service might map to multiple Kubernetes services based on various criteria.

DestinationRules come into play when routing to your application's service subsets, v1 and v2. That is, a destination rule starts its work after the request has already been routed to your service. Say you are introducing a new version of a service, or patch fixes to production: it is often desirable to move a controlled percentage of user traffic to the newer version while phasing out the older one (a canary deployment). A destination rule also covers the basic configuration for load balancing, connection pools, circuit breakers, etc.

ServiceEntry: if you want to call external services outside your service mesh, you have to create a service entry configuration for externally running business components/downstreams.

 

Istio proxy runs as init container vs sidecar


Istio proxy parameters for running proxy as sidecar:

- proxy
- sidecar
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --log_output_level=default:info
- --concurrency
- "2"

Other differences:

allowPrivilegeEscalation: false
capabilities:
  drop:
  - ALL
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 1337
runAsNonRoot: true
runAsUser: 1337

Istio proxy parameters for running proxy as init container:

- istio-iptables
- -p
- "15001"
- -z
- "15006"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- '*'
- -d
- 15090,15021,15020

Other differences:

allowPrivilegeEscalation: false
capabilities:
  add:
  - NET_ADMIN
  - NET_RAW
  drop:
  - ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0

Wednesday, April 20, 2022

VSCode search exclude using multiple patterns

To exclude files when doing a search in VSCode, one can specify multiple glob patterns, separated by commas, like this:


vendor/**,**/*_test.go


The above will exclude all files under the vendor folder and all files whose names end with _test.go.

Saturday, April 16, 2022

How to use computer to control your devices

To use a computer to control (power on/off) many devices, one may use a system like this.


1. Connect a multi-channel relay module to a computer, possibly via an RS485 interface. Something like this:

Modbus RTU 8-Channels Relay Module with RS485 Bus


2. Then use low-voltage wire to connect each device to the multi-channel relay via a solid-state relay. Something like this:



3. Then create a program to manage the multi-channel relay to control each device's power.

4. It may be necessary to have a converter for RS485 so that the multi-channel relay can be connected to a computer (PC).

Wednesday, April 13, 2022

Istio secrets

Istiod uses the serviceAccount `istiod` and serviceAccountName `istiod` to gain access to the k8s API server. Given how service accounts work with a pod, this basically mounts the token in this directory:

/var/run/secrets/kubernetes.io/serviceaccount

This directory contains the root CA, the namespace, and a JWT token.

In the remote cluster case, istiod has to be configured to use the istio-kubeconfig secret to gain access to the remote cluster. The secret will be mounted into the istiod pod at the following location:

/var/run/secrets/remote

The file is normally named config and contains the content of a kubeconfig file.




Tuesday, March 29, 2022

Analyzers executed by Istio by default when analysis is enabled:

annotations.K8sAnalyzer,
auth.AuthorizationPoliciesAnalyzer,
deployment.MultiServiceAnalyzer,
applicationUID.Analyzer,
deprecation.DeprecationAnalyzer,
gateway.IngressGatewayPortAnalyzer,
gateway.CertificateAnalyzer,
gateway.SecretAnalyzer,
gateway.ConflictingGatewayAnalyzer,
injection.Analyzer,
injection.ImageAnalyzer,
injection.ImageAutoAnalyzer,
meshnetworks.MeshNetworksAnalyzer,
service.PortNameAnalyzer,
sidecar.DefaultSelectorAnalyzer,
sidecar.SelectorAnalyzer,
virtualservice.ConflictingMeshGatewayHostsAnalyzer,
virtualservice.DestinationHostAnalyzer,
virtualservice.DestinationRuleAnalyzer,
virtualservice.GatewayAnalyzer,
virtualservice.JWTClaimRouteAnalyzer,
virtualservice.RegexAnalyzer,
destinationrule.CaCertificateAnalyzer,
serviceentry.Analyzer,
webhook.Analyzer,
schema.ValidationAnalyzer.WasmPlugin,
schema.ValidationAnalyzer.MeshConfig,
schema.ValidationAnalyzer.MeshNetworks,
schema.ValidationAnalyzer.DestinationRule,
schema.ValidationAnalyzer.EnvoyFilter,
schema.ValidationAnalyzer.Gateway,
schema.ValidationAnalyzer.ServiceEntry,
schema.ValidationAnalyzer.Sidecar,
schema.ValidationAnalyzer.VirtualService,
schema.ValidationAnalyzer.WorkloadEntry,
schema.ValidationAnalyzer.WorkloadGroup,
schema.ValidationAnalyzer.ProxyConfig,
schema.ValidationAnalyzer.AuthorizationPolicy,
schema.ValidationAnalyzer.PeerAuthentication,
schema.ValidationAnalyzer.RequestAuthentication,
schema.ValidationAnalyzer.Telemetry

Wednesday, March 16, 2022

Develop Istio and its operator locally

Developing Istio locally on your machine can be problematic, since the istiod deployment requires the Istio pilot image to be available from a container image repository. This is difficult because, as a developer, you are making changes to an image that is not in any repo yet. The following process lets a developer work locally without using any container repo.


1. Build istio using your own version, for example,

  export VERSION=1.20-dev
  export TAG=$VERSION
  export HUB=istio
  export DEBUG=1   # optional

  make istioctl docker.pilot docker.proxyv2 docker.operator

The Istio source directory has a file named Makefile.core.mk, which defines an environment variable named VERSION; its value is something like 1.13-dev or 1.14-dev, depending on which branch you have. You can use the above example to set the Istio version to something you like.

Alternatively, you can add these export statements to your shell profile.

2. Once your istiod, proxyv2, operator images are built, you can now run this script to upload these images to your kind cluster.

3. Then use the newly built istioctl CLI to deploy Istio onto your cluster. This way, your kind cluster runs your local images, and you can look at the logs from istiod or proxyv2 to find issues or test features.

Monday, March 14, 2022

More git things

1. Get all the tags (releases most likely) 

     git fetch --all --tags 

2. Then checkout a specific tag

     git checkout tags/1.12.5 -b my1.12.5 

 3. Check that you are indeed on the new branch

     git branch
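The three steps above can be exercised end to end in a throwaway repo (the tag name 1.12.5 here is just the example from step 2):

```shell
# Create a throwaway repo with one tag, then check the tag out onto a
# new branch, mirroring steps 1-3 above.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "release"
git tag 1.12.5
git fetch -q --all --tags        # no-op here; fetches tags from remotes
git checkout -q tags/1.12.5 -b my1.12.5
git branch --show-current        # prints: my1.12.5
```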

 

============

To sync with upstream master branch

1. Fetch upstream branch, for example

    git fetch upstream master

2. Switch to the local branch that you want to sync, then rebase it onto the upstream branch

    git rebase upstream/master

 

3. Doing the above may produce conflicts; you will need to resolve them, then run the following command:

   git rebase --continue

 

4. Most likely you will then need to force-push to get your branch into your own remote repo:

   git push -f origin master

============

To get someone else's pull request for build or test purposes, assume that your local repo is a clone of your own fork, and upstream is the upstream repo. Then do the following:

1.  git fetch upstream pull/$ID/head:$BRANCHNAME

2. git checkout $BRANCHNAME

Where $ID is the pull request id, normally found at the very end of the PR URL, and $BRANCHNAME is just a name. Once the fetch command succeeds, you can use git checkout to switch to that branch and do whatever you need to do.
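The mechanism can be demonstrated entirely locally with two repos, because refs under refs/pull/ are exactly what GitHub exposes for each PR (the PR id 1 below is hypothetical):

```shell
# Build a fake "upstream" that serves refs/pull/1/head, then fetch that
# ref into a local branch and switch to it.
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "pr change"
git update-ref refs/pull/1/head HEAD   # what GitHub does for PR #1
cd .. && git init -q local && cd local
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "base"
git remote add upstream "$tmp/upstream"
git fetch -q upstream pull/1/head:pr-1
git checkout -q pr-1
git branch --show-current        # prints: pr-1
```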

Wednesday, March 9, 2022

Analyzing Istio Performance

Based on the instructions from this link, one can find some performance information. The following two things help when running the tool on a server that does not have a browser.

1. Specify an IP address that can be reached from outside the machine. For example, the original command looks like this:

go tool pprof -http=:8888 localhost:8080/debug/pprof/heap

Using a specific IP address allows access from outside the machine running the tool:

go tool pprof -http=192.168.56.32:8888 localhost:8080/debug/pprof/heap

2. When running in a server environment, the -no_browser option avoids warning messages from the process:

go tool pprof -no_browser -http=192.168.56.32:8888 localhost:8080/debug/pprof/heap

Monday, February 21, 2022

Run Istio Integration test in debug mode in VSCode

Assume that you've set up your Istio project in VS Code and have run go mod vendor. You can then do the following to debug step-by-step in VS Code.


1. Create a kind k8s cluster

2. Create an Istio integration k8s cluster topology file named single.json, like this.

[
  {
    "kind": "Kubernetes",
    "clusterName": "istio-testing",
    "network": "istio-testing",
    "meta": {
      "kubeconfig": "/home/ubuntu/.kube/config"
    }
  }
]

Notice the kubeconfig field; its value should be the path to your kube config file.

3. Now in VSCode, make sure that you have the following in your settings.

    "go.buildTags": "integ",
    "go.testFlags": ["-args", "--istio.test.kube.topology=/home/ubuntu/test/single.json", "--istio.test.skipVM"],

Now, if you navigate to an integration test Go file in VS Code, you should be able to click on the codelens `debug test` to start debugging your code.

=================================================

For multiple cluster integration tests, the following few items will need to be taken care of:

1. Create multiple clusters using this script with a topology json file 

2. To use the code that you as a developer just built (such as istioctl, or docker images such as pilot and proxyv2), make sure these images are preloaded into the clusters created in step #1 above. If you use the scripts described in that step, the newly built images should be loaded onto the clusters automatically.

3. In each test setup (most likely in the TestMain method), you will need to set the tag and hub so that the process uses these images correctly. Otherwise, the process will use the public images, not the ones you just built.

   To do this, you most likely will need to do the following:

   a. Create a new method like this

func enableMCSServiceDiscovery(t resource.Context, cfg *istio.Config) {
	cfg.Values["global.tag"] = "1.15-dev"
	cfg.Values["global.imagePullPolicy"] = "IfNotPresent"
	cfg.Values["global.hub"] = "istio"
	cfg.ControlPlaneValues = fmt.Sprintf(`
values:
  pilot:
    env:
      PILOT_USE_ENDPOINT_SLICE: "true"
      ENABLE_MCS_SERVICE_DISCOVERY: "true"
      ENABLE_MCS_HOST: "true"
      ENABLE_MCS_CLUSTER_LOCAL: "true"
      MCS_API_GROUP: %s
      MCS_API_VERSION: %s`,
		common.KubeSettings(t).MCSAPIGroup,
		common.KubeSettings(t).MCSAPIVersion)
}


   b. Then call that method in the TestMain method's Setup call, like this. Notice that one of the Setup calls uses the enableMCSServiceDiscovery method defined above.

func TestMain(m *testing.M) {
	framework.
		NewSuite(m).
		Label(label.CustomSetup).
		RequireMinVersion(17).
		RequireMinClusters(2).
		Setup(common.InstallMCSCRDs).
		Setup(istio.Setup(&i, enableMCSServiceDiscovery)).
		Setup(common.DeployEchosFunc("mcs", &echos)).
		Run()
}


4. Make sure that the VS Code settings file contains the go.buildTags and go.testFlags settings, like the following:

"go.buildTags": "integ",
"go.testFlags": ["-args", "--istio.test.kube.topology=/tmp/work/topology.json", "--istio.test.skipVM"],

 5. Once the above steps are done, you can simply click on the debug test button (the codelens) above the TestMain method.

====================================================

If you want to run the tests locally (not via the VS Code codelens), you can do the following:

1. Find a PR from the istio project on GitHub that ran successfully; there should be many integration tests, such as the following:

 
Notice that any test starting with integ is an integration test. Pick any of them, then go to the raw build-log.txt file; in that file, you should be able to find the cluster topology file content. Then create a JSON file with content like the below:


[
    {
        "kind": "Kubernetes",
        "clusterName": "config",
        "podSubnet": "10.20.0.0/16",
        "svcSubnet": "10.255.20.0/24",
        "network": "network-1",
        "primaryClusterName": "external",
        "configClusterName": "config",
        "meta": {
            "kubeconfig": "/tmp/work/config"
        }
    },
    {
        "kind": "Kubernetes",
        "clusterName": "remote",
        "podSubnet": "10.30.0.0/16",
        "svcSubnet": "10.255.30.0/24",
        "network": "network-2",
        "primaryClusterName": "external",
        "configClusterName": "config",
        "meta": {
            "fakeVM": false,
            "kubeconfig": "/tmp/work/remote"
        }
    },
    {
        "kind": "Kubernetes",
        "clusterName": "external",
        "podSubnet": "10.10.0.0/16",
        "svcSubnet": "10.255.10.0/24",
        "network": "network-1",
        "primaryClusterName": "external",
        "configClusterName": "config",
        "meta": {
            "fakeVM": false,
            "kubeconfig": "/tmp/work/external"
        }
    }
]
 

Save the above content into a file such as topology.json

2. Then you should be able to find a command like the following:

go test -p 1 -v -count=1 -tags=integ -vet=off ./tests/integration/pilot/... \
-timeout 30m --istio.test.skipVM --istio.test.ci --istio.test.pullpolicy=IfNotPresent \
--istio.test.work_dir=/tmp/work --istio.test.hub=istio --istio.test.tag=1.15-dev \
--istio.test.kube.topology=/tmp/work/topology.json "--istio.test.select=,-postsubmit"

3. Change the parameters of the above command to fit your own environment. Pay special attention to parameters like istio.test.tag and istio.test.hub, making changes based on your own build. In the above command, I built the Istio images locally, tagged them like the public Istio images, and preloaded them into the k8s clusters, so everything was ready to go.

4. The parameter ./tests/integration/pilot/... indicates which tests will be run; it must be a directory in the source tree. It will normally contain multiple TestMain methods, and each TestMain method is considered a test suite. When it starts, you should see something like the following:

2022-04-29T14:47:53.132024Z info tf === DONE: Building clusters ===
2022-04-29T14:47:53.132029Z info tf === BEGIN: Setup: 'pilot_analysis' ===
2022-04-29T14:47:53.132083Z info tf === BEGIN: Deploy Istio [Suite=pilot_analysis] ===

That should give you a clear indication of which test suite is running. If there is any error, you can find that test suite in the source code and start debugging using VS Code. Note that one integration test run may contain many test suites, that is, as stated above, many TestMain methods in that directory or its subdirectories.

Friday, February 18, 2022

Check disk space usages

Run the following command to show which folders under your home directory are using a gigabyte or more of space:

 du -h ~ 2>/dev/null | grep '[0-9\.]\+G'
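A quick self-contained check that the grep pattern behaves as expected, using M instead of G so the demo stays small (the temp directory is just for illustration):

```shell
# Create a ~2 MB file, then filter du output for sizes in the M/G range.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big" bs=1024 count=2048 2>/dev/null
du -h "$tmp" | grep -E '[0-9.]+[MG]'   # shows the ~2.0M directory
```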


Friday, January 28, 2022

Envoy configurations

 

Envoy can use a set of APIs to update configurations without any downtime or restarts. Envoy only needs a simple bootstrap configuration file that directs it to the proper discovery service APIs; all other settings are dynamically configured. Envoy's dynamic configuration APIs are called xDS services, and they include:

  • LDS (Listener): This allows Envoy to query the entire listener. By calling this API, you can dynamically add, modify, and delete known listeners. Each listener must have a unique name. Envoy creates a universally unique identifier (UUID) for any unnamed listener.
  • RDS (Route): This allows Envoy to dynamically retrieve route configurations. Route configurations include HTTP header modifications, virtual host configurations, and the individual routing rules contained in each virtual host. Each HTTP connection manager can retrieve its own route configurations independently through an API. The RDS configuration, a subset of the LDS, specifies when to use static and dynamic configurations and which route to use.
  • CDS (Cluster): This is an optional API that Envoy calls to dynamically retrieve the clusters it should manage. Envoy coordinates cluster management based on API responses, and adds, modifies, and deletes known clusters as needed. Clusters statically defined in the Envoy configuration cannot be modified or deleted through the CDS API.
  • EDS (Endpoint): This is a gRPC- or REST-JSON-based API that allows Envoy to retrieve cluster members. It is a subset of the CDS. In Envoy, cluster members are called endpoints. Envoy uses discovery services to retrieve the endpoints in each cluster. EDS is the preferred discovery service.
  • SDS (Secret): This is an API used to distribute certificates, which simplifies certificate management. In a non-SDS Kubernetes deployment, certificates must be created as secrets and mounted into the Envoy containers. If a certificate expires, the secret must be updated and the Envoy container redeployed. When SDS is used, the SDS server pushes certificates to all Envoy instances; if a certificate expires, the SDS server only needs to push the new certificate, and the Envoy instance applies it immediately without redeployment.
  • ADS (Aggregated): This is used to retrieve all the changes made by the preceding APIs, in order, from a single serialized stream. In essence, the ADS is not an xDS service; rather, it implements synchronous access to multiple xDS services in a single stream.

You can use one or more xDS services for configuration. Envoy's xDS APIs are designed for eventual consistency, and the configuration eventually converges to the correct state. For example, an RDS update may introduce a new route that forwards traffic to clusters that have not yet been delivered through the CDS; as a result, routing may produce errors until the CDS catches up. Envoy introduces the ADS to solve this problem. Istio also implements the ADS, which can be used to modify proxy configurations.
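As an illustrative (not complete) bootstrap fragment, a dynamic_resources block can point LDS and CDS at an ADS gRPC stream; the xds_cluster name, xds-server host, and port 18000 below are placeholders, not values from any real deployment:

```yaml
# Sketch of an Envoy bootstrap delegating listener and cluster config to ADS.
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
  lds_config:
    resource_api_version: V3
    ads: {}
  cds_config:
    resource_api_version: V3
    ads: {}
static_resources:
  clusters:
    - name: xds_cluster
      type: STRICT_DNS
      # ADS uses gRPC, so this cluster must speak HTTP/2.
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: xds-server, port_value: 18000 }
```

With only this static cluster defined, everything else (listeners, routes, upstream clusters, endpoints) arrives dynamically over the ADS stream.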

 

The definition of service mesh

A service mesh is a distributed application infrastructure that is responsible for handling network traffic on behalf of the application in a transparent, out-of-process manner.

 Data Plane and Control Plane

The service proxies form the "data plane", through which all traffic is handled and observed. The data plane is responsible for establishing, securing, and controlling the traffic through the mesh. The management components that instruct the data plane how to behave are known as the "control plane". The control plane is the brains of the mesh and exposes an API for operators to manipulate network behavior. Together, the data plane and the control plane provide important capabilities necessary in any cloud-native architecture, such as:

  • Service resilience
  • Observability signals
  • Traffic control capabilities
  • Security
  • Policy enforcement

 

Figure 1.9. Service mesh architecture with co-located application-layer proxies (data plane) and management components (control plane)

 


 

 

 

 

With a service proxy next to each application instance, applications no longer need language-specific resilience libraries for circuit breaking, timeouts, retries, service discovery, load balancing, and so on. Moreover, the service proxy also handles metric collection, distributed tracing, and log collection.

 

 

Friday, January 14, 2022

Istio component category and its elements

 

============= CustomResourceDefinition//authorizationpolicies.security.istio.io

- Processing resources for Istio core.
============= CustomResourceDefinition//destinationrules.networking.istio.io
============= CustomResourceDefinition//envoyfilters.networking.istio.io
============= CustomResourceDefinition//gateways.networking.istio.io
============= CustomResourceDefinition//istiooperators.install.istio.io
============= CustomResourceDefinition//peerauthentications.security.istio.io
============= CustomResourceDefinition//proxyconfigs.networking.istio.io
============= CustomResourceDefinition//requestauthentications.security.istio.io
============= CustomResourceDefinition//serviceentries.networking.istio.io
============= CustomResourceDefinition//sidecars.networking.istio.io
============= CustomResourceDefinition//telemetries.telemetry.istio.io
============= CustomResourceDefinition//virtualservices.networking.istio.io
============= CustomResourceDefinition//wasmplugins.extensions.istio.io
============= CustomResourceDefinition//workloadentries.networking.istio.io
============= CustomResourceDefinition//workloadgroups.networking.istio.io
============= ServiceAccount/external-istiod/istio-reader-service-account
============= ServiceAccount/external-istiod/istiod-service-account
============= ClusterRole//istio-reader-external-istiod
============= ClusterRole//istiod-external-istiod
============= ClusterRoleBinding//istio-reader-external-istiod
============= ClusterRoleBinding//istiod-external-istiod
============= Role/external-istiod/istiod-external-istiod
============= RoleBinding/external-istiod/istiod-external-istiod

✔ Istio core installed
============= ServiceAccount/external-istiod/istiod

- Processing resources for Istiod.
============= ClusterRole//istio-reader-clusterrole-external-istiod
============= ClusterRole//istiod-clusterrole-external-istiod
============= ClusterRole//istiod-gateway-controller-external-istiod
============= ClusterRoleBinding//istio-reader-clusterrole-external-istiod
============= ClusterRoleBinding//istiod-clusterrole-external-istiod
============= ClusterRoleBinding//istiod-gateway-controller-external-istiod
============= ValidatingWebhookConfiguration//istio-validator-external-istiod
============= EnvoyFilter/external-istiod/stats-filter-1.11
============= EnvoyFilter/external-istiod/stats-filter-1.12
============= EnvoyFilter/external-istiod/stats-filter-1.13
============= EnvoyFilter/external-istiod/tcp-stats-filter-1.11
============= EnvoyFilter/external-istiod/tcp-stats-filter-1.12
============= EnvoyFilter/external-istiod/tcp-stats-filter-1.13
============= ConfigMap/external-istiod/istio
============= ConfigMap/external-istiod/istio-sidecar-injector
============= Deployment/external-istiod/istiod
============= PodDisruptionBudget/external-istiod/istiod
============= Role/external-istiod/istiod
============= RoleBinding/external-istiod/istiod
============= HorizontalPodAutoscaler/external-istiod/istiod
============= Service/external-istiod/istiod
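The listing above is typical istioctl install output. A quick way to check which of those CustomResourceDefinitions actually landed in a cluster (assuming kubectl is pointed at a cluster with Istio installed):

```shell
# List the Istio CRDs installed in the current cluster.
kubectl get crds -o name | grep 'istio.io'
```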


Wednesday, January 5, 2022

Access K8S rest API using curl command

 To get all the pods from a namespace,

curl -k --cacert ca.crt -H "Authorization: Bearer <The token>" https://172.19.0.3:6443/api/v1/namespaces/metallb-system/pods


where the IP address and port should be those of the k8s API server, and the URL should always follow the naming convention

/api/<version>/namespaces/<namespace>/<resourcetype>

In the example above, the version is v1, the namespace is metallb-system, and we are retrieving all the pods.

Use --cacert to specify a CA certificate file, and use -k to allow insecure server connections when using SSL.
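A sketch of how the pieces above can be obtained from an existing kubeconfig; kubectl create token exists in kubectl 1.24 and later, while older clusters expose tokens via service-account secrets instead:

```shell
# Grab the API server address and CA certificate from the current context.
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
kubectl config view --minify --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.crt

# Mint a short-lived token for the default service account in the namespace.
TOKEN=$(kubectl create token default -n metallb-system)

curl --cacert ca.crt -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/metallb-system/pods"
```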

Tuesday, January 4, 2022

how does kubernetes_sd_configs actually work?

 When a job is configured like the following:

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (.+)
 
At first glance it is hard to figure out what this configuration is saying.
It turns out that this job basically builds a scrape URL based on the formula
   ${__scheme__}://${__address__}${__metrics_path__}
In many cases __scheme__, __address__, and __metrics_path__ all use their
default values, which makes things even more confusing:
__scheme__ normally defaults to http if not specified,
__metrics_path__ normally defaults to /metrics, and
__address__ normally defaults to the object's IP address and port.

So for the pod role, by default we are looking at something like
http://${POD_IP}:<container port>/metrics, where the port defaults to the
pod's first declared container port.
It is up to whoever configures this job to override parts of the URL using
the various relabel actions. For example, in the configuration above, the
rule with target_label: __metrics_path__ replaces the path with the pod
annotation's prometheus.io/path value, if the pod has that annotation. If a
pod does not have it, the regex (.+) does not match the empty value, so no
replacement happens and __metrics_path__ keeps its default of /metrics.
For target_label __scheme__ in the above example, the action is likewise
replace, so the scheme becomes whatever the annotation's prometheus.io/scheme
value indicates, again only when that annotation is present.
 
Finally, __address__ is assembled from two parts by the regular expression:
the host portion of the existing __address__ and the port taken from the pod
annotation prometheus.io/port, if the pod indeed has that annotation. If
nothing is changed, __address__ keeps its default of the pod's IP address
and declared port.
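Putting it together, a pod annotated like the following (annotation names as used by the job above) would be scraped at http://<pod IP>:9102/custom/metrics; the path and port values here are just illustrative:

```yaml
# Example pod metadata that satisfies the relabel rules above.
metadata:
  annotations:
    prometheus.io/scrape: "true"          # matched by the 'keep' rule
    prometheus.io/path: "/custom/metrics" # becomes __metrics_path__
    prometheus.io/port: "9102"            # replaces the port in __address__
```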