Monday, December 6, 2021

Deploy Kiali with Istio external control plane

 When your Istio uses an external control plane, deploying Kiali is not difficult, but you will need to make sure of the following:

1. Deploy Prometheus into the namespace istio-system; otherwise Kiali will not find it, because Kiali (hard coded, or at least its default configuration) always looks for Prometheus in the istio-system namespace.

2. Change the sample Kiali deployment file so that Kiali goes into the namespace where Istio is installed. In our example, the Istio external control plane is in the namespace external-istiod, so edit the sample deployment file (which comes with the Istio package) and replace istio-system with external-istiod throughout the entire file, so that Kiali and its services, configmaps, etc. all land in external-istiod. Then deploy it.

3. Expose the Kiali service with a LoadBalancer, then access Kiali through the load balancer address.
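The three steps above can be sketched as shell commands. This is a sketch, not the exact procedure: it assumes the sample addon files samples/addons/prometheus.yaml and samples/addons/kiali.yaml from the Istio release package, and Kiali's default port 20001; adjust names and paths to your setup.

```shell
# 1. Prometheus must live in istio-system, where Kiali looks for it by default
kubectl apply -f samples/addons/prometheus.yaml

# 2. Rewrite the sample Kiali manifest to target the external-istiod namespace
sed 's/istio-system/external-istiod/g' samples/addons/kiali.yaml > kiali-external-istiod.yaml
kubectl apply -f kiali-external-istiod.yaml

# 3. Expose the Kiali service through a load balancer
kubectl -n external-istiod patch svc kiali -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n external-istiod get svc kiali   # note the EXTERNAL-IP, then open port 20001
```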

Friday, December 3, 2021

Istio mesh config, config cluster, remote cluster

 When a cluster contains only the Istio custom resource definitions (CRDs), that cluster is called an Istio config cluster, which really just means the cluster contains at least the Istio CRDs. A cluster can be just an Istio config cluster. If a cluster contains not only the CRDs but also Istiod, then it is both a config cluster and a control plane cluster. If a cluster contains only the Istio role definitions, such as the clusterrole istio-reader-clusterrole-external-istiod (the suffix is the namespace) and the matching clusterrolebinding (likely the same name), plus the mutating webhook configuration, then that cluster is most likely a remote Istio cluster, which is meant to run workloads.
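A quick way to tell which kind of cluster you are looking at is to check for each ingredient in turn. A sketch (run against the cluster in question; exact resource names depend on the install):

```shell
# a config cluster has at least the Istio CRDs
kubectl get crds | grep 'istio.io'
# a control plane cluster additionally runs istiod
kubectl get deployments -A | grep istiod
# a remote cluster has only the reader roles and the injection webhook
kubectl get clusterroles | grep istio-reader
kubectl get mutatingwebhookconfigurations | grep istio
```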


Monday, November 15, 2021

k8s webhooks

Retrieve all the validating webhooks in the cluster (kg here is an alias for kubectl get; webhook configurations are cluster-scoped, so no namespace flag is needed):

 kg --context kind-cluster1 ValidatingWebhookConfiguration

 Retrieve all the mutating webhooks in the cluster:

kg --context kind-cluster1 MutatingWebhookConfiguration

Sunday, October 3, 2021

Configure VS Code to debug istio using the debug codelens

 The VS Code debug test codelens is great for running a particular test, but specifying the necessary arguments has been a bit of a mystery. By manipulating the settings.json file, though, one can get things done.


Open the settings.json file and add the following section; you will then be able to specify flags for both build and test.

"go.buildTags": "integ",
"go.testFlags": ["-args", "--istio.test.kube.topology=${workspaceFolder}/localtests.external-istiod.json", "--istio.test.skipVM"],

 Notice that go.buildTags is needed to make the build actually work.

go.testFlags is used to specify parameters for both build and test: anything before "-args" is passed to the build, and anything after "-args" is passed to the test binary. So for Istio integration tests, simply put whatever parameters are necessary after "-args", then click the debug button on one of the tests and set a breakpoint, and you can step through the code in debug mode.
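Put together, the relevant portion of settings.json looks like this (the topology file name is the one from the example above; substitute your own):

```json
{
  "go.buildTags": "integ",
  "go.testFlags": [
    "-args",
    "--istio.test.kube.topology=${workspaceFolder}/localtests.external-istiod.json",
    "--istio.test.skipVM"
  ]
}
```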

Friday, October 1, 2021

back up and restore an iPhone on a mac

 Back up the entire phone onto the mac, then click Manage Backups, right-click one of the backups, and select Show in Finder. Find the backup folder, copy the entire folder to another location, then remove the folder to save space on the mac.

To restore the backup to a phone, copy the entire folder back to this folder:

/Users/tongli/Library/Application Support/MobileSync/Backup

Then the phone manager will be able to find the backup, and you can restore that backup to an iPhone.
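The copy steps can be scripted roughly like this. A sketch: the backup folder name (a long device id) and the external drive path /Volumes/External are hypothetical placeholders, so substitute your own.

```shell
BACKUP_ROOT="$HOME/Library/Application Support/MobileSync/Backup"
BACKUP_ID="00008101-xxxxxxxxxxxxxxxx"   # hypothetical: use your backup folder's actual name

# archive the backup to an external drive, then free the space locally
cp -R "$BACKUP_ROOT/$BACKUP_ID" /Volumes/External/iphone-backups/
rm -rf "${BACKUP_ROOT:?}/$BACKUP_ID"

# to restore later: copy it back to where the phone manager expects it
cp -R "/Volumes/External/iphone-backups/$BACKUP_ID" "$BACKUP_ROOT/"
```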


Sunday, September 26, 2021

What is important when following the instructions to set up Istio multicluster

 When following the instructions described here to set up multicluster Istio:


 https://istio.io/latest/docs/setup/install/multicluster/primary-remote_multi-network/


One thing that is not described very clearly in the process, but will make it fail, is that the kubernetes cluster config file must contain a k8s API endpoint that does not use the loopback IP address 127.0.0.1. This is especially important when using KinD to deploy two k8s clusters on one machine: by default, KinD creates multiple kubernetes contexts in the config file, and each context uses server: https://127.0.0.1:<port number>, which works fine when accessed from the host machine but fails when the API server is accessed from anywhere else. To avoid this problem, once KinD sets up the cluster, edit the config file and point the URL at the docker container's IP address with the default port, which will most likely be 6443. For example:

server: https://172.19.0.3:6443

Doing this ensures that the API server is accessible not only from the host but also from the apps running inside the k8s clusters.

Or simply use the following command to update it, given that the cluster is called kind-cluster1:

kubectl config set clusters.kind-cluster1.server https://172.19.0.3:6443
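The container IP lookup can be scripted too. A sketch, assuming a KinD cluster named cluster1 (by KinD's naming convention the control-plane container is then cluster1-control-plane):

```shell
# find the control-plane container's IP on the docker network
API_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cluster1-control-plane)

# point the kubeconfig entry at the container IP and the default API port
kubectl config set clusters.kind-cluster1.server "https://${API_IP}:6443"
```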

One other thing that is easy to overlook is that the two clusters should use the same root CA for their certificates. The secret should be created in the istio-system namespace and be named cacerts (if using the default). The secret should have the following entries:

ca-cert.pem

ca-key.pem

cert-chain.pem

root-cert.pem

ca-cert.pem and ca-key.pem are the intermediate CA cert and key, signed by the root cert.

That cert is what deployment.apps/istiod will use.
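The four files can be packaged into the cacerts secret with commands like these. A sketch: it assumes the intermediate certs for each cluster were generated into a cluster1/ directory (for example with the Makefile under Istio's tools/certs), all signed by the same root.

```shell
kubectl --context kind-cluster1 create namespace istio-system
kubectl --context kind-cluster1 create secret generic cacerts -n istio-system \
  --from-file=cluster1/ca-cert.pem \
  --from-file=cluster1/ca-key.pem \
  --from-file=cluster1/root-cert.pem \
  --from-file=cluster1/cert-chain.pem
# repeat with --context kind-cluster2, using certs signed by the same root
```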


Tuesday, September 14, 2021

How to build istio locally for debugging

 To build istio locally and then debug it, you need to set two environment variables:

 

export TAG=30.1.2
export VERSION=$TAG

Once these two variables are set, you can run the following command to build:

make docker

If everything runs correctly, there should be a list of istio images built; here is an example list:

istio/install-cni                                     30.1.2 
istio/operator                                        30.1.2 
istio/istioctl                                        30.1.2 
istio/app_sidecar_centos_7                            30.1.2 
istio/app_sidecar_centos_8                            30.1.2 
istio/app_sidecar_debian_10                           30.1.2 
istio/app_sidecar_debian_9                            30.1.2 
istio/app_sidecar_ubuntu_focal                        30.1.2 
istio/app_sidecar_ubuntu_bionic                       30.1.2 
istio/app_sidecar_ubuntu_xenial                       30.1.2 
istio/app                                             30.1.2 
istio/proxyv2                                         30.1.2 
istio/pilot                                           30.1.2 
After these images are built, upload them to the cluster where istio will be deployed, then initialize the operator:

istioctl operator init --tag 30.1.2

If you do not have access to the cluster to upload the images, you will need to push the images to a docker image repository instead, then use the following command:

istioctl operator init --hub docker.io --tag 30.1.2