Sunday, October 3, 2021

Configure VS Code to debug Istio using the VS Code debug codelens

VS Code's debug test codelens is great for running a particular test, but specifying the necessary arguments has been a bit of a mystery. However, by manipulating the settings.json file, one can get things done.


Open the settings.json file and add the following section; you will then be able to specify flags for both build and test.

"go.buildTags": "integ",
"go.testFlags":["-args", "--istio.test.kube.topology=${workspaceFolder}/localtests.external-istiod.json", "--istio.test.skipVM"],

Notice that go.buildTags is needed to make sure that the build actually works.

go.testFlags is used to specify parameters for both build and test: anything before "-args" is passed to the build, and anything after "-args" is passed to the test binary as test parameters. So for the Istio integration tests, simply specify whatever parameters are necessary after "-args", set a breakpoint, then click the debug button on one of the tests, and you can step through the code in debug mode.
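For reference, the settings above roughly correspond to running a test from the command line like this (the package path is just an example; everything after "-args" goes to the test binary):

go test -tags=integ ./tests/integration/pilot/ -args \
  --istio.test.kube.topology=$PWD/localtests.external-istiod.json \
  --istio.test.skipVM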

Friday, October 1, 2021

Back up and restore an iPhone on a Mac

Back up the entire phone onto the Mac, then click on Manage Backups, right-click on one of the backups, and select Show in Finder to find the backup folder. Copy the entire folder to another location, then remove the original folder to save space on the Mac.

To restore the backup to a phone, copy the entire folder back to this folder:

/Users/tongli/Library/Application Support/MobileSync/Backup

Then the phone manager will be able to find the backup, and you can restore it to an iPhone.
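The copy step can also be done from a terminal; a minimal sketch, assuming the backup was archived to an external drive and keeps its device-specific folder name (both paths are examples):

cp -R "/Volumes/ExternalDrive/iPhoneBackups/<backup-folder>" \
  "$HOME/Library/Application Support/MobileSync/Backup/"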


Sunday, September 26, 2021

What is important when following the instructions to set up Istio multicluster

When following the instructions described here to set up multicluster Istio,

https://istio.io/latest/docs/setup/install/multicluster/primary-remote_multi-network/

One thing that is not described very clearly in the process, but will cause it to fail, is that the Kubernetes cluster config file must contain a k8s API endpoint that does not use the loopback IP address 127.0.0.1. This is very important when using KinD to deploy two k8s clusters on one machine. By default, KinD creates multiple Kubernetes contexts in the config file, and each context uses server: https://127.0.0.1:<port number>, which works fine when accessed from the host machine but fails when the API server is accessed from anywhere else. To avoid this problem, once KinD sets up the cluster, go to the config file and edit the URL to point to the Docker container IP address with the default port, which will most likely be 6443. For example:

server: https://172.19.0.3:6443

Doing this ensures that the API server is accessible not only from the host but also from the apps running inside the k8s clusters.
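To find the Docker container IP address in the first place, you can inspect the KinD control-plane container (by default KinD names it <cluster-name>-control-plane, so the name below assumes a cluster called cluster1):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cluster1-control-plane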

Or simply use the following command to update it, given that the cluster is named kind-cluster1:

kubectl config set clusters.kind-cluster1.server https://172.19.0.3:6443

One other thing that is often overlooked is that the two clusters should use the same root CA for their certificates. The secret holding the certificates should be created in the istio-system namespace and be named cacerts (if using the defaults). The secret should have the following entries:

ca-cert.pem

ca-key.pem

cert-chain.pem

root-cert.pem

ca-cert.pem and ca-key.pem will be the intermediate CA cert and key signed by the root cert.

That cert will be used by deployment.apps/istiod.
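For reference, the Istio docs create this secret from the generated cert files like this (run from the directory that contains the four files):

kubectl create secret generic cacerts -n istio-system \
  --from-file=ca-cert.pem \
  --from-file=ca-key.pem \
  --from-file=root-cert.pem \
  --from-file=cert-chain.pem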


Tuesday, September 14, 2021

How to build Istio locally for debugging

To build Istio locally for debugging, you need to set two environment variables.


export TAG=30.1.2
export VERSION=$TAG

Once these two variables are set, you can run the following command to build:

make docker

If everything runs correctly, there should be a list of Istio images built. Here is an example list:

istio/install-cni                                     30.1.2 
istio/operator                                        30.1.2 
istio/istioctl                                        30.1.2 
istio/app_sidecar_centos_7                            30.1.2 
istio/app_sidecar_centos_8                            30.1.2 
istio/app_sidecar_debian_10                           30.1.2 
istio/app_sidecar_debian_9                            30.1.2 
istio/app_sidecar_ubuntu_focal                        30.1.2 
istio/app_sidecar_ubuntu_bionic                       30.1.2 
istio/app_sidecar_ubuntu_xenial                       30.1.2 
istio/app                                             30.1.2 
istio/proxyv2                                         30.1.2 
istio/pilot                                           30.1.2 
After these images are built, upload these images to the cluster where Istio will be deployed, then initialize the operator:

istioctl operator init --tag 30.1.2

If you do not have access to the cluster to upload the images, you will need to push the images to a Docker image repository, then use the following command:

istioctl operator init --hub docker.io --tag 30.1.2
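If the target cluster happens to be a KinD cluster, the locally built images can be loaded into the cluster nodes directly instead of going through a registry (the cluster name here is an assumption):

kind load docker-image istio/pilot:30.1.2 --name cluster1
kind load docker-image istio/proxyv2:30.1.2 --name cluster1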

Wednesday, September 1, 2021

How to get the ARCH and OS of the currently running system


export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(uname -m) ;; esac)
export OS=$(uname | awk '{print tolower($0)}')
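These two variables are handy when a download URL embeds the OS and architecture; for example (the URL is hypothetical, only the pattern matters):

curl -LO "https://example.com/releases/mytool-${OS}-${ARCH}.tar.gz"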

Thursday, August 26, 2021

Own Dockerfile


# FROM alpine:3.13 as BUILDER
#
# RUN wget https://github.com/ansible/ansible-runner/archive/refs/tags/1.4.7.tar.gz && \
# tar -xvf 1.4.7.tar.gz
#
# RUN apk add --no-cache py-pip build-base python3-dev linux-headers && \
# pip install virtualenvwrapper
# RUN cd ansible-runner-1.4.7 && virtualenv ansible-runner && pip install -e .

# This image is used only as a source to copy the ansible-operator binary below
FROM quay.io/operator-framework/ansible-operator:v1.11.0 as BASE

# Final runtime image built on Alpine
FROM alpine:3.13

LABEL maintainer="litong01@us.ibm.com"
ENV PYTHONUNBUFFERED=1

RUN apk add --no-cache py-pip bash openssl py3-cryptography tini tar unzip && \
if [ ! -e /usr/bin/python ]; then ln -sf python3 /usr/bin/python ; fi && \
pip install ansible ansible-runner

RUN mkdir -p /etc/ansible \
&& echo "localhost ansible_connection=local" > /etc/ansible/hosts \
&& echo '[defaults]' > /etc/ansible/ansible.cfg \
&& echo 'roles_path = /opt/ansible/roles' >> /etc/ansible/ansible.cfg \
&& echo 'library = /usr/share/ansible/openshift' >> /etc/ansible/ansible.cfg

COPY --from=BASE /usr/local/bin/ansible-operator /usr/local/bin/ansible-operator
# COPY --from=BUILDER /usr/bin/ansible-runner /usr/local/bin/ansible-runner

ENV HOME=/opt/ansible \
USER_NAME=ansible \
USER_UID=1001

# Ensure directory permissions are properly set
RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd \
&& mkdir -p ${HOME}/.ansible/tmp \
&& chown -R ${USER_UID}:0 ${HOME} \
&& chmod -R ug+rwx ${HOME}

WORKDIR ${HOME}
USER ${USER_UID}

COPY requirements.yml ${HOME}/requirements.yml
RUN ansible-galaxy collection install -r ${HOME}/requirements.yml \
&& chmod -R ug+rwx ${HOME}/.ansible \
&& mkdir -p ${HOME}/.ansible/plugins \
&& rm -rf /var/cache/apk/*

COPY watches.yaml ${HOME}/watches.yaml
COPY roles/ ${HOME}/roles/
COPY playbooks/ ${HOME}/playbooks/
COPY utilities/launcher/ ${HOME}/launcher/
COPY ansible.cfg ${HOME}/launcher/bin/
COPY utilities/downloader/ ${HOME}/downloader/

COPY plugins ${HOME}/.ansible/plugins
COPY ansible.cfg ${HOME}/.ansible.cfg
COPY test.yaml ${HOME}/test.yaml
ENTRYPOINT ["tini", "--", "/usr/local/bin/ansible-operator", "run", "--watches-file=./watches.yaml"]
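To build and tag an image from this Dockerfile (the image name and tag are placeholders):

docker build -t myrepo/ansible-operator:latest .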

Monday, August 23, 2021

Process of accessing AWS EKS

Here are the steps to access AWS EKS from the command line.

1. Create a config file in the ~/.aws directory named config with the following content:
[default]
region = us-east-1
output = json
2. Create a credentials file in the ~/.aws directory named credentials with the following content. Make sure you use your own access key ID and secret, which should be available when you create them via the AWS console:
[default]
aws_access_key_id = XXXXXX
aws_secret_access_key = XXXXXXXXXXXXXX
3. Update the kubeconfig and current context:
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
The region code should be us-east-1 if you use the above example in the config file. The name should be your EKS cluster name.
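For example, assuming a cluster named my-cluster (a placeholder) in the region from the config above, update the kubeconfig and verify access:

aws eks --region us-east-1 update-kubeconfig --name my-cluster
kubectl get nodes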