Monday, December 6, 2021

Deploy Kiali with Istio external control plane

When your Istio deployment uses an external control plane, deploying Kiali is not difficult, but you will need to take care of the following:

1. Deploy Prometheus into the istio-system namespace. Kiali appears to be hard coded (or defaults) to always look for Prometheus in the istio-system namespace.

2. Change the sample Kiali deployment file so that Kiali goes into the namespace where Istio is installed. In our example, the Istio external control plane is in the namespace external-istiod, so in the sample deployment file (which comes with the Istio package) replace istio-system with external-istiod throughout, so that Kiali and its services, configmaps, etc. will all be in external-istiod. Then deploy it.

3. Expose the Kiali service with a load balancer, then access Kiali through the load balancer. A sketch of steps 2 and 3 follows below.
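A minimal sketch of steps 2 and 3, assuming the sample manifest ships at samples/addons/kiali.yaml in the Istio release package:

sed 's/istio-system/external-istiod/g' samples/addons/kiali.yaml > kiali-external.yaml
kubectl apply -f kiali-external.yaml
kubectl -n external-istiod patch svc kiali -p '{"spec": {"type": "LoadBalancer"}}'
kubectl -n external-istiod get svc kiali   # note the external IP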

Friday, December 3, 2021

Istio mesh config, config cluster, remote cluster

When a cluster contains only the Istio custom resource definitions (CRDs), that cluster is called an Istio config cluster, which really just means the cluster at least contains the Istio CRDs. A cluster can be just an Istio config cluster. If a cluster contains not only the CRDs but also istiod, then it is both a config cluster and a control plane cluster. If a cluster contains only Istio role definitions such as the clusterrole istio-reader-clusterrole-external-istiod (the suffix being the namespace) and the matching clusterrolebinding (likely the same name), plus the mutating webhook configuration, then that cluster is most likely a remote Istio cluster, which is meant to run workloads.
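Some hypothetical checks to classify a cluster, using the names from this example:

kubectl get crds | grep istio.io                                  # only CRDs present: a config cluster
kubectl get deployment istiod -n external-istiod                  # istiod also present: a control plane cluster
kubectl get clusterrole istio-reader-clusterrole-external-istiod  # only reader roles and the webhook: a remote cluster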


Monday, November 15, 2021

k8s webhooks

Retrieve all the validating webhooks in the cluster (kg here is presumably an alias for kubectl get; webhook configurations are cluster-scoped, so -A is optional):

 kg --context kind-cluster1  ValidatingWebhookConfiguration -A

 

 Retrieve all the mutating webhooks in the cluster

kg --context kind-cluster1  MutatingWebhookConfiguration -A
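To dig into a specific webhook's rules and failure policy, the same resources can be fetched by name; istio-sidecar-injector here is the webhook created by an Istio install:

kubectl --context kind-cluster1 get MutatingWebhookConfiguration istio-sidecar-injector -o yaml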

Sunday, October 3, 2021

Configure VS Code to debug Istio using the debug codelens

The VS Code debug test codelens is great for running a particular test, but specifying the necessary arguments has been a bit of a mystery. By manipulating the settings.json file, one can get things done.

 

Open the settings.json file and add the following section; you will then be able to specify flags for both build and test.

"go.buildTags": "integ",
"go.testFlags":["-args", "--istio.test.kube.topology=${workspaceFolder}/localtests.external-istiod.json", "--istio.test.skipVM"],

Notice that go.buildTags is needed to make sure the build actually works.

go.testFlags is used to specify parameters for both build and test: anything before "-args" is passed to go test itself, and anything after "-args" is passed to the test binary. So for Istio integration tests, simply specify whatever parameters are necessary after "-args", set a breakpoint, and click the debug button on one of the tests; you can then step through the code in debug mode.

Friday, October 1, 2021

backup and restore iPhone onto mac

Back up the entire phone onto the Mac, then click Manage Backups, right-click on one of the backups, and select Show in Folder. Find the backup folder, copy the entire folder to another location, then remove the folder to save space on the Mac.

To restore the backup to a phone, copy the entire folder back to this folder:

/Users/tongli/Library/Application Support/MobileSync/Backup

The phone manager will then be able to find the backup, and you can restore it to an iPhone.


Sunday, September 26, 2021

What is important when following the instructions to set up Istio multicluster

When following the instructions described here to set up multicluster Istio,

 

 https://istio.io/latest/docs/setup/install/multicluster/primary-remote_multi-network/

 

One thing that is not described very clearly in the process, but that will make it fail, is that the Kubernetes cluster config file must contain a k8s API endpoint that does not use the loopback IP address 127.0.0.1. This is very important when using KinD to deploy two k8s clusters on one machine: by default, KinD creates multiple Kubernetes contexts in the config file, each using server: https://127.0.0.1:<port number>. That works fine when accessed from the host machine, but it fails when the API server is accessed from anywhere else. To avoid this problem, once KinD sets up the cluster, edit the URL in the config file to point to the Docker container IP address with the default port, which will most likely be 6443. For example:

server: https://172.19.0.3:6443

Doing this will ensure that the API server is not only accessible from the host but also from the apps running inside the k8s clusters. 

Or simply use the following command to update it, given that the cluster is named kind-cluster1.

kubectl config set clusters.kind-cluster1.server https://172.19.0.3:6443
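To find the container IP in the first place, something like this should work; the container name follows KinD's default <cluster-name>-control-plane convention, an assumption here:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cluster1-control-plane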

One other thing that is often overlooked is that the two clusters should use the same root CA for their certificates. The secret should be created in the istio-system namespace and be named cacerts (if using the default). The secret should have the following entries:

ca-cert.pem

ca-key.pem

cert-chain.pem

root-cert.pem

ca-cert.pem and ca-key.pem are the intermediate CA cert and key, signed by the root cert.

That cert will be used by deployment.apps/istiod.
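A sketch of creating that secret in each cluster, following the Istio plug-in CA flow (the file paths are assumptions):

kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
  --from-file=ca-cert.pem \
  --from-file=ca-key.pem \
  --from-file=root-cert.pem \
  --from-file=cert-chain.pem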


Tuesday, September 14, 2021

How to build istio locally for debugging

To build Istio locally for debugging, you need to set up two environment variables.

 

export TAG=30.1.2
export VERSION=$TAG

Once these two variables are set, you can run the following command to build:

make docker

If everything runs correctly, there should be a list of Istio images built. Here is an example list:

istio/install-cni                                     30.1.2 
istio/operator                                        30.1.2 
istio/istioctl                                        30.1.2 
istio/app_sidecar_centos_7                            30.1.2 
istio/app_sidecar_centos_8                            30.1.2 
istio/app_sidecar_debian_10                           30.1.2 
istio/app_sidecar_debian_9                            30.1.2 
istio/app_sidecar_ubuntu_focal                        30.1.2 
istio/app_sidecar_ubuntu_bionic                       30.1.2 
istio/app_sidecar_ubuntu_xenial                       30.1.2 
istio/app                                             30.1.2 
istio/proxyv2                                         30.1.2 
istio/pilot                                           30.1.2 
After these images are built, upload them to the cluster where Istio will be deployed, then initialize the operator:

istioctl operator init --tag 30.1.2

If you do not have access to the cluster to upload the images, you will need to push them to a Docker image repository, then use the following command:

istioctl operator init --hub docker.io --tag 30.1.2
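If the target is a KinD cluster, uploading the locally built images can be done with kind load; a sketch, with the cluster name assumed:

for img in pilot proxyv2 operator install-cni istioctl; do
  kind load docker-image istio/$img:30.1.2 --name cluster1
done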

Wednesday, September 1, 2021

How to get the current running system ARCH and OS

 

export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(uname -m) ;; esac)
export OS=$(uname | awk '{print tolower($0)}')
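Example use: downloading a kubectl binary for the detected platform (the release URL pattern is from the official k8s docs):

curl -LO https://dl.k8s.io/release/v1.21.2/bin/$OS/$ARCH/kubectl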

Thursday, August 26, 2021

Own Dockerfile

 

# FROM alpine:3.13 as BUILDER
#
# RUN wget https://github.com/ansible/ansible-runner/archive/refs/tags/1.4.7.tar.gz && \
# tar -xvf 1.4.7.tar.gz
#
# RUN apk add --no-cache py-pip build-base python3-dev linux-headers && \
# pip install virtualenvwrapper
# RUN cd ansible-runner-1.4.7 && virtualenv ansible-runner && pip install -e .

FROM quay.io/operator-framework/ansible-operator:v1.11.0 as BASE

FROM alpine:3.13

LABEL maintainer="litong01@us.ibm.com"
ENV PYTHONUNBUFFERED=1

RUN apk add --no-cache py-pip bash openssl py3-cryptography tini tar unzip && \
if [ ! -e /usr/bin/python ]; then ln -sf python3 /usr/bin/python ; fi && \
pip install ansible ansible-runner

RUN mkdir -p /etc/ansible \
&& echo "localhost ansible_connection=local" > /etc/ansible/hosts \
&& echo '[defaults]' > /etc/ansible/ansible.cfg \
&& echo 'roles_path = /opt/ansible/roles' >> /etc/ansible/ansible.cfg \
&& echo 'library = /usr/share/ansible/openshift' >> /etc/ansible/ansible.cfg

COPY --from=BASE /usr/local/bin/ansible-operator /usr/local/bin/ansible-operator
# COPY --from=BUILDER /usr/bin/ansible-runner /usr/local/bin/ansible-runner

ENV HOME=/opt/ansible \
USER_NAME=ansible \
USER_UID=1001

# Ensure directory permissions are properly set
RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd \
&& mkdir -p ${HOME}/.ansible/tmp \
&& chown -R ${USER_UID}:0 ${HOME} \
&& chmod -R ug+rwx ${HOME}

WORKDIR ${HOME}
USER ${USER_UID}

COPY requirements.yml ${HOME}/requirements.yml
RUN ansible-galaxy collection install -r ${HOME}/requirements.yml \
&& chmod -R ug+rwx ${HOME}/.ansible \
&& mkdir -p ${HOME}/.ansible/plugins \
&& rm -rf /var/cache/apk/*

COPY watches.yaml ${HOME}/watches.yaml
COPY roles/ ${HOME}/roles/
COPY playbooks/ ${HOME}/playbooks/
COPY utilities/launcher/ ${HOME}/launcher/
COPY ansible.cfg ${HOME}/launcher/bin/
COPY utilities/downloader/ ${HOME}/downloader/

COPY plugins ${HOME}/.ansible/plugins
COPY ansible.cfg ${HOME}/.ansible.cfg
COPY test.yaml ${HOME}/test.yaml
ENTRYPOINT ["tini", "--", "/usr/local/bin/ansible-operator", "run", "--watches-file=./watches.yaml"]

Monday, August 23, 2021

Process of accessing AWS EKS

Here are the steps to access AWS EKS from the command line.

1. Create a config file in the ~/.aws directory named config with the following content:

[default]
region = us-east-1
output = json
2. Create a credentials file in the ~/.aws directory named credentials with the following content. Make sure you use your own access key id and secret, which are available when you create them via the AWS console:

[default]
aws_access_key_id = XXXXXX
aws_secret_access_key = XXXXXXXXXXXXXX
3. Update kubeconfig and current context
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
The region code should be us-east-1 if you use the above example config file. The name should be your EKS cluster name.
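A quick sanity check that the new context works:

kubectl get nodes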

Thursday, August 19, 2021

Example to overwrite istio configuration parameters

The following is an example config.yaml file which can be used to install Istio with customized parameters:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiod
  namespace: istio-system
spec:
  components:
    pilot:
      enabled: true
      k8s:
        overlays:
        - kind: Deployment
          name: istiod
          patches:
          - path: spec.template.spec.containers.[name:discovery].args.[100]
            value: --grpcAddr=:15020
          - path: spec.template.spec.containers.[name:discovery].ports.[containerPort:15010]
            value:
              containerPort: 15020
              protocol: TCP
        - kind: Service
          name: istiod
          patches:
          - path: spec.ports.[port:15010]
            value:
              port: 15020
              name: grpc-xds
              protocol: TCP
In the above example, the changes are made against YAML like the following: the first patch adds a new entry to the args list, the second changes a containerPort from 15010 to 15020, and the last one replaces a port in the Service.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istiod
  namespace: istio-system
spec:
  template:
    spec:
      containers:
      - args:
        - discovery
        - --monitoringAddr=:15014
        - --log_output_level=default:info
        - --domain
        - cluster.local
        - --keepaliveMaxServerConnectionAge
        - 30m
        env:
        - name: REVISION
          value: default
        - name: JWT_POLICY
          value: third-party-jwt
        # . . .
        image: docker.io/istio/pilot:1.10.2
        name: discovery
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 15010
          protocol: TCP
        - containerPort: 15017
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istiod
    install.operator.istio.io/owning-resource: unknown
    istio: pilot
    istio.io/rev: default
    operator.istio.io/component: Pilot
    release: istio
  name: istiod
  namespace: istio-system
spec:
  ports:
  - name: grpc-xds
    port: 15010
    protocol: TCP
  - name: https-dns
    port: 15012
    protocol: TCP
  - name: https-webhook
    port: 443
    protocol: TCP
    targetPort: 15017
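To install Istio with this overlay, pass the file to istioctl:

istioctl install -f config.yaml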

Tuesday, August 10, 2021

Setup k8s cluster with kind using different k8s releases

#!/bin/bash
# This script sets up k8s cluster using metallb and istio
# Make sure you have the following executable in your path
#     kubectl
#     kind
#     istioctl

# Setup some colors
ColorOff='\033[0m'        # Text Reset
Black='\033[0;30m'        # Black
Red='\033[0;31m'          # Red
Green='\033[0;32m'        # Green

K8S_RELEASE=$1
# Get all the available releases
alltags=$(wget -q https://registry.hub.docker.com/v1/repositories/kindest/node/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}')

rm -rf ~/.kube/*

if [ -z "$K8S_RELEASE" ]; then
  kind create cluster
else
  if [[ "$alltags" == *"$K8S_RELEASE"* ]]; then
    kind create cluster --image=kindest/node:$K8S_RELEASE
  else
    echo "Available k8s releases are $alltags"
    exit 1
  fi
fi

# The following procedure is to setup load balancer
kubectl cluster-info --context kind-kind

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

PREFIX=$(docker network inspect -f '{{range .IPAM.Config }}{{ .Gateway }}{{end}}' kind | cut -d '.' -f1,2)

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # any unused range on the kind docker network will do; this range is an assumption
      - $PREFIX.255.200-$PREFIX.255.250
EOF
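Usage looks like this; the script name is a placeholder, and the argument must match an available kindest/node image tag:

./setup-kind.sh v1.21.1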

Tuesday, August 3, 2021

k8s service full domain name

 

orderer-sample.default.svc.cluster.local

<service-name>.<namespace>.svc.cluster.local
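A quick way to verify resolution from inside the cluster (busybox is used here just for its nslookup):

kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup orderer-sample.default.svc.cluster.local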

 

Sunday, August 1, 2021

Run playbook to access k8s with service account mounted.

 

quay.io/operator-framework/ansible-operator:v1.10.0

That image will work with the mounted service account if some required collections are installed:

collections:
- name: community.kubernetes
  version: "1.2.1"
- name: operator_sdk.util
  version: "0.2.0"
- name: community.general
  version: "3.4.0"
- name: community.crypto
  version: "1.7.1"

Given a playbook like the following, saved as test.yaml:


- name: Start fabric operations
  hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Search for the matching secret of the CA organization
      community.kubernetes.k8s_info:
        kind: Secret
      register: casecret

    - debug:
        var: casecret

run the playbook with:

ansible-playbook test.yaml -e "ansible_python_interpreter=/usr/bin/python3.8"

Friday, July 16, 2021

Parse and display a certificate with echo and openssl

You often get a certificate in base64-encoded format but want to see what is in it. Here is the one-liner to do this on Linux:


echo "<<this is the encoded certificate>>" | base64 -d | openssl x509 -noout -text

That is all it takes to see what is inside of the certificate.
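If the certificate is already a PEM file on disk, the equivalent is:

openssl x509 -in cert.pem -noout -text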

Tuesday, July 13, 2021

How do apps running inside k8s use service accounts?

Many articles talk about using service accounts and how service account secrets get mounted onto a pod (every pod will have a service account secret mounted even if you never reference one), but not many really talk about how these mounted tokens and secrets get used.

Here I will talk about this little missed step.

When a pod gets created, k8s will always mount a service account (the default service account if none is specified), which will mount the service account secret onto a path like this by default:

rootCAFile = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
tokenFile = "/var/run/secrets/kubernetes.io/serviceaccount/token"

 

The magic that lets applications like k8s operators simply use a client class for all sorts of operations against k8s is that the client class uses the method InClusterConfig, defined in the client-go/rest/config.go file, which reads those secrets and tokens to build the in-cluster configuration, then goes on to authenticate with the K8S API server for operations such as get, create, and list on K8S resources. Here is a link to the method: https://github.com/kubernetes/client-go/blob/v0.21.2/rest/config.go#L483 In this method, it reads the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to locate the K8S API server, reads the token file, creates the TLS configuration, and then returns the kube configuration.
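The same mechanics can be reproduced by hand from inside any pod; this is essentially what the client libraries do under the covers:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods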

 

The Kubernetes Python client package does the same thing, in this file:

https://github.com/kubernetes-client/python-base/blob/master/config/incluster_config.py

Exactly the same logic is used to deal with the service account secret and token.

SERVICE_PORT_ENV_NAME = "KUBERNETES_SERVICE_PORT"
SERVICE_TOKEN_FILENAME = "/var/run/secrets/kubernetes.io/serviceaccount/token"
SERVICE_CERT_FILENAME = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
SERVICE_HOST_ENV_NAME = "KUBERNETES_SERVICE_HOST"

 

Monday, July 12, 2021

What is happening when Istio gets installed?

After using the istioctl install command to install Istio, running the uninstall command removes many resources from the cluster; that also means the install process created these resources during the install.

When using this command to remove Istio,

istioctl x uninstall --purge

The following things will happen:

  Removed IstioOperator:istio-system:installed-state.
  Removed HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
  Removed HorizontalPodAutoscaler:istio-system:istiod.
  Removed PodDisruptionBudget:istio-system:istio-ingressgateway.
  Removed PodDisruptionBudget:istio-system:istiod.
  Removed Deployment:istio-system:istio-ingressgateway.
  Removed Deployment:istio-system:istiod.
  Removed Service:istio-system:istio-ingressgateway.
  Removed Service:istio-system:istiod.
  Removed ConfigMap:istio-system:istio.
  Removed ConfigMap:istio-system:istio-sidecar-injector.
  Removed Pod:istio-system:istio-ingressgateway-6968d58d88-9dq7k.
  Removed Pod:istio-system:istiod-74d4864d8d-psjs8.
  Removed ServiceAccount:istio-system:istio-ingressgateway-service-account.
  Removed ServiceAccount:istio-system:istio-reader-service-account.
  Removed ServiceAccount:istio-system:istiod-service-account.
  Removed RoleBinding:istio-system:istio-ingressgateway-sds.
  Removed RoleBinding:istio-system:istiod-istio-system.
  Removed Role:istio-system:istio-ingressgateway-sds.
  Removed Role:istio-system:istiod-istio-system.
  Removed EnvoyFilter:istio-system:metadata-exchange-1.10.
  Removed EnvoyFilter:istio-system:metadata-exchange-1.9.
  Removed EnvoyFilter:istio-system:stats-filter-1.10.
  Removed EnvoyFilter:istio-system:stats-filter-1.9.
  Removed EnvoyFilter:istio-system:tcp-metadata-exchange-1.10.
  Removed EnvoyFilter:istio-system:tcp-metadata-exchange-1.9.
  Removed EnvoyFilter:istio-system:tcp-stats-filter-1.10.
  Removed EnvoyFilter:istio-system:tcp-stats-filter-1.9.
  Removed MutatingWebhookConfiguration::istio-sidecar-injector.
  Removed ValidatingWebhookConfiguration::istiod-istio-system.
  Removed ClusterRole::istio-reader-istio-system.
  Removed ClusterRole::istiod-istio-system.
  Removed ClusterRoleBinding::istio-reader-istio-system.
  Removed ClusterRoleBinding::istiod-istio-system.
  Removed CustomResourceDefinition::authorizationpolicies.security.istio.io.
  Removed CustomResourceDefinition::destinationrules.networking.istio.io.
  Removed CustomResourceDefinition::envoyfilters.networking.istio.io.
  Removed CustomResourceDefinition::gateways.networking.istio.io.
  Removed CustomResourceDefinition::istiooperators.install.istio.io.
  Removed CustomResourceDefinition::peerauthentications.security.istio.io.
  Removed CustomResourceDefinition::requestauthentications.security.istio.io.
  Removed CustomResourceDefinition::serviceentries.networking.istio.io.
  Removed CustomResourceDefinition::sidecars.networking.istio.io.
  Removed CustomResourceDefinition::telemetries.telemetry.istio.io.
  Removed CustomResourceDefinition::virtualservices.networking.istio.io.
  Removed CustomResourceDefinition::workloadentries.networking.istio.io.
  Removed CustomResourceDefinition::workloadgroups.networking.istio.io.


After istioctl operator init, the following things are created (this list was captured by running uninstall --purge afterwards):

  Removed Deployment:istio-operator:istio-operator.
  Removed Service:istio-operator:istio-operator.
  Removed ServiceAccount:istio-operator:istio-operator.
  Removed ClusterRole::istio-operator.
  Removed ClusterRoleBinding::istio-operator.
  Removed CustomResourceDefinition::istiooperators.install.istio.io.


After applying the following:

kubectl apply -f - <<EOF

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-operator
  name: example-istiocontrolplane
spec:
  profile: default
EOF

If you then remove everything, these are the things that will be removed:

  Removed IstioOperator:istio-system:example-istiocontrolplane.
  Removed HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
  Removed HorizontalPodAutoscaler:istio-system:istiod.
  Removed PodDisruptionBudget:istio-system:istio-ingressgateway.
  Removed PodDisruptionBudget:istio-system:istiod.
  Removed Deployment:istio-operator:istio-operator.
  Removed Deployment:istio-system:istio-ingressgateway.
  Removed Deployment:istio-system:istiod.
  Removed Service:istio-operator:istio-operator.
  Removed Service:istio-system:istio-ingressgateway.
  Removed Service:istio-system:istiod.
  Removed ConfigMap:istio-system:istio.
  Removed ConfigMap:istio-system:istio-sidecar-injector.
  Removed Pod:istio-system:istio-ingressgateway-6968d58d88-wcmvt.
  Removed Pod:istio-system:istiod-84cb7c8f48-7q6rx.
  Removed ServiceAccount:istio-operator:istio-operator.
  Removed ServiceAccount:istio-system:istio-ingressgateway-service-account.
  Removed ServiceAccount:istio-system:istio-reader-service-account.
  Removed ServiceAccount:istio-system:istiod-service-account.
  Removed RoleBinding:istio-system:istio-ingressgateway-sds.
  Removed RoleBinding:istio-system:istiod-istio-system.
  Removed Role:istio-system:istio-ingressgateway-sds.
  Removed Role:istio-system:istiod-istio-system.
  Removed EnvoyFilter:istio-system:metadata-exchange-1.10.
  Removed EnvoyFilter:istio-system:metadata-exchange-1.9.
  Removed EnvoyFilter:istio-system:stats-filter-1.10.
  Removed EnvoyFilter:istio-system:stats-filter-1.9.
  Removed EnvoyFilter:istio-system:tcp-metadata-exchange-1.10.
  Removed EnvoyFilter:istio-system:tcp-metadata-exchange-1.9.
  Removed EnvoyFilter:istio-system:tcp-stats-filter-1.10.
  Removed EnvoyFilter:istio-system:tcp-stats-filter-1.9.
  Removed MutatingWebhookConfiguration::istio-sidecar-injector.
  Removed ValidatingWebhookConfiguration::istiod-istio-system.
  Removed ClusterRole::istio-operator.
  Removed ClusterRole::istio-reader-istio-system.
  Removed ClusterRole::istiod-istio-system.
  Removed ClusterRoleBinding::istio-operator.
  Removed ClusterRoleBinding::istio-reader-istio-system.
  Removed ClusterRoleBinding::istiod-istio-system.
  Removed CustomResourceDefinition::authorizationpolicies.security.istio.io.
  Removed CustomResourceDefinition::destinationrules.networking.istio.io.
  Removed CustomResourceDefinition::envoyfilters.networking.istio.io.
  Removed CustomResourceDefinition::gateways.networking.istio.io.
  Removed CustomResourceDefinition::istiooperators.install.istio.io.
  Removed CustomResourceDefinition::peerauthentications.security.istio.io.
  Removed CustomResourceDefinition::requestauthentications.security.istio.io.
  Removed CustomResourceDefinition::serviceentries.networking.istio.io.
  Removed CustomResourceDefinition::sidecars.networking.istio.io.
  Removed CustomResourceDefinition::telemetries.telemetry.istio.io.
  Removed CustomResourceDefinition::virtualservices.networking.istio.io.
  Removed CustomResourceDefinition::workloadentries.networking.istio.io.
  Removed CustomResourceDefinition::workloadgroups.networking.istio.io. 

In the operator case, the istio-operator deployment uses the image docker.io/istio/operator:1.10.2, which deploys into the namespace istio-operator by default. It only creates the operator CRD; only when the control plane gets created with the following command:

kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
EOF

Then all the other CRDs will be created. The istiod deployment uses docker.io/istio/pilot:1.10.2; istiod (or pilot) only watches the cluster and handles certificates and configuration. It does not do what the Istio operator does, which is to accept CRs and convert them into various k8s resources. If you just use istioctl, there is no operator to interpret these requests: istioctl converts all the requests and creates the k8s resources itself, whereas in the operator case the operator takes the requests and creates the k8s resources.

Regardless of whether you use istioctl, the Istio operator, or Helm, istiod has to be deployed.


Monday, June 21, 2021

Use protoc to generate go code using protobuf

This example, based on the istio.io/api project, generates all the Go code from the proto files provided in various subdirectories.

 

protoc --proto_path=$(pwd) --go_out=$(pwd) --go_opt=paths=source_relative networking/**/*.proto security/**/*.proto type/**/*.proto analysis/**/*.proto authentication/**/*.proto meta/**/*.proto telemetry/**/*.proto
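Two prerequisites worth noting (a sketch; version pinning is up to you): the Go protoc plugin must be on PATH, and the ** patterns require bash globstar:

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
shopt -s globstar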

Tuesday, April 6, 2021

Rebase with upstream master branch

1. Make sure that the upstream repo is added to your configuration (see the sketch after these steps).

2. Do a git fetch

git fetch upstream

3. Do rebase

git rebase --interactive upstream/master

4. Push to remote

git push -f
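For step 1, adding the upstream remote looks like this (the URL is a placeholder for the actual upstream repo):

git remote add upstream https://github.com/ORG/REPO.git
git remote -v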

Monday, March 8, 2021

Install tekton and its dashboard on IBM Cloud

1. Install tekton 0.21.0
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.21.0/release.yaml
2. Install latest tekton dashboard
kubectl apply --filename https://storage.googleapis.com/tekton-releases/dashboard/latest/openshift-tekton-dashboard-release.yaml --validate=false
3. Expose the dashboard UI
kubectl patch svc tekton-dashboard -n tekton-pipelines -p '{"spec": {"type": "LoadBalancer"}}'
4. Access the dashboard by using the external IP (hostname) and port 9097
kubectl get svc tekton-dashboard -n tekton-pipelines

The external IP or hostname may take a bit of time to be available.

 

For tektoncd/pipeline development, to use ko, set the Docker repo env to the full path like below:

export KO_DOCKER_REPO=registry.hub.docker.com/email4tong

 The short name no longer works.
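With KO_DOCKER_REPO set, deploying a local build from a tektoncd/pipeline checkout is typically done with:

ko apply -f config/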

Thursday, March 4, 2021

Expose TCP traffic examples

After istio is installed, follow these steps:

0. Label the default namespace for istio sidecar injection
kubectl label namespace default istio-injection=enabled --overwrite


1. Patch istio-ingressgateway service so that the new port is supported.

Create a file named patch-service.yaml with the following content:

spec:
  ports:
  - name: tcp-31400
    protocol: TCP
    port: 31400
    targetPort: 31400
Run the following command:
kubectl -n istio-system patch service istio-ingressgateway --patch "$(cat patch-service.yaml)"  

2. Create deployment, service, gateway and virtual service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: world
  replicas: 1
  template:
    metadata:
      labels:
        greeting: hello
        department: world
    spec:
      containers:
      - name: hello
        image: "email4tong/pathecho:latest"
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 7000
    targetPort: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: hello-world.default.svc.cluster.local
        port:
          number: 7000



3. Now use the istio-ingressgateway service external endpoint (IP or hostname) and port 31400 to access the service. In the above example it is a simple HTTP echo, so testing with curl is fine. If the actual service is not HTTP but uses some other TCP protocol, then you cannot test with curl.
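A sketch of such a test; the /hello path is just an illustrative input for the pathecho app, and on some clouds the load balancer exposes a hostname instead of an IP:

INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$INGRESS_IP:31400/hello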

Wednesday, March 3, 2021

Istio tcp ingress traffic

To allow TCP traffic on a specific port using Istio, one has to patch the istio-ingressgateway service to add the new port if the port is not 80 or 443. Then one has to create a gateway and a virtual service, and also make sure that the deployments have the sidecar injected automatically. To summarize:

1. Patch istio-ingressgateway to add new tcp port

2. Create gateway resource

3. Create virtual service resource

4. Make sure that the actual pod is injected with sidecar.

It is very important that the TCP port is added before the gateway and virtual service resources are created; otherwise, it won't work.

Sunday, January 31, 2021

k8s ambassador and tls cert management

With a brand new Google Kubernetes cluster, do the following:

1. Set up kubeconfig by using the connect command found in the Google Cloud console, something similar to this:

gcloud container clusters get-credentials tongedge --zone us-east1-b --project odin-network-301700
2. Run the following command to deploy Ambassador

kubectl apply -f https://www.getambassador.io/yaml/aes-crds.yaml &&
kubectl wait --for condition=established --timeout=90s crd -lproduct=aes &&
kubectl apply -f https://www.getambassador.io/yaml/aes.yaml &&
kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes
Now, get the external IP address:

kubectl get -n ambassador service ambassador -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}"
3. Deploy kubernetes cert-manager

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml 
Verify cert manager is working correctly

kubectl get pods --namespace cert-manager

k8s cert manager reference link https://cert-manager.io/docs/installation/kubernetes/ 

Now create a mapping and a service to make sure the cert-manager-created ingress can meet the http01 challenge.

---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: acme-challenge-mapping
spec:
  prefix: /.well-known/acme-challenge/
  rewrite: ""
  service: acme-challenge-service
---
apiVersion: v1
kind: Service
metadata:
  name: acme-challenge-service
spec:
  ports:
  - port: 80
    targetPort: 8089
  selector:
    acme.cert-manager.io/http01-solver: "true"
The Ambassador external IP can now be used to configure your DNS entry with whatever DNS provider you use. The end result is that your domain name resolves to the Ambassador external IP address.
To request a certificate, one has to set up an issuer first, using the following YAML file to accomplish that:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    preferredChain: "ISRG Root X1"
    privateKeySecretRef:
      name: letencrypt-secret
    solvers:
    - http01:
        ingress:
          class: nginx
      selector: {}

Then request the certificate itself with a Certificate resource:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tonglitls
  namespace: default
spec:
  dnsNames:
    - tongli.myddns.me
  secretName: tonglitls-secret
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt
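Once both resources are applied, progress can be checked through the cert-manager resources, for example:

kubectl get certificate tonglitls -n default
kubectl get certificaterequests,orders,challenges -n default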


Friday, January 8, 2021

4K Video Download app is the tool to help everyone

We use a lot of free online videos for study and singing. Due to network issues, using these videos online can sometimes be a big distraction. With 4K Video Download, we can download the videos before we need them, which has helped us a lot; it also lets us skip unwanted and unexpected ads during the real session, which in some cases cause big embarrassment.

 

4KDownload performed really well when downloading videos; I have not experienced any failures regardless of the video quality or my internet speed. The user interface is extremely easy to use, and there is no learning curve to overcome: as soon as you see the user interface, anyone who can click a mouse will be able to use it. It works not only on Windows but also on OS X, so you do not have to worry about whether your computer supports such a nice tool. I've been using the free version for a while and cannot wait to experience the load of features in the paid version. I highly recommend that anyone try this free app. For more information, please follow one of the links below.

 


https://www.4kdownload.com/products/product-videodownloader/?r=free_license
https://www.4kdownload.com/products/product-youtubetomp3/?r=free_license
https://www.4kdownload.com/products/product-stogram/?r=free_license