Thursday, August 26, 2021

Own Dockerfile

 

# FROM alpine:3.13 as BUILDER
#
# RUN wget https://github.com/ansible/ansible-runner/archive/refs/tags/1.4.7.tar.gz && \
# tar -xvf 1.4.7.tar.gz
#
# RUN apk add --no-cache py-pip build-base python3-dev linux-headers && \
# pip install virtualenvwrapper
# RUN cd ansible-runner-1.4.7 && virtualenv ansible-runner && pip install -e .

# Pull the official ansible-operator image only so its binary can be
# copied into the smaller alpine-based runtime image below
FROM quay.io/operator-framework/ansible-operator:v1.11.0 as BASE

FROM alpine:3.13

LABEL maintainer="litong01@us.ibm.com"
ENV PYTHONUNBUFFERED=1

# Install runtime packages, point "python" at python3, and install
# ansible plus ansible-runner from pip
RUN apk add --no-cache py-pip bash openssl py3-cryptography tini tar unzip && \
if [ ! -e /usr/bin/python ]; then ln -sf python3 /usr/bin/python ; fi && \
pip install ansible ansible-runner

# Seed a default inventory and a minimal ansible.cfg
RUN mkdir -p /etc/ansible \
&& echo "localhost ansible_connection=local" > /etc/ansible/hosts \
&& echo '[defaults]' > /etc/ansible/ansible.cfg \
&& echo 'roles_path = /opt/ansible/roles' >> /etc/ansible/ansible.cfg \
&& echo 'library = /usr/share/ansible/openshift' >> /etc/ansible/ansible.cfg

# Copy just the operator binary out of the official image
COPY --from=BASE /usr/local/bin/ansible-operator /usr/local/bin/ansible-operator
# COPY --from=BUILDER /usr/bin/ansible-runner /usr/local/bin/ansible-runner

ENV HOME=/opt/ansible \
USER_NAME=ansible \
USER_UID=1001

# Ensure directory permissions are properly set
RUN echo "${USER_NAME}:x:${USER_UID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd \
&& mkdir -p ${HOME}/.ansible/tmp \
&& chown -R ${USER_UID}:0 ${HOME} \
&& chmod -R ug+rwx ${HOME}

WORKDIR ${HOME}
USER ${USER_UID}

COPY requirements.yml ${HOME}/requirements.yml
RUN ansible-galaxy collection install -r ${HOME}/requirements.yml \
&& chmod -R ug+rwx ${HOME}/.ansible \
&& mkdir -p ${HOME}/.ansible/plugins \
&& rm -rf /var/cache/apk/*

COPY watches.yaml ${HOME}/watches.yaml
COPY roles/ ${HOME}/roles/
COPY playbooks/ ${HOME}/playbooks/
COPY utilities/launcher/ ${HOME}/launcher/
COPY ansible.cfg ${HOME}/launcher/bin/
COPY utilities/downloader/ ${HOME}/downloader/

COPY plugins ${HOME}/.ansible/plugins
COPY ansible.cfg ${HOME}/.ansible.cfg
COPY test.yaml ${HOME}/test.yaml
ENTRYPOINT ["tini", "--", "/usr/local/bin/ansible-operator", "run", "--watches-file=./watches.yaml"]
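
A minimal sketch to build and try the image locally, assuming the build context contains every file referenced by the COPY steps; the image tag fabric-operator is only a placeholder:

# Build from the directory holding the Dockerfile, watches.yaml,
# requirements.yml, roles/, playbooks/, plugins/ and utilities/
docker build -t fabric-operator:latest .

# For a quick local run, mount a kubeconfig; the operator binary
# should pick it up via the standard KUBECONFIG variable (an
# assumption here, for in-cluster use the service account is used)
docker run --rm \
  -v $HOME/.kube/config:/opt/ansible/.kube/config:ro \
  -e KUBECONFIG=/opt/ansible/.kube/config \
  fabric-operator:latest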

Monday, August 23, 2021

Process of accessing AWS EKS

Here are the steps to access AWS EKS from the command line.

1. Create a config file in the ~/.aws directory named config with the following content
[default]
region = us-east-1
output = json
2. Create a credentials file in the ~/.aws directory named credentials with the following content. Make sure you use your own access key id and secret, which are shown when you create them via the AWS console.
[default]
aws_access_key_id = XXXXXX
aws_secret_access_key = XXXXXXXXXXXXXX
3. Update kubeconfig and the current context
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
The region code should be us-east-1 if you use the config file example above, and the name should be your EKS cluster name; see the example below.
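
For instance, with the region from the config file above and a hypothetical cluster named mycluster:

# Write the EKS cluster credentials into ~/.kube/config and
# switch the current context to it
aws eks --region us-east-1 update-kubeconfig --name mycluster

# Verify that kubectl can reach the cluster
kubectl get nodes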

Thursday, August 19, 2021

Example to overwrite istio configuration parameters

The following is an example of a config.yaml file which can be used to install istio with customized parameters:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiod
  namespace: istio-system
spec:
  components:
    pilot:
      enabled: true
      k8s:
        overlays:
        - kind: Deployment
          name: istiod
          patches:
          - path: spec.template.spec.containers.[name:discovery].args.[100]
            value: --grpcAddr=:15020
          - path: spec.template.spec.containers.[name:discovery].ports.[containerPort:15010]
            value:
              containerPort: 15020
              protocol: TCP
        - kind: Service
          name: istiod
          patches:
          - path: spec.ports.[port:15010]
            value:
              port: 15020
              name: grpc-xds
              protocol: TCP
In the above example, the changes are applied against YAML like the following: the first patch adds a new entry to the args list, the second changes a containerPort from 15010 to 15020, and the last one replaces the matching port in the Service.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istiod
  namespace: istio-system
spec:
  template:
    spec:
      containers:
      - args:
        - discovery
        - --monitoringAddr=:15014
        - --log_output_level=default:info
        - --domain
        - cluster.local
        - --keepaliveMaxServerConnectionAge
        - 30m
        env:
        - name: REVISION
          value: default
        - name: JWT_POLICY
          value: third-party-jwt
        # . . .
        image: docker.io/istio/pilot:1.10.2
        name: discovery
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 15010
          protocol: TCP
        - containerPort: 15017
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istiod
    install.operator.istio.io/owning-resource: unknown
    istio: pilot
    istio.io/rev: default
    operator.istio.io/component: Pilot
    release: istio
  name: istiod
  namespace: istio-system
spec:
  ports:
  - name: grpc-xds
    port: 15010
    protocol: TCP
  - name: https-dns
    port: 15012
    protocol: TCP
  - name: https-webhook
    port: 443
    protocol: TCP
    targetPort: 15017
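
To apply the customized configuration, pass the file to istioctl; this assumes an istioctl version matching the istio release being installed:

istioctl install -f config.yaml -y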

Tuesday, August 10, 2021

Setup k8s cluster with kind using different k8s releases

#!/bin/bash
# This script sets up k8s cluster using metallb and istio
# Make sure you have the following executable in your path
#     kubectl
#     kind
#     istioctl

# Setup some colors
ColorOff='\033[0m'        # Text Reset
Black='\033[0;30m'        # Black
Red='\033[0;31m'          # Red
Green='\033[0;32m'        # Green

K8S_RELEASE=$1
# Get all the available releases
alltags=$(wget -q https://registry.hub.docker.com/v1/repositories/kindest/node/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}')

rm -rf ~/.kube/*

if [ -z "$K8S_RELEASE" ]; then
  kind create cluster
else
  if [[ "$alltags" == *"$K8S_RELEASE"* ]]; then
    kind create cluster --image=kindest/node:$K8S_RELEASE
  else
    echo "Available k8s releases are $alltags"
    exit 1
  fi
fi

# The following procedure is to setup load balancer
kubectl cluster-info --context kind-kind

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

PREFIX=$(docker network inspect -f '{{range .IPAM.Config }}{{ .Gateway }}{{end}}' kind | cut -d '.' -f1,2)

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - ${PREFIX}.255.200-${PREFIX}.255.250
EOF
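
Save the script as something like kindsetup.sh, then run it with no argument for the default kind image, or with a kindest/node tag; the tag below is just an example:

./kindsetup.sh
./kindsetup.sh v1.19.11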

Tuesday, August 3, 2021

k8s service full domain name

 

orderer-sample.default.svc.cluster.local

<service-name>.<namespace>.svc.cluster.local
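
The name can be verified from inside the cluster with a throwaway pod; busybox:1.28 is used here because its nslookup works well for this kind of test:

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
  nslookup orderer-sample.default.svc.cluster.local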

 

Sunday, August 1, 2021

Run a playbook to access k8s with the service account mounted

 

quay.io/operator-framework/ansible-operator:v1.10.0

That image will work with the mounted service account if the following required collections are installed:

collections:
  - name: community.kubernetes
    version: "1.2.1"
  - name: operator_sdk.util
    version: "0.2.0"
  - name: community.general
    version: "3.4.0"
  - name: community.crypto
    version: "1.7.1"

Here is an example playbook:


- name: Start fabric operations
  hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Search for the matching secret of the CA organization
      community.kubernetes.k8s_info:
        kind: Secret
      register: casecret

    - debug:
        var: casecret

Use this command to run the playbook:

ansible-playbook test.yaml -e "ansible_python_interpreter=/usr/bin/python3.8"
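
To try the playbook outside the cluster, here is a hedged sketch using the stock image, mounting the playbook and a kubeconfig (the host paths are assumptions; K8S_AUTH_KUBECONFIG is read by the community.kubernetes modules):

docker run --rm \
  -v $(pwd)/test.yaml:/opt/ansible/test.yaml:ro \
  -v $HOME/.kube/config:/opt/ansible/.kube/config:ro \
  -e K8S_AUTH_KUBECONFIG=/opt/ansible/.kube/config \
  --entrypoint ansible-playbook \
  quay.io/operator-framework/ansible-operator:v1.10.0 \
  test.yaml -e "ansible_python_interpreter=/usr/bin/python3.8"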