Also make sure that the docker.sock file is accessible by other users:
sudo chmod 666 /var/run/docker.sock
To delete a custom resource that is stuck in the deleting state, follow these steps:
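The usual cause is a finalizer that never gets removed. A minimal sketch, assuming a finalizer is the blocker (the resource kind, name, and namespace below are placeholders):
kubectl get <kind> <name> -n <namespace> -o yaml    # check which finalizers are still set
kubectl patch <kind> <name> -n <namespace> --type=merge -p '{"metadata":{"finalizers":[]}}'    # clear the finalizers so deletion can finish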
Once go test produces cover.out, you can use the following command to open the coverage report in a browser:
go tool cover -html=cover.out
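To produce cover.out in the first place (the package pattern here is just an example), run:
go test -coverprofile=cover.out ./...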
Sometimes you may purposely delete some files for testing and then need to restore them. Doing it file by file takes a lot of keystrokes; the command below restores all the unstaged files in git:
git restore -- $(git ls-files -m)
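If you only want to restore the deleted files and leave other unstaged modifications alone, git ls-files -d lists just the deletions; a sketch assuming you run it from the repository root:
git restore -- $(git ls-files -d)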
Use kubebuilder to start a new project:
kubebuilder init --domain my.domain --repo my.domain/guestbook
This step creates the Makefile, Dockerfile, etc.
Once the project is created, you normally run the following command to add an API:
kubebuilder create api --group webapp --version v1 --kind CronJob
Then you would normally edit the *_types.go files under api/v1 to add your own structs (the data structures for your API), and change controller.go in the
controllers directory to implement your business logic.
Then you would normally need to run:
make manifests to generate CRDs, roles, role bindings, etc.
make generate to regenerate zz_generated.deepcopy.go, so that the changes you make to your APIs in the _types.go files are reflected in the deepcopy code.
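After that, the Makefile generated by kubebuilder (assuming the default scaffolding) also provides targets to install the CRDs into the cluster from your current kubeconfig and to run the controller locally:
make install   # install the CRDs into the cluster
make run       # run the controller locally against the cluster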
1. locally check out the branch, for example
git checkout -b the-dirty-branch
2. pull the remote branch into the local branch (assuming the remote is named origin)
git pull origin the-dirty-branch
3. you can do the same thing for other branches if more branches are needed
4. then switch to the integration (or main) branch
5. cherry-pick from the dirty branch or rebase from the dirty branch
6. do a git reset --soft so the release tag stays unchanged (see the sketch after this list)
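A sketch of steps 4 to 6, assuming the integration branch is named main and the commit SHA and release tag are placeholders:
git checkout main
git cherry-pick <commit-sha>       # or: git rebase the-dirty-branch
git reset --soft <release-tag>     # point the branch back at the tag while keeping the changes staged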
To list CSI drivers, storage classes, and volume snapshot classes:
kg csidrivers
kg storageclass
kg volumesnapshotclass
To list volume snapshots and their contents:
kg volumesnapshot -n test01
kg volumesnapshotcontent -n test01
(VolumeSnapshotContent is cluster-scoped, so the -n flag is not actually required for it.)
A VolumeSnapshotClass uses a driver.
A StorageClass uses a provisioner.
So are driver and provisioner the same thing? For CSI-backed storage, yes: both fields contain the name of the same CSI driver.
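You can check this on a live cluster; a quick comparison, assuming <storage-class> and <snapshot-class> are backed by the same CSI driver:
kg storageclass <storage-class> -o jsonpath='{.provisioner}'
kg volumesnapshotclass <snapshot-class> -o jsonpath='{.driver}'
Both commands should print the same CSI driver name.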
Simply run the following command:
docker buildx create --use
Then run:
docker buildx ls
You should see amd64 included in the list of supported platforms, like the following:
linux/arm64, linux/amd64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
Some documents say to turn the experimental flag on, but that is not needed.
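Once the builder is active, you can build and push a multi-arch image in one go; a sketch where the image name and platform list are placeholders:
docker buildx build --platform linux/amd64,linux/arm64 -t <registry>/<image>:<tag> --push .
--push is used because a multi-platform image cannot be loaded into the local docker image store as a single image.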
1. https://portworx.com/
You don't have to use Portworx storage to use Portworx Backup; it can back up and recover Kubernetes applications using Amazon EBS, Google Persistent Disk, and Azure Block storage directly via CSI.
2. https://velero.io/ used to be called Heptio Ark
uses object storage
3. https://stash.run/
Stores backup data in AWS S3, Minio, Rook, GCS, Azure, OpenStack Swift, Backblaze B2 and Rest Server
4. https://trilio.io/
5. https://metallic.io/ very weak.
6. https://www.kasten.io/
7. https://www.rubrik.com/
8. https://storware.eu/
When kind (Kubernetes in Docker) tries to use a local non-loopback IP for the API server address, the macOS firewall will ask whether the incoming network connection should be allowed. A screen like this will pop up:
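For reference, the non-loopback address is set through the kind cluster config; a minimal sketch (the IP address is just an example):
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "192.168.1.100"   # non-loopback IP; this is what triggers the firewall prompt
EOF
kind create cluster --config kind-config.yaml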