Sunday, May 7, 2023

k3d walkthrough - does it replace minikube?

What is k3d?

k3d is a lightweight wrapper that runs k3s (Rancher's minimal Kubernetes distribution) in Docker. It can be installed on a notebook or a low-power device to experiment with Kubernetes nodes and clusters.

Problems with minikube

Previously I was happily using Minikube, an alternative single-node cluster for development. But I ran into issues with port-forwarding and with the cluster not coming back up after the host system restarts. After a lot of research, I found some workarounds to access Minikube from any host in the home network:

kubectl port-forward 

minikube tunnel 

These solutions worked to some extent, but I noticed they spawned a lot of unnecessary processes when I checked with the "htop" command.

I also tried installing Nginx as a reverse proxy to forward traffic into the ingress load balancer, and I was reasonably happy with this. I could even provide the Minikube IP address directly as the local service in the Cloudflare tunnel configuration, so everything worked like a charm. However, when I restarted the host system running Minikube, Minikube stopped, and trying to start it again produced errors. The only fix was to stop and delete Minikube and redeploy all manifests, which is tedious.


Why k3d

I compared Minikube with Kind and k3d, and I chose k3d because it is lightweight and very easy to install.


Installation

Installation of k3d is very simple. Just run the following command:

curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash 

(Prerequisite: Docker must be installed; check with docker --version)

Now k3d can be directly tested.
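To verify the installation:

k3d version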

Create clusters and nodes

k3d cluster create tkb --servers 1 --agents 3 --image rancher/k3s:latest

kubectl cluster-info

k3d cluster list

k3d cluster delete tkb

Install ingress 

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
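If the manifest applies cleanly, the controller pods should come up in the ingress-nginx namespace:

kubectl get pods -n ingress-nginx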

Now the cool thing: we can forward traffic from the host system into the ingress load balancer by creating the cluster with a port mapping:

k3d cluster create tkb -p "80:80@loadbalancer" --servers 1 --agents 3

Here, calling port 80 on the host directly reaches the load balancer. To access services by hostname, we need to add the hostnames to the "/etc/hosts" file on Linux (or simply use a central Pi-hole DNS configuration). Then we can call the services using hostnames:

http://tomcat.kpaudel.lab

If a record for tomcat.kpaudel.lab exists in the "/etc/hosts" file with the IP of the host system running k3d, we reach the Tomcat service. No reverse proxy is needed. This also worked in the Cloudflare tunnel configuration: I just had to provide the host IP address (192.168.1.XXX), not the cluster IP address.
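For reference, the /etc/hosts entry and a minimal Ingress manifest could look like this (a sketch: the Service name "tomcat" and port 8080 are hypothetical, adjust them to your deployment):

192.168.1.XXX   tomcat.kpaudel.lab

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: tomcat.kpaudel.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tomcat        # hypothetical Service name
                port:
                  number: 8080      # hypothetical Service port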


Friday, May 5, 2023

Make k8s services available globally

Solution from Cloudflare

Create account 

https://www.cloudflare.com/ 

Create a web application and set up your domain. 

Update the nameservers by logging in to your domain registrar's page (register.com.np in my case).

It can take up to 24 hours for the nameserver records to propagate before you can continue testing.


Create tunnel

Go to Zero Trust => Access => Tunnels, click "Create tunnel", and provide a tunnel name.

Then select Docker and copy the docker command to run on your server. The recommended way is to create a compose file and pass the token as an environment variable, because the token must be kept secret. (.bashrc is one place where environment variables can be set.)

docker-compose.yaml

version: '3.0'

networks:
  minikube:
    external: true

services:
  cloudflaretunnel:
    container_name: cloudflaretunnel-demo-1
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    environment:
      - TUNNEL_TOKEN=$CLOUDFLARE_TUNNEL_TOKEN
    command: tunnel --no-autoupdate run
    networks:
      - minikube

Run this with "docker compose up" and the tunnel side is done. Cloudflare can now forward traffic through this container to the services running on the server.
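For example (the variable name matches the compose file; the token value is a placeholder you copy from the Cloudflare dashboard):

export CLOUDFLARE_TUNNEL_TOKEN='<token-from-dashboard>'
docker compose up -d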

Now check the created tunnel in the dashboard; if it shows as "Healthy", everything is working so far.

Now we configure the tunnel: we define public hostnames and map them to services on the local server.

We provide the ingress IP address as the Service URL, with type HTTP.


This hostname must also be configured in the ingress ruleset. After clicking "Save hostname", we see the magic: the public hostname (subdomain.domain) reaches the service running in the local network.


So you don't need to configure anything else: no port forwarding, no router settings, no static IP address. I had been looking for this solution for 7 years, and now I have it.

Credit goes to this guy
https://www.youtube.com/watch?v=yMmxw-DZ5Ec 


Accessing k8s services in home network

Challenge: to access Kubernetes services from a remote computer in the LAN.

>> Configure ingress

>> Forward traffic from the host system to ingress (using Nginx)

>> Use subdomains to avoid link problems in the app. For example:

    kpaudel.com/tomcat will open Tomcat, but the button links inside Tomcat itself do not point to the correct URLs.

So instead we define subdomains (for example tomcat.kpaudel.com). That way, many subdomains can point to the same address; what we need is some mechanism to implement wildcard DNS.

>> Wildcard DNS (multiple domains pointing to the same IP address)

Wildcard DNS in Pi-hole

https://hetzbiz.cloud/2022/03/04/wildcard-dns-in-pihole/ 

(Alternative: create your own bind9+docker)
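Pi-hole's underlying dnsmasq supports wildcards with a one-line config. A sketch, assuming the domain kpaudel.lab and a placeholder host IP (drop the file into /etc/dnsmasq.d/ on the Pi-hole host and run "pihole restartdns"):

# /etc/dnsmasq.d/02-wildcard.conf
# Resolve *.kpaudel.lab (and the bare domain) to the server's IP
address=/kpaudel.lab/192.168.1.XXX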

Sunday, March 19, 2023

Minikube - Persistence Volume

One of the important concepts in Kubernetes is defining persistent volumes to retain data when pods or nodes get deleted or restarted. I have not studied all the persistent volume mechanisms in detail, but for my requirements I found a persistent volume backed by NFS (network file system) very useful: I can mount an external volume over NFS and assign it as a persistent volume for Kubernetes pods.

Some of the persistent volume mechanisms are:

1) hostPath => a volume on the node's host filesystem. The data is tied to that node and is lost when the node is removed.

2) local => accessible to pods on a single node (any mounted partition can be assigned).

3) NFS => This one is the best for me. I can use any system on the network, export a directory over NFS, and use it as a persistent volume.

NFS Persistent Volume 

 https://kubernetes.io/docs/concepts/storage/volumes/#local 

In summary, we need to install nfs-kernel-server on the host system.

sudo apt update

sudo apt install nfs-kernel-server

Then we create a shared directory

sudo mkdir -p /Backup/k8s_volume

cd /Backup/k8s_volume

By default, NFS translates any root operations from the client to the nobody:nogroup credentials as a security measure (root squashing). Therefore, change the directory ownership to match those credentials. (Note that the export options below include no_root_squash, which relaxes this behavior; the chown keeps the share usable either way.)

sudo chown nobody:nogroup /Backup/k8s_volume

sudo service nfs-kernel-server restart


We now configure the NFS exports on the host.

The following line in /etc/exports will suffice:

/Backup/k8s_volume *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)

The exported volume can be mounted from anywhere on the network, so even if the nodes get destroyed or restarted, the persistent data remains safe on the network host.

sudo exportfs -rav  (re-exports all paths listed in /etc/exports)

sudo exportfs -v  (verify)

Finally, restart the server:

sudo systemctl restart nfs-kernel-server

Now the server is running. If the server runs a firewall, allow the NFS port 2049.

Command to verify (from a client; the client needs the nfs-common package installed):

sudo mount -t nfs <server(ip/hostname)>:/Backup/k8s_volume /mnt

The share will be mounted at the /mnt folder.

To unmount

sudo umount /mnt
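To mount the share automatically at boot, an entry like this in the client's /etc/fstab would work (a sketch: create the /mnt/k8s_volume mount point first, and substitute your server address for the placeholder):

192.168.x.xxx:/Backup/k8s_volume  /mnt/k8s_volume  nfs  defaults  0  0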


Note: the link above covers much more, but for Kubernetes this is enough.

Now let's create a pv.yaml file with an NFS persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: persistent-volume
  labels:
    type: nfs
    app: k8s_volume
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  #hostPath:
  #  path: /Backup/temp/volumes
  nfs:
    path: /Backup/k8s_volume
    server: 192.168.x.xxx
    readOnly: false

kubectl apply -f pv.yaml

kubectl get pv 

The result shows the created persistent volume. We can then request storage for pods with a "persistent volume claim" (PVC). For example, pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-volume-claim
  labels:
    app: k8s_volume
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
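Apply and verify the claim; once it matches the PV, its STATUS should show Bound:

kubectl apply -f pvc.yaml

kubectl get pvc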

Note that pv.yaml and pvc.yaml use the same storageClassName ("manual"); that, together with compatible access modes and sufficient capacity, is what binds the claim to the volume. The shared app label is a readability convention; a PVC only filters on labels if you add a spec.selector.
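For completeness, a minimal sketch of a pod consuming this claim (the postgres image, password handling, and mountPath are illustrative, not from the original setup):

apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:15
      env:
        - name: POSTGRES_PASSWORD
          value: example   # placeholder only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-volume-claim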


Sunday, March 12, 2023

Preparing Mini-Server - Part 2 (Minikube)

Minikube

While docker and docker-compose are simple and fulfill the basic requirements, I prefer Kubernetes' way of managing containers, which can also be scaled. If there is a problem in one pod, Kubernetes automatically replaces it, minimizing downtime.

Kubernetes is best run across multiple nodes, but setting up a multi-node cluster at home seems advanced. So we need a way to run a Kubernetes cluster on a single machine as a single node, and Minikube was developed for exactly that. There are alternatives to Minikube like Kind and k3s, but I am going to install Minikube and learn the Kubernetes concepts with it. My final goal is to fully migrate my services, currently running on Docker, into Kubernetes.

So, here I describe all the processes to make Minikube's single node up and running. 

1) Installation of Docker (on Ubuntu, tested on 22.04)

sudo apt-get remove docker docker-engine docker.io containerd runc

sudo mkdir -m 0755 -p /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Test the installation

sudo docker run hello-world

Check if everything is fine

sudo service docker status

sudo systemctl status docker.service  

sudo systemctl is-enabled docker.service 

sudo systemctl is-active docker.service 

sudo docker compose version


Add user to docker group (so that you do not need to provide sudo with docker) 

Check if the docker group exists in the /etc/group file. If not, add it:

sudo groupadd docker

Add the user to the docker group.

sudo usermod -aG docker $USER

Restart the system (or run "newgrp docker" in the current shell), then test:

docker run hello-world

Reference: https://docs.docker.com/engine/install/ubuntu/

2) Installation of Minikube

wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

chmod +x minikube-linux-amd64 

sudo install minikube-linux-amd64 /usr/local/bin/minikube

Verify

minikube version

Start

minikube start
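Check the cluster state:

minikube status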

Ref: https://r2schools.com/how-to-install-minikube-on-ubuntu-22-04-lts/

3) Installation of kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

chmod +x kubectl 


sudo install kubectl /usr/local/bin/kubectl

Verify

kubectl version -o json --client 

Ref: https://r2schools.com/how-to-install-minikube-on-ubuntu-22-04-lts/

4) Enable autocomplete for kubectl

  • Check if bash-completion is already installed:
    type _init_completion

    If it is already installed, you will see some output. If not, install it:
    sudo apt-get install bash-completion

  • Enable kubectl autocompletion and reload bash
        kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

  • Setup alias for kubectl
    echo 'alias k=kubectl' >>~/.bashrc

  • Enable the alias for auto-completion and reload bash. (source ~/.bashrc)
    echo 'complete -o default -F __start_kubectl k' >>~/.bashrc

  Now, autocomplete will work for the alias too :)

Reference: https://spacelift.io/blog/kubectl-auto-completion 


Preparing Mini-Server - Part 1 (System)

BACKGROUND

In this article, I am going to share my experience with mini-server preparation. I carried out this whole job as a hobby project and a kind of research & development. In my free time, I develop interesting projects and want to run them somewhere with 100% availability.

I got one free web space from somewhere (I don't remember where now) and could run a small PHP project to store my personal passwords. But I did not have much interest in PHP and wanted to switch to Java, which I am comfortable with. I then started looking for free Java hosting where I could run my applications; unfortunately, I did not find any such provider. I had one notebook which I ran 24 hours a day as a server for my Java applications. One big problem was power consumption (almost 28W), and running a notebook around the clock is, I think, not a good idea. The noise the notebook produced in 24-hour operation was also noticeable.

Then I decided to purchase a virtual private server from Amazon. I was very excited to have almost 30 GB of space and 1 GB of RAM, and I could run simple programs without any problem. Even better, I had full access to the system, and it felt more secure for my private data than some random webspace provider. I paid about 5 EUR per month for this service (EC2), and compared to the electricity cost of running a full system at home, it was economical too. Later I found a cheaper option, Vultr (https://vultr.com), which I used for a couple of months and then unsubscribed due to inactivity. The IaaS from Amazon and Vultr was amazing; the only problem was resources: 1 GB of RAM and 30 GB of SSD were not enough to run many applications in parallel, and upgrading costs more money. There were options to run only during specific time intervals, but that sacrifices the availability of the service.

MINI-PC

The idea of a mini-PC came when I realized that instead of keeping your data in the cloud (which in my opinion carries some security and availability risks), you can host it on your own server and make it available on the internet through your router. You keep full access to your system and can manage it your own way. Another plus: you get more SSD space and more RAM. Your data is stored locally and does not leave your network, and with some security mechanisms you can fully protect your sensitive data.

Factors to consider for a mini-server

We have to consider some points before purchasing a PC that will run 24 hours a day as a server. There are many mini-PCs on the market. The powerful ones consume more power and run warmer, so their cooling has to work constantly. Although they finish complex tasks faster, for a server running 24 hours a day we want it as quiet as possible. Mini servers normally run headless and are not meant for gaming, so the integrated graphics that come with the processor are enough. The CPU should consume as little power as possible (preferably 10W-15W, e.g. a Celeron) with as many cores as possible within that power budget.

Regarding fans, some mini-PCs ship with loud fans that can be heard across the room; for a 24-hour server, these are NOT recommended. BIOS settings can tie fan speed to temperature so the fan runs quieter. For example, when I first turned on my new NUC mini-PC, the fan ran constantly, because the default profile kept the system as cool as possible. After changing the BIOS settings, the fan stopped: I raised the minimum temperature at which the fan starts, so it only spins up when the CPU actually gets that warm.

I did tons of research, and even though the price is higher than the competition, I could not compromise on the build quality of Intel NUC PCs. My expectations were met by the Intel NUC BOXNUC6CAYH (4-core Intel Celeron, 8 GB memory), which has been running since 2019 (almost 4 years now!) without any issues. The good part is that the fan stays mostly passive, so the system runs cool and quiet 24 hours a day. I reboot the system only when there are kernel updates, and that's it!

OPERATING SYSTEM

I have NOT done any research regarding the best operating system for a mini-PC. I needed something lightweight without any bloatware, and ubuntu-server met the requirements.


The minimal server installation is very light, and you can install the required packages/services later. After installation, I installed OpenSSH to access the system remotely. Surprisingly, in contrast to Windows, memory usage was just 400 megabytes and CPU usage was almost zero.



Enable Remote Login

To enable remote login, we have to install the OpenSSH service. To allow remote login with a password, uncomment the following line in the /etc/ssh/sshd_config file (and make sure it is set to yes):

PasswordAuthentication yes

Configure Network
One of the challenging tasks after installation of the ubuntu-server is to configure the network and assign an IP address to the system. 

>> Command to check all the interfaces and IP addresses:
     ip a
    This shows the network information, including the IP addresses of all the interfaces.


Here, the first one is the loopback interface, the second is the Ethernet interface, and the third is the wireless interface. The default network configuration is DHCP, so the IP address is assigned by a DHCP server on the network. If we want a static IP address, we have to assign it ourselves. For that, we edit the YAML files in the /etc/netplan folder, changing DHCP to static settings and providing the information manually.

Wifi:
A typical wifi configuration looks something like this: 
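(A sketch with placeholder values; the interface name, SSID, password, and addresses are illustrative. Check your interface name with "ip a".)

network:
  version: 2
  renderer: networkd
  wifis:
    wlp2s0:
      dhcp4: no
      optional: true
      addresses: [192.168.1.50/24]
      nameservers:
        addresses: [192.168.1.1]
      routes:
        - to: default
          via: 192.168.1.1
      access-points:
        "MyHomeWifi":
          password: "changeme"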


Ethernet:
A typical ethernet configuration looks something like this:
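(Again a sketch; substitute your own interface name and addresses.)

network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: no
      optional: true
      addresses: [192.168.1.51/24]
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]
      routes:
        - to: default
          via: 192.168.1.1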

Here I have disabled DHCP, so I have provided the IP address, nameservers, and gateway myself.

Note: to avoid the network verification wait (ca. 2 minutes) while booting, we set "optional: true", which skips the network check.

After you have changed one of those files, run the following commands to verify and apply:
sudo netplan generate
sudo netplan apply

If you see no error messages, you are good to go; otherwise, fix the configuration problems. ("sudo netplan try" is also useful: it applies the change and rolls back automatically unless you confirm it.)

Notes:

1) Please don't enable disk encryption; it requires a human to type the passphrase at every reboot, which is infeasible to do remotely.
2) Use this command if "lsblk" or "df -h" does not show the full disk size:

sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv


Sunday, October 9, 2022

Auto-Reload Spring MVC Project

This is a much-needed workflow when developing Spring Boot applications: we make changes continuously and want to see the result immediately, without manually rebuilding the application.

For changes in Java files, we need to include the spring-boot-devtools dependency (shown here in Gradle notation):

developmentOnly 'org.springframework.boot:spring-boot-devtools'


In Eclipse, this should be enough (I have not tested it, though).

If you are using IntelliJ, one extra step is needed to enable hot reload.





You need to go to Settings (CTRL+ALT+S), click "Build, Execution, Deployment", select "Compiler", and then on the right side check "Build project automatically".


So far, we have implemented hot reload of Java files.

This does not cover changes in other files, such as template files or other resources. There are many ways to implement this; I prefer using gulp, which watches the resource files and copies changed files into the build directory.

(Reference: https://attacomsian.com/blog/spring-boot-auto-reload-thymeleaf-templates)

1) Your system should have a recent npm and Node.js.
2) Install the gulp CLI:
npm install gulp-cli -g
Installing it globally makes it available to all other applications too.
3) Create a file package.json in the root folder with the following content:

{
  "name": "corona-tracker",
  "scripts": {
    "gulp-watch": "gulp-watch"
  },
  "dependencies": {
    "gulp": "^4.0.2",
    "gulp-watch": "^5.0.1"
  }
}


4) Run the command npm install (which installs these packages).
5) Now we define the actual task that watches for changes and acts on them. For that, create a file called "gulpfile.js" with the following content:

var gulp = require('gulp'),
    watch = require('gulp-watch');

gulp.task('watch', function () {
    return watch('src/main/resources/templates/**/*.*', () => {
        gulp.src('src/main/resources/templates/**')
            .pipe(gulp.dest('build/resources/main/templates/'));
    });
});

The task defined here is "watch"

6) Run "gulp watch", which watches the templates directory and, whenever anything changes, copies the changed files into the build directory.


I preferred this method of reloading static file changes because it is fast and fully customizable. The IntelliJ-only method did not work in my case (and changing the IntelliJ registry did not sound good to me).