Learning Microservices From Scratch — Part 1
Setting up the Database
Over time, I have worked on many small microservices, particularly in Java. But they were either for personal projects or for adding new features to an existing codebase. I’ve also been developing multiple microservices at work, in Golang, with the intention of making them production-ready. However, what I have never done is:
- Developed a Golang microservice from scratch.
- Developed a microservice while following TDD (maybe not 100% but as much as I can).
- Maximized automation.
- Followed the best practices I come across.
So let’s get right to it.
Decisions
To begin with, I thought I’d set up the database first (hence the subtitle). Recently, I’ve been working with PostgreSQL databases at work, so I figured I’d stick with the same (I don’t have any specific requirement that would make me prefer a particular DBMS), and since I’ve always liked working with Kubernetes, it only makes sense to give CNPG (CloudNativePG) a shot.
Next, we need a Kubernetes environment to deploy our CNPG cluster to. We’ll go over the options, from cloud providers to a local minikube setup, in the next section.
Now, before we get started, clone or fork the following repo. This is where I’ll be pushing all my code.
Provisioning a Kubernetes Cluster
We have many options here. If you haven’t exhausted all your free credits/free-tier plans on popular cloud service providers like AWS, GCP, and Azure (unlike me 🥹), then you can create a small Kubernetes cluster on your platform of choice, and we’ll deploy the CNPG cluster on it.
However, if you’re out of free options like me and are looking for a low-cost Kubernetes platform, you could use Civo.
There are other cost-effective options too, but I’ll be going with Civo because it took me the least amount of time to set up my account and billing options there. That, and the $250 credit you get for the first month 😉
Coming to the Kubernetes cluster itself, here is the node configuration I went with.
- 2 x `Extra Small — Standard`: 1 CPU, 1 GB RAM, 30 GB disk
- 1 x `Small — Standard`: 1 CPU, 2 GB RAM, 40 GB disk
This is the cheapest node config I could make work. If you remove any node, or try to replace the `Small` node with an `Extra Small` one, your pods might not get scheduled, as the nodes just won’t have enough allocatable resources. If you’re using Civo, here’s what your cluster would look like.
To connect to the cluster, download the corresponding kubeconfig file and place it in your `~/.kube` directory with the name `config`.
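On the command line, that looks something like this, assuming the file landed in your Downloads directory (adjust the path as needed):

mkdir -p ~/.kube
cp ~/Downloads/civo-kubeconfig ~/.kube/config
chmod 600 ~/.kube/config
# Sanity check: list the nodes along with their allocatable CPU and memory
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory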
Alright, we have our Kubernetes cluster, let's get our CNPG database ready now.
CNPG Database Setup
Before we get started, make sure you have the prerequisites ready.

- Install `kubectl` following the instructions here.
- Install `helm`.
- Install `K9s`. This one is optional, but it helps a lot with interacting with your Kubernetes cluster.
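If you’re on macOS with Homebrew, for instance, all three are a single command away (package names may differ on other platforms):

brew install kubectl helm k9s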
If you’re a beginner when it comes to Kubernetes, or if you want to set up CNPG locally before provisioning a cloud environment, I’d recommend starting with Minikube. This creates a Kubernetes cluster on your local machine, on which you can then provision your CNPG cluster.
Once you have installed Minikube, use the following command to create a new cluster.
minikube start --cpus 4 --memory 6144
(You can adjust the CPU and memory allocation here based on your local machine’s configuration.)
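Once the cluster is up, a quick sanity check confirms the node is ready:

minikube status
kubectl get nodes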
Coming to CNPG, if you check their official documentation, the process goes like this.
- First, you install the controller.
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/releases/cnpg-1.24.0.yaml
- Next, you create a Postgres `Cluster`.
kubectl apply -f https://cloudnative-pg.io/documentation/1.15/samples/cluster-example.yaml
That’s it; with this, you should have a 3-node PostgreSQL cluster. You can check the pods created in the `default` namespace.
kubectl get pods -l cnpg.io/cluster=cluster-example
To configure the CNPG database from scratch, I have created a Helm chart named `cnpg-database` that takes care of the whole process and creates the following resources:

- a dedicated `namespace` where the DB cluster pods are created
- a `secret` in the same namespace to store the DB credentials
- a `configmap` in the same namespace to store the SQL commands that create the tables and make our DB user their owner
- a CNPG `Cluster` that uses the credentials and post-init SQL commands mentioned above
- a `pod` that runs an Adminer container, exposing port 8080, to connect to the database and help with debugging
Let’s take a look at the template file.
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.cnpg_database.namespace }}
---
apiVersion: v1
data:
  password: {{ .Values.cnpg_cluster.password | b64enc | quote }}
  username: {{ .Values.cnpg_cluster.username | b64enc | quote }}
kind: Secret
metadata:
  name: app-user-auth
  namespace: {{ .Values.cnpg_database.namespace }}
type: kubernetes.io/basic-auth
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: post-init-sql-db-setup
  namespace: {{ .Values.cnpg_database.namespace }}
data:
  sql.commands: |
{{ (.Files.Get "files/tables.sql") | indent 4 }}
{{ (.Files.Get "files/permissions.sql") | indent 4 }}
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: {{ .Values.cnpg_cluster.name }}
  namespace: {{ .Values.cnpg_database.namespace }}
spec:
  instances: {{ .Values.cnpg_cluster.instanceCount }}
  storage:
    size: {{ .Values.cnpg_cluster.storageSize }}
  bootstrap:
    initdb:
      database: {{ .Values.cnpg_database.name }}
      owner: {{ .Values.cnpg_cluster.username }}
      secret:
        name: app-user-auth
      postInitApplicationSQLRefs:
        configMapRefs:
          - name: post-init-sql-db-setup
            key: sql.commands
---
apiVersion: v1
kind: Pod
metadata:
  name: adminer
  namespace: {{ .Values.cnpg_database.namespace }}
spec:
  containers:
    - name: adminer
      image: adminer:4.8.1
      ports:
        - containerPort: 8080
And here is the `values.yaml` file.
cnpg_cluster:
  instanceCount: 2
  name: cnpg-cluster
  password: ""
  storageSize: "1Gi"
  username: sourcescore
cnpg_database:
  name: sourcescore
  namespace: postgres-cluster
As you can see, we are creating a Postgres cluster named `cnpg-cluster` in the `postgres-cluster` namespace, with two instances, each with 1Gi of storage. The username is `sourcescore`, and the password is left empty, to be set during the Helm chart installation.
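If you were to install the chart by hand, supplying the password looks like this (the same helm command the Makefile wraps, as we’ll see below):

helm upgrade --install cnpg-database helm/cnpg-database \
  --set cnpg_cluster.password='<your-password>'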
This is where the Makefile comes into the picture. I have added recipes to create and clean up a minikube cluster, and to install the CNPG controller and our `cnpg-database` Helm chart on the Kubernetes cluster. I’m also reading the CNPG version and the database password from environment variables, with default values to fall back on in case they are not set.
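As a sketch, the variable handling looks something like this (the default values here are illustrative; check the repo’s Makefile for the actual ones):

# ?= assigns a value only if the variable isn't already set, so anything
# exported in your shell environment overrides these defaults
CNPG_VERSION ?= 1.24.0
PG_USER_PASSWORD ?= postgres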
Let’s focus on setting up our database on a Civo cluster.
Earlier, I mentioned you can download your cluster’s kubeconfig file to connect to it. If you have cloned the linked repo, place the file in the `configs` directory as `civo-kubeconfig` (the name the Makefile expects) and run the following command to have your database all set up.
make cloud-pg-setup
Diving into the Makefile recipes:
cloud-k8s-setup:
	chmod 400 configs/civo-kubeconfig
	cp -f configs/civo-kubeconfig ~/.kube/config

cnpg-controller-setup:
	kubectl apply --server-side -f \
		https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/releases/cnpg-$(CNPG_VERSION).yaml
	@echo -e "\n\e[0;32mInstalled CNPG controller on the cluster :)\n\e[0m"
	sleep 60
	kubectl get deployment -n cnpg-system cnpg-controller-manager
	@echo -e "\n\n"

pg-setup: cnpg-controller-setup
	helm upgrade --install cnpg-database --set cnpg_cluster.password=$(PG_USER_PASSWORD) helm/cnpg-database
	@echo -e "\n\e[0;32mCreated CNPG cluster :)\n\e[0m"
	sleep 240
	kubectl get pods -l cnpg.io/cluster=cnpg-cluster -n postgres-cluster

cloud-pg-setup: cloud-k8s-setup pg-setup
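Thanks to the ?= defaults, overriding them is just a matter of setting the environment variables when you call make (variable names as in the sketch above):

PG_USER_PASSWORD='s3cr3t' CNPG_VERSION=1.24.0 make cloud-pg-setup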
Here is the output you’ll get when your database is ready to connect.
If you’ve installed K9s, you should be able to see the controller and the DB cluster pods as shown below.
Now, let’s connect to the database. Since our Helm chart also creates an Adminer pod, all we need to do is port-forward to it, which is super easy with K9s.
Before accessing Adminer on our local port, let’s take a look at the services created by the DB `Cluster`.
- `cnpg-cluster-rw`: applications connect only to the primary instance of the cluster
- `cnpg-cluster-ro`: applications connect only to hot standby replicas, for read-only workloads
- `cnpg-cluster-r`: applications connect to any of the instances, for read-only workloads
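You can list them with kubectl:

kubectl get svc -n postgres-cluster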
Now, let’s access Adminer on our local port.
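If you’re not using K9s, plain kubectl port-forwarding works just as well:

kubectl port-forward pod/adminer 8080:8080 -n postgres-cluster

Adminer should then be reachable at http://localhost:8080.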
This is where the services created above come into the picture. If you want both read and write access, use the `rw` service (`cnpg-cluster-rw`) in the Server field. And as you should be able to see, all the tables we created are there. Moreover, since the user you logged in as was set as the owner of these tables, you should be able to insert data as well.
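If you’d rather verify from a terminal instead of Adminer, a throwaway psql pod inside the cluster works too. A minimal sketch, assuming the postgres:16 image; substitute the password you set during installation:

# Connects through the rw service and lists the tables owned by our user
kubectl run psql-client --rm -it --restart=Never --image=postgres:16 -n postgres-cluster -- \
  psql "postgres://sourcescore:<your-password>@cnpg-cluster-rw:5432/sourcescore" -c '\dt'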
And with that, we’re done creating and setting up a CNPG database. Next time, we’ll interact with this database from our Golang microservice. So stay tuned.
If you have any questions or suggestions, drop them in the comments section.
See you on another post 🖖