As someone who enjoys exploring new technologies, I’ve always been curious about Kubernetes. Instead of just following generic tutorials, and to get proper hands-on experience, I decided to pick one of my favorite projects, Medusa, and use it as a real-world use case for learning Kubernetes. By doing this, I was able to deepen my understanding of Kubernetes while also improving my workflow for managing development environments.
For those who are not familiar with Medusa, it is a headless commerce engine that gives developers flexibility and scalability. Setting it up on Kubernetes for development purposes was a bit challenging at first, but with some trial and error I managed. In this article, I’ll walk you through how I deployed Medusa on Kubernetes using Minikube. You can also find the full setup in the accompanying repository.
Creating a Medusa project
The first step was creating a new Medusa project using Yarn:
yarn dlx create-medusa-app@latest my-medusa-store
After setting up the project, I installed the necessary dependencies and extended the medusa-config.ts file to use Redis for caching, the event bus, and the workflow engine:
import { defineConfig } from "@medusajs/framework/utils";

module.exports = defineConfig({
  modules: [
    {
      resolve: "@medusajs/medusa/cache-redis",
      options: {
        redisUrl: process.env.REDIS_URL,
      },
    },
    {
      resolve: "@medusajs/medusa/event-bus-redis",
      options: {
        redisUrl: process.env.REDIS_URL,
      },
    },
    {
      resolve: "@medusajs/medusa/workflow-engine-redis",
      options: {
        redis: {
          url: process.env.REDIS_URL,
        },
      },
    },
  ],
  // the rest of the initial configuration
});
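These module options are read from environment variables; for local development, a minimal .env along these lines would cover them (the values are placeholders, chosen to match the ones used later in the Kubernetes secrets):
# .env (local development; placeholder values)
DATABASE_URL=postgres://postgres:postgres@localhost:5432/medusa
REDIS_URL=redis://localhost:6379
JWT_SECRET=supersecret
COOKIE_SECRET=supersecret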
Once the configuration was complete, I ran yarn build to generate the .medusa build folder, which contains the necessary server files.
Dockerizing Medusa
I created a Dockerfile to package Medusa into a container. Instead of using a multi-stage build, I opted to copy the files directly from .medusa/server into the container. The reasoning behind this decision was to let a CI job build the project separately before creating the image: if the build fails, there is no point in building an image at all.
FROM node:22-slim
# Enable the Corepack-managed Yarn shim
RUN corepack enable yarn
WORKDIR /app
# Copy the pre-built server output produced by yarn build, plus the Yarn config
COPY .medusa/server ./
COPY .yarnrc.yml ./
# Install dependencies without modifying the lockfile
RUN yarn install --immutable
# Medusa listens on port 9000
EXPOSE 9000
CMD ["yarn", "start"]
Once the Dockerfile was ready, I built the image tagged as medusa-store:latest
and tested it using Docker Compose. This step ensured that everything was running correctly before deploying it to Kubernetes.
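The image itself can be built from the project root with docker build -t medusa-store:latest . and a minimal Compose file for that smoke test might look like the following; the service names, image tags, and credentials are assumptions that simply mirror the values used in the Kubernetes manifests below.
# docker-compose.yaml (sketch; service names, image tags, and credentials are assumptions)
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: medusa
  redis:
    image: redis:7
  medusa:
    image: medusa-store:latest
    depends_on:
      - postgres
      - redis
    environment:
      DATABASE_URL: postgres://postgres:postgres@postgres:5432/medusa?ssl_mode=disable
      REDIS_URL: redis://redis:6379
      JWT_SECRET: supersecret
      COOKIE_SECRET: supersecret
    ports:
      - "9000:9000"
Before the first start the database still needs migrating (for example by running yarn medusa db:migrate inside the medusa container), which is exactly what the init container takes care of on Kubernetes later.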
Setting up Kubernetes resources
Next, I created the necessary Kubernetes configurations, including ConfigMaps, Secrets, and Deployments. Below is a breakdown of the components I defined:
ConfigMaps and Secrets
I created YAML files to store environment variables and sensitive data securely.
# config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: medusa-config
data:
  MEDUSA_ADMIN_ONBOARDING_TYPE: "default"
  STORE_CORS: "http://localhost:8000,https://docs.medusajs.com"
  ADMIN_CORS: "http://localhost:5173,http://localhost:9000,https://docs.medusajs.com"
  AUTH_CORS: "http://localhost:5173,http://localhost:9000,http://localhost:8000,https://docs.medusajs.com"
For passwords and secrets, the values must be base64-encoded. That can be done easily in the terminal with echo -n "postgres" | base64.
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: medusa-secret
type: Opaque
data:
  JWT_SECRET: c3VwZXJzZWNyZXQ=
  COOKIE_SECRET: c3VwZXJzZWNyZXQ=
  POSTGRES_PASSWORD: cG9zdGdyZXM=
  POSTGRES_USER: cG9zdGdyZXM=
  DATABASE_URL: cG9zdGdyZXM6Ly9wb3N0Z3Jlczpwb3N0Z3Jlc0Bwb3N0Z3Jlczo1NDMyL21lZHVzYT9zc2xfbW9kZT1kaXNhYmxl
  REDIS_URL: cmVkaXM6Ly9yZWRpczo2Mzc5
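As a quick sanity check, each value can be round-tripped in the terminal; for example, the Redis URL above decodes back to the plain connection string:
echo -n "redis://redis:6379" | base64                  # cmVkaXM6Ly9yZWRpczo2Mzc5
echo -n "cmVkaXM6Ly9yZWRpczo2Mzc5" | base64 --decode   # redis://redis:6379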
Deployments and services
I created deployments and services for Redis, PostgreSQL, and Medusa. Below is an example of the Medusa deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: medusa-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: medusa
  template:
    metadata:
      labels:
        app: medusa
    spec:
      initContainers:
        - name: migrate-db
          image: medusa-store:latest
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "yarn medusa db:migrate"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: DATABASE_URL
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: REDIS_URL
        - name: create-admin-user
          image: medusa-store:latest
          imagePullPolicy: IfNotPresent
          command:
            [
              "sh",
              "-c",
              "yarn medusa user -e [email protected] -p supersecret || true",
            ]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: DATABASE_URL
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: REDIS_URL
      containers:
        - name: medusa-container
          image: medusa-store:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: JWT_SECRET
            - name: COOKIE_SECRET
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: COOKIE_SECRET
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: DATABASE_URL
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: medusa-secret
                  key: REDIS_URL
          envFrom:
            - configMapRef:
                name: medusa-config
I used init containers to run the database migrations so the schema is in sync before the pod starts serving traffic; the second init container creates an admin user, with || true ensuring it doesn't fail when the user already exists. I also deployed Redis and PostgreSQL with similar configurations, ensuring they were accessible to the Medusa service.
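The Service manifests are not shown here, but as a rough sketch, the Medusa Service could look like the following, assuming the container listens on port 9000 and is exposed on port 80 through a LoadBalancer; the name medusa-service is an assumption, and the ports line up with the minikube tunnel and ngrok http 80 steps below.
# medusa-service.yaml (sketch; name and ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: medusa-service
spec:
  type: LoadBalancer
  selector:
    app: medusa
  ports:
    - port: 80
      targetPort: 9000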
Deploying to Minikube
I had to load the locally built image into Minikube with a simple command: minikube image load medusa-store:latest. After that, I created a namespace and applied all the resources to it:
kubectl create namespace medusa
kubectl apply -f . -n medusa
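To confirm that everything came up, a quick check of the pods and the Medusa logs helps (resource and container names follow the manifests above):
kubectl get pods -n medusa
kubectl rollout status deployment/medusa-app -n medusa
kubectl logs deployment/medusa-app -c medusa-container -n medusa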
Once that was complete, I used minikube tunnel so the load balancer would get an IP assigned, and then ran ngrok on top of that to get an ephemeral HTTPS domain, which let me access the admin dashboard.
minikube tunnel --bind-address=127.0.0.1
ngrok http 80
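While the tunnel is running, the LoadBalancer Service should report an external IP before ngrok is started; assuming the Service name from the sketch above, this can be checked with:
kubectl get svc medusa-service -n medusa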