Containers & Orchestration

Production-Ready Microservices

The Deployment Problem

You've built 15 microservices. Now what?

  • Server 1: Java 21, PostgreSQL 14, 8GB RAM
  • Server 2: Java 17, PostgreSQL 15, 16GB RAM
  • Your Laptop: Java 21, H2 Database, 32GB RAM

"It works on my machine!"

The Promise of Containers

Package once, run anywhere.

A container bundles:

  • Your application code
  • Runtime (Java, Node.js)
  • Dependencies
  • Configuration

Result: Same environment from laptop → staging → production.

Why Containers ♥ Microservices

  1. Isolation: Each service in its own container
  2. Density: Run 50 containers on one server
  3. Fast Startup: New instance in seconds, not minutes
  4. Immutable: No "config drift" - recreate, don't update
  5. Portability: Cloud-agnostic (AWS, Azure, On-prem)

Microservices amplify container benefits.

Container vs. Virtual Machine

┌─────────────────┐   ┌─────────────────┐
│   App A         │   │   App A         │
│   ├─ Java       │   │   ├─ Java       │
│   └─ Libraries  │   │   └─ Libraries  │
├─────────────────┤   ├─────────────────┤
│ Container Engine│   │ Guest OS (3GB)  │
├─────────────────┤   │ Hypervisor      │
│ Host OS         │   │ Host OS         │
└─────────────────┘   └─────────────────┘
   CONTAINER              VIRTUAL MACHINE
   (100MB, <1s start)     (3GB, 30s start)

The 12-Factor App Principles

Conventions for Production-Ready Services

Created by Heroku, now an industry standard.

Key Factors Relevant to Microservices:

  1. Codebase: One repo per service
  2. Dependencies: Explicitly declared
  3. Config: Environment variables, not hardcoded
  4. Backing Services: Treat DB/cache as attached resources
  5. Stateless Processes: Share nothing, store state externally

Factor III: Config via Environment Variables

Bad (Hardcoded):

String dbUrl = "jdbc:postgresql://prod-db:5432/patients";

Good (Environment Variable):

@ConfigProperty(name = "db.url")
String dbUrl;

Deployment:

docker run -e DB_URL=jdbc:postgresql://prod-db:5432/patients \
           practicemanager:1.0

Same container image, different config per environment.

Factor VI: Stateless Processes

Stateful (Bad):

public class PatientCache {
    private static Map<String, Patient> cache = new HashMap<>();
}

Problem: Scaling to 5 instances means 5 different caches.

Stateless (Good):

@Inject
RedisClient redis;

public Patient getPatient(String id) {
    return redis.get("patient:" + id);
}

State in external store (Redis, DB). Any instance can handle any request.

Exercise 1: Containerize practicemanager

From Jar to Docker Image

Objective: Build a Docker image for practicemanager.

Exercise 1: Step 1 - Create Dockerfile

# Multi-stage Dockerfile to optimize image size
# Stage 1: Build
FROM registry.access.redhat.com/ubi9/openjdk-21:1.23 AS build
ARG APP_VERSION="0.0.0-dev-SNAPSHOT"

USER root

RUN microdnf install -y gzip && microdnf clean all

RUN mkdir -p /home/jboss && chown -R 185:0 /home/jboss

WORKDIR /home/jboss

# Copy Maven wrapper and pom.xml first (for caching)
COPY --chown=185:0 mvnw .
COPY --chown=185:0 .mvn/ .mvn/
COPY --chown=185:0 pom.xml .

USER 185

RUN ./mvnw -B org.apache.maven.plugins:maven-dependency-plugin:3.1.2:go-offline

# Copy source code
USER root
COPY --chown=185:0 src/ src/

USER 185
RUN ./mvnw versions:set -DnewVersion=${APP_VERSION}
RUN ./mvnw package

# Stage 2: Runtime (using Quarkus JVM image)
FROM registry.access.redhat.com/ubi9/openjdk-21:1.23

ENV LANGUAGE='en_US:en'

# Copy the built application from build stage (note: /home/jboss not /app)
COPY --from=build --chown=185 /home/jboss/target/quarkus-app/lib/ /deployments/lib/
COPY --from=build --chown=185 /home/jboss/target/quarkus-app/*.jar /deployments/
COPY --from=build --chown=185 /home/jboss/target/quarkus-app/app/ /deployments/app/
COPY --from=build --chown=185 /home/jboss/target/quarkus-app/quarkus/ /deployments/quarkus/

EXPOSE 8080
USER 185
ENV JAVA_OPTS_APPEND="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"

ENTRYPOINT [ "/opt/jboss/container/java/run/run-java.sh" ]

Exercise 1: Step 2 - Build Image

cd practicemanager

docker build -t practicemanager:1.0 .

# Verify image size
docker images practicemanager:1.0

Expected: ~400 MB (compare to a 3 GB VM!)

Exercise 1: Step 3 - Run Container

docker run -d \
  --name practicemanager \
  -p 8080:8080 \
  -e JAVA_OPTS_APPEND="-Dquarkus.http.port=8080" \
  practicemanager:1.0

# Check logs
docker logs -f practicemanager

# Test
curl http://localhost:8080/api/patients

Exercise 1: Step 4 - Optimize native image (optional)

# Build native image
./mvnw package -Pnative -Dquarkus.native.container-build=true
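
If the build succeeds, the native executable ends up in target/; with the default Quarkus naming it is suffixed -runner (the exact file name depends on your artifactId and version). A quick sanity check:

ls -lh target/*-runner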

Exercise 2: Externalize Configuration

12-Factor Config in Practice

Objective: Use environment variables for all configuration.

Exercise 2: Step 1 - Identify Hardcoded Config

Review application.properties:

# Hardcoded - BAD
document.storage.path=./documents
rabbitmq-host=localhost

Exercise 2: Step 2 - Externalize with Env Vars

Update application.properties:

# Use environment variables with defaults
document.storage.path=${DOCUMENT_STORAGE_PATH:./documents}
rabbitmq-host=${RABBITMQ_HOST:localhost}
rabbitmq-port=${RABBITMQ_PORT:5672}
rabbitmq-username=${RABBITMQ_USERNAME:guest}
rabbitmq-password=${RABBITMQ_PASSWORD:guest}
....

Exercise 2: Step 3 - Run with Config

docker run -d \
  --name practicemanager \
  -p 8080:8080 \
  -e DOCUMENT_STORAGE_PATH=/tmp/documents \
  practicemanager:1.0

Same image, different behavior!
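
To confirm the override actually reached the container, inspect its environment from the host (no shell inside the container needed):

docker inspect --format '{{.Config.Env}}' practicemanager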

Exercise 3: Docker Compose for Multi-Service

Local Development Stack

Objective: Run all services + dependencies with one command.

Exercise 3: docker-compose.yml

version: '3.8'

services:
  rabbitmq:
    image: rabbitmq:3.13-management
    container_name: microservices-rabbitmq
    ports:
      - "5672:5672"    # AMQP protocol port
      - "15672:15672"  # Management UI port
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
    healthcheck:
      test: ["CMD-SHELL", "test -f /var/lib/rabbitmq/.erlang.cookie && rabbitmq-diagnostics -q ping"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      - microservices-network

  practicemanager:
    build:
      context: .
      dockerfile: Dockerfile.multistage
    image: practicemanager:latest
    container_name: practicemanager
    ports:
      - "8080:8080"
    environment:
      QUARKUS_HTTP_PORT: 8080
      DOCUMENT_STORAGE_PATH: /data/documents
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
      RABBITMQ_USERNAME: guest
      RABBITMQ_PASSWORD: guest
    volumes:
      - documents-data:/data/documents
    depends_on:
      rabbitmq:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/q/health/live"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - microservices-network

  consultations:
    build:
      context: ../consultations
      dockerfile: Dockerfile.multistage
    image: consultations:latest
    container_name: consultations
    ports:
      - "8084:8084"
    environment:
      QUARKUS_HTTP_PORT: 8084
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
      RABBITMQ_USERNAME: guest
      RABBITMQ_PASSWORD: guest
      PATIENT_API_URL: http://practicemanager:8080
    depends_on:
      rabbitmq:
        condition: service_healthy
      practicemanager:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8084/q/health/live"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - microservices-network

networks:
  microservices-network:
    driver: bridge

volumes:
  documents-data:
    name: practicemanager-documents

The Compose service definitions above call /q/health/live for their health checks.
Expose these endpoints in each Quarkus service by adding the SmallRye Health dependency to its pom.xml:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

You will need to rebuild your Docker images after adding this dependency.

Exercise 3: Start Everything

# Build and start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop all
docker-compose down

# Stop and remove volumes
docker-compose down -v

One command to rule them all!
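
Once the containers report healthy, the SmallRye Health endpoints should answer on the published ports (8080 for practicemanager, 8084 for consultations):

curl http://localhost:8080/q/health/live
curl http://localhost:8084/q/health/live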

Part 2: Kubernetes Orchestration

Why Kubernetes?

Docker Compose is great for dev, but in production:

  • What if a container crashes? (Auto-restart)
  • What if Server 1 dies? (Reschedule on Server 2)
  • How do I scale to 10 instances? (Autoscaling)
  • How do I do zero-downtime deployments? (Rolling updates)
  • How do services discover each other? (Services & DNS)

Kubernetes (K8s) solves these problems.

Kubernetes Architecture

┌────────────────────────────────────────┐
│           CONTROL PLANE                │
│  ┌──────────┐  ┌──────────┐           │
│  │API Server│  │Scheduler │           │
│  └──────────┘  └──────────┘           │
│  ┌──────────┐  ┌──────────┐           │
│  │Controller│  │   etcd   │           │
│  └──────────┘  └──────────┘           │
└────────────────────────────────────────┘
           │
┌──────────┴──────────────────────────────┐
│              WORKER NODES               │
│  ┌─────────────┐   ┌─────────────┐     │
│  │   Node 1    │   │   Node 2    │     │
│  │ ┌─────────┐ │   │ ┌─────────┐ │     │
│  │ │  Pod A  │ │   │ │  Pod B  │ │     │
│  │ │ ┌─────┐ │ │   │ │ ┌─────┐ │ │     │
│  │ │ │Cont.│ │ │   │ │ │Cont.│ │ │     │
│  │ │ └─────┘ │ │   │ │ └─────┘ │ │     │
│  │ └─────────┘ │   │ └─────────┘ │     │
│  └─────────────┘   └─────────────┘     │
└─────────────────────────────────────────┘

Key Kubernetes Concepts

Pod: Smallest unit - one or more containers
Deployment: Manages replicas of pods
Service: Stable network endpoint for pods
ConfigMap: Configuration data
Secret: Sensitive data (passwords, tokens)
Ingress: External access (like API Gateway)
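
kubectl can show the schema and documentation for each of these objects directly from the cluster, which is handy while writing manifests:

kubectl explain pod
kubectl explain deployment.spec.replicas
kubectl api-resources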

Exercise 4: Deploy to Kubernetes

From Docker Compose to K8s

Objective: Run practicemanager in Kubernetes.

Prerequisites:

  • Minikube or Docker Desktop with K8s enabled

Exercise 4: Step 1 - Start Local Cluster

# Using Minikube
minikube start

# Or enable in Docker Desktop
# Settings → Kubernetes → Enable

# Verify
kubectl cluster-info
kubectl get nodes

Exercise 4: Step 2 - Create RabbitMQ Manifest

Before deploying practicemanager, we need RabbitMQ. Create k8s/rabbitmq.yaml:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 15672
    name: management
  selector:
    app: rabbitmq
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.13-management
        ports:
        - containerPort: 5672
        - containerPort: 15672
        env:
        - name: RABBITMQ_DEFAULT_USER
          value: "guest"
        - name: RABBITMQ_DEFAULT_PASS
          value: "guest"

Exercise 4: Step 3 - Deploy RabbitMQ

kubectl apply -f k8s/rabbitmq.yaml

# Verify RabbitMQ is running
kubectl get pods -l app=rabbitmq
kubectl logs -l app=rabbitmq

# Access management UI (optional)
kubectl port-forward service/rabbitmq 15672:15672
# Then open: http://localhost:15672 (guest/guest)

Exercise 4: Step 4 - Create practicemanager Deployment

Create k8s/practicemanager.yaml:

apiVersion: v1
kind: Service
metadata:
  name: practicemanager
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: practicemanager
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: practicemanager
  labels:
    app: practicemanager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: practicemanager
  template:
    metadata:
      labels:
        app: practicemanager
    spec:
      containers:
      - name: practicemanager
        image: localhost/practicemanager:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: RABBITMQ_HOST
          value: "rabbitmq"
        - name: RABBITMQ_PORT
          value: "5672"
        - name: RABBITMQ_USERNAME
          value: "guest"
        - name: RABBITMQ_PASSWORD
          value: "guest"
        - name: DOCUMENT_STORAGE_PATH
          value: "/data/documents"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /q/health/live
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /q/health/ready
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        volumeMounts:
        - name: documents
          mountPath: /data/documents
      volumes:
      - name: documents
        emptyDir: {}

Note: Use imagePullPolicy: Never for locally loaded images.

Exercise 4: Step 5 - Load Image and Deploy

# Load the practicemanager image into minikube (if built with Docker)
minikube image load practicemanager:latest

# Load the practicemanager image into minikube (if built with Podman)
podman save -o /tmp/practicemanager.tar localhost/practicemanager:latest
minikube image load /tmp/practicemanager.tar
rm /tmp/practicemanager.tar

# Deploy practicemanager
kubectl apply -f k8s/practicemanager.yaml

# Verify
kubectl get pods
kubectl get services

You should see the RabbitMQ pod and two practicemanager pods running!

Exercise 4: Step 6 - Test

# Port-forward to access the service
kubectl port-forward service/practicemanager 8080:8080

# Test in another terminal
curl http://localhost:8080/api/patients

# Check logs to verify RabbitMQ connection
kubectl logs -l app=practicemanager --tail=20

You should see "Connection with RabbitMQ broker established" in the logs.

Exercise 4: Step 7 - Consultations MS

Do the same for consultations microservice.

Exercise 5: ConfigMaps & Secrets

Kubernetes-Native Config Management

Objective: Externalize configuration the K8s way.

Exercise 5: Step 1 - Create ConfigMap

Create k8s/configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: practicemanager-config
data:
  DOCUMENT_STORAGE_PATH: "/data/documents"
  QUARKUS_LOG_LEVEL: "INFO"
  QUARKUS_HTTP_PORT: "8080"
  RABBITMQ_HOST: "rabbitmq"
  RABBITMQ_PORT: "5672"

Exercise 5: Step 2 - Create Secret

Create k8s/secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: practicemanager-secrets
type: Opaque
data:
  RABBITMQ_USERNAME: Z3Vlc3Q=  # base64 encoded "guest"
  RABBITMQ_PASSWORD: Z3Vlc3Q=  # base64 encoded "guest"
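
The values under data must be base64-encoded; one way to produce (and verify) them from a Linux shell (GNU coreutils):

echo -n 'guest' | base64      # Z3Vlc3Q=
echo 'Z3Vlc3Q=' | base64 -d   # guest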

Or create from command line:

kubectl create secret generic practicemanager-secrets \
  --from-literal=RABBITMQ_USERNAME=guest \
  --from-literal=RABBITMQ_PASSWORD=guest

Exercise 5: Step 3 - Use in Deployment

Update k8s/practicemanager.yaml to use ConfigMap and Secret:

spec:
  template:
    spec:
      containers:
      - name: practicemanager
        image: localhost/practicemanager:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: practicemanager-config
        - secretRef:
            name: practicemanager-secrets
        volumeMounts:
        - name: documents
          mountPath: /data/documents
        livenessProbe:
          httpGet:
            path: /q/health/live
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /q/health/ready
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
      volumes:
      - name: documents
        emptyDir: {}

Exercise 5: Step 4 - Apply Configuration

# Create ConfigMap and Secret
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml

# Update deployment
kubectl apply -f k8s/practicemanager.yaml

# Verify pods restarted with new config
kubectl get pods -w
kubectl logs -l app=practicemanager --tail=20

Now config is managed separately from deployment!
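
To double-check that the ConfigMap and Secret values actually reached the running container (assuming standard coreutils are available in the image):

kubectl exec deploy/practicemanager -- env | grep RABBITMQ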

Exercise 6: Scaling

Horizontal Pod Autoscaling

Objective: Automatically scale based on CPU usage.

Exercise 6: Step 1 - Enable Metrics Server

# Minikube
minikube addons enable metrics-server

# Verify
kubectl top nodes
kubectl top pods

Exercise 6: Step 2 - Create HPA

kubectl autoscale deployment practicemanager \
  --cpu-percent=70 \
  --min=3 \
  --max=10

Or define it declaratively in YAML:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: practicemanager-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: practicemanager
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
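
If you go the YAML route, apply it and confirm the autoscaler is tracking the deployment (the file name k8s/hpa.yaml is just a suggestion):

kubectl apply -f k8s/hpa.yaml
kubectl get hpa practicemanager-hpa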

Exercise 6: Step 3 - Load Test

# Generate load with busybox
kubectl run -i --tty load-generator --rm \
  --image=busybox --restart=Never -- /bin/sh

# Inside the busybox pod, run multiple parallel processes:
for i in $(seq 1 10); do
  while true; do wget -q -O- http://practicemanager:8080/q/health/live; done &
done

Alternative: Use a proper load testing tool:

# Use hey for better load generation
kubectl run -i --tty load-generator --rm \
  --image=williamyeh/hey:latest --restart=Never -- \
  -z 5m -c 50 http://practicemanager:8080/q/health/live

Watch scaling in another terminal:

kubectl get hpa -w
kubectl top pods -l app=practicemanager

You'll see the replica count climb from the minimum of 3 toward 10 as CPU utilization exceeds 70%!

Note: In the busybox loop, the trailing & runs each wget loop in the background, creating 10 parallel load generators.

Exercise 7: Zero-Downtime Deployment

Rolling Updates

Objective: Update practicemanager without downtime.

Exercise 7: Step 1 - Update Application

Set an explicit rolling-update strategy and bump the image version in deployment.yaml:

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Max 1 extra pod during update
      maxUnavailable: 0  # No pod should be unavailable
  template:
    spec:
      containers:
      - name: practicemanager
        image: practicemanager:2.0  # New version!

Exercise 7: Step 2 - Apply Update

kubectl apply -f practicemanager/k8s/deployment.yaml

# Watch rolling update
kubectl rollout status deployment/practicemanager

# See pod updates in real-time
kubectl get pods -w

Sequence:

  1. Start new pod with v2.0
  2. Wait for readiness probe to pass
  3. Terminate one old pod
  4. Repeat until all 3 pods are v2.0

Result: Zero downtime!
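
One way to observe this from the outside is to watch the Service's endpoints during the rollout; with maxUnavailable: 0 the list of ready pod IPs should never drop to empty:

kubectl get endpoints practicemanager -w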

Exercise 7: Step 3 - Rollback

If v2.0 has bugs:

# View rollout history
kubectl rollout history deployment/practicemanager

# Rollback to previous version
kubectl rollout undo deployment/practicemanager

# Or rollback to specific revision
kubectl rollout undo deployment/practicemanager --to-revision=1

Exercise 8: Persistent Storage

Handling Stateful Data

Objective: Store documents persistently across pod restarts.

Exercise 8: Step 1 - Create PersistentVolumeClaim

Create practicemanager/k8s/pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: documents-pvc
spec:
  accessModes:
    - ReadWriteMany  # Multiple pods can read/write
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
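
Apply the claim before mounting it and check that it binds (depending on the storage class, binding may only happen once the first pod uses it):

kubectl apply -f practicemanager/k8s/pvc.yaml
kubectl get pvc documents-pvc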

Exercise 8: Step 2 - Mount in Deployment

Update deployment.yaml:

spec:
  template:
    spec:
      containers:
      - name: practicemanager
        volumeMounts:
        - name: documents-volume
          mountPath: /data/documents
      volumes:
      - name: documents-volume
        persistentVolumeClaim:
          claimName: documents-pvc

Exercise 8: Step 3 - Test Persistence

# Upload document
curl -X POST http://localhost:8080/api/patients/P001/documents \
  -F "type=LAB_RESULT" \
  -F "file=@test.pdf"

# Delete pod
kubectl delete pod <pod-name>

# Wait for new pod to start
kubectl get pods -w

# Verify document still exists
kubectl exec -it <pod-name> -- bash   # use one of the practicemanager pod names
> ls /data/documents/

Document survives pod restart!

Kubernetes Best Practices

  1. Always Set Resource Limits: Prevent one service from starving others
  2. Use Health Probes: Liveness (restart if dead), Readiness (remove from load balancer)
  3. Never Use :latest Tag: Pin versions (practicemanager:1.0.3)
  4. Separate Config from Code: Use ConfigMaps and Secrets
  5. Enable RBAC: Least-privilege access
  6. Use Namespaces: Isolate dev/staging/prod (see the sketch below)
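
A minimal sketch of the namespace idea (names are illustrative):

# Create an isolated environment and deploy the same manifests into it
kubectl create namespace staging
kubectl apply -f k8s/ -n staging
kubectl get pods -n staging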

Service Mesh (Advanced Topic)

Problem: Microservices need:

  • Mutual TLS
  • Traffic splitting (canary)
  • Retry logic
  • Circuit breaking

Without Service Mesh: Implement in every service (Java, Node.js, Python...)

With Service Mesh (Istio/Linkerd): Sidecar proxy handles it automatically.

Service Mesh Architecture

┌────────────────────────────────┐
│         practicemanager        │
│  ┌──────────┐   ┌──────────┐  │
│  │   App    │   │  Envoy   │  │
│  │ Container│←→ │  Proxy   │  │
│  └──────────┘   └─────┬────┘  │
└────────────────────────│───────┘
                         │ mTLS
┌────────────────────────│───────┐
│         consultations  │       │
│  ┌──────────┐   ┌─────▼────┐  │
│  │   App    │   │  Envoy   │  │
│  │ Container│←→ │  Proxy   │  │
│  └──────────┘   └──────────┘  │
└────────────────────────────────┘

App code unchanged - infrastructure handles it.
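
As a rough sketch of what adoption looks like with Istio (assuming istioctl is installed and pointing at your cluster), sidecars are injected by labelling the namespace and restarting the workloads, with no application code changes:

istioctl install --set profile=demo -y
kubectl label namespace default istio-injection=enabled
kubectl rollout restart deployment/practicemanager deployment/consultations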

Containers vs. Serverless

Containers (Kubernetes):

  • Full control over infrastructure
  • Best for long-running services
  • Pay for reserved capacity

Serverless (AWS Lambda, Google Cloud Run):

  • No infrastructure management
  • Best for event-driven, bursty workloads
  • Pay per request

Microservices fit both models!

Summary

  • Containers: Package app + dependencies for portability
  • 12-Factor App: Config via env vars, stateless processes
  • Docker Compose: Multi-service dev environment
  • Kubernetes: Production orchestration (auto-scaling, self-healing)
  • ConfigMaps/Secrets: K8s-native config management
  • Rolling Updates: Zero-downtime deployments

Containers are the foundation of modern microservices.

Cheat Sheet

# Docker
docker build -t myapp:1.0 .
docker run -d -p 8080:8080 myapp:1.0
docker logs -f <container>

# Kubernetes
kubectl apply -f deployment.yaml
kubectl get pods
kubectl logs <pod>
kubectl exec -it <pod> -- /bin/sh
kubectl scale deployment myapp --replicas=5
kubectl delete pod <pod>

Next: API Gateway & Security