Getting Started¶
Overview¶
This guide walks you through deploying HAPTIC (HAProxy Template Ingress Controller) and creating your first template-driven configuration. You'll learn how to:
- Install the controller and HAProxy using Helm
- Create a basic Ingress configuration
- Verify the deployment and test routing
The entire process takes approximately 15-20 minutes on a local Kubernetes cluster.
Prerequisites¶
- Kubernetes cluster (1.19+) - kind, minikube, or cloud provider
- kubectl configured to access your cluster
- Helm 3.0+
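If you don't already have a cluster, a throwaway local one is enough for this guide; for example, with kind (the cluster name here is arbitrary):

```shell
# Create a local Kubernetes cluster named "haptic-demo"
kind create cluster --name haptic-demo
```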
Webhook Validation
This guide disables webhook validation for simplicity. For production, enable it with cert-manager for CRD schema enforcement before applying configurations.
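When you are ready to re-enable validation, the webhook can be turned back on with a Helm upgrade. This is a sketch: depending on the chart version, additional cert-manager-related values may be required.

```shell
# Re-enable webhook validation (requires cert-manager in the cluster)
helm upgrade haptic oci://registry.gitlab.com/haproxy-haptic/haptic/charts/haptic \
  --version 0.1.0 --reuse-values -n haptic \
  --set webhook.enabled=true
```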
Step 1: Install with Helm¶
Install the controller and HAProxy using Helm:
```shell
# Install from OCI registry (deploys both controller and HAProxy pods)
helm install haptic oci://registry.gitlab.com/haproxy-haptic/haptic/charts/haptic \
  --version 0.1.0 \
  --set webhook.enabled=false \
  --namespace haptic --create-namespace
```
The Helm chart deploys:
- Controller: Watches Kubernetes resources and generates HAProxy configurations
- HAProxy pods: Load balancers with Dataplane API sidecars (2 replicas by default)
- RBAC: Permissions for watching Ingress, Service, and EndpointSlice resources
- HAProxyTemplateConfig: CRD resource with the default template configuration, including template libraries for Ingress and Gateway API out of the box
Verify both components are running:
```shell
# Check controller
kubectl get pods -n haptic -l app.kubernetes.io/component=controller

# Check HAProxy pods
kubectl get pods -n haptic -l app.kubernetes.io/component=loadbalancer
```
You should see the controller pod and two HAProxy pods in Running state.
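If the pods are still starting, `kubectl wait` blocks until they are ready (a convenience, not required by this guide):

```shell
# Wait up to 2 minutes for the HAProxy pods to become Ready
kubectl wait pod -n haptic -l app.kubernetes.io/component=loadbalancer \
  --for=condition=Ready --timeout=120s
```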
HAProxy version
The chart defaults to HAProxy 3.2. To select a different version (e.g. 3.0 LTS or 3.3), pass `--set haproxyVersion=3.0`. See HAProxy Versions for details.
Step 2: Deploy a Sample Application¶
Create a simple echo service to test routing:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: ealen/echo-server:latest
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: default
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 80
```
Save as echo-app.yaml and apply it with `kubectl apply -f echo-app.yaml`.
Step 3: Create an Ingress Resource¶
Create an Ingress resource that the controller will process:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: default
spec:
  ingressClassName: haproxy
  rules:
  - host: echo.example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
```
Save as echo-ingress.yaml and apply it with `kubectl apply -f echo-ingress.yaml`.
The controller will automatically detect this new Ingress, render the HAProxy configuration, validate it, and deploy it to the HAProxy pods. See What's Happening Behind the Scenes for details.
Step 4: Verify the Configuration¶
Check Controller Logs¶
Watch the controller process the Ingress:
```shell
kubectl logs -n haptic -l app.kubernetes.io/name=haptic,app.kubernetes.io/component=controller --tail=50 -f
```
You should see log entries showing:
- Ingress resource detected
- Template rendering completed
- Configuration validation passed
- Deployment to HAProxy instances succeeded
Inspect HAProxy Configuration¶
Verify the generated HAProxy configuration was deployed:
```shell
# Get one of the HAProxy pods
HAPROXY_POD=$(kubectl get pods -n haptic -l app.kubernetes.io/component=loadbalancer -o jsonpath='{.items[0].metadata.name}')

# View the generated configuration
kubectl exec -n haptic $HAPROXY_POD -c haproxy -- cat /etc/haproxy/haproxy.cfg
```
You should see:
- A frontend section with routing rules
- A backend section referencing the echo service
- Server entries pointing to the echo pod endpoints
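To jump straight to the relevant section instead of reading the whole file, you can grep for the backend definitions. The exact backend name depends on the template; it typically incorporates the namespace and Service name.

```shell
# Show each backend section and the lines that follow it
kubectl exec -n haptic $HAPROXY_POD -c haproxy -- cat /etc/haproxy/haproxy.cfg \
  | grep -A 10 '^backend'
```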
Step 5: Test the Routing¶
Port-Forward to HAProxy¶
HAProxy is running inside the cluster and isn't directly reachable from your machine. Port-forward creates a temporary tunnel from your local port to the HAProxy service:
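A command along these lines works; the Service name is assumed here from the Helm release name, so check `kubectl get svc -n haptic` if it differs in your installation:

```shell
# Forward local port 8080 to the HAProxy Service
# (name assumed from the release name "haptic")
kubectl port-forward -n haptic svc/haptic-haproxy 8080:80
```

Leave the port-forward running while you test.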
Test the Endpoint¶
In another terminal:
```shell
# Test with Host header
curl -H "Host: echo.example.local" http://localhost:8080/

# You should receive a response from the echo server showing:
# - Request headers
# - Host information
# - Environment variables
```
Test Load Balancing¶
Make multiple requests to see load balancing across echo pods:
```shell
for i in {1..10}; do
  curl -s -H "Host: echo.example.local" http://localhost:8080/ | grep -o '"HOSTNAME":"[^"]*"'
done
```
You should see responses from different echo pods.
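To tally how many of the 10 responses each pod served, pipe the loop's output through `sort | uniq -c`:

```shell
# Tally responses per pod: grep extracts the HOSTNAME field from the
# echo server's JSON body, and uniq -c counts each distinct pod
for i in {1..10}; do
  curl -s -H "Host: echo.example.local" http://localhost:8080/ \
    | grep -o '"HOSTNAME":"[^"]*"'
done | sort | uniq -c
```

With two replicas and round-robin balancing you should see roughly even counts.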
What's Happening Behind the Scenes¶
When you created the Ingress resource, the controller:
- Detected the change via Kubernetes watch API
- Rendered templates using the default HAProxyTemplateConfig with your Ingress data
- Validated the configuration using HAProxy's native parser
- Compared with current state to determine what changed
- Deployed updates to all HAProxy pods via Dataplane API
- Used runtime API where possible (server addresses) to avoid reloads
The entire process typically completes in under 1 second.
Next Steps¶
Now that you have a working setup, explore these topics:
Customize the Configuration¶
The default configuration is generated from the HAProxyTemplateConfig CRD created by Helm. To customize:
```shell
# View the current configuration
kubectl get haproxytemplateconfig -n haptic haptic-config -o yaml

# Edit the configuration
kubectl edit haproxytemplateconfig -n haptic haptic-config
```
See CRD Reference for all available options.
Template Customization¶
The default template libraries already cover many common use cases: path-based routing, SSL termination, and annotation-driven configuration. You do not need to write or modify templates to use these features.
When you need to go beyond the default libraries (custom annotations, domain-specific logic, or HAProxy features not covered), see the Templating Guide.
Watched Resources¶
Extend the controller to watch additional Kubernetes resources:
- EndpointSlices: Use actual pod IPs instead of service DNS
- Secrets: Load TLS certificates dynamically
- ConfigMaps: Inject custom HAProxy configuration snippets
- Custom CRDs: Define your own resource types
See Watching Resources for configuration details.
High Availability¶
Configure the controller for production deployments:
- Scale to 3+ replicas across availability zones
- Configure PodDisruptionBudgets
- Set up monitoring and alerting
- Enable leader election (already enabled by default)
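As one example, the controller replica count can be raised directly by label selector. This is a sketch; in practice, prefer setting the replica count through Helm values so a later `helm upgrade` does not revert it.

```shell
# Scale the controller Deployment to 3 replicas
kubectl scale deployment -n haptic \
  -l app.kubernetes.io/component=controller --replicas=3
```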
See High Availability for HA configuration.
Monitoring¶
Set up Prometheus monitoring for the controller:
```shell
# Enable ServiceMonitor if using Prometheus Operator
helm upgrade haptic oci://registry.gitlab.com/haproxy-haptic/haptic/charts/haptic \
  --version 0.1.0 --reuse-values -n haptic \
  --set monitoring.serviceMonitor.enabled=true \
  --set monitoring.serviceMonitor.interval=30s
```
See Monitoring Guide for metrics and dashboards.
Troubleshooting¶
If you run into issues during setup, check these common areas:
- Controller not starting -- check logs for missing HAProxyTemplateConfig, RBAC errors, or API connectivity issues
- HAProxy pods not updating -- verify the Dataplane API sidecar is running and credentials match
- Ingress not routing -- ensure `ingressClassName: haproxy` is set and the backend Service has endpoints
For detailed diagnosis steps, see the Troubleshooting Guide.
Clean Up¶
Remove all resources created in this guide:
```shell
# Remove Ingress and echo application
kubectl delete ingress echo-ingress -n default
kubectl delete deployment echo -n default
kubectl delete service echo -n default

# Uninstall HAPTIC (removes controller, HAProxy, and all related resources)
helm uninstall haptic -n haptic

# Remove namespace
kubectl delete namespace haptic

# Remove CRD (optional, removes all HAProxyTemplateConfig resources)
kubectl delete crd haproxytemplateconfigs.haproxy-haptic.org
```