Troubleshooting

Overview

This page covers common issues when deploying and operating the HAPTIC Helm chart.

For controller behavior troubleshooting, see the controller troubleshooting guide.

Controller Not Starting

Check logs:

kubectl logs -f -l app.kubernetes.io/name=haptic,app.kubernetes.io/component=controller

Common issues:

  • HAProxyTemplateConfig missing: kubectl get haproxytemplateconfig — reinstall the Helm chart if absent
  • Credentials Secret missing: kubectl get secret haptic-dataplane-credentials — recreate with the correct keys
  • RBAC permissions incorrect: kubectl auth can-i list ingresses --all-namespaces --as=system:serviceaccount:<namespace>:<serviceaccount>
  • NetworkPolicy blocking access: see Networking
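If the credentials Secret is absent, it can be recreated by hand. A minimal sketch, assuming the Secret uses `username` and `password` keys (the keys referenced in the Dataplane API authentication section) and the release is installed in the `haptic` namespace; substitute your real values:

```shell
# Recreate the Dataplane API credentials Secret (sketch; adjust namespace and values)
kubectl create secret generic haptic-dataplane-credentials \
  --namespace haptic \
  --from-literal=username=<dataplane-user> \
  --from-literal=password=<dataplane-password>
```

After recreating the Secret, restart the controller so it picks up the new credentials.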

Image Pull Errors

If pods are stuck in ImagePullBackOff:

kubectl describe pod -l app.kubernetes.io/name=haptic

Verify the haproxyVersion value matches an available image tag:

helm get values haptic | grep haproxyVersion

The controller image tag is derived from both the chart version and haproxyVersion. If pulling from a private registry, configure imagePullSecrets.
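As an illustration, a private-registry setup typically pairs a docker-registry Secret with a pull-secret reference in the chart values. This is a sketch only; the exact values key may differ between chart versions, so check the chart's values.yaml:

```yaml
# values.yaml (sketch; assumes the chart exposes a top-level imagePullSecrets list)
imagePullSecrets:
  - name: my-registry-cred   # a docker-registry Secret created beforehand in the release namespace
```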

CRD Not Found

If the controller fails with "no kind HAProxyTemplateConfig is registered":

kubectl get crd haproxytemplateconfigs.haproxy-haptic.org

CRDs are installed by the chart. If missing, reinstall:

helm upgrade --install haptic oci://registry.gitlab.com/haproxy-haptic/haptic/charts/haptic \
  --version <version> --namespace haptic

Ingress Not Processed

If creating an Ingress produces no HAProxy configuration change:

  1. Verify the IngressClass: the Ingress must reference the class created by the chart
kubectl get ingressclass
kubectl get ingress <name> -o jsonpath='{.spec.ingressClassName}'
  2. Check namespace filtering: if controller.config.watchedResources.ingresses.namespace is set, the Ingress must be in that namespace

  3. Check controller logs for watch events:

kubectl logs -l app.kubernetes.io/name=haptic,app.kubernetes.io/component=controller | grep -i ingress
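For reference, an Ingress the controller will process must set spec.ingressClassName. A minimal sketch, assuming the chart-created class is named haptic (confirm the actual name with kubectl get ingressclass) and a backend Service my-app on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
spec:
  ingressClassName: haptic   # must match the IngressClass created by the chart
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```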

Cannot Connect to HAProxy Pods

  1. Check HAProxy pod labels match pod_selector
kubectl get pods --show-labels
  2. Verify the Dataplane API is accessible
kubectl port-forward <haproxy-pod> 5555:5555
curl http://localhost:5555/v3/info
  3. Check NetworkPolicy
kubectl describe networkpolicy

Dataplane API Authentication Failure

If the controller logs show "401 Unauthorized" or "403 Forbidden" when connecting to HAProxy:

kubectl get secret haptic-dataplane-credentials -o jsonpath='{.data.username}' | base64 -d
kubectl get secret haptic-dataplane-credentials -o jsonpath='{.data.password}' | base64 -d

The username and password in the Secret must match what is configured in the HAProxy Dataplane API. After updating the Secret, restart the controller:

kubectl rollout restart deployment haptic-controller

HAProxy Returning 503

A 503 usually means HAProxy has no healthy servers for the backend:

  1. Check that backend pods are running and ready
kubectl get pods -l app=<your-app>
kubectl get endpointslices -l kubernetes.io/service-name=<service-name>
  2. Verify servers appear in the HAProxy config
kubectl exec <haproxy-pod> -c haproxy -- cat /etc/haproxy/haproxy.cfg | grep -A5 "backend"
  3. Check HAProxy stats for server state (UP/DOWN):
kubectl port-forward svc/haptic-haproxy 8404:8404
curl http://localhost:8404/stats

Configuration Not Updating After Ingress Change

If controller logs show successful deployment but HAProxy still serves the old config:

  1. Confirm the config file was written
kubectl exec <haproxy-pod> -c haproxy -- ls -lh /etc/haproxy/haproxy.cfg
  2. Check that both containers share the config volume (HAProxy and the Dataplane API must mount the same volume)

  3. Check the Dataplane API reload logs:

kubectl logs <haproxy-pod> -c dataplane | tail -20
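To check the shared-volume requirement from step 2, the pod spec can be inspected directly. A sketch, assuming the container names haproxy and dataplane used elsewhere on this page; both containers should list the same config mount path:

```shell
# Print each container's name and volume mount paths (sketch; substitute your pod name)
kubectl get pod <haproxy-pod> \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.volumeMounts[*].mountPath}{"\n"}{end}'
```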

NetworkPolicy Issues in kind

For kind clusters, ensure:

  • Calico or Cilium CNI is installed
  • DNS access is allowed
  • Kubernetes API CIDR is correct
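As an illustration of the DNS requirement, an egress rule allowing DNS from the controller typically looks like the following. This is a sketch only, not the chart's actual policy (the policy name and pod labels here are assumptions; see Networking for the real configuration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress     # hypothetical name for illustration
  namespace: haptic
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: haptic
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}   # allow DNS to any namespace (e.g. kube-system)
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```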

Debug NetworkPolicy:

# Check controller can resolve DNS
kubectl exec <controller-pod> -- nslookup kubernetes.default

# Check controller can reach HAProxy pod
kubectl exec <controller-pod> -- curl http://<haproxy-pod-ip>:5555/v3/info

For NetworkPolicy configuration details, see Networking.