Massive Traefik Crowdsec Plugin/Middleware Kubernetes Issues

Practically begging for help here! I have been dealing with something that has been killing me for over a month now: I simply cannot get CrowdSec to work within my Kubernetes cluster, which is where I deploy all my network infrastructure, including Traefik for reverse proxying. Everything seems to work okay, but when I apply the CrowdSec-Traefik plugin middleware to an IngressRoute, it simply does not work whatsoever. I am certain I have set up the CrowdSec-Traefik plugin correctly, with the correct LAPI key and everything, and the Traefik dashboard reports that the middleware is recognized within Traefik and the cluster.

I will drop some info on my cluster for context: I am running a six-node cluster with MetalLB as the load balancer for services. Everything has worked great so far. I run an HA three-pod Traefik deployment within it, and I primarily deploy everything with Helm charts.

I will drop some configuration down below:

Crowdsecโ€™s values.yaml:

container_runtime: containerd
# Here you can specify your own custom configuration to be loaded in crowdsec agent or lapi
# Each config needs to be a multi-line using '|' in YAML specs
# for the agent those configs will be loaded : parsers, scenarios, postoverflows, simulation.yaml
# for the lapi those configs will be loaded : profiles.yaml, notifications, console.yaml

tls:
  enabled: true
  bouncer:
    reflector:
      namespaces: ["traefik"]
agent:
  # Specify each pod whose logs you want to process
  persistentVolume:
    config:
      enabled: false
    data:
      enabled: true
      storageClassName: "longhorn" 
  acquisition:
    # The namespace where the pod is located
    - namespace: traefik
      # The pod name
      podName: traefik-*
      # as in crowdsec configuration, we need to specify the program name to find a matching parser
      program: traefik
  env:
    - name: PARSERS
      value: "crowdsecurity/cri-logs crowdsecurity/whitelists crowdsecurity/nextcloud-whitelist"
    - name: COLLECTIONS
      value: "crowdsecurity/linux crowdsecurity/k8s-audit crowdsecurity/apache2 crowdsecurity/traefik crowdsecurity/home-assistant Dominic-Wagner/vaultwarden timokoessler/uptime-kuma firix/authentik LePresidente/jellyseerr LePresidente/jellyfin LePresidente/adguardhome crowdsecurity/nextcloud gauth-fr/immich"
    # When testing, allow bans on private networks
    #- name: DISABLE_PARSERS
    #  value: "crowdsecurity/whitelists"
  image:
    pullPolicy: Always
lapi:
  dashboard:
    enabled: false
    ingress:
      host: dashboard.local
      enabled: false
  persistentVolume:
    config:
      enabled: false
    data:
      enabled: true
      storageClassName: "longhorn"  
  resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 250m
      memory: 200Mi
  env:
    # For an internal test, disable the Online API by setting 'DISABLE_ONLINE_API' to "true"
    - name: DISABLE_ONLINE_API
      value: "false"
    - name: ENROLL_KEY
      value: "placeholder-correct-key"
    - name: ENROLL_INSTANCE_NAME
      value: "k3s"
    - name: ENROLL_TAGS
      value: "homelab"

image:
  pullPolicy: Always

Here is my Traefikโ€™s values.yaml:

globalArguments:
  - "--global.sendanonymoususage=false"
  - "--global.checknewversion=false"

additionalArguments:
  - "--serversTransport.insecureSkipVerify=true"
  - "--log.level=INFO"
  - "--providers.kubernetesingress.namespaces="
  - "--providers.kubernetescrd.namespaces="

deployment:
  enabled: true
  replicas: 3
  annotations: {}
  podAnnotations: {}
  additionalContainers: []
  initContainers: []

ports:
  web:
    redirectTo:
      port: websecure
  websecure:
    tls:
      enabled: true  
      
ingressRoute:
  dashboard:
    enabled: false

providers:
  kubernetesCRD:
    enabled: true
    ingressClass: traefik-external
    allowExternalNameServices: true
  kubernetesIngress:
    enabled: true
    allowExternalNameServices: true
    allowCrossNamespace: true
    publishedService:
      enabled: false

rbac:
  enabled: true

service:
  enabled: true
  type: LoadBalancer
  annotations: {}
  labels: {}
  spec:
    loadBalancerIP: 192.168.30.150 # this should be an IP in the MetalLB range
    externalTrafficPolicy: Local # for crowdsec
  loadBalancerSourceRanges: []
  externalIPs: []

logs:
  access:
    enabled: true

experimental:
  plugins:
    crowdsec-bouncer-traefik-plugin:
      moduleName: "github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
      version: "v1.3.0-dev2"

volumes:
  - name: my-crowdsec-bouncer-tls
    mountPath: /etc/traefik/certs/
    type: secret

image:
  pullPolicy: Always

Now for the middleware:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
    name: my-crowdsec-bouncer-traefik-plugin
    namespace: default
spec:
    plugin:
        crowdsec-bouncer-traefik-plugin:
            CrowdsecLapiKey: place-holder-correct-key
            Enabled: "true"

Now, here is an IngressRoute format I’ve used to no avail:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nextcloud-external
  namespace: default
  annotations: 
    kubernetes.io/ingress.class: traefik-external
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`www.nextcloud.xxx.xyz`)
    kind: Rule
    services:
      - name: nextcloud-external
        port: 80
    middlewares:
      - name: my-crowdsec-bouncer-traefik-plugin
  - match: Host(`nextcloud.xxx.xyz`)
    kind: Rule
    services:
    - name: nextcloud-external
      port: 80  
  tls:
    secretName: xxx-xyz-tls

I have also experimented with specifying the middleware on the Host(`nextcloud.xxx.xyz`) route, like this:

- match: Host(`nextcloud.xxx.xyz`)
  kind: Rule
  services:
    - name: nextcloud-external
      port: 80
  middlewares:
    - name: my-crowdsec-bouncer-traefik-plugin

When I specify the middleware here, I simply get a blank webpage that does not work at all. When I specify the middleware on the Host(`www.nextcloud.xxx.xyz`) route, I can access the webpage, but it’s clear CrowdSec is not working: I create a cscli decision to ban an IP I can VPN into, and the ban goes into place, yet when I VPN to that IP address, I can still access the webpage with no problem.

I have troubleshot this to no end and have simply gotten nowhere.

In your middleware configuration I don’t see where you have defined where the CrowdSec LAPI is. In our example we have:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: bouncer
  namespace: traefik
spec:
  plugin:
    bouncer:
      enabled: true
      crowdsecMode: none
      crowdsecLapiScheme: https
      crowdsecLapiHost: crowdsec-service.crowdsec:8080
      crowdsecLapiTLSCertificateAuthorityFile: /etc/traefik/crowdsec-certs/ca.crt
      crowdsecLapiTLSCertificateBouncerFile: /etc/traefik/crowdsec-certs/tls.crt
      crowdsecLapiTLSCertificateBouncerKeyFile: /etc/traefik/crowdsec-certs/tls.key

ref: https://www.crowdsec.net/blog/integrating-crowdsec-kubernetes-tls

You can tell whether this is working by exec’ing into the LAPI pod and running cscli bouncers list; if you don’t see an IP address, there is a communication problem between the nodes.

Thanks for the middleware configuration! I went ahead and updated mine accordingly:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-crowdsec-bouncer-traefik-plugin
  namespace: default
spec:
  plugin:
    crowdsec-bouncer-traefik-plugin:
      CrowdsecLapiKey: api-key
      enabled: "true"
      crowdsecMode: none
      crowdsecLapiScheme: https
      crowdsecLapiHost: crowdsec-service.crowdsec:8080
      crowdsecLapiTLSCertificateAuthorityFile: /etc/traefik/certs/ca.crt
      crowdsecLapiTLSCertificateBouncerFile: /etc/traefik/certs/tls.crt
      crowdsecLapiTLSCertificateBouncerKeyFile: /etc/traefik/certs/tls.key

However, this still doesn’t seem to be working at all. The bouncer is still connected to my LAPI (as with the prior configuration):

ian@DESKTOP-0S14EUI:~/kube/traefik-cert/traefik$ kubectl exec -it -n crowdsec crowdsec-lapi-7d897997b4-cb4tz -- bin/bash
crowdsec-lapi-7d897997b4-cb4tz:/# cscli bouncers list
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
 Name      IP Address   Valid   Last API pull          Type   Version   Auth Type 
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
 traefik                โœ”๏ธ       2024-04-08T20:39:46Z                    api-key   
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
crowdsec-lapi-7d897997b4-cb4tz:/# 

But I can still access my webpages from an IP I banned via exec’ing into one of my agent pods:

crowdsec-agent-62j75:/# cscli decisions add --ip 193.37.254.73
INFO[2024-04-08T22:38:32Z] Decision successfully added                  
crowdsec-agent-62j75:/# cscli decisions list
โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚   ID   โ”‚ Source โ”‚   Scope:Value    โ”‚        Reason        โ”‚ Action โ”‚ Country โ”‚ AS โ”‚ Events โ”‚     expiration     โ”‚ Alert ID โ”‚
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค
โ”‚ 836732 โ”‚ cscli  โ”‚ Ip:193.37.254.73 โ”‚ manual 'ban' from '' โ”‚ ban    โ”‚         โ”‚    โ”‚ 1      โ”‚ 3h59m47.305148694s โ”‚ 128      โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
crowdsec-agent-62j75:/# exit

For IP decision bans, do I need to enter the command into all three agent pods (I have three), rather than just a single agent pod? I cannot see why this isn’t working right.
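(My understanding from the docs is that cscli always talks to the shared LAPI, so a decision added from any one agent pod should land in the central LAPI database rather than being per-pod. A quick sanity check, reusing the LAPI pod name from my output above:)

```shell
# List decisions from the LAPI pod itself (pod name is from my cluster; yours
# will differ). If the ban added on a single agent shows up here, it is
# registered centrally and should not need to be repeated on each agent pod.
kubectl exec -it -n crowdsec crowdsec-lapi-7d897997b4-cb4tz -- cscli decisions list
```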

It’s not working correctly because the bouncer isn’t connecting, due to either a misconfiguration or a networking issue.

My recommendation is to walk through our TLS guide again and see what went wrong, as clearly something is just not right:

https://www.crowdsec.net/blog/integrating-crowdsec-kubernetes-tls

Could you check my configuration listed above to see if anything is glaringly wrong? I have scoured it and reconfigured it three or four times now, along with the TLS guide. Nothing is working whatsoever.

There are a couple of things in the guide, though, that may be outdated relative to the current Traefik Helm chart.

I was getting errors when trying to deploy Traefik with the plugin configured in values.yaml via the additional arguments. I was only able to get it to work with:

experimental:
  plugins:
    crowdsec-bouncer-traefik-plugin:
      moduleName: "github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
      version: "v1.2.1"

I think the Helm chart has updated its default configuration since this tutorial was written.

Aside from that, I am running into issues with the crowdsec-traefik bouncer middleware living in the traefik namespace, as stated in the TLS guide. Even with RBAC and cross-namespace configuration in Traefik’s values.yaml, the middleware cannot be used by services deployed in the default namespace or others.

Okay, following up, as I was able to get CrowdSec to work properly as global middleware, primarily through these flags:

"--entrypoints.web.http.middlewares=traefik-bouncer@kubernetescrd"
"--entrypoints.websecure.http.middlewares=traefik-bouncer@kubernetescrd"

I had to remove any reference to the middleware on my IngressRoutes; then CrowdSec worked perfectly fine and blocked traffic.
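For reference, in case it helps anyone else, those flags live under additionalArguments in the Traefik chart’s values.yaml. The name traefik-bouncer@kubernetescrd follows Traefik’s namespace-name@provider convention, here pointing at the Middleware named bouncer in the traefik namespace from the example above:

```yaml
additionalArguments:
  # "traefik-bouncer" = Middleware "bouncer" in namespace "traefik",
  # per Traefik's <namespace>-<name>@kubernetescrd naming convention
  - "--entrypoints.web.http.middlewares=traefik-bouncer@kubernetescrd"
  - "--entrypoints.websecure.http.middlewares=traefik-bouncer@kubernetescrd"
```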

However, I really want to avoid using CrowdSec globally, as I have IngressRoutes used specifically for services that are only resolvable via the local network. For those there is no need for the overhead of CrowdSec, as they are behind my firewall. I would like to be able to finely tune where I use the CrowdSec/Traefik bouncer plugin. But when I attach the middleware to the Uptime Kuma IngressRoute specifically, after removing CrowdSec as global middleware, I get errors, even though this should work:

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: uptime-kuma
  namespace: default
  annotations: 
    kubernetes.io/ingress.class: traefik-external
spec:
  entryPoints:
    - websecure
  routes:    
    - match: Host(`uptimekuma.xxx.xyz`)
      kind: Rule
      middlewares:
        - name: bouncer
          namespace: traefik
      services:
        - name: uptime-kuma-external
          port: 3001
  tls:
    secretName: xxx-xyz-tls

I am unsure why configuring the bouncer plugin via per-route middleware doesn’t work when it works globally. I am using Traefik 2.11.1 and the Traefik-CrowdSec plugin v1.1.16. Is it possible that specifying middleware per route only works on a later plugin version?

This might be something to raise with the original maintainers of the plugin, as I don’t run Traefik myself and haven’t configured it for your use case.

This solves what I believe is the same problem I am having. After three days of going through docs and reading forums, I never saw anything about the above Traefik settings. But it works, it finally works!

I had the same issue. Without applying the global middleware, like you are seeing, IngressRoutes won’t pick up the middleware annotations.

Regular Ingress works fine.

What I ended up doing was this:

#!/usr/bin/env bash

TRAEFIK_BOUNCER_KEY="${TRAEFIK_BOUNCER_KEY:-$(openssl rand -hex 16)}"                                           # Key to use for traefik bouncer. Generate random 32 char key if not set.
TRAEFIK_LOAD_BALANCER_IP="$(ip addr show dev tailscale0 | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1)"     # Set from interface tailscale0 IP address
TRAEFIK_DASHBOARD_PASSWORD="${TRAEFIK_DASHBOARD_PASSWORD:-$(openssl rand -hex 16)}"                             # Password for traefik dashboard basic auth. Generate random 32 char password if not set.

# MARK: Install K3s
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --flannel-backend=none --disable-kube-proxy --disable servicelb --disable-network-policy --disable traefik

# Copy Kubeconfig
mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# MARK: Install Cilium CLI

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi 
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

# Verify Cli Install

cilium status

# Get k8s api server address
K8S_API_SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' | sed 's|https://||' | cut -d: -f1)

# MARK: Create Cilium Values File

cat <<EOF > cilium-values.yaml
k8sServiceHost: "${K8S_API_SERVER}"  # The Kubernetes API server address
k8sServicePort: "6443"  # Kubernetes API server port
kubeProxyReplacement: true  # Enables eBPF-based kube-proxy replacement for better performance
l2announcements:
  enabled: true  # Enables Layer 2 announcements for external IP management
externalIPs:
  enabled: true  # Allows services to use external IPs for better connectivity
k8sClientRateLimit:
  qps: 50  # API request rate limit to avoid overwhelming the K8s API
  burst: 200  # Maximum burst rate for API requests
operator:
  replicas: 1  # Ensures a single replica of the Cilium operator, suitable for small clusters
  rollOutPods: true  # Ensures smooth rolling updates of Cilium components
  rollOutCiliumPods: true  # Ensures that Cilium pods are updated properly during upgrades
gatewayAPI:
  enabled: true  # Enables support for the Kubernetes Gateway API
envoy:
  enabled: true  # Enables Envoy integration for advanced networking and security features
securityContext:
  capabilities:
    keepCapNetBindService: true  # Ensures correct capabilities for networking
debug:
  enabled: true  # Enables debug logging for troubleshooting
EOF

# MARK: Install Cilium with the specified values

cilium install -f cilium-values.yaml

# Verify Cilium Installation

cilium status

# MARK: Create cilium Addresspool with tailscale ip range
cat <<EOF | sudo kubectl apply -f -
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-all-lb
spec:
  serviceSelector: {}   # all LoadBalancer services
  nodeSelector: {}      # all nodes
  loadBalancerIPs: true
---
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: k8s
spec:
  blocks:
    - cidr: "${TRAEFIK_LOAD_BALANCER_IP}/32"
EOF

# MARK: Install Helm CLI
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# MARK: Add Traefik and Crowdsec Helm Repo
sudo kubectl create namespace networking
helm repo add traefik https://helm.traefik.io/traefik
helm repo add crowdsec https://crowdsecurity.github.io/helm-charts
helm repo update

# MARK: Setup CrowdSec Values File
cat <<EOF > crowdsec-values.yaml
container_runtime: containerd

agent:
  enabled: true
  acquisition:
    - namespace: networking
      podName: traefik-*
      program: traefik
  env:
    - name: COLLECTIONS
      value: "crowdsecurity/linux crowdsecurity/traefik crowdsecurity/base-http-scenarios crowdsecurity/http-cve"
  metrics:
    enabled: true
    serviceMonitor:
      enabled: false

lapi:
  enabled: true
  replicas: 1
  metrics:
    enabled: true
  persistentVolume:
    data:
      enabled: true
      size: 1Gi
    config:
      enabled: true
      size: 100Mi
tls:
  enabled: false
config:
  config.yaml.local: |
    api:
      server:
        auto_registration:
          enabled: true
          token: "\${REGISTRATION_TOKEN}"
          allowed_ranges:
            - "127.0.0.1/32"
            - "10.0.0.0/8"
EOF

# MARK: Install CrowdSec
helm install crowdsec crowdsec/crowdsec -n networking -f crowdsec-values.yaml

# MARK: Create Traefik Bouncer Lapi Token
LAPI_POD=$(sudo kubectl -n networking get pod -l type=lapi -o jsonpath='{.items[0].metadata.name}')
sudo kubectl -n networking exec -it "$LAPI_POD" -- cscli bouncers add traefik-bouncer --key "${TRAEFIK_BOUNCER_KEY}"

# MARK: Create bouncer api key secret
sudo kubectl create secret generic crowdsec-bouncer-api-key \
  -n networking \
  --from-literal=api-key="${TRAEFIK_BOUNCER_KEY}"

# MARK: Setup Traefik Values File
cat <<EOF > traefik-values.yaml
deployment:
  enabled: true
  kind: DaemonSet
  replicas: 1

experimental:
  plugins:
    crowdsec-bouncer-traefik-plugin:
      moduleName: github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin
      version: v1.4.6

globalArguments:
  - "--global.checknewversion=false"
  - "--global.sendanonymoususage=false"
  - "--api.insecure=true"
  - "--providers.kubernetescrd"
  - "--providers.kubernetescrd.throttleduration=60s"
  - "--providers.kubernetescrd.ingressclass=traefik"
  - "--providers.kubernetesingress.ingressclass=traefik"
  - "--providers.kubernetesingress.ingressendpoint.publishedservice=networking/traefik"

ingressClass:
  enabled: true
  isDefaultClass: false
  name: "traefik"

providers:
  kubernetesCRD:
    enabled: true
    allowCrossNamespace: true
  kubernetesIngress:   # in the chart, this provider config belongs under providers:
    enabled: true
    publishedService:
      enabled: true
      pathOverride: "networking/traefik"

ingressRoute:
  dashboard:
    enabled: true
    matchRule: Host(\`traefik.canary.testdomain.net\`)
    entryPoints: ["websecure"]
    middlewares:
      - name: traefik-dashboard-auth

service:
  enabled: true
  single: true
  type: LoadBalancer
  loadBalancerIP: "${TRAEFIK_LOAD_BALANCER_IP}"

logs:
  general:
    level: DEBUG
  access:
    enabled: true
    format: json
    fields:
      headers: 
        defaultmode: keep

extraObjects:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: traefik-dashboard-auth-secret
      namespace: networking
    type: kubernetes.io/basic-auth
    stringData:
      username: admin
      password: "${TRAEFIK_DASHBOARD_PASSWORD}"
  
  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: traefik-dashboard-auth
      namespace: networking
    spec:
      basicAuth:
        secret: traefik-dashboard-auth-secret

  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: crowdsec-bouncer
      namespace: networking
    spec:
      plugin:
        crowdsec-bouncer-traefik-plugin:
          Enabled: true
          logLevel: DEBUG
          crowdsecMode: live
          crowdsecLapiHost: crowdsec-service.networking.svc.cluster.local:8080
          crowdsecLapiScheme: http
          crowdsecLapiKey: "${TRAEFIK_BOUNCER_KEY}"
          UpdateIntervalSeconds: 10

env:
  - name: CROWDSEC_BOUNCER_API_KEY
    valueFrom:
      secretKeyRef:
        name: crowdsec-bouncer-api-key
        key: api-key
EOF

# MARK: Install Traefik with CrowdSec Bouncer
helm install traefik traefik/traefik -n networking -f traefik-values.yaml

# MARK: Run WhoAmI Test
cat <<EOF > test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  containers:
    - name: whoami
      image: traefik/whoami:v1.10
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: default
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
    traefik.ingress.kubernetes.io/router.middlewares: networking-crowdsec-bouncer@kubernetescrd
spec:
  ingressClassName: traefik
  rules:
    - host: whoami.canary.testdomain.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
EOF

sudo kubectl apply -f test-pod.yaml

# Test with Ingress Route
cat <<EOF | sudo kubectl apply -f -
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-ingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(\`whoamitest.canary.testdomain.net\`)
      kind: Rule
      services:
        - name: whoami
          port: 80
      middlewares:
        - name: crowdsec-bouncer
          namespace: networking
EOF

# Cleanup test pod after verification
# sudo kubectl delete -f test-pod.yaml

The key setting here was allowing cross-namespace resolution of Traefik middleware with:

providers:
  kubernetesCRD:
    enabled: true
    allowCrossNamespace: true

Once that was in, both Ingress and IngressRoutes functioned properly.
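To spell out the two reference styles that worked for me once allowCrossNamespace was enabled (the names match the script above):

```yaml
# On an IngressRoute (CRD provider), reference the middleware by name + namespace:
middlewares:
  - name: crowdsec-bouncer
    namespace: networking

# On a plain Ingress, reference it via annotation using <namespace>-<name>@<provider>:
#   traefik.ingress.kubernetes.io/router.middlewares: networking-crowdsec-bouncer@kubernetescrd
```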