First off thanks for making this wonderful piece of software.
I have a specific setup like this: Client ⇒ VPS (Caddy L4 TLS passthrough) ⇒ (WireGuard) ⇒ Local machine (another Caddy doing TLS termination)
Currently I have successfully configured the local machine to forward the Caddy access log to the VPS via rsyslog, and the caddy-bouncer does the blocking there. But I’m stuck on how to implement AppSec in this setup, as my understanding is that a
WAF needs to decrypt the traffic in order to detect attacks, while my VPS is currently acting as a “dumb pipe”.
In turn, the local machine can see the traffic but cannot do the blocking, since from its viewpoint all the traffic comes from the WireGuard IP (correct me if I’m wrong).
I did some searching and found this guide: About multi-server setup | CrowdSec, but it seems to talk about installing a bouncer on the local machine to block bad IPs, which is not practical per point (2) above.
So I want to ask if my current setup is a good way of doing this, and how to implement AppSec in this specific scenario.
If your WireGuard is a bidirectional tunnel, you can forward alerts using the multi-server setup back to the Caddy on the VPS.
However, on the VPS Caddy I think you should be able to enable the proxy protocol at layer 4, so it can preserve the real IP when the traffic is passed downstream to your TLS-terminating Caddy?
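For the receiving side, a minimal sketch of how the TLS-terminating Caddy could accept the proxy protocol header (the `10.0.0.1/32` WireGuard address of the VPS is a placeholder, not from the original post):

```caddyfile
{
	servers {
		listener_wrappers {
			# Accept PROXY protocol headers, but only from the VPS
			proxy_protocol {
				allow 10.0.0.1/32
			}
			# The proxy_protocol wrapper must come before the tls wrapper
			tls
		}
	}
}
```

With this in place, the downstream Caddy logs and bouncer should see the original client IP rather than the WireGuard peer address.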
I have exactly such a setup since last week, but approached it differently from an architectural side. It seems to be working, but I haven’t done extended testing yet.
I have an NGINX on the VPS that terminates TLS for some services on the VPS. Based on SNI, it sends certain requests down via the proxy protocol to a local traefik instance over WireGuard, which terminates TLS for the local services. Both machines are running only log processors, NGINX/traefik bouncers, and firewall bouncers. The LAPI is running on a different LXC container in the local net (this is exactly the setup as described in the multi-server guide). All log processors and bouncers (including AppSec) are allowed to send and receive HTTP(S) requests to and from this LAPI machine.
The decisions are then made on the LAPI container. The VPS firewall and/or NGINX bouncer can then block the request from even entering the pipe, even if it was originally triggered by an “attack” seen on the local machine.
You could also place the LAPI on the local machine (if you don’t want a separate server/container).
This is exactly the way to go in this case: my traefik access log sees the external IP (if configured correctly as a proxy protocol endpoint!) and so does the traefik bouncer (for AppSec).
If I remember correctly, Caddy (on the VPS) does not allow for proxy protocol passthrough (at least not based on SNI). So I am wondering how you are currently passing the traffic down the pipe behind the VPS Caddy. This is the reason why I went with NGINX on the VPS: it can split the traffic based on SNI without breaking TLS, using the proxy protocol, while still being able to serve other services directly.
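For reference, an SNI-based split like this in NGINX’s stream module (using `ssl_preread`) could look roughly as follows — the hostnames and upstream addresses are placeholders, not from the original post:

```nginx
stream {
    # Route by SNI without terminating TLS
    map $ssl_preread_server_name $upstream {
        home.example.com  10.0.0.2:443;    # local traefik via WireGuard
        default           127.0.0.1:8443;  # services terminated on the VPS itself
    }

    server {
        listen 443;
        ssl_preread on;          # peek at the ClientHello to read the SNI
        proxy_pass $upstream;
        proxy_protocol on;       # pass the real client IP downstream
    }
}
```

Each upstream then has to be configured to accept the PROXY protocol header, otherwise the TLS handshake will fail.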
Yes, I enabled the proxy protocol to preserve the real IP.
I think I figured it out: there is no need to install CrowdSec on the local machine at all.
On the VPS, I just set the listen address to 0.0.0.0 for the CrowdSec LAPI and AppSec, and restricted ports 8080 and 7422 to allow inbound connections on the WireGuard interface only. Then I pre-generated a Caddy bouncer API key.
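A rough sketch of what those two listen addresses look like in the CrowdSec config files (file paths follow the default packaging; adjust to your install):

```yaml
# /etc/crowdsec/config.yaml (excerpt) -- bind the LAPI to all interfaces
api:
  server:
    listen_uri: 0.0.0.0:8080

---
# /etc/crowdsec/acquis.d/appsec.yaml -- AppSec acquisition listener
listen_addr: 0.0.0.0:7422
appsec_config: crowdsecurity/appsec-default
source: appsec
labels:
  type: appsec
```

The firewall restriction can then be something like `ufw allow in on wg0 to any port 8080 proto tcp` (and the same for 7422), assuming `wg0` is the WireGuard interface.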
On the local machine, only the Caddy bouncer is installed, and in the Caddyfile I declare the CrowdSec API key from step 1, with the CrowdSec URL and AppSec URL pointing to the VPS’s WireGuard IP.
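Assuming the hslatman caddy-crowdsec-bouncer module, the Caddyfile on the local machine might look something like this (the WireGuard IP `10.0.0.1` and the site name are placeholders):

```caddyfile
{
	crowdsec {
		api_url http://10.0.0.1:8080
		api_key {env.CROWDSEC_API_KEY}   # the pre-generated bouncer key
		appsec_url http://10.0.0.1:7422
	}
}

home.example.com {
	crowdsec            # check/ban decisions against the remote LAPI
	appsec              # forward requests to the remote AppSec engine
	reverse_proxy localhost:3000
}
```

This way the local Caddy terminates TLS, consults the LAPI and AppSec on the VPS over WireGuard, and no CrowdSec engine is needed locally.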
I don’t see this setup documented anywhere, so I don’t know if it follows security best practices. Please critique if you see anything strange.
You can now pass the proxy protocol based on SNI using the Caddy L4 module. For example, on my VPS that does TLS passthrough:
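A minimal sketch of what such a config might look like, using the Caddyfile syntax of the mholt/caddy-l4 module (the hostname and the WireGuard upstream `10.0.0.2:443` are placeholders, not from the original post):

```caddyfile
{
	layer4 {
		0.0.0.0:443 {
			# Match on SNI without decrypting the TLS stream
			@home tls sni home.example.com

			route @home {
				proxy {
					proxy_protocol v2     # prepend a PROXY protocol header
					upstream 10.0.0.2:443 # TLS-terminating Caddy over WireGuard
				}
			}
		}
	}
}
```

The downstream Caddy then needs the `proxy_protocol` listener wrapper enabled so it can read the real client IP from the header.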
This looks fine to me, but there is one key thing to keep in mind. The request is decrypted on your home Caddy side (TLS) and then sent back through the WireGuard tunnel in plain text (HTTP). As long as you are comfortable treating the WireGuard tunnel as the encryption boundary, meaning the VPS can ultimately see the traffic unencrypted, then it is acceptable.