Hello everyone,
I am currently running a Traefik reverse proxy together with a CrowdSec container in front of a Nextcloud instance. The problem: as soon as I upload a file larger than 100 MB to Nextcloud, the CrowdSec container restarts without any obvious error in the logs, and the upload is aborted. Is there a known problem with large files, or does this have something to do with the community edition? If anyone knows of a workaround, I would of course be very grateful.
Here is my Docker Compose:
name: pangolin
networks:
  default:
    driver: bridge
    name: pangolin
services:
  crowdsec:
    command: -t
    container_name: crowdsec
    depends_on:
      - gerbil
    environment:
      ACQUIRE_FILES: /var/log/traefik/*.log
      COLLECTIONS: crowdsecurity/traefik crowdsecurity/appsec-virtual-patching crowdsecurity/appsec-generic-rules
      ENROLL_INSTANCE_NAME: pangolin-crowdsec
      ENROLL_TAGS: docker
      GID: "1000"
      PARSERS: crowdsecurity/whitelists
    expose:
      - 9090
      - 6060
      - 7422
    healthcheck:
      test:
        - CMD
        - cscli
        - capi
        - status
    image: crowdsecurity/crowdsec:latest
    labels:
      - traefik.enable=false
    ports:
      - 9090:9090
      - 6060:6060
    restart: unless-stopped
    volumes:
      - ./config/crowdsec:/etc/crowdsec
      - ./config/crowdsec/db:/var/lib/crowdsec/data
      - ./config/crowdsec_logs/auth.log:/var/log/auth.log:ro
      - ./config/crowdsec_logs/syslog:/var/log/syslog:ro
      - ./config/crowdsec_logs:/var/log
      - ./config/traefik/logs:/var/log/traefik
  gerbil:
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    command:
      - --reachableAt=http://gerbil:3003
      - --generateAndSaveKeyTo=/var/config/key
      - --remoteConfig=http://pangolin:3001/api/v1/gerbil/get-config
      - --reportBandwidthTo=http://pangolin:3001/api/v1/gerbil/receive-bandwidth
    container_name: gerbil
    depends_on:
      pangolin:
        condition: service_healthy
    image: fosrl/gerbil:1.0.0-beta.3
    ports:
      - 51820:51820/udp
      - 443:443
      - 80:80
    restart: unless-stopped
    volumes:
      - ./config/:/var/config
  pangolin:
    container_name: pangolin
    healthcheck:
      interval: 3s
      retries: 5
      test:
        - CMD
        - curl
        - -f
        - http://localhost:3001/api/v1/
      timeout: 3s
    image: fosrl/pangolin:1.0.0-beta.15
    restart: unless-stopped
    volumes:
      - ./config:/app/config
  traefik:
    command:
      - --configFile=/etc/traefik/traefik_config.yml
    container_name: traefik
    depends_on:
      pangolin:
        condition: service_healthy
    image: traefik:v3.3.3
    network_mode: service:gerbil
    restart: unless-stopped
    volumes:
      - ./config/traefik:/etc/traefik:ro
      - ./config/letsencrypt:/letsencrypt
      - ./config/traefik/logs:/var/log/traefik
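
My guess so far is that the AppSec component is involved, since it is the part of the setup that actually inspects request bodies (port 7422 in the compose file is its listener). As a test I am thinking about disabling AppSec in the CrowdSec bouncer middleware of my Traefik dynamic config, roughly like the sketch below. This is only a sketch based on the plugin's documented options; the plugin key and option names (crowdsecMode, crowdsecAppsecEnabled, etc.) may not exactly match my file, so please correct me if they are off:

http:
  middlewares:
    crowdsec:
      plugin:
        crowdsec:                            # key must match the plugin name from the Traefik static config
          enabled: true
          crowdsecMode: live
          crowdsecLapiKey: "<bouncer API key>"
          crowdsecLapiScheme: http
          crowdsecLapiHost: "crowdsec:9090"      # LAPI port as exposed in the compose file above
          crowdsecAppsecEnabled: false           # test: skip AppSec body inspection for large uploads
          # crowdsecAppsecHost: "crowdsec:7422"  # AppSec listener, if re-enabled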
Here are the logs from the CrowdSec container:
crowdsec | time="2025-03-02T18:43:40Z" level=info msg="127.0.0.1 - [Sun, 02 Mar 2025 18:43:40 UTC] \"HEAD /v1/decisions/stream HTTP/1.1 200 703.02µs \"Go-http-client/1.1\" \""
crowdsec | time="2025-03-02T18:43:57Z" level=info msg="172.18.0.2 - [Sun, 02 Mar 2025 18:43:57 UTC] \"GET /v1/decisions?ip=77.12.132.237&banned=true HTTP/1.1 200 39.312015ms \"Crowdsec-Bouncer-Traefik-Plugin/1.X.X\" \""
crowdsec exited with code 0
crowdsec | Local agent already registered
crowdsec | Check if lapi needs to register an additional agent
crowdsec | sqlite database permissions updated
crowdsec | /etc/crowdsec was found in a volume
crowdsec | Running hub update
crowdsec | Skipping hub update, index file is recent
crowdsec | /var/lib/crowdsec/data was found in a volume
crowdsec | Running hub upgrade
crowdsec | parsers:crowdsecurity/whitelists - not downloading local item
crowdsec | Running: cscli parsers install "crowdsecurity/docker-logs"
crowdsec | Nothing to do.
crowdsec | Running: cscli parsers install "crowdsecurity/cri-logs"
crowdsec | Nothing to do.
crowdsec | Running: cscli collections install "crowdsecurity/traefik"
crowdsec | Nothing to do.
crowdsec | Running: cscli collections install "crowdsecurity/appsec-virtual-patching"
crowdsec | Nothing to do.
crowdsec | Running: cscli collections install "crowdsecurity/appsec-generic-rules"
crowdsec | Nothing to do.
crowdsec | Object parsers/crowdsecurity/whitelists is local, skipping
crowdsec | time="2025-03-02T18:44:23Z" level=info msg="Enabled feature flags: none"
crowdsec | time="2025-03-02T18:44:23Z" level=info msg="Crowdsec v1.6.5-72b4354b"
I would be very grateful for feedback or a potential solution.