Add rate limiting on source IP to prevent DDoS

Bug #1918439 reported by Benjamin Allot
This bug affects 2 people
Affects: Content Cache Charm
Status: Fix Released
Importance: Medium
Assigned to: Haw Loeung
Milestone: (none)

Bug Description

We had a situation where a small set of IPs was generating a lot of requests, causing service disruption:
```
Number of requests | IP
  60645 | 150.136.170.161
1577168 | 150.136.216.209
1425381 | 150.136.228.9
 866199 | 150.136.33.22
```

We should add a per-IP rate limit (or at least an option to enable one) in either haproxy or iptables to prevent this kind of thing from happening.

For haproxy:
* https://www.haproxy.com/blog/bot-protection-with-haproxy/#vulnerability-scanners
* https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/

For iptables, the hashlimit extension seems relevant for this purpose.
* http://manpages.ubuntu.com/manpages/xenial/man8/iptables-extensions.8.html

The thresholds are yet to be determined, but not allowing more than 100 connections from a single IP in a short period of time seems a good start.
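As a rough illustration only (the port, rule placement, and numbers below are assumptions, not agreed values), a hashlimit rule keyed on source IP might look like:

```
# Track new inbound HTTP connections per source IP with hashlimit;
# anything above 100 new connections per minute from one source is
# dropped. Numbers here are placeholders.
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
  -m hashlimit --hashlimit-name http-per-ip --hashlimit-mode srcip \
  --hashlimit-above 100/minute --hashlimit-burst 100 -j DROP
```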


Tom Haddon (mthaddon)
Changed in content-cache-charm:
status: New → Confirmed
importance: Undecided → Medium
Haw Loeung (hloeung)
description: updated
Revision history for this message
Romain Couturat (romaincout) wrote :

Let's make sure the threshold is high enough that we don't block things like VPNs or university HTTP proxies.

Revision history for this message
Haw Loeung (hloeung) wrote :

I think we should try going down the HAProxy route first, and perhaps have it per-site with the default disabled. The main reason is that we already generate the HAProxy config file, so it should be more straightforward to implement than iptables rules, which we would then have to manage ourselves.

Also, I'm not sure what metrics the haproxy telegraf plugin exposes, but it would be nice to extend it to feed in the limit and current connection metrics so we could alert when close to reaching said threshold.
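On the metrics side, HAProxy's runtime API can dump stick-table contents, which could be a starting point for feeding telegraf. A minimal sketch, assuming a stick-table named `website` and a stats socket at `/run/haproxy/admin.sock` (both hypothetical here):

```
# Dump per-source entries (including the http_req_rate counter)
# from the "website" stick-table via the HAProxy admin socket.
echo "show table website" | socat stdio /run/haproxy/admin.sock
```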

Revision history for this message
Robin Winslow (nottrobin) wrote :

We would very much appreciate this. We've been experiencing large spikes on searches on ubuntu.com from a few IPs - see https://portal.admin.canonical.com/C155805/ (Canonical internal) for reference.

We're experimenting with adding rudimentary rate limiting in our applications (https://github.com/canonical/canonicalwebteam.search/pull/33) but it would be much better to have this at the HTTP cache / CDN layer.

I also think having it off by default, with the option to turn it on per site, would make the most sense.

Haw Loeung (hloeung)
Changed in content-cache-charm:
status: Confirmed → In Progress
assignee: nobody → Haw Loeung (hloeung)
Revision history for this message
Haw Loeung (hloeung) wrote :

https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/:

"""
* Sliding Window Rate Limiting

frontend website
    bind :80
    stick-table type ipv6 size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
    default_backend servers
"""

Haw Loeung (hloeung)
Changed in content-cache-charm:
status: In Progress → Fix Committed
Haw Loeung (hloeung)
Changed in content-cache-charm:
status: Fix Committed → Fix Released
