Back in November 2022 I was browsing the then Twittosphere (now Exosphere?) when I ran across an interesting tweet from @manishrjain:
Caddy is 2x outperforming NGINX in my reverse proxy test 🚀 With Caddy, there's practically no difference in HTTPS vs HTTP performance.
If legit, this is clearly a David vs Goliath story. @mholt6
— Manish R Jain (@manishrjain) November 23, 2022
It caught my attention, not because of the claim that Caddy outperforms Nginx by 2x, but because when it comes to benchmarking and comparing technologies, especially servers, the Internet and network requests, it's easy to overlook specific details.
Interestingly enough, the claim then went from 2x to 4x more performance! Moar performance!
While this article talks about Caddy and Nginx, I'm going to focus on the benchmark itself, and we won't let this discussion fall into the "which software is better" category. At the end of the day, when it comes to performance, nobody can tell you what the best option is for your specific use case.
Always perform your own benchmarks to fit your own criteria and scenarios. I would even go the extra mile and say you should not trust a tweet benchmark (no offence to Manish or anyone else on the Internet), since your requirements are different from theirs.
Now, to give credit where credit is due, I said the numbers do look impressive. But Nginx is a good old dog that has seen much of what we know as the Internet today take shape, and while it isn't written in Go, it is written in C, which is a pretty fast language anyway, so I was curious.
Manish’s response was that these were the “default settings for nginx. Didn’t modify much beyond that. If you think I should try something, suggest and I’ll give it a shot.”
This was my first assumption about the benchmark before even going into detail. You see, Nginx and Caddy are very different pieces of software. Nginx, being old, has had to support multiple HTTP protocol versions (from 1.0, to 1.1, to 2, to 3, to some of the in-betweeners like SPDY). It also supports configurations meant for old devices with awkward quirks that Nginx will flawlessly handle, behaviour you could turn off to increase performance if your setup or environment doesn't need it.
Along the same lines, there are quite a lot of settings to correctly tune Nginx's worker processes so you get the most bang for your buck on your machine: improving core utilization, parallelizing incoming requests, and serving them as efficiently as possible. These "defaults" might be the greatest for a generic installation, but not for a specific, made-just-for-you benchmark.
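As a rough illustration (these are not the benchmark's settings, and every value here is a placeholder that depends entirely on your hardware), the worker-related knobs people usually reach for look something like this:

# Illustrative only: common worker tuning directives; values must match your machine.
worker_processes auto;        # spawn one worker per CPU core
worker_cpu_affinity auto;     # pin each worker to its own core
worker_rlimit_nofile 65535;   # raise the per-worker open-file limit

events {
    worker_connections 4096;  # connections each worker can handle concurrently
    multi_accept on;          # accept as many new connections as possible at once
}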
Now, don’t get me wrong: Caddy’s out-of-the-box configurations are superb if you want to quickly spin up an environment with out-of-the-box configurations with some of the settings tailor-made by Matt Holt and the tremendous amount of contributors that have had the experience to see it in both testing and production environments.
It’s truly a feat that you can get great performance out of the box with minimal configuration.
Nginx, on the other hand, is doing a ton of work under the hood because of all that legacy support, even before it starts serving requests.
My response to Manish was that while I could suggest a few ideas to potentially improve the performance, it's hard to know what to suggest considering I had no clue what machine, environment, operating system and the like he was using to run his tests! Since, as I mentioned before, you can configure Nginx to use CPU cores more efficiently, you'd need to know what kind of CPU you have, how many cores, how many threads, and so on.
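If you were tuning for a specific box, you'd usually start by looking at the hardware; something as simple as this (assuming a Linux host) gives you the numbers those worker settings depend on:

nproc      # number of logical CPUs available to Nginx workers
lscpu      # sockets, cores per socket, threads per core, CPU model
ulimit -n  # open-file limit for the current shell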
I proposed a link from the SysOpsTechnix folks, the first result of a quick Google search (because I'm not an Nginx expert either), that could potentially help. In return, Manish added some configurations (it's unclear which ones specifically) and the results ended up being very similar, with Caddy still ahead.
I had to try a benchmark where I could show the rubric and code used to perform it, so people could comment on and challenge whatever configuration I was using (Manish's original claim did not include the source code, since it seems it was run on a live production environment where other Nginx settings could have been biasing the results).
The goal of the benchmark was to use docker compose to spin up the environment. The result was a GitHub Gist which included a few files: a Caddyfile and an nginx.conf. The Caddyfile was, unsurprisingly, the shortest configuration you could possibly write:
localhost
tls /etc/caddy/localhost.pem /etc/caddy/localhost-key.pem
reverse_proxy http://hello:8000
And the Nginx nginx.conf file was also pretty simple, but arguably quite a bit more verbose:
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream docker-nginx {
        server hello:8000;
    }

    server {
        listen [::]:443 ssl ipv6only=on;
        listen 443 ssl;

        ssl_certificate     /etc/nginx/localhost.pem;
        ssl_certificate_key /etc/nginx/localhost-key.pem;

        location / {
            proxy_pass http://docker-nginx;
            proxy_redirect off;
        }
    }
}
You can find this nginx.conf file in the repository for these benchmarks. To run it, you would use a Docker Compose file. We would pre-generate the TLS certificates so neither server has to spend time requesting them from something like Let's Encrypt nor self-signing them itself (plus, Nginx would require additional tooling to do this anyway). I used mkcert from Filippo Valsorda.
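For reference, generating that certificate pair with mkcert looks roughly like this (the resulting file names match the ones mounted in the compose files below):

mkcert -install    # one-time: create and trust a local certificate authority
mkcert localhost   # writes localhost.pem and localhost-key.pem to the current directory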
The docker-compose.caddy.yml file would mount the Caddyfile and the certificates from the local directory, plus another container that simply serves a hello, $hostname response. This is what it would look like (note the Caddy-specific settings):
version: '3'

services:
  caddy:
    image: caddy:latest
    command: "caddy run --config /etc/caddy/Caddyfile"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./localhost.pem:/etc/caddy/localhost.pem
      - ./localhost-key.pem:/etc/caddy/localhost-key.pem
    networks:
      - testing-perf
    links:
      - hello
    ports:
      - "443:443"

  hello:
    image: patrickdappollonio/hello-docker
    networks:
      - testing-perf
    expose:
      - 8000

networks:
  testing-perf:
Then, for the Nginx docker-compose.nginx.yml we have (Nginx will automatically load the configuration in /etc/nginx/nginx.conf):
version: '3'

services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./localhost.pem:/etc/nginx/localhost.pem
      - ./localhost-key.pem:/etc/nginx/localhost-key.pem
    networks:
      - testing-perf
    links:
      - hello
    ports:
      - "443:443"

  hello:
    image: patrickdappollonio/hello-docker
    networks:
      - testing-perf
    expose:
      - 8000

networks:
  testing-perf:
Having all the files in the same directory…
tree
.
├── Caddyfile
├── docker-compose.caddy.yml
├── docker-compose.nginx.yml
├── localhost-key.pem
├── localhost.pem
└── nginx.conf
0 directories, 6 files
… You can then run the tests with:
docker-compose -f docker-compose.caddy.yml up -d
# will run the docker-compose with Caddy and the
# Caddy configuration file, proxying requests to the
# hello-docker container
docker-compose -f docker-compose.nginx.yml up -d
# and this one will run the docker-compose with Nginx
# and the Nginx config, with the proxy same as before
The configuration above only allows running one of these servers at a time, since both bind to port 443. Stop either Nginx or Caddy before launching its counterpart.
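For example, to switch from the Caddy setup to the Nginx one, tear the first down before bringing the second up:

docker-compose -f docker-compose.caddy.yml down
docker-compose -f docker-compose.nginx.yml up -d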
Once one of these servers is running, it will be listening on port 443 on your local machine: fire up your favourite browser, go to https://localhost, and accept any self-signed certificate warning you might get, since we signed these ourselves.
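If you'd rather stay in the terminal, a quick curl does the same sanity check (-k skips certificate verification, since the certificate is our own):

curl -k https://localhost/
# prints the hello, $hostname response from the hello-docker container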
To benchmark the requests and throughput, you can use your favourite tool. Mine is tsenart/vegeta. Back then, I used this command:
echo "GET https://localhost/" | \
vegeta attack --format=http --rate=500 --duration=10s | \
vegeta report --type=text
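You can also bump the rate and duration, or ask vegeta for a latency histogram instead of the plain text summary; for instance (the bucket boundaries here are arbitrary):

echo "GET https://localhost/" | \
  vegeta attack --format=http --rate=1000 --duration=30s | \
  vegeta report --type='hist[0,500us,1ms,2ms,4ms,8ms]'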
And surprisingly, my results at the time looked somewhat like this, with Caddy first:
Requests [total, rate, throughput] 5000, 500.11, 500.09
Duration [total, attack, wait] 9.998200887s, 9.99778805s, 412.837µs
Latencies [mean, 50, 95, 99, max] 385.502µs, 325.001µs, 613.057µs, 811.483µs, 20.940099ms
Bytes In [total, mean] 100000, 20.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:5000
Error Set:
Then Nginx with the custom-yet-still-basic configuration:
Requests [total, rate, throughput] 5000, 500.11, 500.09
Duration [total, attack, wait] 9.998173726s, 9.997755045s, 418.681µs
Latencies [mean, 50, 95, 99, max] 337.36µs, 287.403µs, 523.78µs, 681.93µs, 19.36402ms
Bytes In [total, mean] 100000, 20.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:5000
Error Set:
At the time, I ran these tests on an old laptop I had lying around:
inxi
CPU: 6-core Intel Core i7-10710U (-MT MCP-) speed/min/max: 2448/400/4700 MHz
Kernel: 6.0.6-76060006-generic x86_64 Up: 4h 5m
Mem: 8390.5/15691.5 MiB (53.5%) Storage: 476.94 GiB (41.7% used) Procs: 424
Shell: Bash inxi: 3.3.13
Yet you can see that Nginx had better latencies than Caddy in this scenario, under my conditions. It isn't a 2x or 4x improvement; it's merely a tiny improvement which I'm sure could skew either way the more vegeta attacks I kept running.
I have to be pedantic about the "my scenario and my conditions" part, since I'm sure that back then I had a handful of tabs open, was probably listening to music, and was doing some CPU, RAM and disk operations that might have skewed the results a bit. They were by no means ideal conditions, but they were good enough to show that the results can be very different depending on the environment.
Chances are, if you run these tests yourself, the results will be different, even more so if new versions of Nginx and Caddy have been released since November 2022. You can probably also criticize any of the configurations and details given here; for example, one could argue the number of RPS isn't as meaningful as it should be. But then again, I was running this from an old laptop while doing other things, and the test would've been skewed if I had increased it, because both Caddy and Nginx would've been fighting with other apps on my computer for CPU and RAM.
In fact, someone else called out the RPS part as well in the thread that followed.
If you want a more formal benchmark with a better testing scenario, I highly suggest giving Tjll's blog post, which served 35 million requests with these servers, a read!
Once we established a baseline configuration, several people reached out to me via Twitter DMs to recommend their own Nginx tuning and see how much better the throughput could be, so I ended up publishing everything in a GitHub repository where people could add additional settings (although the benchmark wound down once the discussion was over).
At one point, on small requests, Nginx always won, while Caddy took the advantage on responses of around 256 KB of data.
Is this conclusive? Absolutely not. You should always run the tests yourself, potentially tweaking Nginx and Caddy to your own needs, and see what the outcome is.
Here’s the clone command for convenience:
git clone git@github.com:patrickdappollonio/nginx-vs-caddy-benchmark.git
Short answer: there isn't a winner, mostly because your definition of "better" is different from mine.
I’ve used both softwares to great success in different environments, from serving live games to millions of users, to handling static websites to trillions of requests and petabytes of information and both are good in their own space.
In Nginx's case, I'm a fan of how much configuration you can put in, and I definitely consider it a strength rather than a weakness. With so many scenarios, you might want to tweak buffering, for example, for different types of requests (I had to in one situation, for WebSockets), and the flexibility it offers is unmatched.
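As an illustration (this isn't the exact configuration I used back then, and the /ws/ path and backend upstream are made up), proxying WebSockets through Nginx typically involves something like this inside a server block:

location /ws/ {
    proxy_pass http://backend;                 # hypothetical upstream handling WebSockets
    proxy_http_version 1.1;                    # required for the Upgrade mechanism
    proxy_set_header Upgrade $http_upgrade;    # forward the WebSocket upgrade headers
    proxy_set_header Connection "upgrade";
    proxy_buffering off;                       # don't buffer long-lived streams
    proxy_read_timeout 3600s;                  # keep idle connections open longer
}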
Or take, for example, the ability to use Nginx's auth_request directive and module to effortlessly add authentication and authorization to your website with something like oauth2-proxy: you can seamlessly integrate it with your own backend without adding extra code to it.
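A rough sketch of that pattern (the paths and the oauth2-proxy address are assumptions; check the oauth2-proxy docs for the full setup):

location /oauth2/ {
    proxy_pass http://oauth2-proxy:4180;   # oauth2-proxy handles the actual OAuth flow
}

location / {
    auth_request /oauth2/auth;             # sub-request decides whether the user may pass
    error_page 401 = /oauth2/sign_in;      # unauthenticated users get sent to the login flow
    proxy_pass http://docker-nginx;        # your backend, untouched by any auth logic
}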
Now, this doesn't mean I don't like Caddy. Quite the contrary: whenever I need to quickly stand up a website with Let's Encrypt support and a handful of other features, I'll go with Caddy. It's easy to use, easy to configure and easy to deploy. More often than not I see myself using it as a reverse proxy to serve my own http-server's requests.
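That use case is usually just a couple of lines of Caddyfile (the domain and upstream port are placeholders; for a public domain, Caddy obtains and renews the Let's Encrypt certificate automatically):

example.com {
    # proxy everything to a local http-server instance listening on port 5000
    reverse_proxy localhost:5000
}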
The elephant in the room, too, is the fact that Caddy performs quite well without much added configuration, while Nginx needs at least four times the configuration to even match Caddy's speed. This is a huge win for Caddy, and it's something I'm sure the maintainers are proud of. Most of the benefit here, beyond the maintainers' ability to keep things well fine-tuned, comes down to the Go stack and the HTTP/1, 2 and 3 servers it provides.
I wouldn't call it that. I think solutions like Caddy that are quick and easy to set up are gaining traction nowadays, and more companies are moving away from Nginx towards software that fits their environments more tightly. Lately I've been using Istio, for example, and HAProxy to great success (thanks to its Layer 4 proxying).
Nginx's creators have also decided to launch a new application called "Unit", which they describe as a universal web app server, meant to be friendlier towards cloud deployments (compared to how annoying it was to reload Nginx's configuration by sending an OS signal to the process, for example).
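For reference, reloading a classic Nginx deployment means signalling the master process, which feels clunky next to the API-driven configuration of newer tools:

nginx -s reload    # ask the master process to reload its configuration
# or send SIGHUP yourself (the pid file location may vary):
kill -HUP "$(cat /var/run/nginx.pid)"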
With fierce competition and a plethora of other tools offering better benefits, I think Nginx will have to adapt to the new times and I’m sure they will. They’ve been around for a long time and they’ve seen the Internet evolve. Plus, the company behind it has a business to sustain (and several of the Fortune 500 companies use Nginx in their infrastructure).
To wrap it up: benchmarking is hard.
The best we can do is avoid taking these numbers at face value and instead try these tools in our own environments, then check our trusty Grafana or Datadog dashboards to see whether there are improvements to the 95th percentile or not 😁
Anything you think we should add to the final Nginx configuration? How about Caddy’s config? Leave it down below or feel free to send a pull request here.