by George Damian
What is Swoole?
Swoole is an open-source C extension for PHP that enables true event-driven, asynchronous programming in PHP via its coroutine implementation, promising highly available and extremely performant PHP apps.
PHP Swoole differs from the traditional PHP model: it runs in CLI mode, much like Node.js but with a different design, using coroutines similar to Golang's instead of ending up in callback hell. The main differences and advantages of a Swoole PHP server over traditional PHP-FPM are:
This is basically how the requests are distributed and processed by the Swoole server, in comparison with PHP-FPM on Apache/Nginx.
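To make that model concrete, here's a minimal sketch of Swoole's built-in HTTP server, started straight from the CLI (the port, worker count and file name here are just example values):
<?php
// server.php — run with: php server.php
// The process stays resident and its workers serve requests in a loop,
// instead of bootstrapping PHP from scratch for every request like PHP-FPM.
$server = new Swoole\Http\Server('0.0.0.0', 9501);

$server->set([
    'worker_num' => 2, // example value; usually tuned to the CPU count
]);

$server->on('request', function (Swoole\Http\Request $request, Swoole\Http\Response $response) {
    $response->header('Content-Type', 'text/plain');
    $response->end("Hello from Swoole\n");
});

$server->start();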
Then at the app level, things get even more interesting: with Swoole, you can preload and share resources such as autoload/vendor libraries or DB connections between requests, which can be beneficial even for apps that are not particularly I/O intensive or don't need much concurrency.
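As a rough sketch of that idea (the DSN, credentials, and query below are placeholders), a worker can open its database connection once at startup and reuse it for every request it serves:
<?php
// Extends the server sketch above: the connection created in workerStart
// lives for the whole life of the worker process and is reused per request.
$server = new Swoole\Http\Server('0.0.0.0', 9501);

$db = null;

$server->on('workerStart', function () use (&$db) {
    // Runs once per worker process, after the autoloader/vendor code is loaded.
    $db = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');
});

$server->on('request', function ($request, $response) use (&$db) {
    // No reconnect, no re-bootstrap: the worker's existing connection is reused.
    $count = $db->query('SELECT COUNT(*) FROM users')->fetchColumn();
    $response->end("users: {$count}\n");
});

$server->start();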
Besides the out-of-the-box HTTP server it comes with, Swoole also offers built-in async clients for Redis, MySQL, and Postgres, creating even more room for performance improvements.
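Here's a rough sketch of what that looks like with the coroutine clients on Swoole 4.x (host, credentials, and the queried key are placeholders); both calls run concurrently, each coroutine yielding while it waits on the network:
<?php
Swoole\Coroutine\run(function () {
    // go() is Swoole's shorthand for spawning a coroutine (enabled by default).
    go(function () {
        $mysql = new Swoole\Coroutine\MySQL();
        $mysql->connect([
            'host'     => '127.0.0.1',
            'port'     => 3306,
            'user'     => 'user',
            'password' => 'secret',
            'database' => 'app',
        ]);
        // The coroutine yields here while MySQL answers, letting other
        // coroutines run instead of blocking the whole process.
        var_dump($mysql->query('SELECT NOW() AS now'));
    });

    go(function () {
        $redis = new Swoole\Coroutine\Redis();
        $redis->connect('127.0.0.1', 6379);
        var_dump($redis->get('some_key'));
    });
});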
Swoole is primarily developed by Chinese developers working on large-scale applications for the Chinese enterprise market, and it has proven to be reliable in highly demanding, scalable production environments.
Context
PHP's core developers recently voted to include the Fibers RFC in php-core. Many PHP authors, developers, and also the Swoole authors have expressed their feelings about this RFC, mainly arguing that leaving concurrency to userland is not a good idea and that the RFC does not magically make PHP asynchronous. More notes at #113419, #112538.
This made me want to give Swoole a try on some regular applications and see if I could get any improvements over Apache/Nginx when running some real-life apps.
Now, the ideal uses for Swoole are contextualized microservices, IoT stuff, live chat/gaming systems, or simple REST APIs. However, it promises 2-5x performance improvements for regular apps too, so why not put that to the test.
Benchmarks
For the following benchmarks, I used DigitalOcean's Regular $20 4GB 2vCPU servers, running the latest CentOS 7 and PHP 7.2.
As for the applications tested, we have a real-life Laravel 5.5 application with lots of vendor libraries, quite a big database, and lots of logic, as well as a clean, untouched Lumen 6.0 app with no database.
For the Laravel/Lumen apps, we used an easy-to-implement package called laravel-swoole, which preloads a bunch of the Laravel bloatware on the fly (basic setup sketched below).
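For reference, getting it running is roughly a two-step affair (commands as I recall them from the package's README; double-check the current docs for your Laravel/Lumen version):
composer require swooletw/laravel-swoole
php artisan swoole:http start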
Versus how Laravel runs when served by Swoole:
We tried to keep things as simple as possible: all servers ran the standard configuration, with no tuning for any of the web servers. Nevertheless, let's dive into it!
Apache + PHP-FPM
Default lumen with no DB connection
user@desktop:wrk$ ./wrk -t4 -c1000 http://apache.lumen
Running 10s test @ http://apache.lumen
4 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 59.74ms 34.57ms 1.66s 98.69%
Req/Sec 208.21 60.88 350.00 67.17%
8254 requests in 10.07s, 2.09MB read
Socket errors: connect 0, read 0, write 0, timeout 36
Requests/sec: 819.47
Transfer/sec: 212.16KB
Laravel app, static content, big DB, lots of logic
user@desktop:wrk$ ./wrk -t4 -c1000 http://apache.laravel
Running 10s test @ http://134.122.73.88/Qdev-Ak2/public/
4 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 826.48ms 370.13ms 1.98s 72.39%
Req/Sec 9.95 9.13 40.00 82.66%
254 requests in 10.05s, 18.68MB read
Socket errors: connect 0, read 0, write 0, timeout 120
Requests/sec: 25.28
Transfer/sec: 1.86MB
Nginx + PHP-FPM
Default lumen with no DB connection
user@desktop:wrk$ ./wrk -t4 -c1000 http://nginx.lumen
Running 10s test @ http://nginx.lumen
4 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 64.16ms 24.48ms 667.45ms 94.29%
Req/Sec 348.76 103.53 585.00 66.92%
13832 requests in 10.07s, 3.69MB read
Socket errors: connect 0, read 0, write 0, timeout 32
Requests/sec: 1373.31
Transfer/sec: 375.46KB
Laravel app, static content, big DB, lots of logic
user@desktop:wrk$ ./wrk -t4 -c1000 http://nginx.laravel
Running 10s test @ http://157.230.16.191
4 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.48s 448.18ms 1.98s 66.04%
Req/Sec 12.05 9.62 60.00 84.76%
356 requests in 10.08s, 24.98MB read
Socket errors: connect 0, read 0, write 0, timeout 250
Requests/sec: 35.33
Transfer/sec: 2.48MB
PHP Swoole
Default lumen with no DB connection
user@desktop:wrk$ ./wrk -t6 -c1000 http://swoole.lumen
Running 10s test @ http://swoole.lumen
6 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 50.73ms 18.57ms 647.81ms 98.32%
Req/Sec 368.16 146.19 700.00 67.17%
21873 requests in 10.06s, 5.03MB read
Socket errors: connect 0, read 0, write 0, timeout 27
Requests/sec: 2173.97
Transfer/sec: 511.65KB
Laravel app, static content, big DB, lots of logic
user@desktop:wrk$ ./wrk -t4 -c1000 http://swoole.laravel
Running 10s test @ http://165.227.161.234/
4 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.24s 357.85ms 1.97s 65.51%
Req/Sec 12.56 7.52 40.00 64.02%
397 requests in 10.05s, 37.22MB read
Socket errors: connect 0, read 0, write 0, timeout 63
Requests/sec: 39.50
Transfer/sec: 3.70MB
A note before drawing any conclusions from the numbers above: DigitalOcean seems to somehow CPU-throttle the wrk benchmarks across the board, for all of the tests. I expect that under normal circumstances, or with a different provider, the peak request rates would have been higher.
So take my results with a grain of salt. There are multiple benchmarks on the web showing up to 30x performance increases, even for simple Laravel/Lumen APIs.
Results
It seems like for the simple Lumen installation, Nginx + PHP-FPM handles around 68% more requests per second than Apache + PHP-FPM (1373 vs. 819 req/s), while Swoole serves roughly 2.65x the requests of Apache (2174 vs. 819 req/s) and around 58% more than PHP-FPM on Nginx.
As for the real-life Laravel application, things don't look as drastic anymore: Nginx manages around 40% more requests than Apache (35.33 vs. 25.28 req/s), while Swoole reaches around 56% more requests than Apache (39.50 req/s) and roughly 12% more than Nginx.
Drawbacks
No technology is perfect and Swoole is no exception; here's a quick list of things you might want to consider before using it:
Conclusions
As mentioned before, Swoole excels at concurrency-heavy, I/O-heavy processes, APIs, and microservices. So it won't perform at its best if you just drop it into any PHP project; rather, use it for services and APIs that need to be highly responsive and available.
But even though we didn't reach crazy RPS numbers, we noticed a clear improvement in handling concurrent requests and in failed/timed-out connections for both our Lumen and Laravel apps, compared to their Apache/Nginx counterparts.
Additional Resources
https://www.php.net/manual/en/book.swoole.php
https://github.com/swooletw/laravel-swoole
https://github.com/kenashkov/swoole-performance-tests
https://www.techempower.com/benchmarks/
If you got any questions or notes, please let me know!