So… I’ve been reading a lot about nginx and how awesome it performs. It sounds magical!
My blog (running WordPress) hasn’t been running as fast as I (or Google) would like. The TTFB (time to first byte) is often > 500 ms, which is slower than it should be. I thought this would be a great opportunity for nginx to prove itself and impress me!
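For reference, TTFB is easy to check from the command line. A quick sketch using curl’s write-out timings (the URL is just a placeholder, not my site):

```shell
# Print time-to-first-byte for a URL.
# time_starttransfer = time until the first response byte arrived.
URL="https://example.com/"
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" "$URL"
```

Chrome’s developer tools report roughly the same number as the “Waiting (TTFB)” portion of a request.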
Right now the live site that you are viewing is running on Windows Server 2008, PHP (of course), MySQL and WinCache. TTFB ~ 500 ms.
The Second Location
I set up another location on a totally different server – running Windows Server 2012, same version of PHP, MySQL, and still running WinCache. I took a backup of the MySQL database and restored it to the new server, and took a copy of all the site files and copied them over. Everything was left default (no additional specific performance tuning). Got it up and running nice and easily (setting up sites on Windows + IIS is so painless!) and started some tests. Guess what… TTFB right around 500 ms. That wasn’t a huge surprise but it at least told me that the issue wasn’t something specific to my Windows 2008 install / configuration.
The Third Location
Okay, so now I moved over to a basic CentOS 6 configuration. I installed nginx, PHP, and MySQL. There were a lot more steps to get everything set up on Linux, but I’ve been through the routine a few times so it was just a matter of running all the right commands. Got the content copied over, got the MySQL database restored, and got ready for some testing. I’m thinking “I wonder how close to 100 ms it will be.” I run an external page speed test against it… wait eagerly for the results… and there they are! Huh? What? 600 ms? What the…!?!? OK. Maybe something’s wrong, so I opened up Chrome developer tools and ran some tests there to see what it thinks about the TTFB. Pretty close. Hmm…
Test Results Summary
So, using the Developer Tools, I ran some more timing tests on Windows Server 2008, Windows Server 2012, and CentOS/NGINX so that I could get some TTFB averages. When looking at averages, the numbers for all three were REALLY close. Just as often as not, the NGINX site was slightly slower (by a minimal number of milliseconds). No clear winner on speed here. With everything being the same, I’ll stick with Windows. Call me crazy but I like Windows Server – always have and always will. I like Linux for certain situations too, but running WordPress has not made it onto that short list of preferred-Linux scenarios.
What Does it Mean?
Other than all three platforms running about the same in speed, which is a pretty interesting point in itself, what else do we know? Well, I’m going to say that the issue is with WordPress itself rather than the platform. To be more precise, and perhaps fair, I’ll say that it is either WordPress, MySQL, PHP, some WordPress plugin, or something in that part of the stack – it isn’t the operating system nor the web service that is causing the less-than-ideal TTFB.
Great test and conclusion. But what’s your WinCache setup, and do you use caching plugins? Is MySQL local or remote? By not using WinCache as the session handler and using an alternative session save path, I have TTFB down to under 200 ms (of course with numerous cache plugins – maybe too many).
Hmm. I’m not sure what the WinCache setup is honestly. I’d have to look into that. It’s probably the default settings.
The MySQL is local on the machines in all cases, and the server resources are the same too.
You are saying that *without* WinCache you get better performance? We tested previously with and without and definitely saw better results with WinCache enabled.
What caching plugins have you noticed that helped?
With WinCache enabled I get better performance, but with WinCache’s session cache (session.save_handler=wincache) disabled. I put my session files locally within my webroot for faster access by PHP, in a location the website can’t read, of course.
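In php.ini terms, that change amounts to something like the following sketch (the path is hypothetical – the point is to use the plain files handler instead of WinCache’s and to keep the directory blocked from web access in IIS):

```ini
; Use plain file-based sessions instead of WinCache's session handler
session.save_handler=files
; Local path for fast disk access; deny HTTP access to this folder in IIS
session.save_path="C:\inetpub\mysite\sessions"
```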
I use WP Super Cache, but had to fix the rewrite-rules in order to properly load the generated .html files. That made a huge difference. I also use DB Cache Reloaded Fix for MySQL query cache, MO Cache for translations, Tribe Object Cache to utilize WinCache as back-end for WordPress and WP Widget Cache for caching widgets.
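For IIS, the fixed-up rewrite rules look roughly like this URL Rewrite (web.config) sketch. The cookie names and cache path are WP Super Cache’s defaults; treat this as a starting point, not the exact rules used on this site:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Serve a pre-generated WP Super Cache file directly if one exists -->
        <rule name="WP Super Cache" stopProcessing="true">
          <match url="^(.*)$" />
          <conditions logicalGrouping="MatchAll">
            <!-- Only anonymous GET requests without a query string -->
            <add input="{REQUEST_METHOD}" pattern="GET" />
            <add input="{QUERY_STRING}" pattern="^$" />
            <add input="{HTTP_COOKIE}" pattern="wordpress_logged_in|comment_author" negate="true" />
            <!-- Only rewrite when the cached .html file actually exists on disk -->
            <add input="{APPL_PHYSICAL_PATH}wp-content\cache\supercache\{HTTP_HOST}\{R:1}\index.html" matchType="IsFile" />
          </conditions>
          <action type="Rewrite" url="wp-content/cache/supercache/{HTTP_HOST}/{R:1}/index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

The IsFile condition is what makes this safe: requests without a cached copy fall through to PHP as usual.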
To see if WordPress is causing the slowness, you could try a plugin like Debug Queries. This made me add some indices on tables, and I set them to InnoDB (my DB is old, and MyISAM was default then).
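As an illustration of that kind of fix – the table and index names below are just examples, not the ones actually changed:

```sql
-- Convert a MyISAM-era table to InnoDB
ALTER TABLE wp_options ENGINE=InnoDB;

-- Add an index on a column that slow queries (per Debug Queries) filter on
CREATE INDEX autoload_idx ON wp_options (autoload);
```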
@Otto: I also tried to get ARR caching to work, but to no avail (yet).
I installed WP Super Cache and that seems to have made a huge difference. I didn’t change any rewrite rules though. What did you have to change?
Great that it made a huge difference! If all goes well, you’re now serving cached .html files.
Do you use URL Rewrite (web.config), or something which parses .htaccess (ISAPI_Rewrite, Ape)? I had to change some rewrite conditions to get to that point.
See http://www.headcrash.us/blog/2010/11/using-wp-super-cache-with-iis-7-2/ and http://www.saotn.org/9-wordpress-plugins-you-need/
I have the Redirection plugin installed but I didn’t make any specific URL Rewrite changes. I assume it (SuperCache) is working because the TTFB has drastically improved.
Did it not work for you until you enabled additional rules? Or did you add rules for additional optimization?
I work on the pagespeed ports for both IIS and nginx – IIS is certainly performing well, and it still delivers really good APIs for module developers at the same time.
It would be nice if someone figured out how to cache parts of the WordPress site with Application Request Routing, to reduce TTFB to almost nothing for most hits (e.g., port the SuperCache/TotalCache rules to ARR). :-)
I’ve tried to get the ARR caching to work with my WP site but after many hours of tinkering, I couldn’t get it to work.
You are running PHP 5.3 on Server 2008 R2. I suggest you try PHP 5.5 with OpCache enabled and WinCache disabled.
See OpCache installation
We are getting 30-40% response time improvement.
For CentOS, APC helps.
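A minimal php.ini sketch of the OpCache suggestion above (the DLL name and values are typical Windows defaults, not taken from this particular setup):

```ini
; Enable Zend OPcache on Windows (bundled with PHP 5.5+)
zend_extension=php_opcache.dll
opcache.enable=1
; Shared memory for cached bytecode, in MB
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
; Re-check changed scripts at most once per minute
opcache.validate_timestamps=1
opcache.revalidate_freq=60
```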
Thanks. I’ll check that out for sure.
Your test is interesting, however it would be more interesting to see the performance of these three locations with 100 or more online users at a time…