This vulnerability is about as bad as it gets, and my heart stopped while I was reading the intro (it's so trivially simple to compromise a site).
Then I reached this sentence, which I felt needed to be bolded and underlined:
A large number of sites run PHP as either an Apache module through mod_php or using php-fpm under nginx. Neither of these setups are vulnerable to this.
<insert immense sigh of relief>. Thank God.
That said, some blackhats are going to be really sore that a backdoor that's been wide open for "at least 8 years" is finally being closed.
The vulnerability can only be exploited if the HTTP server follows a fairly obscure part of the CGI spec. Apache does this, but many other servers do not.
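For reference, the rule in question is RFC 3875 section 4.4: if the query string contains no unencoded "=", the server may split it on "+", decode the words, and pass them to the CGI program as command-line arguments. A minimal Python sketch of that behavior (the function name is mine, not from any real server):

```python
from urllib.parse import unquote

def cgi_argv(query_string):
    """Sketch of RFC 3875 sec. 4.4: if the raw query string contains no
    unencoded '=', split it on '+' and pass the decoded words to the
    CGI program as command-line arguments. Otherwise, no arguments."""
    if "=" in query_string:  # check is on the RAW string, as Apache does it
        return []
    return [unquote(word) for word in query_string.split("+")]

# '?-s' becomes the argument '-s', which tells php-cgi to print its own
# source instead of executing it:
print(cgi_argv("-s"))                        # ['-s']

# An encoded '=' (%3d) survives the raw check, so ini directives can be
# injected too, which is the root of the code-execution variant:
print(cgi_argv("-d+allow_url_include%3d1"))  # ['-d', 'allow_url_include=1']
```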
You're right, my title was a bit sensational. I softened it a bit by appending mod_cgi. I think it is very telling that the PHP core team recognizes that the people using mod_cgi probably can't upgrade, so they're offering a .htaccess adjustment - very commendable.
It basically only affects PHP as CGI (one PHP CGI process is started and stopped for each request). Anything using an alternative communication channel or API to process requests between frontend and backend is safe (for now).
Anyone using a php CGI app could easily have more serious problems than source code disclosure. Those kinds of apps have often been around over a decade with little or no modifications or auditing, because if someone cared enough about the apps to maintain or audit them it's likely they'd have moved to a more performant backend.
Exposing source code is the least of your problems. With creative use of command-line arguments, you can run arbitrary PHP code via any vulnerable URL.
It took me a bit to figure out _how_, but it's nothing obscure or difficult. In fact it relies on _other_ bozotic PHP behavior to work!
The point of the question here is if anybody remembers why we decided not
to parse command line args for the cgi version? I could easily see it
being useful to be able to write a cgi script like:
#!/usr/local/bin/php-cgi -d include_path=/path
<?php
...
?>
and have it work both from the command line and from a web context.
As far as I can tell this wouldn't conflict with anything, but somebody at
some point must have had a reason for disallowing this.
Perfectly illustrating the utility of well-chosen comments in code.
No, the problem is that they check the decoded query string for `=` signs, but Apache checks the raw query string. If you pass an encoded `=` anywhere in the query string then you can bypass the fix.
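A small sketch of that mismatch (the variable names are illustrative, and the exact form of the original patch may differ):

```python
from urllib.parse import unquote

RAW_QUERY = "-d+auto_prepend_file%3dphp://input"

# Apache's rule (RFC 3875): pass the query as command-line arguments
# only if the RAW string contains no '='. %3d hides the '=' from it.
apache_passes_as_args = "=" not in RAW_QUERY   # True

# The attempted fix reportedly checked the DECODED string instead, so
# it concluded "this is an ordinary query string" exactly in the case
# where Apache had already handed the words to php-cgi as arguments.
decoded = unquote(RAW_QUERY.replace("+", " "))
fix_sees_equals = "=" in decoded               # True, so the fix bails out

print(apache_passes_as_args, fix_sees_equals)
```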
As has been mentioned, using CGI for php is quite outdated so it probably doesn't impact that many sites, that said this sort of vulnerability is exactly why you should put all but the minimum front controller PHP in a folder that's outside of the public folder your site is being served from.
FastCGI makes little to no sense on shared web hosting machines. With FastCGI, each user on the machine needs at least one long-running process to handle requests. This is a waste when you consider that large numbers of the sites may be idle 99% of the time. With CGI, you only have PHP processes running when they are actually handling requests, it's really a much better solution for shared hosting.
With PHP 5.3.9 or 5.3.10, php-fpm introduced a new process-management mode called ondemand that handles this use case very well. With a very short idle timeout (a few seconds) it could work well for shared hosting, but I doubt many folks are using it, since it still requires the master process manager to stay running in order to start the workers (this makes it hard to have user-configurable php.ini settings, since changing them would require restarting the master).
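For illustration, a per-user pool along those lines might look like this (the directive names are from the php-fpm docs; the pool name, paths, and values are made up):

```ini
; Illustrative php-fpm pool using on-demand workers (PHP >= 5.3.9)
[user1]
user = user1
group = user1
listen = /var/run/php-fpm-user1.sock
pm = ondemand
pm.max_children = 5           ; cap on concurrent workers for this user
pm.process_idle_timeout = 5s  ; reap idle workers after a few seconds
```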
You can send a signal to php-fpm to reload configuration without restarting (at least, the way I read it). My /etc/init.d/php-fpm has "reload" with `kill -USR2 $PHP_PID`
What is it wasting? If the process is idle, then it is only taking up space in memory.
If that space isn't needed, then the cost is nil (it takes the same amount of power to store a 1 as a 0; the real power cost is in moving data in and out of memory, not in storing it).
If the space is needed, then the idle process can be swapped out to disk. So again, no practical cost.
FastCGI may not be the best solution, but CGI is not an improvement (the overhead of starting a new runtime to handle every request will introduce a lot of latency, which will be especially noticeable on pages that make a lot of asynchronous requests).
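To put a rough number on that overhead, here's a quick Python comparison of spawning a fresh interpreter per request (the CGI model) versus running code in an already-live process (the FastCGI model); the absolute figures will vary by machine:

```python
import subprocess
import sys
import time

# "CGI": every request pays fork/exec plus interpreter start-up.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
spawn_cost = time.perf_counter() - start

# "FastCGI": the runtime is already up; only the request work remains.
start = time.perf_counter()
exec("pass")
inproc_cost = time.perf_counter() - start

print(f"spawn: {spawn_cost:.4f}s  in-process: {inproc_cost:.6f}s")
```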
Caveat: As they say, one good test is worth a thousand expert opinions, and I'm no expert.
I'm inclined to say yes, it would still have a benefit. My thoughts:
- Swapping the FastCGI process back in shouldn't be any worse than loading a CGI process cold and initializing it (disk caching will probably be no help to CGI here: with so little free memory, the cache will be small or nonexistent and aggressively purged);
- Once swapped in, the FastCGI process will be able to handle multiple requests in less time and less memory than it would take just to start the many CGI processes necessary to do the same work.
Also, if the performance of your server is an issue, your FastCGI process should never be in a situation where it would be swapped out. You need to add more memory, and/or reduce the other loads on your server.
"we had a bug in our bug system that toggled the private flag of a bug report to public on a comment to the bug report causing this issue to go public before we had time to test solutions to the level we would like" Thats having a really bad day!