Uploading huge files to Apache 2.4 + mod_fcgid
There was a time when I thought “Setting up a webserver? That’s easy!” Well, maybe not with fancy optimizations to shave yet another microsecond off the mean response time, but, you know, for the usual shared webserver.
Well, not only was php-fpm harder than expected (impossible, in fact), but so was something as old as Web 2.0 itself: file uploading.
Our newest webserver, Apache 2.4 with mod_fcgid talking to PHP, would refuse to accept files larger than around 512 MB, no matter which values we tried for
FcgidMaxRequestLen and the various PHP settings. The Apache thread would reproducibly segfault and produce a coredump of several hundred MB, with a backtrace beginning with something like “file_bucket_read in /build/buildd/apr-util-1.5.3/buckets/apr_buckets_file.c:125”.
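For reference, these are the kinds of settings we had already raised to no avail; the values shown here are illustrative, not a recommendation:

```apache
# Apache vhost / mod_fcgid: maximum request body size, in bytes
FcgidMaxRequestLen 1073741824

# PHP-side equivalents (these live in php.ini, shown here as a reminder):
#   upload_max_filesize = 1G
#   post_max_size       = 1G
```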
Almost two hours of debugging later, stracing both PHP and Apache at the moment of the crash, I finally found the problem. mod_fcgid correctly accepts the POST request, writes it to
/tmp/fcgid.tmp.* (because it is larger than
FcgidMaxRequestInMem) and then executes
php-cgi, but it then has to “pipe” the whole request to PHP. For that it uses
mmap() – normally a good idea if you want to speed things up, or so I have heard. But it maps the file in chunks of 8 KiB (8192 bytes), which results in 65536 mmap’ed regions by the time you reach 512 MB, plus a few more for I-don’t-know-what.
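You can watch this happen: each line of a process’s /proc/&lt;pid&gt;/maps describes one mapped region, so counting the lines tells you how many mappings it currently holds. A quick sketch (inspecting the shell’s own process as a stand-in; substitute the PID of the affected Apache worker):

```shell
# Each line in /proc/<pid>/maps is one memory mapping.
# $$ is this shell's PID; replace it with the Apache worker's PID
# to watch its mapping count climb during a large upload.
wc -l < /proc/$$/maps
```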
Now, who would have thought it: there is a limit on the number of mmap’ed areas.
man 2 mmap regrettably only says:
ENOMEM No memory is available, or the process’s maximum number of mappings would have been exceeded.
It does not say what this “maximum number of mappings” is, what it usually is, or how to find or even set it. Well, a quick search turned up that it is the
vm.max_map_count sysctl, which (at least on Ubuntu 14.04) defaults to 65530 – pretty close to our estimate of 65536 mmaps for 512 MB.
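Checking the current value of the limit on your own machine is straightforward:

```shell
# Read the current per-process limit on memory mappings
cat /proc/sys/vm/max_map_count
# equivalently: sysctl vm.max_map_count
```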
So, as usual, a horribly hard-to-debug problem had an almost trivial solution:
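Raise vm.max_map_count. The value 262144 below is just an example of a comfortably larger limit; pick whatever fits your upload sizes (at 8 KiB per mapping, a 1 GiB upload alone needs 131072 mappings).

```shell
# Raise the per-process mapping limit at runtime (run as root).
# 262144 is an example value: 1 GiB / 8 KiB = 131072 mappings,
# so this leaves plenty of headroom.
sysctl -w vm.max_map_count=262144

# Make the change persistent across reboots:
echo 'vm.max_map_count = 262144' >> /etc/sysctl.conf
```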