Java is the language for building large web applications. One reason Java is better than C is that it's garbage collected, which makes programming easier. But garbage collection isn't free; it can carry significant overhead.
Web applications have a simple memory allocation pattern. An HTTP request comes in, some work is done in a thread over a second or two, and then you're done. Unless you're using session objects heavily (and you shouldn't), at the end of a request there's really nothing more in server memory than there was at the beginning.

This simplicity suggests a better memory management strategy for web applications: allocate all objects in a zone specific to the HTTP request. Don't bother reclaiming memory during a request; when the request is done, just drop the entire zone at once. Poof. This won't work if you need to reclaim some of the working set during a request, but I think that's unnecessary for many apps.

I wonder if anyone's done this? It'd be easy enough to code in a C web application framework, but impossible in Java unless you can modify the JVM. Python sort of implicitly does this, since its reference counting reclaims most objects as soon as the last reference to them goes away.
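To make the idea concrete, here's a minimal sketch of what per-request zone allocation could look like in C. All the names here (zone_t, zone_create, zone_alloc, zone_destroy, handle_request) are hypothetical, invented for illustration rather than taken from any real framework:

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct {
    char  *base;  /* start of the zone's memory block */
    size_t used;  /* bump-pointer offset */
    size_t size;  /* total capacity */
} zone_t;

zone_t *zone_create(size_t size) {
    zone_t *z = malloc(sizeof *z);
    if (!z) return NULL;
    z->base = malloc(size);
    if (!z->base) { free(z); return NULL; }
    z->used = 0;
    z->size = size;
    return z;
}

/* Allocation is just a pointer bump: no free lists, no per-object
 * bookkeeping, no collector pauses. */
void *zone_alloc(zone_t *z, size_t n) {
    n = (n + 7) & ~(size_t)7;               /* keep 8-byte alignment */
    if (z->used + n > z->size) return NULL; /* a real zone would grow here */
    void *p = z->base + z->used;
    z->used += n;
    return p;
}

/* End of request: reclaim everything in one shot. Poof. */
void zone_destroy(zone_t *z) {
    free(z->base);
    free(z);
}

void handle_request(void) {
    zone_t *z = zone_create(1 << 20);  /* say, 1 MB per request */
    char *buf = zone_alloc(z, 8192);   /* allocate freely during the request */
    /* ... do the request's work ... */
    (void)buf;
    zone_destroy(z);                   /* drop the whole zone at once */
}
```

The payoff is that deallocation is a single free of the zone rather than thousands of individual frees or a GC pass, which is exactly why the pattern fits request/response servers so well.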
Update: thanks to Tim for pointing out Apache does this in its C code.
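For reference, this is roughly what Apache's version of the idea looks like with its pool allocator (the APR pool API used by Apache 2.x); a hedged sketch, assuming the standard apr_pools.h interface:

```c
#include <apr_general.h>
#include <apr_pools.h>

int main(void) {
    apr_initialize();

    /* Apache creates a pool per request; everything the request
     * allocates comes out of it. */
    apr_pool_t *request_pool;
    apr_pool_create(&request_pool, NULL);

    char *buf = apr_palloc(request_pool, 4096);
    (void)buf;

    /* When the request finishes, the whole pool is torn down at once. */
    apr_pool_destroy(request_pool);

    apr_terminate();
    return 0;
}
```

Inside an actual Apache module you wouldn't create your own pool; you'd allocate from the pool Apache hands you with each request (r->pool on the request_rec), which the server destroys for you when the request completes.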