If you need to run a website that uses Perl, it is worth looking into having part of your site served by mod_perl, which speeds up your Perl dramatically by embedding a persistent interpreter in the Apache webserver, so scripts are compiled once and cached rather than recompiled on every request.
There is a lack of printed documentation for mod_perl under Apache 2.0, but the POD has improved tremendously in the past few years, the website is not at all bad, and there are plenty of helpful mailing lists around.
5 out of 9 found this review helpful.
Ignore any review that downgrades mod_perl by comparing it to content-phase-only solutions such as mod_php or FastCGI. mod_perl lets you script the entire Apache request cycle, from how the headers are interpreted down to how the response is logged.
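To illustrate what a non-content phase looks like, here is a minimal sketch of a log-phase handler, assuming mod_perl 2.x; the package name MyApp::Logger is made up for illustration.

```perl
# Hypothetical example package; hooked in with "PerlLogHandler MyApp::Logger"
# in httpd.conf.
package MyApp::Logger;

use strict;
use warnings;
use Apache2::RequestRec ();              # exposes $r->uri, $r->status
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;
    # Runs in the log phase, after the response has been sent to the client.
    warn sprintf "served %s with status %d\n", $r->uri, $r->status;
    return Apache2::Const::OK;
}

1;
```

A handler of the same shape can be attached to almost any other phase (PerlAccessHandler, PerlAuthenHandler, PerlFixupHandler, and so on), which is exactly the flexibility that content-phase-only systems lack.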
10 out of 11 found this review helpful.
mod_perl offers complete access to the Apache C API in Perl. For more on what that means, visit perl.apache.org/start/index.html
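Concretely, that means Apache's internal C structures (the request record, the connection record, the APR tables) are exposed directly as Perl objects. A sketch, assuming mod_perl 2.x on Apache 2.0:

```perl
# Hypothetical handler body showing C-API-level access from Perl.
use strict;
use warnings;
use Apache2::RequestRec ();              # the request_rec structure
use Apache2::Connection ();              # the conn_rec structure
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;
    my $agent  = $r->headers_in->{'User-Agent'};   # apr_table_t lookup
    my $client = $r->connection->remote_ip;        # field of conn_rec
    $r->content_type('text/plain');
    $r->print("hello, $client ($agent)\n");
    return Apache2::Const::OK;
}
```

Anything a C module can read or set on the request, a Perl handler can too.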
5 out of 6 found this review helpful.
Allow me to go against the grain here, but the only thing that mod_perl has going for it is that it is widely known and a lot of material and modules have been written for it.
Architecturally, it's a poor choice for building dynamic web applications. The problem is that the same processes serve static files as well as generating dynamic pages. This means that while images etc. are being served to clients by Apache processes, all of the memory consumed by the Perl interpreter in each of those processes is essentially wasted.
The answer involves one of:
* hacking your application to fork() at just the right moment (which helps but doesn't address the real problem)
* setting up a Squid `reverse proxy' (Squid, with its select() loop model, is a far more efficient server of static content than Apache)
* running a second Apache that the first one forwards requests to, the two processes communicating via HTTP
* using Apache/mod_perl 2.0 and hoping that the threading model is stable enough for you.
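For concreteness, the two-Apache arrangement looks roughly like this in httpd.conf; this is a sketch assuming mod_proxy is loaded, and the port number, path, and handler name are illustrative only.

```apache
# --- frontend Apache (lightweight, no mod_perl) ---
ProxyPass        /app http://127.0.0.1:8080/app
ProxyPassReverse /app http://127.0.0.1:8080/app

# --- backend Apache (heavy mod_perl processes, loopback only) ---
Listen 127.0.0.1:8080
<Location /app>
    SetHandler perl-script
    PerlResponseHandler MyApp::Handler   # hypothetical handler package
</Location>
```

The frontend's small processes handle images and slow clients, so the large mod_perl processes are tied up only for the moments they are actually generating pages.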
It is also not possible (without installing a separate Apache instance for each user) to use the Unix process model to run scripts for different web sites under the permissions of different users. Apache's `suexec' feature is incompatible with mod_perl and effectively reduces you to plain CGI performance.
It is widely accepted that once you hit a certain traffic level, you need to separate front-end processing from building dynamic page requests, and HTTP was not really designed for this; it works most of the time, but the integration is far from seamless.
People mention that some of the biggest sites in the world (Slashdot seems to be a common example, for some reason) use mod_perl, but they do not mention the blood, sweat and tears that were involved. Actually, the biggest sites in the world use extremely small web server processes, and binary-optimized protocols to talk to the real applications - these include NSAPI (SunOne/Netscape iPlanet), ISAPI (IIS), etc.
There is a binary-optimized open standard from Open Market that was designed for just this - FastCGI. See the FCGI module on CPAN and www.fastcgi.com/. FastCGI typically adapts quickly into web application development frameworks, and can increase performance markedly whilst requiring a lot less memory. FastCGI works with Apache, and the performance is on a par with mod_perl (and typically less subject to fluctuation).
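The FastCGI programming model is simple: the script becomes a long-lived responder loop. A minimal sketch using the FCGI module from CPAN (it must run under a FastCGI-aware server such as Apache with mod_fastcgi, not as a plain CGI):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use FCGI;    # the CPAN FCGI module

my $request = FCGI::Request();
my $count   = 0;

# Accept() blocks until the web server hands over a request; the Perl
# interpreter, and anything compiled at startup, persists across requests.
while ($request->Accept() >= 0) {
    $count++;
    print "Content-Type: text/plain\r\n\r\n";
    print "Request number $count served by PID $$\n";
}
```

Because the process outlives any single request, you get mod_perl-style persistence while the web server itself stays small.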
It is for these reasons that I hold my opinion that Apache/mod_perl will one day be seen in the same light as Sendmail - filled with the cruft of years of creeping featuritis.
10 out of 13 found this review helpful.