Please don't use mod_perl unless you know that you have a very good reason to do so rather than reaching for a more modern solution.
Mod_perl was a great technology in its day, from the late 1990s through the mid 2000s (when, you'll notice, the last review here was added). Apache::Registry all by itself provided a great boost to Perl-based web solutions. But that was a long time ago.
Since then, the Perl community has effectively moved away from mod_perl in favor of Plack, which itself works great with the Apache web server (among others). Please closely examine Plack, especially for new projects, but also when considering ways to quickly adapt and modernize old CGI-based codebases.
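To give a flavour of how little ceremony Plack needs, here is a minimal PSGI application (the filename and handler are illustrative, not from any particular project):

```perl
# app.psgi - a minimal PSGI application; run it with `plackup app.psgi`
use strict;
use warnings;

my $app = sub {
    my $env = shift;    # the PSGI environment hash, much like %ENV under CGI
    return [
        200,                                  # HTTP status
        [ 'Content-Type' => 'text/plain' ],   # response headers
        [ "Hello from Plack\n" ],             # response body
    ];
};

$app;    # a .psgi file must return the application coderef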
If you need to run a website which uses Perl, it is worthwhile looking into having part of your site served by mod_perl. It speeds up your Perl considerably: scripts are compiled once and cached by a Perl interpreter embedded in the Apache web server, rather than re-compiled on every request as with plain CGI.
There is a lack of paper documentation for the Apache 2.0 mod_perl, but the POD has improved tremendously in the past few years, the website is not at all bad, and there are plenty of helpful mailing lists around.
Ignore any review that downgrades mod_perl by comparing it to content-phase-only solutions, like mod_php or FastCGI. mod_perl allows you to script the entire Apache web page delivery cycle, from how the headers are interpreted down to how to log the response.
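As a sketch of what "the entire delivery cycle" means, here is a hypothetical mod_perl 2 handler for the logging phase (the module name and what it logs are made up for illustration; it would be wired in with `PerlLogHandler MyApp::LogHandler` in httpd.conf):

```perl
# MyApp/LogHandler.pm - a mod_perl 2 handler hooked into Apache's log phase
package MyApp::LogHandler;
use strict;
use warnings;
use Apache2::RequestRec ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;    # the Apache request object for this request
    # Add our own note to the error log, alongside normal access logging
    $r->log_error( "handled " . $r->uri . " -> status " . $r->status );
    return Apache2::Const::OK;    # let the remaining log handlers run too
}

1;
```

Handlers like this can be attached to any phase — URI translation, authentication, response generation, logging — which is something mod_php and FastCGI simply cannot do.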
Allow me to go against the grain here, but the only thing that mod_perl has going for it is that it is widely known and a lot of material and modules have been written for it.
Architecturally, it's a poor choice for building dynamic web applications. The problem is that the same processes serve static files as well as generating dynamic pages. This means that while images etc. are being served to clients by Apache processes, all of the memory consumed by the embedded Perl interpreter in each of those processes is essentially wasted.
The usual workarounds involve one of the following:
* hacking your application to fork() at just the right moment (which helps but doesn't address the real problem)
* setting up a Squid `reverse proxy' (Squid, using a select() loop model, is a far more efficient web server than Apache)
* running another version of Apache that the first one forwards requests to - the two processes communicating via HTTP.
* using Apache/mod_perl 2.0 and hoping that the threading model is stable enough for you.
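The third option above looks roughly like this in the front-end Apache's configuration (ports and paths are illustrative): a lightweight front end serves static files itself and forwards dynamic requests to a heavy mod_perl back end on another port.

```
# Front-end httpd.conf (requires mod_proxy): static files are served
# locally; anything under /app/ is forwarded to the mod_perl back end
# listening on 127.0.0.1:8080, and response headers are rewritten back.
ProxyPass        /app/ http://127.0.0.1:8080/app/
ProxyPassReverse /app/ http://127.0.0.1:8080/app/
```

This works, but as the review says, you are now running and tuning two web servers and gluing them together over HTTP.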
It is also not possible (without installing a separate Apache instance for each user) to use the Unix process model to separate scripts for different web sites so that they run with the permissions of different users. Apache's `suexec' feature is incompatible with mod_perl and effectively reduces you to plain CGI performance.
It is widely accepted that once you hit a certain traffic level, you need to separate front-end processing from dynamic page generation, and HTTP was not really designed for this; it works most of the time, but the integration is far from seamless.
People mention that some of the biggest sites in the world (Slashdot seems to be a common example, for some reason) use mod_perl, but they do not mention the blood, sweat and tears that were involved. Actually, the biggest sites in the world use extremely small web server processes, and binary-optimized protocols to talk to the real applications - these include NSAPI (SunOne/Netscape iPlanet), ISAPI (IIS), etc.
There is a binary-optimized Open Market standard that was designed for just this - FastCGI. See the FCGI module on CPAN and www.fastcgi.com/. FastCGI typically adapts quickly into web application development frameworks, and can increase performance markedly whilst requiring a lot less memory. FastCGI works with Apache, and the performance is on a par with mod_perl (actually, the performance is typically less subject to fluctuations).
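The core of a FastCGI script is a simple accept loop — a minimal responder using the FCGI module from CPAN might look like this (it needs a FastCGI-aware web server in front of it to do anything useful):

```perl
# A minimal FastCGI responder using the FCGI module from CPAN.
use strict;
use warnings;
use FCGI;

my $request = FCGI::Request();    # binds to the socket the server hands us
my $count   = 0;

# The script stays resident: the loop body runs once per incoming request,
# so compilation costs are paid only once - the same win as mod_perl.
while ( $request->Accept() >= 0 ) {
    $count++;
    print "Content-Type: text/plain\r\n\r\n";
    print "Request number $count served by PID $$\n";
}
```

Because the process persists between requests, per-request state (like `$count` here) survives, and there is no embedded interpreter sitting idle in the processes that serve static files.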
It is for these reasons that I hold my opinion that Apache/mod_perl will one day be seen in the same light as Sendmail - filled with the cruft of years of creeping featuritis.