I know this may seem like a question for ServerFault, but I realize that developers often get the blame when servers are struggling, so I thought a post here might be useful to those who still use Perl on the web.

The story:

We had serious problems with defunct processes on our old Apache server, so we decided to move to Apache 2. The new server performs far better, no denying that. Tests reveal, however, that under heavy load (~100 users per minute) defunct processes quickly start piling up on the server, and over SSH it's obvious these processes are using the CPU. To overcome these problems we decided to implement CGI::Fast, which is a form of FastCGI in Perl. With that in place the zombies are gone, but performance-wise the server isn't coping much better.

The results led me to believe there isn't much point in implementing CGI::Fast if Apache 2 will effectively reclaim the resources anyway.

Has any one of you come to a different conclusion?

In my opinion it isn't worth moving to anything but a PSGI/Plack-based solution this year, independent of any other details you mentioned.

  • The Plack ecosystem is much bigger than FastCGI's. (advantage)
  • You can deploy to many more servers than FastCGI supports. (advantage)
  • You can run CGI scripts unmodified with e.g. Plack::App::CGIBin, whereas CGI::Fast requires a rewrite. (advantage)
  • Rewriting a CGI program to conform to the PSGI interface takes a little more effort than rewriting it to CGI::Fast. (disadvantage)
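To give a sense of what the PSGI interface looks like, here is a minimal sketch of a Plack application. It assumes Plack is installed from CPAN; the greeting text and file name `app.psgi` are illustrative, not from the original post. A PSGI app is just a code reference that receives the request environment and returns a three-element array reference.

```perl
# Minimal PSGI application, conventionally saved as app.psgi.
# Serve it with: plackup app.psgi  (listens on localhost:5000 by default)
use strict;
use warnings;

my $app = sub {
    my $env = shift;    # hashref with CGI-like keys: PATH_INFO, REQUEST_METHOD, ...
    return [
        200,                                      # HTTP status
        [ 'Content-Type' => 'text/plain' ],       # headers as an arrayref
        [ "Hello from PSGI, path: $env->{PATH_INFO}\n" ],  # body lines
    ];
};

$app;   # the last expression of a .psgi file must be the app coderef
```

Because the app is a plain coderef, the same file runs unchanged under plackup's standalone server, Starman, or an FastCGI/CGI adapter.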

FastCGI is faster than plain CGI because Apache does not need to load perl for every new request. However, the scripts have to be reworked to remove the assumption that they are executed once per request. A FastCGI script at its core is typically built around some form of event loop, processing requests as they come in.
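The loop in question is the canonical CGI::Fast pattern: one-time setup runs when the worker process starts, and the `while` loop then handles each incoming request. This sketch assumes the CGI::Fast and FCGI modules are installed and the script is deployed under a FastCGI-enabled server; the counter is just an illustration of per-process state.

```perl
use strict;
use warnings;
use CGI::Fast;

# One-time initialization: runs once per worker process, not per
# request. This is where you would open DB handles, load config, etc.
my $counter = 0;

# Each CGI::Fast->new call blocks until the next request arrives;
# it returns undef when the server tells the worker to exit.
while ( my $q = CGI::Fast->new ) {
    $counter++;    # state persists across requests within this process
    print $q->header('text/plain');
    print "Request number $counter handled by PID $$\n";
}
```

Run outside a FastCGI environment, CGI::Fast falls back to handling a single plain-CGI request, which is convenient for testing.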

You can use CGI::Fast for plain CGI scripts without reworking the script around an event loop, but you lose the "Fast" part of FastCGI that way, as perl still has to be run once for each script.

FastCGI also only provides a large benefit when the biggest part of your CGI script's runtime is loading perl or executing one-time code. For many web applications, this is true. However, if your script has to perform a large amount of work for each request, such that the overhead of loading perl is small by comparison, then you won't see a large performance benefit from FastCGI.


CGI was too inefficient for anything but small sites. It spawns a new process for each incoming request to execute a script, a very resource-intensive and inefficient way of doing things. No surprise it faded away over time as web applications became more complex.

FastCGI was introduced to avoid some of the problems with running languages inside the Apache process, as well as to avoid the inefficiency of CGI.

A FastCGI application is executed outside of the web server (Apache or otherwise), and waits for requests from the web server on a socket. The web server and the FastCGI application can even run on separate physical machines and communicate over the network.

Because the web server and the application processes are separate, better isolation is possible.
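As a rough illustration of that separation, this is roughly how Apache 2.4 can relay requests to a FastCGI daemon over a TCP socket using mod_proxy_fcgi. The `/app` path and port 9000 are hypothetical; module paths vary by distribution.

```apache
# Load the proxy modules (paths differ per distribution).
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

# The FastCGI application runs as its own process, possibly on another
# host, listening on port 9000. Apache only relays requests to it, so a
# crash in the application cannot take down the web server process.
ProxyPass "/app" "fcgi://127.0.0.1:9000/"
```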