I've gone through the sources of FastCGI (fcgi-2.4.) and there is really no sign of fork(). If I am correct, the web server spawns a process for the FastCGI module (compiled into it or loaded as an SO/DLL) and hands control of the main socket (the main port, usually TCP:80) over to it.

On *nix the FastCGI module "locks" that socket using a file write lock (libfcgi/os_unix.c:989) over the whole file descriptor (the listen socket, in fact); this way, when new connections come in, only the FastCGI module is able to process them. The lock on the incoming socket is released right before handing over to the HTTP request processing.
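As a rough illustration of that pattern, here is a minimal sketch of serializing accept() behind an fcntl write lock. This is not the actual libfcgi code (accept_serialized and the error handling are my own simplification of what os_unix.c does), just the general shape:

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int accept_serialized(int listen_fd)
    {
        struct flock lock;
        lock.l_type   = F_WRLCK;   /* exclusive write lock */
        lock.l_whence = SEEK_SET;
        lock.l_start  = 0;
        lock.l_len    = 0;         /* length 0 = lock the whole descriptor */

        /* block until this process owns the lock */
        while (fcntl(listen_fd, F_SETLKW, &lock) < 0)
            if (errno != EINTR)
                return -1;

        int conn_fd = accept(listen_fd, NULL, NULL);

        lock.l_type = F_UNLCK;     /* release before processing the request,
                                      so a sibling process can accept next */
        fcntl(listen_fd, F_SETLK, &lock);
        return conn_fd;
    }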

Since the FastCGI module itself isn't multi-process/multi-threaded (no internal use of fork/pthread_create), I suppose the concurrent handling of multiple simultaneous connections is obtained by having the web server spawn (through OS_SpawnChild) n FastCGI module processes. If we spawn, for example, 3 FastCGI processes (Apache calls OS_SpawnChild 3 times), does that mean we can have at most 3 requests served at the same time?

A) Is my understanding of how FastCGI works correct?

B) If the cost for the OS to spawn a new process / open a connection to a local DB can be considered negligible, what are the benefits of FastCGI over a traditional executable (plain CGI) approach?

Thanks, Ema! :-)

FastCGI-spawned processes are persistent: they are not killed when the request has been handled, they are pooled instead.

The speed gain of FastCGI over normal CGI is that the processes are persistent. E.g. if you have any database handles to open, you can open them once. The same goes for any caching.

The main gain comes from not having to create a new php/perl/etc. interpreter every time, which takes a surprising amount of time.
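In C terms, the canonical FastCGI program shape looks roughly like the sketch below, using libfcgi's fcgi_stdio.h wrapper (link with -lfcgi). Everything before the accept loop is paid once per process, not once per request; the DB connection mentioned in the comment is a hypothetical example of such one-time setup:

    #include <unistd.h>      /* getpid */
    #include <fcgi_stdio.h>  /* redefines printf etc. to talk FastCGI */

    int main(void)
    {
        /* one-time setup lives here and survives across requests,
           e.g. opening your database connection */
        int count = 0;

        while (FCGI_Accept() >= 0) {   /* blocks until the next request */
            printf("Content-type: text/plain\r\n\r\n");
            printf("Request #%d served by pid %d\n", ++count, (int)getpid());
        }
        return 0;
    }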

If you want multiple concurrent connections handled, you need multiple FastCGI processes running. FastCGI is not a way of handling more connections through some kind of special concurrency. It is a way to speed up individual requests, which in turn allows handling more requests. But you are right: more concurrent requests require more processes running.
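To make the "n processes = n concurrent requests" point concrete, here is a hedged sketch of a pre-forking FastCGI app using libfcgi's FCGX_* API. NUM_WORKERS and worker_loop are illustrative names; a web server calling OS_SpawnChild n times achieves the same layout externally, with each worker serializing on the shared listen socket as described above:

    #include <fcgiapp.h>     /* FCGX_* API; link with -lfcgi */
    #include <sys/wait.h>
    #include <unistd.h>

    #define NUM_WORKERS 3    /* one in-flight request per worker */

    static void worker_loop(void)
    {
        FCGX_Request req;
        FCGX_InitRequest(&req, 0, 0);       /* fd 0: listen socket inherited
                                               from the web server */
        while (FCGX_Accept_r(&req) >= 0) {
            FCGX_FPrintF(req.out, "Content-type: text/plain\r\n\r\n"
                                  "served by pid %d\n", (int)getpid());
            FCGX_Finish_r(&req);
        }
    }

    int main(void)
    {
        FCGX_Init();
        for (int i = 0; i < NUM_WORKERS; i++)
            if (fork() == 0) {              /* child: accept loop forever */
                worker_loop();
                _exit(0);
            }
        while (wait(NULL) > 0)              /* parent: just reap children */
            ;
        return 0;
    }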

Ok,

so (A) is settled, now what about (B)? If I am talking about executables (properly compiled C/C++ programs, not scripts like perl/php/...), and if we consider the process spawn cost and the cost of a new DB connection negligible, then this approach (FastCGI) is just a small gain compared to plain CGI executables?

I mean, given that Linux is pretty fast at spawning (forking) a process, and if the DB is running locally (e.g. MySQL on the same host), the time it takes to start a new executable and connect to the DB is virtually zero. In that situation, with nothing to be interpreted, only Apache C/C++ modules would be faster than this.

With the FastCGI approach you are also more exposed to memory leaks, since the process is not forked/restarted each time... At this point, if you have to develop your CGI in C/C++, wouldn't it be better to use old-fashioned CGI and/or Apache C/C++ modules directly?

Again, I am not talking about scripts (perl/php/...), I am talking about compiled CGI.
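For reference, the plain-CGI counterpart I have in mind is an ordinary executable like the sketch below, which the server fork/execs once per request, so any setup cost (a DB connect, for instance) is paid every single time:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* per-request setup would happen here, e.g. connecting to the DB */
        const char *qs = getenv("QUERY_STRING");
        printf("Content-type: text/plain\r\n\r\n");
        printf("QUERY_STRING=%s\n", qs ? qs : "");
        return 0;
    }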

Many thanks, Cheers, Ema! :-)

B: yes, IF the cost of spawning is zero, then legacy CGI would be perfectly fine. So if you don't have a lot of hits, common CGI is okay; go with it. The point of FastCGI is doing things that take advantage of a lot of persistent state, or structures that must be built before your work gets done, like running queries against large databases, where you want to keep the DB libraries in memory rather than having to reload the whole shebang each time you need to run a query.

It matters when you have LOTS OF HITS.