Can someone explain how a server handles different HTTP requests at any given time? If 10 clients are browsing a site and request a webpage simultaneously, what happens?

Usually, each of the clients sends an HTTP request for the page. The server receives the requests and assigns them to different workers (processes or threads).

Depending on the URL given, the server reads a file and sends it to the client. If the file is a dynamic file such as a PHP file, the file is executed before it is sent to the client.
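
To make the worker idea concrete, here is a minimal Python sketch (my own illustration, not how Apache or PHP actually works): a threaded server listens on one port, hands each incoming request to its own thread, and serves files from the current directory. The port number is an arbitrary choice.

```python
# Minimal sketch: one listening socket, one worker thread per request.
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler maps the request URL to a file on disk and sends it,
# much like a server answering requests for static pages.
server = ThreadingHTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)

# serve_forever() accepts connections in a loop; ThreadingHTTPServer spawns a
# new thread per request, so 10 simultaneous clients get 10 threads.
server.serve_forever()
```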

Once the requested file has been sent back, the server usually closes the connection after a few seconds.

For more, see: HowStuffWorks Web Servers

HTTP uses TCP, which is a connection-based protocol. That is, clients establish a TCP connection while they are communicating with the server.

Multiple clients are allowed to connect to the same destination port on the same destination machine at the same time. The server just opens up multiple simultaneous connections.
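
Here is a small Python sketch of what each client does: open a TCP connection to the server's port and send an HTTP request over it. Several clients (or several runs of this script) can do this against the same host and port at once. The host name and port are placeholders.

```python
# One client: open a TCP connection and speak HTTP over it.
import socket

HOST, PORT = "example.com", 80  # assumed host/port for illustration

with socket.create_connection((HOST, PORT)) as conn:
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = conn.recv(4096)  # read the first chunk of the reply

print(response.decode(errors="replace"))
```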

Apache (and many other HTTP servers) have a multi-processing module (MPM). This is responsible for allocating Apache threads/processes to handle connections. These processes or threads can then run in parallel on their own connection, without blocking each other. Apache's MPM also tends to keep open "spare" threads or processes even when no connections are open, which helps speed up subsequent requests.
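
As a rough analogy (not Apache's actual code), the sketch below pre-starts a fixed pool of worker threads that sit idle until connections arrive on a shared queue, so accepting a connection never waits for a worker to be created. The pool size and port number are arbitrary.

```python
# Analogy for an MPM: pre-started "spare" workers pull connections off a queue.
import queue
import socket
import threading

POOL_SIZE = 4          # plays the role of a configured worker limit
tasks: "queue.Queue[socket.socket]" = queue.Queue()

def worker() -> None:
    while True:
        conn = tasks.get()          # block until a connection is handed over
        with conn:
            conn.recv(1024)         # read the request (ignored in this sketch)
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        tasks.task_done()

# Pre-start the workers, like an MPM keeping spare threads around.
for _ in range(POOL_SIZE):
    threading.Thread(target=worker, daemon=True).start()

with socket.create_server(("0.0.0.0", 8001)) as srv:
    while True:
        conn, _addr = srv.accept()  # the listener only accepts and dispatches
        tasks.put(conn)
```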

The program ab (short for ApacheBench), which comes with Apache, lets you test what happens when you open multiple connections to your HTTP server at the same time.
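
If you don't have ab handy, the Python snippet below does something similar in spirit: it opens several connections to the server concurrently and times the responses. The URL and concurrency level are arbitrary placeholders, and this is not a replacement for ab's reporting.

```python
# Simple stand-in for ab: make N requests concurrently and time them.
import concurrent.futures
import time
import urllib.request

URL = "http://localhost:8000/"   # assumed target for illustration
CONCURRENCY = 10

def fetch(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    times = list(pool.map(fetch, range(CONCURRENCY)))

print(f"{CONCURRENCY} concurrent requests, slowest took {max(times):.3f}s")
```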

Apache's configuration files will usually set a limit on the number of simultaneous connections it will accept. This is set to a reasonable number, so that during normal operation the limit should never be reached.

Note too that the HTTP protocol (from version 1.1) allows a connection to be kept open, so that the client can make multiple HTTP requests before closing the connection, potentially reducing the number of simultaneous connections they need to make.
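
A quick Python illustration of keep-alive: several requests are sent over one TCP connection instead of opening a new connection per request. The host and paths are placeholders; whether the connection is actually reused also depends on the server honouring keep-alive.

```python
# HTTP/1.1 keep-alive: multiple requests over a single TCP connection.
import http.client

conn = http.client.HTTPConnection("example.com", 80)  # one TCP connection

for path in ("/", "/about", "/contact"):   # hypothetical paths
    conn.request("GET", path)              # reuses the same connection
    resp = conn.getresponse()
    body = resp.read()                     # read fully before the next request
    print(path, resp.status, len(body))

conn.close()
```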

More about Apache's MPMs:

Apache itself can use a number of different multi-processing modules (MPMs). Apache 1.x used a module called "prefork", which creates a number of Apache processes in advance, so that incoming connections can often be handed to an existing process. This works as I described above.
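
For a feel of the process-based model, here is a simplified Python sketch (Unix only) in which each connection is handled in a separate OS process. Note the simplification: real prefork creates the processes ahead of time and reuses them, whereas this forks a child per request; it only illustrates processes handling connections independently. The port is arbitrary.

```python
# Process-per-connection server: each request is handled in a forked child.
import socketserver
from http.server import HTTPServer, SimpleHTTPRequestHandler

class ForkingHTTPServer(socketserver.ForkingMixIn, HTTPServer):
    """HTTP server that forks a new process for every connection (Unix only)."""

server = ForkingHTTPServer(("0.0.0.0", 8002), SimpleHTTPRequestHandler)
server.serve_forever()
```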

Apache 2.x normally uses an MPM called "worker", which uses multithreading (running multiple execution threads within a single process) to achieve the same thing. The advantage of multithreading over separate processes is that threads are much more lightweight than separate processes, and may use a bit less memory. It is very fast.

The disadvantage of multithreading is that you can't run things like mod_php. When you're multithreading, all of your add-in libraries need to be "thread-safe" - that is, they need to be aware of running in a multithreaded environment. It's harder to write a multi-threaded application. Because threads within a process share memory/resources with each other, this can easily create race condition bugs, where threads read from or write to memory while another thread is in the middle of writing to it. Getting around this requires techniques such as locking. Many of PHP's built-in libraries are not thread-safe, so those wishing to use mod_php cannot use Apache's "worker" MPM.
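
A toy Python demonstration of the race-condition problem and the locking fix: two threads increment a shared counter, and the lock makes each read-modify-write step atomic. The counter and iteration counts are made up for illustration.

```python
# Shared state plus threads: why locking matters.
import threading

counter = 0
lock = threading.Lock()

def increment(n: int, use_lock: bool) -> None:
    global counter
    for _ in range(n):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1    # unsafe: another thread may write in between

threads = [threading.Thread(target=increment, args=(100_000, True)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock this is reliably 200000
```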

Apache 2 has two different modes of operation. One is running as a threaded server; the other uses a mode called "prefork" (multiple processes).