Before asking this, I've read a number of resources online, including the mod_wsgi wiki, but I am still confused about how exactly Apache processes/threads interact with mod_wsgi.

Here is my current understanding: Apache can be configured so that one or more child processes handle incoming requests, and each of these child processes can in turn be configured to use one or more threads to service requests. After that, things get hazy for me. My doubts are:

  1. What exactly is a WSGIDaemonProcess, and who actually calls my Django application using the Python sub interpreter?
  2. If I have my Django application running in a mode where multiple threads are allowed in a single Apache child process - does that mean multiple requests could be accessing my application concurrently? If so - could doing something like setting a module-level variable (say, the current user's ID) be overwritten by other parallel requests and lead to non-thread-safe behavior?
  3. For the situation above, given Python's global interpreter lock, would the threads actually be executing in parallel?

Answers to each of the points.

1 - WSGIDaemonProcess/WSGIProcessGroup indicate that mod_wsgi should fork off a separate process for running the WSGI application in. This is a fork only and not a fork/exec, so mod_wsgi is still in control of it. When it is detected that a URL maps to a WSGI application running in daemon mode, the mod_wsgi code in the Apache child worker processes will proxy the request details through to the daemon mode process, where the mod_wsgi code there reads them and calls up into your WSGI application.
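For reference, a minimal daemon-mode configuration might look like the following sketch. The group name, paths, and process/thread counts are placeholders, not values from the question:

```apache
# Hypothetical example: run the WSGI application in separate daemon
# processes (2 processes x 15 threads each) instead of embedded mode.
WSGIDaemonProcess example.com processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup example.com

# Requests matching this URL prefix are proxied by the Apache child
# workers to the daemon processes named above.
WSGIScriptAlias / /srv/example/django.wsgi
```

With this in place, the Apache child processes only relay requests; your Django code runs inside the daemon processes that mod_wsgi forked and manages.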

2 - Yes, multiple requests can be operating at the same time and can be attempting to modify module-level global data simultaneously.
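A small sketch of the hazard, outside of mod_wsgi entirely: a module-level variable is shared by every thread in the process, so concurrent "requests" can clobber it, whereas `threading.local` gives each thread its own copy. The `handle_request` function and the variable names here are illustrative, not part of any real API:

```python
import threading

current_user_id = None              # module global: shared, unsafe
request_local = threading.local()   # per-thread storage: safe

def handle_request(user_id, results):
    global current_user_id
    current_user_id = user_id        # may be overwritten by another thread
    request_local.user_id = user_id  # private to this thread
    # ... request processing would happen here ...
    # Reading back the thread-local value always yields this
    # thread's own user_id, regardless of what other threads did.
    results[user_id] = request_local.user_id

results = {}
threads = [threading.Thread(target=handle_request, args=(i, results))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread saw its own value in the thread-local slot.
assert all(results[i] == i for i in range(5))
```

The module global, by contrast, ends up holding whichever thread wrote to it last, which is exactly the non-thread-safe behavior the question worries about.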

3 - For the time that execution is within Python itself, no, they are not strictly running in parallel, because the global interpreter lock means that only one thread can be executing Python code at a time. The Python interpreter will periodically switch which thread gets to run. If one of the threads calls into C code and releases the GIL, then at least for the time that thread is in that state it can run in parallel with other threads, whether they are running Python or C code. As an example, when calls are made down into the Apache/mod_wsgi layer to write back response data, the GIL is released. This means that the actual writing back of response data in the lower layers does not prevent other threads from running.
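This GIL-release behavior can be observed with a small sketch, using `time.sleep()` as a stand-in for a C-level call (like mod_wsgi's I/O) that drops the GIL while blocked:

```python
import threading
import time

def blocking_work():
    # time.sleep() releases the GIL for the duration of the call,
    # just as I/O calls into the Apache/mod_wsgi layer do.
    time.sleep(0.2)

start = time.monotonic()
threads = [threading.Thread(target=blocking_work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# If the two sleeps were serialized by the GIL we would expect ~0.4s;
# because the GIL is released while blocked, they overlap (~0.2s).
assert elapsed < 0.35
```

Replace `time.sleep` with a pure-Python busy loop and the overlap disappears: the GIL serializes the threads, which is the "not strictly parallel" case described above.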