I have Django running through mod_wsgi like this:
```apache
<VirtualHost *:80>
    WSGIScriptAlias / /home/ptarjan/django/django.wsgi
    WSGIDaemonProcess ptarjan processes=2 threads=15 display-name=%{GROUP}
    WSGIProcessGroup ptarjan
    Alias /media /home/ptarjan/django/mysite/media/
</VirtualHost>
```
But when in Python I actually do:
```python
import urllib2

def handler(request):
    data = urllib2.urlopen("http://example.com/really/unresponsive/url").read()
```
the entire Apache server hangs and becomes unresponsive, with this backtrace:
```
#0  0x00007ffe3602a570 in __read_nocancel () from /lib/libpthread.so
#1  0x00007ffe36251d1c in apr_file_read () from /usr/lib/libapr-1.so
#2  0x00007ffe364778b5 in ?? () from /usr/lib/libaprutil-1.so
#3  0x0000000000440ec2 in ?? ()
#4  0x00000000004412ae in ap_scan_script_header_err_core ()
#5  0x00007ffe2a2fe512 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
#6  0x00007ffe2a2f9bdd in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
#7  0x000000000043b623 in ap_run_handler ()
#8  0x000000000043eb4f in ap_invoke_handler ()
#9  0x000000000044bbd8 in ap_process_request ()
#10 0x0000000000448cd8 in ?? ()
#11 0x0000000000442a13 in ap_run_process_connection ()
#12 0x000000000045017d in ?? ()
#13 0x00000000004504d4 in ?? ()
#14 0x00000000004510f6 in ap_mpm_run ()
#15 0x0000000000428425 in main ()
```
on Debian Apache 2.2.11-7.
Similarly, can we be protected from:
```python
def handler(request):
    while 1:
        pass
```
In PHP, I'd set some time and memory limits.
As far as using mod_wsgi features goes, what you want is the 'inactivity-timeout' option of the WSGIDaemonProcess directive. It is not 'deadlock-timeout', as suggested in another answer; that option serves a specific purpose and will not help in this situation.
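As an illustrative sketch (the values here are mine, not from the original post), the option is set on the WSGIDaemonProcess directive; with threads=1 it comes closest to acting as a per-request timeout:

```apache
WSGIDaemonProcess ptarjan processes=5 threads=1 inactivity-timeout=60
WSGIProcessGroup ptarjan
```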
Even so, this isn't a complete solution. That is because the 'inactivity-timeout' option is specifically there to detect whether all request processing by a daemon process has ceased; it is not a per-request timeout. It only approximates a per-request timeout if the daemon processes are single threaded. Also, as well as helping to unstick a process, the option has the side effect of restarting the daemon process if no requests arrive at all within that time.
In short, there is no way at the mod_wsgi level to have per-request timeouts, because there is no reliable way of interrupting a request, or a thread, in Python.
What you will need to do is implement a timeout around the HTTP request in your application code. I'm not sure offhand where that would go or whether it is already available, but do a search for 'urllib2 socket timeout'.
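As a sketch of what that application-level timeout might look like (the 10-second value and the error handling are illustrative, not from the original post), `urlopen` accepts a per-call `timeout` argument since Python 2.6, and `socket.setdefaulttimeout` covers libraries that do not expose one:

```python
import socket

try:
    import urllib2 as urlrequest          # Python 2, as used in the question
except ImportError:
    import urllib.request as urlrequest   # Python 3 equivalent

def handler(request):
    # Fail fast instead of blocking an Apache worker indefinitely.
    try:
        data = urlrequest.urlopen("http://example.com/really/unresponsive/url",
                                  timeout=10).read()
    except socket.timeout:
        data = None  # the backend did not answer in time; handle the error here
    return data

# Alternatively, set a process-wide default for libraries that
# do not expose a timeout argument of their own:
socket.setdefaulttimeout(10)
```

The per-call form is preferable where available, since a process-wide default affects every socket the process opens.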
If I understand the question correctly, you want to protect Apache from locking up when running random scripts from users. Well, if you are running untrusted code, I think you have worse things to worry about than Apache.
Nevertheless, you can use some configuration directives to set up a safer environment. These two are very useful:
WSGIApplicationGroup - Sets which application group a WSGI application belongs to, which allows separate configurations for each user. All WSGI applications within the same application group execute within the context of the same Python sub-interpreter of the process handling the request.
WSGIDaemonProcess - Configures a distinct daemon process for running applications. The daemon processes can be run as a different user than the Apache child processes would normally run as. This directive accepts many useful options; here are a few of them:
user=name | user=#uid and group=name | group=#gid - Define the UNIX user (by name or numeric uid) and group (by name or numeric gid) that the daemon processes should run as.
stack-size=nnn - The amount of virtual memory in bytes to be allocated for the stack of each thread created by mod_wsgi in a daemon process.
deadlock-timeout=sss - Defines the maximum number of seconds allowed to pass before the daemon process is shut down and restarted after a potential deadlock on the Python GIL has been detected. The default is 300 seconds.
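A sketch of how these options might be combined (the user/group names and the numeric values here are illustrative, not recommendations):

```apache
WSGIDaemonProcess sandbox \
    user=www-sandbox group=www-sandbox \
    processes=2 threads=15 \
    stack-size=524288 \
    deadlock-timeout=60
```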
You can read more about these configuration directives in the mod_wsgi documentation.