I'm building a Django site. I'm making my changes directly on the live server, simply because it's easier that way. The problem is, every so often it seems to cache one of the *.py files I'm working on. Sometimes if I hit refresh a lot, it'll flip back and forth between an older version of the page and a newer version.
My setup looks like what's described in the Django docs: http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/#howto-deployment-modwsgi
I'm guessing it's doing this because it fires up multiple instances of the WSGI handler, and depending on which handler the HTTP request gets routed to, I might get a different version of the page. Restarting Apache seems to fix the problem, but it's annoying.
I'm really not well versed in WSGI or "middleware" or any of that request-handling stuff. I come from a PHP background, where everything just works :)
Anyway, what's a nice way of solving this problem? Would running the WSGI handler in "daemon mode" alleviate the issue? If so, how do I get it to run in daemon mode?
You can solve this problem by not editing your code on the live server. Seriously, there's no excuse for it. Develop locally using version control and, if you must, run your server from the live checkout, with a post-commit hook that checks out your latest version and restarts Apache.
Read the mod_wsgi documentation rather than relying on the minimal information for mod_wsgi hosting provided on the Django site. In particular, read the documentation on reloading source code. It tells you exactly how source code reloading works in mod_wsgi, including a monitor that implements the same sort of source code reloading that Django's runserver does, and it discusses how to apply that to Django.
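The monitor described in the mod_wsgi documentation boils down to: record the mtime of every loaded module's source file, poll from a background thread, and signal the daemon process to restart when a file changes. Here's a minimal sketch of that idea (the real monitor in the mod_wsgi docs is more thorough; the names `snapshot`, `changed`, and `start_monitor` are my own):

```python
import os
import sys
import threading

def _module_files():
    """Yield the source file of every module currently loaded."""
    for module in list(sys.modules.values()):
        path = getattr(module, "__file__", None)
        if path and os.path.isfile(path):
            yield path

def snapshot():
    """Record current mtimes so later changes can be detected."""
    return {path: os.path.getmtime(path) for path in _module_files()}

def changed(baseline):
    """Return the files whose mtime differs from the recorded baseline."""
    return [path for path, mtime in baseline.items()
            if os.path.isfile(path) and os.path.getmtime(path) != mtime]

def start_monitor(interval=1.0):
    """Poll in a daemon thread; when any loaded source file changes,
    send ourselves SIGINT, which makes mod_wsgi's daemon process restart."""
    baseline = snapshot()

    def loop():
        import signal
        import time
        while True:
            time.sleep(interval)
            if changed(baseline):
                os.kill(os.getpid(), signal.SIGINT)

    threading.Thread(target=loop, daemon=True).start()
```

You'd call `start_monitor()` from your wsgi script file during development only; in production you don't want a polling thread per process.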
Running the process in daemon mode won't, by itself, help. Here's what's happening:
mod_wsgi is spawning multiple identical processes to handle incoming requests for your Django site. Each of these processes has its own Python interpreter and can handle an incoming web request. These processes are persistent (they aren't brought up and torn down for each request), so a single process may handle thousands of requests one after another. mod_wsgi is able to handle multiple web requests concurrently because there are multiple processes.
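You can see this with your own eyes by pointing mod_wsgi at a trivial WSGI app like the one below and refreshing the page: the pid changes depending on which persistent process served you (a bare WSGI sketch, not Django-specific):

```python
import os

def application(environ, start_response):
    # Each refresh may be served by a different persistent process,
    # so the pid (and any module-level state, such as loaded code)
    # can differ from one hit to the next.
    body = ("served by process %d" % os.getpid()).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```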
Each process's Python interpreter will load your modules (your custom Python files) whenever an "import module" is performed. In the context of Django, this will happen when a views.py is needed as a result of a web request. Once the module is loaded, it resides in memory, so any changes you make to the file won't be reflected in that process. As more web requests come in, the process's Python interpreter will simply use the version of the module that's already loaded in memory. You're seeing inconsistencies between refreshes because each web request you make can be served by a different process. Some processes may have loaded your Python modules during earlier revisions of your code, while others may have loaded them later (since those processes hadn't yet received a web request).
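This import caching is easy to reproduce outside of mod_wsgi in a plain Python session (the module name `my_view` below is made up for the demonstration):

```python
import importlib
import os
import sys
import tempfile

# Write a throwaway module, import it, then change it on disk:
# the in-memory copy keeps the old value until an explicit reload,
# which is exactly the behavior inside a long-lived mod_wsgi process.
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
path = os.path.join(tmpdir, "my_view.py")

with open(path, "w") as f:
    f.write("VERSION = 'old'\n")
importlib.invalidate_caches()
import my_view
assert my_view.VERSION == "old"

with open(path, "w") as f:
    f.write("VERSION = 'newer'\n")
import my_view                      # no-op: the module is already cached
assert my_view.VERSION == "old"     # still the stale in-memory copy

importlib.reload(my_view)           # what a process restart effectively does
assert my_view.VERSION == "newer"
```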
The simple solution: whenever you modify your code, restart the Apache process. Most times that's as simple as running, as root, from the shell: "/etc/init.d/apache2 restart". I believe a simple reload works too, and is faster: "/etc/init.d/apache2 reload"
The daemon solution: if you are using mod_wsgi in daemon mode, then all you need to do is touch (the unix command) or otherwise modify your wsgi script file. To clarify scrompt.com's post, modifications to your Python source code won't cause mod_wsgi to reload your code. Reloading only happens when the wsgi script file has been modified.
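For reference, daemon mode is turned on in your Apache config with the `WSGIDaemonProcess` and `WSGIProcessGroup` directives, roughly like this (the group name, paths, and process/thread counts below are made-up examples; check the mod_wsgi docs for the full set of options):

```apache
# Run the Django app in its own daemon process group
# instead of embedded in the Apache worker processes.
WSGIDaemonProcess example processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup example
WSGIScriptAlias / /srv/example/django.wsgi
```

With that in place, `touch /srv/example/django.wsgi` is enough to pick up code changes on the next request, with no Apache restart.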
Last point to note: I only described mod_wsgi as using processes, for simplicity. mod_wsgi actually uses thread pools inside each process. I didn't feel this detail was relevant to the answer, but you can find out more by reading about mod_wsgi.