I've got a small VPS with an Nginx front end that serves static media files and passes Django requests through to an Apache 2.2 prefork MPM server running mod_wsgi.
With one (very) small site loaded and serving, it's currently using 143MB of 256MB of RAM.
From top I can see that Apache is using 52.9% of available RAM, with memcached in second place at 2.1%.
Given that I'm planning to put a number of Django projects on this one server, is there anything I can do to trim the amount of RAM that Apache is using?
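For measuring, one way to total Apache's resident memory across all of its child processes is to sum RSS with ps. This is a sketch: the process name `apache2` is an assumption (Debian/Ubuntu naming; Red Hat systems use `httpd`).

```shell
# Sum the resident set size (RSS, in KB) of every Apache process.
# "apache2" is the Debian/Ubuntu process name; substitute "httpd" on Red Hat.
ps -o rss= -C apache2 | awk '{sum += $1} END {print sum+0 " KB total RSS"}'
```

Unlike top's per-process view, this gives one number you can watch while tuning.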
If you want to stay with Apache, a couple of suggestions, roughly in order of difficulty:
- use the Apache worker MPM instead of prefork. Real memory used per client connection will be lower, but keep in mind that the virtual memory allocated for Apache on Linux can appear high, because of the 8MB Linux allocates for each thread's stack. This doesn't really matter, unless your VPS is brain-dead and caps virtual memory rather than actual RSS (resident set size) memory. In that case you can learn how to lower the thread stack size here (under the Memory-limited VPS section).
- edit your Apache config file and reduce the StartServers, MaxClients, MinSpareThreads, and MaxSpareThreads settings roughly in proportion. The right levels are a balance between your desired memory usage and the number of concurrent clients you need to be able to serve.
- switch to mod_wsgi (in daemon mode) instead of mod_python.
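The first and third suggestions can be sketched in Apache 2.2 config terms. This is only a low-memory starting point, not a recommendation; the process group name `mysite` and the script path are hypothetical:

```apache
# Worker MPM: a few threaded processes instead of many prefork children.
# MaxSpareThreads must be at least MinSpareThreads + ThreadsPerChild.
<IfModule mpm_worker_module>
    StartServers          1
    MaxClients           50
    MinSpareThreads       5
    MaxSpareThreads      30
    ThreadsPerChild      25
</IfModule>

# mod_wsgi daemon mode: Django runs in its own fixed pool of processes,
# sized independently of however many Apache workers exist.
WSGIDaemonProcess mysite processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/mysite.wsgi
```

The point of daemon mode here is that the application's memory footprint stays fixed at `processes` x (one interpreter) no matter how Apache itself scales.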
For the record, the OP's use of the term MPM is nonsensical. The MPM in Apache is not optional; you're always using an MPM when using Apache. The choice is which MPM you're using. On UNIX the two main MPMs, or Multiprocessing Modules, are prefork and worker. On Windows the winnt MPM is always used. Details about the different MPMs can be found in the Apache documentation on the Apache site. In the case of mod_wsgi though, you may be better off reading:
In a nutshell though:
- prefork MPM is multi-process/single-threaded.
- worker MPM is multi-process/multi-threaded.
- winnt MPM is single-process/multi-threaded.
You might consider using Spawning for deployment.
You can run Django on FastCGI. Nginx could then drive it directly rather than going through Apache.
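A sketch of that setup, assuming the old `manage.py runfcgi` command (provided by flup in Django of this era) and a hypothetical socket path and server name:

```nginx
# First start Django as a standalone FastCGI server, e.g.:
#   python manage.py runfcgi method=prefork socket=/tmp/mysite.sock pidfile=/tmp/mysite.pid
# Then nginx talks to it directly -- Apache drops out entirely.
server {
    listen 80;
    server_name example.com;

    location /media/ {
        alias /path/to/static/media/;   # nginx keeps serving static files
    }

    location / {
        include fastcgi_params;
        fastcgi_pass unix:/tmp/mysite.sock;
        fastcgi_param PATH_INFO $fastcgi_script_name;
    }
}
```

With this arrangement the only per-site memory cost is the FastCGI process pool itself, which you size explicitly with `method=prefork` or `method=threaded`.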