I've got a Pylons web application served by Apache (mod_wsgi, prefork). Because of Apache, there are multiple separate processes running my application code at the same time. Some of the non-critical tasks the application performs I would like to defer, processing them in the background to improve "live" response times. So I'm considering a task queue: many Apache processes adding tasks to this queue, and a single separate Python process processing them one-by-one and removing them from the queue.

The queue should ideally be persisted to disk, so that queued but unprocessed jobs are not lost due to a power outage, server restart, etc. The question is: what would be a reasonable way to implement such a queue?

As for what I have tried: I started with a simple SQLite database and a single table in it for storing queue items. In load testing, as I increased the level of concurrency, I started getting "database is locked" errors, unsurprisingly. The quick'n'dirty fix was to replace SQLite with MySQL: it handles the concurrency issues well, but feels like overkill for the simple thing I need to do. Queue-related DB operations also show up prominently in my profiling reports.
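For reference, the attempt looked roughly like the sketch below; the schema and function names are purely illustrative, and in practice each Apache process would open its own connection:

    import json
    import sqlite3
    import time

    # Illustrative single-table queue; schema and names are made up for this sketch.
    conn = sqlite3.connect('queue.db', timeout=30)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS task_queue (
            id      INTEGER PRIMARY KEY AUTOINCREMENT,
            payload TEXT NOT NULL,
            created REAL NOT NULL
        )
    """)
    conn.commit()

    def enqueue(task):
        # Called from each Apache/mod_wsgi process; concurrent writers are
        # what triggers the "database is locked" errors under load.
        conn.execute("INSERT INTO task_queue (payload, created) VALUES (?, ?)",
                     (json.dumps(task), time.time()))
        conn.commit()

    def dequeue():
        # Called from the single consumer process; returns None when empty.
        row = conn.execute(
            "SELECT id, payload FROM task_queue ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        conn.execute("DELETE FROM task_queue WHERE id = ?", (row[0],))
        conn.commit()
        return json.loads(row[1])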

A message broker like Apache's ActiveMQ is a perfect solution here.

The pipeline could be the following:

  • The application responsible for handling HTTP requests generates its replies quickly and sends the low-priority, heavy tasks to the AMQ queue.
  • One or more other processes subscribe to the AMQ queue, consume it, and do whatever needs to be done with those heavy tasks.

The requirement of queue persistence is satisfied out of the box, since ActiveMQ stores messages that have not yet been consumed in persistent storage. It also scales quite well, since you are free to deploy multiple HTTP applications, multiple consumer applications, and AMQ itself each on a different machine.

We use something similar to this in our project, written in Python, using STOMP as the underlying communication protocol.
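For illustration, a minimal sketch of that pipeline with the stomp.py client might look like the following. The broker address, the queue name and the listener signature are assumptions (stomp.py's API has changed between major versions; this is written against a recent one):

    import json
    import time
    import stomp

    QUEUE = '/queue/background-tasks'   # illustrative queue name
    BROKER = [('localhost', 61613)]     # default STOMP port of a local ActiveMQ

    def send_task(task):
        # Producer side: called from the HTTP-handling application.
        conn = stomp.Connection(BROKER)
        conn.connect(wait=True)
        # 'persistent' asks the broker to keep the message on disk until consumed.
        conn.send(destination=QUEUE, body=json.dumps(task),
                  headers={'persistent': 'true'})
        conn.disconnect()

    class TaskListener(stomp.ConnectionListener):
        # Consumer side: runs in a separate, long-lived process.
        def on_message(self, frame):
            task = json.loads(frame.body)
            # ... do the heavy work with the task here ...
            print('processed', task)

    def run_consumer():
        conn = stomp.Connection(BROKER)
        conn.set_listener('', TaskListener())
        conn.connect(wait=True)
        conn.subscribe(destination=QUEUE, id=1, ack='auto')
        while True:
            time.sleep(1)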

A web server (any web server) is a multi-producer, single-consumer process.

A simple solution is to build a wsgiref or Werkzeug backend server to handle your backend requests.

Because this "after sales" server is build using WSGI technology, it is extremely, much like the leading-finish web server. Except. It does not produce HTML reactions (JSON is generally simpler). Apart from that, it is extremely straightforward.

You design RESTful transactions for this backend. You use all of the various WSGI features for URI parsing, authorization, authentication, etc. You -- generally -- don't need session management, since RESTful servers rarely offer sessions.
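As a rough sketch of what such a backend could look like (the /tasks URI and the handler body are made up for illustration), wsgiref from the standard library is enough:

    import json
    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # Tiny RESTful backend: POST a JSON task to /tasks, get a JSON reply.
        if environ['REQUEST_METHOD'] == 'POST' and environ['PATH_INFO'] == '/tasks':
            length = int(environ.get('CONTENT_LENGTH') or 0)
            task = json.loads(environ['wsgi.input'].read(length))
            # ... perform (or enqueue) the heavy work here ...
            body = json.dumps({'status': 'accepted', 'task': task}).encode('utf-8')
            start_response('202 Accepted', [('Content-Type', 'application/json')])
            return [body]
        start_response('404 Not Found', [('Content-Type', 'application/json')])
        return [b'{"error": "not found"}']

    if __name__ == '__main__':
        # wsgiref's simple server handles one request at a time, which matches
        # the single-consumer role described above.
        make_server('127.0.0.1', 8081, application).serve_forever()

The front-end processes would then POST their deferred work to this server with any HTTP client.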

Once you get into serious scalability issues, you simply wrap your backend server in lighttpd or some other web engine to create a multi-threaded backend.