I'm running Django on Apache. I have several client machines that call urllib2.urlopen() to send over some data, which my server processes and immediately sends back an answer for. However, while testing I found a really tricky problem. I have one client repeatedly send exactly the same data to be processed. The first time, it takes around ~20 seconds; the second time, it takes about 40 seconds; the third time I get a 504 (gateway timeout) error. If I try to send the data again, more 504 errors appear at random. I believe this is a problem with Postgres, because the function that processes the data makes many database calls; however, I don't know why the performance of Postgres would degrade that much. I've tried several database optimisation techniques, including this one (http://stackoverflow.com/questions/1125504/django-persistent-database-connection), with no success.
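
Roughly, each client does something like this (the URL and payload below are placeholders, not my real code):

    import urllib
    import urllib2

    # hypothetical endpoint and payload, for illustration only
    data = urllib.urlencode({'payload': 'the data to be processed'})
    response = urllib2.urlopen('http://myserver/process/', data)  # POST
    print response.read()  # the server's immediate answer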

Thanks in advance.

Edit: The requests aren't coming in concurrently. They arrive one after another, and every query involves a lot of SELECTs and JOINs; there are a few INSERTs and UPDATEs as well. The Apache error logs show that it's really a plain timeout: the function that processes the client's posted data takes over 90 seconds.

Have you checked the Apache error_log? Have you set Django's DEBUG = True or ADMINS = ('email@addr.com',) so you get a detailed error report about what the actual cause of the problem is? If so, how about pasting some of that information here.
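
For example, a minimal settings.py sketch (the address is a placeholder):

    # settings.py
    DEBUG = True  # full tracebacks in the browser; never leave this on in production
    # or, with DEBUG = False, unhandled exceptions are emailed to these addresses:
    ADMINS = (('You', 'email@addr.com'),)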

Why are you sure that it's Postgres? What diagnostics have you run to come to that conclusion? If you have any, please share them.

Are you running Apache with mod_wsgi? How many processes and threads have you allotted to your Django application?
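
For reference, a typical mod_wsgi daemon-mode setup looks something like this (names and paths are placeholders); the processes/threads values are what I'm asking about:

    WSGIDaemonProcess mysite processes=2 threads=15
    WSGIProcessGroup mysite
    WSGIScriptAlias / /path/to/mysite/django.wsgi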

Also, 20 seconds to process the first transaction is a long time. Perhaps you could show us the view code that's causing the timeout. We might be able to help there.

I sincerely doubt that it's Postgres alone that's causing the problem. It most likely has something to do with your application code or server configuration.

If it's really Postgres, then you should turn on logging of slow statements in the Postgres configuration to find out exactly which statement is taking so much time.

You can do this by setting the configuration parameter log_min_duration_statement.

Details are in the manual: http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT

You say the function makes "many database calls", so I'd start with a really low threshold, or even log the duration of all statements; then you should be able to identify the slow ones.
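
For example, in postgresql.conf (the threshold is just a starting point):

    # log every statement that runs for longer than 100 ms
    log_min_duration_statement = 100
    # or set it to 0 temporarily to log the duration of every statement

Reload the server configuration afterwards (e.g. pg_ctl reload) for the change to take effect.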

It might be a locking issue. Maybe the first call doesn't finish its transaction properly, and subsequent calls run into a timeout while waiting for a resource.

You can verify this by checking the system view pg_locks after the first call.
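
For example, run something like this in psql while a request is hanging:

    -- show locks that are being waited on, i.e. not yet granted
    SELECT locktype, relation::regclass AS relation, mode, granted, pid
    FROM pg_locks
    WHERE NOT granted;

If rows turn up here, an earlier transaction is still holding a lock that the current request needs.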