I am hitting a bottleneck: my server can't pass a 20000x40000 benchmark test regardless of what changes I make. The server has 128G RAM and a 6-core Xeon CPU, runs CentOS 5.6 64-bit, and is in very good condition.

I have tried combinations including:

nginx + uwsgi + python2.7
nginx + apache + mod_wsgi + python2.7
apache + mod_wsgi + python2.7

None of them could pass the ApacheBench run:

ab -c 20000 -n 40000 (without -k)

and, oddly enough, almost all of the tests failed at around 32000 requests.

Details about nginx and uwsgi:

nginx:

worker_processes  24;
events {
    use epoll;
    worker_connections  65535;
}

uwsgi:

listen 2048
master true
workers 24
uwsgi -x /etc/uwsgi_conf.xml --async 256 --file /var/www/example.py &

Does anybody have any insight into this? Thanks in advance for any possible solutions and suggestions.

Allowing that number of concurrent connections on a single system requires a long list of kernel tuning, and most likely you will never be able to handle the load being generated.

To begin with, you need to increase the number of ephemeral ports, the size of the socket backlog queue, the number of file descriptors allowed per process, and so on...
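On Linux that usually means something along these lines (the exact values below are illustrative assumptions; tune them for your own hardware):

# widen the ephemeral port range available for outgoing connections
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# enlarge the socket accept/backlog queues
sysctl -w net.core.somaxconn=65535
sysctl -w net.core.netdev_max_backlog=65535
# raise the system-wide and per-process file descriptor limits
sysctl -w fs.file-max=2000000
ulimit -n 65535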

In addition to this (which should already be enough to show how impractical such a test is), you need to increase the number of async cores in uWSGI to 20k. That is not a problem in itself (each core consumes less than a page of memory), but you will end up with at least 40k open sockets on your system.
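For example, starting from the command line in the question, only the --async value needs to change (a sketch; everything else is kept from the original invocation):

# one async core per intended concurrent connection
uwsgi -x /etc/uwsgi_conf.xml --async 20000 --file /var/www/example.py &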

That covers nginx + uwsgi.

With Apache you will end up with 20k processes or threads, which is even worse than 40k open sockets.