I've got a trivial WSGI application running on pesto, mod_wsgi and Apache:

def viewData(request):
    return Response("aaaaaaaaaa" * 120000)  # return 1.2 MB of data

On my small test machine I get about 100 KB/s of throughput, which means the request takes about 12 seconds to complete. Serving static files from the same Apache instance gives me about 20 MB/s. Why is there such a massive difference, and how can I speed up the WSGI application?

Software versions: Ubuntu 10.04, Apache 2.2.14, Python 2.6.5, mod_wsgi 2.6 (all from Ubuntu's default packages), pesto-18

edit: The actual application represented by this example doesn't serve static files; it dynamically generates a lot of HTML. The HTML generation is fast (I checked it with cProfile and timeit), but the transmission is slow, and that's the problem I want to fix.
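For reference, the generation step alone can be timed roughly like this (a sketch; generate_body is a stand-in for the real HTML generation and the iteration count is arbitrary):

import timeit

def generate_body():
    # Stand-in for the real HTML generation done by the view.
    return "aaaaaaaaaa" * 120000

# Time only the generation step, independent of the WSGI transmission.
elapsed = timeit.timeit(generate_body, number=100)
print("average generation time: %.4f s" % (elapsed / 100))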

edit 2: I tried the current versions of pesto (21) and mod_wsgi (3.3) on the same stack; throughput didn't change significantly. I also replaced mod_wsgi with Spawning 0.9.5 behind Apache's mod_proxy - this increased throughput by a factor of 4, but it is still far from what I'd like it to be.

In WSGI, the application (or the framework) should return an iterable. I don't know whether that's what Pesto does. If you return a bare string, the server iterates over it one element at a time, i.e. one character per write, which is why the transfer is so slow.

Change your code to:

def viewData(request):
    return Response(["aaaaaaaaaa" * 120000])

And try again.
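To illustrate the difference outside of Pesto, here is a minimal sketch using plain WSGI and the standard library's wsgiref (it assumes the question's Python 2 environment, where a str body is iterated one character at a time; the app_slow/app_fast names are made up for the example):

from wsgiref.simple_server import make_server

BODY = "aaaaaaaaaa" * 120000  # ~1.2 MB, same payload as in the question

def app_slow(environ, start_response):
    # A bare string is an iterable of 1-character strings, so the
    # server ends up writing roughly 1.2 million tiny chunks.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return BODY

def app_fast(environ, start_response):
    # Wrapping the string in a list yields the whole body as one block.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [BODY]

if __name__ == "__main__":
    # Serve one of the two apps on port 8000 and compare download times.
    make_server("", 8000, app_fast).serve_forever()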