We want a web content accelerator for static images to sit in front of our Apache web front-end servers.

Our previous hosting partner used Tux with good results, and I like the fact that it's part of Red Hat Linux, which we're using, but its last update was in 2006 and there seems little chance of future development. Our ISP recommends we use Squid in a reverse caching proxy role.

Any thoughts on Tux vs. Squid? Compatibility, reliability and future support are as important to us as performance.

Also, I've read in other threads here about Varnish. Does anyone have real-world experience with Varnish compared to Squid and/or Tux, gained in high-traffic environments?

UPDATE: We are testing Squid now. Using ab to pull the same image 10,000 times with a concurrency of 100, both Apache on its own and Squid/Apache burned through the requests very quickly. But Squid made only a single request to Apache for the image and then served all of them from RAM, whereas Apache alone had to fork a load of workers in order to serve the images. It looks like Squid will do a good job of freeing up the Apache workers to handle dynamic pages.
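For reference, the kind of ApacheBench run described above looks roughly like this (hostname, ports and image path are placeholders; Squid's default proxy port is assumed):

```
# 10,000 requests for the same image, 100 concurrent connections,
# straight against Apache:
ab -n 10000 -c 100 http://127.0.0.1:8080/images/test.jpg

# The same run through Squid (default port 3128), to compare
# requests/sec and confirm only the first request reaches Apache:
ab -n 10000 -c 100 http://127.0.0.1:3128/images/test.jpg
```

Checking Apache's access log during the second run is an easy way to verify that only one request made it to the backend.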

In my experience Varnish is a lot faster than Squid, but equally importantly it's much less of a black box than Squid is. Varnish gives you access to very detailed logs that are useful when debugging problems. Its configuration language is also simpler and much more powerful than Squid's.

@Daniel, @MKUltra, to elaborate on Varnish's supposed problems with cookies, there aren't really any. It's completely normal NOT to cache a page if it comes back with a cookie. Cookies are mostly meant to be used to distinguish different user preferences, so I don't think one would want to cache these (especially if they contain some secret information like a session id or a password!).

If your server sends cookies with your .js files and images, that's a problem on your backend side, not on Varnish's side. As suggested by @Daniel (link provided), you can force the caching of these files anyway, thanks to the really cool language/DSL integrated in Varnish...
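The sort of VCL override being referred to looks roughly like this (a sketch in Varnish 2.x syntax; the list of file extensions is just an example):

```
sub vcl_recv {
    # Strip cookies from requests for static files so they can be cached.
    if (req.url ~ "\.(png|gif|jpg|ico|css|js)$") {
        unset req.http.Cookie;
    }
}

sub vcl_fetch {
    # Likewise drop any Set-Cookie the backend attaches to static files.
    if (req.url ~ "\.(png|gif|jpg|ico|css|js)$") {
        unset obj.http.Set-Cookie;
    }
}
```

With the cookies gone, Varnish treats the objects as ordinary cacheable content.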

If you're looking to push static images, and lots of them, you may want to look at some fundamentals first.

Your application should make sure that all the correct headers are being passed, Cache-Control and Expires for instance. That should result in clients' browsers caching those images locally and cutting down on your request count.
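With Apache, for example, mod_expires and mod_headers can set those headers (a sketch; the 30-day lifetime is an arbitrary choice):

```
# httpd.conf -- assumes mod_expires and mod_headers are loaded
ExpiresActive On
ExpiresByType image/png  "access plus 30 days"
ExpiresByType image/jpeg "access plus 30 days"
ExpiresByType image/gif  "access plus 30 days"

<FilesMatch "\.(png|jpe?g|gif)$">
    # 2592000 seconds = 30 days, matching the Expires rules above
    Header set Cache-Control "public, max-age=2592000"
</FilesMatch>
```

Once these are in place, repeat visitors fetch the images from their browser cache instead of hitting your servers at all.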

Use a CDN (if it's within your budget); this brings the images closer to your users (generally) and will result in a better user experience for them. For the CDN to be a productive investment you'll again need to make sure all your necessary caching headers are properly set, as per the point I made in the previous paragraph.

After all that, if you're still going to use a reverse proxy, I suggest using nginx in proxy mode, over Varnish and Squid. Yes, Varnish is fast, and as fast as nginx, but what you're trying to do is simple; Varnish comes into its own when you want to do complex caching and ESI. So Keep It Simple, Stupid. nginx will do your job very nicely indeed.
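A minimal nginx setup for this might look like the following (a sketch; ports, paths and cache sizes are placeholders, and the proxy_cache directives require nginx 0.7.44 or later):

```
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m
                     max_size=1g inactive=7d;

    server {
        listen 80;

        # Serve images from the cache; hit Apache on 8080 only on a miss.
        location ~* \.(png|jpe?g|gif)$ {
            proxy_pass        http://127.0.0.1:8080;
            proxy_cache       static;
            proxy_cache_valid 200 7d;
        }

        # Everything else (dynamic pages) passes straight through.
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```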

I have no experience with Tux, so I can't comment on it, sorry.

For what it's worth, I recently set up nginx as a reverse proxy in front of Apache on a 6-year-old low-power web server (running Fedora Core 2) that was under a mild DDoS attack (10K req/sec). Page loading was snappy (<100 ms), system load stayed low at around 20% CPU utilization, and memory consumption was very small. The attack lasted 7 days, and visitors saw no ill effects.

Not bad for over 500,000 hits per minute sustained. Just be sure to log to /dev/null.

We use Varnish on http://www.mangahigh.com and have been able to scale from around 100 concurrent users pre-Varnish to over 560 concurrent post-Varnish (server load remained low at that point, so there's lots of room to grow!). Documentation for Varnish could be better, but it's quite flexible once you get used to it.

Varnish is reputed to be much faster than Squid (having never used Squid, I can't say for certain) - and http://users.linpro.no/ingvar/varnish/stats-2009-05-19 shows Twitter, Wikia, Hulu, perezhilton.com and a good number of other big names using it too.

Both Squid and nginx are specifically built for this. nginx is particularly easy to configure for a server farm, and can also act as a frontend to FastCGI.
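As a FastCGI frontend, the relevant nginx configuration is just a fastcgi_pass inside a location block (a sketch; the backend address and .php extension are placeholders for whatever your app uses):

```
location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_pass  127.0.0.1:9000;   # or unix:/var/run/php-fpm.sock
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

So the same nginx instance can cache your static images and hand dynamic requests off to FastCGI workers.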