What techniques and technologies have you used to effectively address the scalability and performance concerns of a website? I'm an ASP.NET developer exploring .NET remoting with WCF and SQL clustering, and I'm curious about what other approaches exist (such as the 'cloud'). In which cases would you apply the various approaches (for instance, approach A for roughly X 'active' users)?

A good example of what I mean, the Bebo case study: http://highscalability.com/bebo-architecture

This is a very broad question, which makes it hard to answer, but I'll try to give some general suggestions.

1 - Unless you're doing several things seriously wrong, you probably won't need to worry about perf or scale until you hit serious traffic (over a million page views per month).

2 - Your biggest performance problems initially will probably be page load times from other countries. Try the Gomez Instant Site Test to see your page load times from around the world, and use YSlow as a guide for optimizing.
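On the YSlow side, one of the biggest wins is usually far-future expires headers on your static assets. A minimal Python sketch of building those headers (the one-year max-age is just a common convention, not a requirement):

```python
import time
from email.utils import formatdate

def far_future_headers(max_age_days=365):
    """Build far-future caching headers for static assets,
    one of YSlow's top recommendations."""
    max_age = max_age_days * 24 * 3600
    return {
        "Cache-Control": "public, max-age=%d" % max_age,
        # HTTP dates must be in GMT
        "Expires": formatdate(time.time() + max_age, usegmt=True),
    }

headers = far_future_headers()
print(headers["Cache-Control"])  # public, max-age=31536000
```

Attach these to anything versioned by filename (e.g. `app.v42.css`), so you can still bust the cache by renaming.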

3 - When you do start hitting performance problems, it will most likely be due to the database work first. Use SQL Server Profiler to examine your SQL traffic, looking for long-running queries to try optimizing, and also use sys.dm_db_missing_index_details to look for indexes you should add.
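If you don't have Profiler handy, you can get a long way just by timing queries at the application layer. A rough sketch (using sqlite3 so it's self-contained; the table, data, and threshold are made up for illustration):

```python
import sqlite3
import time

def run_logged(conn, sql, params=(), slow_ms=100):
    """Execute a query and flag it if it exceeds the slow threshold,
    similar to what a profiler trace surfaces for long-running queries."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > slow_ms:
        print("SLOW (%.1f ms): %s" % (elapsed_ms, sql))
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(run_logged(conn, "SELECT name FROM users WHERE id = ?", (1,)))
```

Log the slow ones to a file in production and you'll quickly build a hit list of queries to optimize or index.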

4 - If your web servers start becoming the performance bottleneck, use a profiler (such as ANTS Profiler) to look for ways to optimize your page code.

5 - If your web servers are well optimized but still running too hot, look for more caching opportunities, but you're probably just going to have to add more web servers.

6 - If your database is well optimized but still running too hot, look at adding a distributed caching system. This probably won't happen until you're over 10 million page views per month.
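A distributed cache tier typically spreads keys across nodes with consistent hashing, so adding or removing a cache box only remaps a fraction of the keys. A minimal sketch of that idea (the node addresses are hypothetical):

```python
import hashlib
from bisect import bisect

class HashRing:
    """Minimal consistent-hash ring for spreading keys across cache nodes."""

    def __init__(self, nodes, replicas=100):
        # Each node appears `replicas` times on the ring for smoother balance.
        self.ring = sorted(
            (self._hash("%s:%d" % (node, i)), node)
            for node in nodes
            for i in range(replicas)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First ring position at or after the key's hash, wrapping around.
        idx = bisect(self.hashes, self._hash(key)) % len(self.hashes)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.node_for("user:42"))
```

Client libraries for memcached generally do exactly this under the hood; the point is that cache lookups never need a central directory.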

7 - If your database is starting to get overwhelmed even with distributed caching, look at a sharding architecture. This probably won't happen until you're over 100 million page views per month.
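The core of sharding is a stable function from a key (say, a user id) to the database holding that user's rows. A bare-bones sketch, with hypothetical shard names:

```python
import zlib

# Hypothetical shard identifiers; in practice these map to connection strings.
SHARDS = ["users_db_0", "users_db_1", "users_db_2", "users_db_3"]

def shard_for(user_id):
    """Pick the shard holding this user's rows. crc32 keeps the mapping
    stable across processes (unlike Python's salted hash())."""
    return SHARDS[zlib.crc32(str(user_id).encode()) % len(SHARDS)]

print(shard_for(12345))
```

The hard part isn't this function, it's everything it breaks: cross-shard joins, transactions, and rebalancing when you add shards, which is why it's a last resort.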

I have done a couple of sites that get millions of hits per month. Here are some fundamentals:

  1. Cache, cache, cache. Caching is one of the simplest and most effective ways to reduce load on your web server and database. Cache page content, queries, expensive computation, anything that's I/O bound. Memcache is dead simple and effective.
  2. Use multiple servers once you are maxed out. You can have multiple web servers and multiple database servers (with replication).
  3. Reduce the overall number of requests to your web servers. This entails caching JS, CSS, and images using expires headers. You can also move your static content to a CDN, which will speed up your users' experience.
  4. Measure & benchmark. Run Nagios on your production machines and load test on your dev/qa server. You need to know when your server is going to catch on fire so you can prevent it.
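To illustrate point 1, here's an in-process TTL cache decorator in Python; it's a toy stand-in for what memcached gives you across servers (the function, names, and TTL are just for illustration):

```python
import functools
import time

def cached(ttl_seconds=60):
    """Memoize a function's results with a time-to-live, so repeated
    requests skip the expensive work until the entry goes stale."""
    def decorator(fn):
        store = {}  # args -> (value, timestamp)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.time()
            hit = store.get(args)
            if hit and now - hit[1] < ttl_seconds:
                return hit[0]  # fresh cache hit
            value = fn(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return decorator

calls = 0

@cached(ttl_seconds=300)
def expensive_query(user_id):
    global calls
    calls += 1  # pretend this is a slow database round trip
    return "profile-for-%d" % user_id

expensive_query(1)
expensive_query(1)
print(calls)  # 1 -- the second call was served from cache
```

The same pattern, with the dict swapped for a memcached client, is what lets one cache tier absorb load for a whole web farm.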

I'd recommend reading Building Scalable Web Sites; it was written by one of the Flickr engineers and is a great reference.

Check out my blog post about scalability too; it has lots of links to presentations about scaling with multiple languages and platforms: http://www.ryandoherty.net/2008/07/13/unicorns-and-scalability/

There's Velocity from MS, and MemCache has a port to .NET now, as well as indeXus.Net.