Apparently the reasoning behind the BigTable architecture relates to the difficulty of scaling relational databases across the huge number of servers Google has to deal with.
But from a technical perspective, what exactly makes it hard for relational databases to scale?
In the enterprise data centers of large companies they seem to manage it effectively, so I'm wondering why the same approach can't simply be scaled up an order of magnitude to work on Google's servers.
Whenever you execute a query that joins data which is physically distributed, you have to pull the data for each relationship into a central place. That clearly won't scale well for large volumes of data.
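To make that concrete, here is a minimal sketch (with made-up tables and node names, not any real system) of why a cross-node join forces data movement: one side of the join has to be shipped over the network to wherever the matching happens, and that cost grows with table size, not result size.

```python
# Hypothetical sketch: two tables sharded onto different "nodes"
# (modeled here as plain in-memory structures). Joining them forces
# one side's data to be pulled to a central place before matching.

# "users" lives on node A, "orders" lives on node B
node_a_users = {1: "alice", 2: "bob"}
node_b_orders = [(101, 1, "book"), (102, 2, "lamp"), (103, 1, "pen")]

def distributed_join():
    # Step 1: pull the entire users table over the network to the
    # coordinator -- cost proportional to table size, not result size.
    pulled_users = dict(node_a_users)  # simulated network transfer

    # Step 2: join locally against the orders shard.
    return [(order_id, pulled_users[user_id], item)
            for order_id, user_id, item in node_b_orders]

print(distributed_join())
# → [(101, 'alice', 'book'), (102, 'bob', 'lamp'), (103, 'alice', 'pen')]
```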
A properly set up RDBMS server will serve nearly all of its queries from hot pages in RAM, with little physical disk or network I/O.
If you're constrained by network I/O, the advantages of the relational model are diminished.
In addition to Mitch's answer, there's another facet: webapps are generally poorly suited to relational databases. Relational databases put an emphasis on normalization - essentially making writes easier, but reads harder (in terms of work done, not necessarily for you). This works well for OLAP, ad-hoc query situations, but not so well for webapps, which are usually massively weighted in favor of reads over writes.
The approach taken by non-relational databases such as Bigtable is the reverse: denormalize, making reads much easier, at the expense of making writes more expensive.
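A small sketch of that trade-off, using a made-up blog schema (the tables and field names here are illustrative, not Bigtable's actual data model): reading a post takes two lookups (a join) in the normalized layout but one in the denormalized one, while a write that touches shared data gets correspondingly more expensive.

```python
# Normalized: authors and posts in separate tables, joined on read.
authors = {1: {"name": "alice"}}
posts = {10: {"author_id": 1, "title": "Hello"}}

def read_post_normalized(post_id):
    post = posts[post_id]                # read #1
    author = authors[post["author_id"]]  # read #2 (the "join")
    return {"title": post["title"], "author": author["name"]}

# Denormalized (wide-row style): the author's name is copied into
# every post row, so a page read is a single lookup...
posts_wide = {10: {"title": "Hello", "author_name": "alice"}}

def read_post_denormalized(post_id):
    return posts_wide[post_id]           # one read

# ...but a write such as renaming an author now has to touch every
# post row that duplicates the name.
def rename_author_denormalized(old_name, new_name):
    for row in posts_wide.values():
        if row["author_name"] == old_name:
            row["author_name"] = new_name
```

For a read-heavy webapp, paying the extra write cost once is usually cheaper than paying the join cost on every page view.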
The primary reason, as mentioned, is locality and network I/O. Furthermore, even large companies deal with only a small fraction of the data that a search engine like Google deals with.
Consider the indexes on a standard database - perhaps a couple of fields. Search engines need fast full-text search over large text fields.
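Full-text search is typically served by an inverted index rather than a B-tree on a few columns. A minimal sketch of the idea (toy documents, naive whitespace tokenization - real engines add stemming, ranking, and compression):

```python
# An inverted index maps each word to the set of documents containing
# it, so a multi-word query becomes a cheap set intersection instead
# of a scan over large text fields.
from collections import defaultdict

docs = {
    1: "bigtable scales across many servers",
    2: "relational databases use joins",
    3: "bigtable avoids joins by denormalizing",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(*words):
    # Return the IDs of documents containing ALL query words.
    posting_lists = [index[w] for w in words]
    return set.intersection(*posting_lists) if posting_lists else set()

print(sorted(search("bigtable", "joins")))  # → [3]
```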