I heard my team leader say that on some past projects they had to remove normalization to make queries run faster.
It may have had something to do with table joins.
Is having more lean tables really worse than having a few fat tables?
It depends ... joining tables is inherently slower than reading one big table that's 'pre-joined', i.e. de-normalised. However, by denormalising you will create data duplication, and your tables will be larger. Normalisation is seen as a good thing because it produces databases that can answer 'any' question - if it's done correctly you can build a SELECT to get at your data. That isn't the case in some other kinds of DB, but those are now (mostly) historical irrelevancies; the normalised/relational DB won that fight.
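To make the trade-off concrete, here is a minimal sketch using Python's built-in sqlite3 (the table and column names are invented for illustration): the same data held normalised, where reads need a join, and 'pre-joined', where the customer name is duplicated onto every order row.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Normalised: each fact is stored once; queries need a join.
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (id INTEGER PRIMARY KEY,
                           customer_id INTEGER REFERENCES customer(id),
                           amount REAL);
    INSERT INTO customer VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders   VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 10.0);
""")

# Denormalised: the join is done up front; reads are simpler, but the
# customer name is duplicated, so renaming a customer must touch many rows.
con.executescript("""
    CREATE TABLE orders_flat (id INTEGER PRIMARY KEY,
                              customer_name TEXT,
                              amount REAL);
    INSERT INTO orders_flat
        SELECT o.id, c.name, o.amount
        FROM orders o JOIN customer c ON c.id = o.customer_id;
""")

# The same question asked both ways gives the same answer.
normalised = con.execute("""
    SELECT c.name, SUM(o.amount) FROM orders o
    JOIN customer c ON c.id = o.customer_id
    GROUP BY c.name ORDER BY c.name
""").fetchall()

denormalised = con.execute("""
    SELECT customer_name, SUM(amount) FROM orders_flat
    GROUP BY customer_name ORDER BY customer_name
""").fetchall()

print(normalised)    # both print [('Alice', 65.0), ('Bob', 10.0)]
print(denormalised)
```

On a toy dataset the join costs nothing; the point is only the shape of the two designs, since at scale the flat table trades extra storage and update work for a cheaper read path.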
To answer your question, using de-normalisation to make things go faster is a well-recognised technique. It's usually best to run your DB for a while so you know what to de-normalise and what to leave alone, and it's common to leave the data in its 'correct' normalised form and pull it into a set of de-normalised reporting tables on a regular basis. If that process is done as part of the report run itself, the data is always current too.
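One way that refresh-as-part-of-the-report-run can look, sketched with Python's built-in sqlite3 (the schema and function name are invented; a production system might instead use a scheduled job or a materialised view where the engine supports one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sale    (id INTEGER PRIMARY KEY,
                          product_id INTEGER REFERENCES product(id),
                          qty INTEGER);
    INSERT INTO product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO sale    VALUES (1, 1, 5), (2, 1, 3), (3, 2, 7);

    -- The denormalised reporting table, rebuilt on demand.
    CREATE TABLE sales_report (product_name TEXT, total_qty INTEGER);
""")

def run_report(con):
    """Rebuild the reporting table from the normalised source, then read it.

    Because the rebuild happens as part of the report run, the report is
    always current with respect to the normalised tables.
    """
    con.executescript("""
        DELETE FROM sales_report;
        INSERT INTO sales_report
            SELECT p.name, SUM(s.qty)
            FROM sale s JOIN product p ON p.id = s.product_id
            GROUP BY p.name;
    """)
    return con.execute(
        "SELECT product_name, total_qty FROM sales_report"
        " ORDER BY product_name"
    ).fetchall()

before = run_report(con)
con.execute("INSERT INTO sale VALUES (4, 2, 1)")   # a later sale...
after = run_report(con)
print(before)   # [('gadget', 7), ('widget', 8)]
print(after)    # [('gadget', 8), ('widget', 8)] - picked up automatically
```

The DELETE-and-rebuild approach is the simplest possible refresh; for large sources you would refresh incrementally instead, but the principle - normalised tables as the source of truth, a flat table for reading - is the same.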
To illustrate over-normalisation: I have seen DBs in the past where the days of the week, and the months of the year, were pulled out into separate tables - the dates themselves were normalised - you can go too far.
In a nutshell, database systems concerned mainly with recording transactions (OLTP) are usually structured in a more normalised fashion, minimising data duplication and easing the creation and updating of records at the expense of optimised data retrieval.
Database systems more concerned with data retrieval and analysis (OLAP) are usually structured in a less normalised fashion, sacrificing storage efficiency to maximise querying and analysis speed.
Shaun wrote about this, and a heated discussion followed. It's also the subject of much discussion on SO, e.g. what's the better database design: more tables or more columns. As others have pointed out, use common sense and don't over-normalise.
In my long experience with Oracle OLTP databases, some of them large and busy, I can honestly say I can't remember ever having come across a situation where "denormalisation for performance" was truly needed. I have, however, seen many cases where someone decided up front that denormalisation should be applied out of fear, uncertainty and doubt about potential performance issues. This was usually done without any benchmarking, and almost always I found that no performance improvement was actually achieved - but the data maintenance code became much more complex than it would otherwise have been.
OLAP is a very different animal, and I'm not qualified to comment on it.
This question recurs altogether too often. The reason is that SQL, by far the most popular database language, and all of its most popular implementations, conflate logical table design with physical table design.
The eternal answer is that you should always normalise your logical tables, but the practical answer is complicated by the fact that the only way to implement certain optimisations under existing SQL implementations is to denormalise your physical table design (itself not necessarily a bad thing) - and in those implementations, that requires denormalising your logical table design as well.
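Where the engine supports views, you can partially recover the separation: store the physically denormalised table for fast reads, and expose a normalised logical schema as views over it. A minimal sketch with Python's built-in sqlite3 (all names invented for illustration; note that without triggers or materialised views, writes still have to target the physical table directly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Physical design: one wide, denormalised table for cheap reads.
    CREATE TABLE order_flat (order_id INTEGER PRIMARY KEY,
                             customer_id INTEGER,
                             customer_name TEXT,
                             amount REAL);
    INSERT INTO order_flat VALUES
        (1, 100, 'Alice', 25.0),
        (2, 100, 'Alice', 40.0),
        (3, 200, 'Bob',   10.0);

    -- Logical design: normalised tables recovered as views, so queries
    -- can still be written against a clean, normalised schema.
    CREATE VIEW customer AS
        SELECT DISTINCT customer_id AS id, customer_name AS name
        FROM order_flat;
    CREATE VIEW orders AS
        SELECT order_id AS id, customer_id, amount
        FROM order_flat;
""")

# Written as if the schema were normalised; the planner resolves the views
# back onto the single physical table.
rows = con.execute("""
    SELECT c.name, SUM(o.amount) FROM orders o
    JOIN customer c ON c.id = o.customer_id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)   # [('Alice', 65.0), ('Bob', 10.0)]
```

This only papers over the conflation rather than removing it, which is exactly the complication the answer above describes.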
In a nutshell, it depends. Sometimes denormalisation is important for performance, but as with anything else performance-related you should measure, measure, measure before you even consider going down this route.