I just read that you should create an index on any column you are joining or querying on. If the rule is that simple, why can't databases automatically create the indexes they need?
Well, they do to some extent, at least...
See SQL Server's Database Engine Tuning Advisor, for example.
However, creating optimal indexes isn't as simple as you suggest. An even simpler rule would be to create indexes on every column, and that is nowhere near optimal!
Indexes have a cost. You create indexes at the expense of storage and of insert/update performance, among other things. They must be carefully thought out to be optimal.
This is a good question. A database could create the indexes it needs based on data usage patterns, but that would mean the database was slow the first time certain queries were run and then got faster as time went on. For example, consider a table like this:
ID  USERNAME
--  --------
Here the username would be used to look up users very frequently. Over time the database could observe that, say, 50% of queries do this, in which case it could add an index on the username column.
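The manual version of what such auto-indexing would do can be sketched in SQLite (any RDBMS with an EXPLAIN facility behaves similarly; the index name `idx_users_username` is just for illustration). Before the index is created, the lookup is a full table scan; afterwards, the query plan switches to an index search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
con.executemany("INSERT INTO users (username) VALUES (?)",
                [(f"user{i}",) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute a statement;
    # the human-readable detail is the last column of each plan row
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT id FROM users WHERE username = 'user500'"
print(plan(query))   # a full scan of the users table

con.execute("CREATE INDEX idx_users_username ON users (username)")
print(plan(query))   # now a search using idx_users_username
```

The exact wording of the plan text varies between SQLite versions, but the shift from a scan to an index search is what an automatic indexer would have to detect and act on.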
However, the reason this hasn't been implemented in depth is that it's not a killer feature. Indexes are added relatively rarely, by the DBA, and automating this (which is a very large task) is probably just not worth it for the database vendors. Remember that every query would have to be analyzed to enable auto-indexing, along with its response time and result set size, so it's non-trivial to implement.
Because databases simply store and retrieve data: the database engine has no clue how you intend to retrieve that data until you actually do it, at which point it's too late to create an index. And the column you're joining on may not be suitable for an efficient index anyway.
An RDBMS could easily self-tune and create indices as it saw fit, but this would only work for simple cases with queries whose execution plans are undemanding. Most indices are created to optimize for specific purposes, and those optimizations are better handled manually.
It's a non-trivial problem to solve, and in many cases a sub-optimal automatic solution could actually make things worse. Imagine a database whose reads were sped up by automatic index creation but whose inserts and updates got hosed as a result of the overhead of maintaining the index. Whether that trade-off is good or bad depends on the nature of the database and the application it serves.
If there were a one-size-fits-all solution, databases would already do this (and there are tools that suggest exactly this kind of optimization). But tuning database performance is largely an application-specific task, and it is best done by hand, at least for now.
Every index you add may increase the speed of your queries. It will decrease the speed of your updates, inserts and deletes, and it will increase disk space usage.
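The disk-space half of that trade-off is easy to see directly. A minimal sketch in SQLite (table and index names here are made up for the demo): load the same rows into two copies of a table, one with an index and one without, and compare how many pages each database file occupies.

```python
import sqlite3

def pages_used(with_index):
    # Build a fresh in-memory database, optionally with an index on v,
    # insert identical data, and report the total pages consumed.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    if with_index:
        con.execute("CREATE INDEX idx_t_v ON t (v)")
    con.executemany("INSERT INTO t (v) VALUES (?)",
                    [(f"value-{i:08d}",) for i in range(10000)])
    return con.execute("PRAGMA page_count").fetchone()[0]

plain = pages_used(False)
indexed = pages_used(True)
print(plain, indexed)  # the indexed copy needs noticeably more pages
```

The index stores its own sorted copy of the column values plus row pointers, which is exactly the storage overhead (and the extra write work on every insert) that an automatic indexer would be imposing without asking.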
I, for one, prefer to keep that control to myself, using tools such as DB Visualizer and EXPLAIN statements to provide the information I need to work out what should be done. I do not want a DBMS unilaterally deciding what's best.
It is better, in my opinion, that a truly intelligent entity make decisions about database tuning. The DBMS can suggest all it wants, but the final decision should be left up to the DBAs.
What happens when the database usage patterns change for just one week? Do you really want the DBMS creating indexes and then destroying them a few days later? That sounds like an administration nightmare scenario right up there with Skynet :-)