Suppose I've got a database table with two fields, "foo" and "bar". Neither is unique, but both are indexed. However, instead of being indexed together, each has its own separate index.

Now suppose I execute a query such as SELECT * FROM sometable WHERE foo='hello' AND bar='world'; My table has a large number of rows where foo is 'hello' and a small number of rows where bar is 'world'.
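
For concreteness, here is a minimal sketch of the setup being described (the table and index names are just placeholders):

CREATE TABLE sometable (
  foo VARCHAR2(100),
  bar VARCHAR2(100)
);

-- two separate single-column indexes, not one composite index
CREATE INDEX ix_foo ON sometable (foo);
CREATE INDEX ix_bar ON sometable (bar);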

So the best thing for the database server to do under the hood is use the bar index to find all rows where bar is 'world', then return only those rows where foo is 'hello'. This is O(n) where n is the number of rows where bar is 'world'.

However, I imagine it's possible that the process happens backwards, where the foo index is used and the results are then scanned. That would be O(m) where m is the number of rows where foo is 'hello'.

So is Oracle smart enough to search efficiently here? What about other databases? Or is there some way I can tell it in my query to search in the proper order? Perhaps by putting bar='world' first in the WHERE clause?

Oracle will almost certainly use the most selective index to drive the query, and you can verify that with the explain plan.

Furthermore, Oracle can combine the use of both indexes in a couple of ways -- it can convert btree indexes to bitmaps and perform a bitmap AND operation on them, or it can perform a hash join on the rowids returned by the two indexes.
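
If you want to see whether Oracle will take that bitmap-conversion route for this query, you can nudge it with the INDEX_COMBINE hint and look at the resulting plan (the table and column names are the ones from the question):

SELECT /*+ index_combine(sometable) */ *
FROM   sometable
WHERE  foo = 'hello' AND bar = 'world';

In the explain plan output you would then look for BITMAP CONVERSION FROM ROWIDS and BITMAP AND steps.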

One important consideration here is any correlation between the values being queried. If foo='hello' accounts for 80% of the values in the table and bar='world' accounts for 10%, then Oracle will estimate that the query will return 0.8*0.1 = 8% of the table's rows. But that may not be correct - the query might actually return anywhere from 10% of the rows down to 0% of the rows, depending on how correlated the values are. Now, depending on the distribution of those rows throughout the table, it may not be efficient to use an index to find them. You might still have to access (say) 70% of the table's blocks to retrieve the required rows (google for "clustering factor"), in which case Oracle will perform a full table scan if it gets the estimate right.
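
You can check the clustering factor that Oracle has recorded for each index by querying the data dictionary; a quick sketch, assuming the table is called SOMETABLE as in the question:

select index_name, clustering_factor, num_rows
from   user_indexes
where  table_name = 'SOMETABLE';

A clustering factor close to the number of blocks in the table means the indexed values are well clustered; one close to the number of rows means they are scattered throughout the table, which makes the index much less attractive for retrieving a large fraction of the rows.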

In 11g you can collect multicolumn statistics to help with this situation, I believe. In 9i and 10g you can use dynamic sampling to get a very good estimate of the number of rows to be retrieved.
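
A sketch of how the 11g multicolumn (extended) statistics might be created, assuming the table lives in the current schema:

select dbms_stats.create_extended_stats(user, 'SOMETABLE', '(FOO,BAR)') from dual;

This defines a column group on (foo, bar); the next regular statistics gather on the table will then include statistics for the combination, which gives the optimizer a better cardinality estimate for predicates on both columns.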

To get the execution plan, do this:

explain plan for
SELECT *
FROM   sometable
WHERE  foo='hello' AND bar='world'
/
select * from table(dbms_xplan.display)
/

Contrast that with:

explain plan for
SELECT /*+ dynamic_sampling(4) */
       *
FROM   sometable
WHERE  foo='hello' AND bar='world'
/
select * from table(dbms_xplan.display)
/

Yes, you can give Oracle "hints" within the query. These hints are disguised as comments ("/* HINT */") to the database and are mostly vendor-specific. So a hint for one database won't work on another database.

I would use index hints here, the first hint for the small table. See here.
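
A sketch of what such an index hint could look like, assuming the single-column index on bar is named ix_bar (the name is only a placeholder):

SELECT /*+ index(sometable ix_bar) */ *
FROM   sometable
WHERE  foo = 'hello' AND bar = 'world';

This asks the optimizer to access sometable via that specific index rather than choosing an access path itself.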

However, if you regularly search on these two fields, why not create an index on both of them? I don't have the exact syntax to hand, but it would be something like:

CREATE INDEX IX_BAR_AND_FOO on sometable(bar,foo);

That way data retrieval should be pretty fast. And in case the combination is unique, you just create a unique index, which should be lightning fast.
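
For the unique case it would just be (again, the index name is only illustrative):

CREATE UNIQUE INDEX UX_BAR_AND_FOO ON sometable(bar, foo);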

So is Oracle smart enough to search efficiently here?

The short answer is "probably". There are lots o' very bright people at each of the database vendors working on optimizing the query optimizer, so it's probably doing things you haven't even thought of. And if you keep your statistics up to date, it will probably do even more.
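
Keeping the statistics fresh is what lets the optimizer make these choices well; in Oracle that is typically done with DBMS_STATS, for example:

begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'SOMETABLE');
end;
/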