I've got a query which has appropriate indexes, and the query plan proves it with an estimated subtree cost of circa 1.5. The plan shows an Index Seek, then a Key Lookup - which is acceptable for a query expected to return 1 row out of somewhere between 5 and 20 rows (i.e. the Index Seek should find between 5 and 20 rows, and after 5 - 20 Key Lookups, we should return 1 row).
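For reference, the index shape I have in mind is roughly the following - a sketch only, using the table and column names that appear in the trace further down, not the actual DDL:

-- Hypothetical non-covering index: the Index Seek finds the 5-20 rows for a given GUID,
-- and a Key Lookup against the clustered index then fetches the remaining columns.
CREATE NONCLUSTERED INDEX IX_ApplicationStatus_ApplicationGUID
    ON dbo.ApplicationStatus (ApplicationGUID);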

When run interactively, the query returns quickly. However, DB traces from production (a web application) show runtimes that vary wildly: often the query takes < 100 DB reads and runs quickly... but we're getting a few runs that consume > 170,000 DB reads, with runtimes of up to 60s (more than our timeout value).

What could explain this variation in disk reads? I've tried comparing the queries interactively and capturing Actual Execution Plans from two parallel runs, with filter values taken from fast and slow runs, but interactively these show effectively no difference in the plan used.
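For the interactive comparison, one way to capture logical vs. physical reads for each run is something like this sketch (the table, columns and sample value are taken from the trace in Update 2 below; substitute the values from a slow run for the second pass):

SET STATISTICS IO ON;   -- per-statement logical vs. physical reads
SET STATISTICS TIME ON; -- compile and execution times

select *  -- column list elided here, as in the trace below
from ApplicationStatus
where ApplicationGUID = 'ECEC33BC-3984-4DA4-A445-C43639BF7853'
  and ApplicationStatusCode = 10;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;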

I also tried to identify other queries that might be blocking this one, but I can't see how that would affect the DB reads much... and in any case this query was by far the worst for runtime in my trace logs.

Update: Here is a sample of the plan generated when the query is run interactively:

[screenshot of the Actual Execution Plan]

Please disregard the 'missing index' text. It is true that changes to the current indexes could allow a faster query with fewer lookups, but that is not the issue here (there are already appropriate indexes). This is an Actual Execution Plan, so we see figures like Actual Number of Rows. For instance, on the Index Seek, the actual number of rows is 16, and the I/O cost is 0.003. The I/O cost is the same on the Key Lookup.

Update 2: The results from the trace for this query are:

exec sp_executesql N'select [...column list removed...] from ApplicationStatus where ApplicationGUID = @ApplicationGUID and ApplicationStatusCode = @ApplicationStatusCode;',N'@ApplicationGUID uniqueidentifier,@ApplicationStatusCode bigint',@ApplicationGUID='ECEC33BC-3984-4DA4-A445-C43639BF7853',@ApplicationStatusCode=10

The query is built using the Gentle.Framework SqlBuilder class, which builds parameterised queries like this:

// Build a parameterised SELECT against the ApplicationStatus table
SqlBuilder sb = new SqlBuilder(StatementType.Select, typeof(ApplicationStatus));
sb.AddConstraint(Operator.Equals, "ApplicationGUID", guid);
sb.AddConstraint(Operator.Equals, "ApplicationStatusCode", 10);
SqlStatement stmt = sb.GetStatement(true);
// Execute and materialise the results as ApplicationStatus objects
IList apps = ObjectFactory.GetCollection(typeof(ApplicationStatus), stmt.Execute());
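One extra check (a sketch, assuming the statement text above is what the server actually sees) is to pull the cached plan and its cumulative counters straight from the plan cache, and compare that plan with the interactive one:

-- Sketch: locate the cached plan for the traced statement and its read counters,
-- so it can be compared with the plan captured interactively (parameter sniffing check).
SELECT TOP (10)
       qs.execution_count,
       qs.total_logical_reads,
       qs.max_logical_reads,
       st.text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%from ApplicationStatus where ApplicationGUID%'
ORDER BY qs.max_logical_reads DESC;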

Could the data be being evicted from the cache? That might explain why, with a hot cache (data already in memory), the reads recorded are very low... and then, once the data is no longer in RAM, the reads would increase because it has to be read off disk again.
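One rough way to test that hypothesis (a sketch; the table name is assumed, and the allocation-unit join is simplified) is to check how much of the table is sitting in the buffer pool before and after a slow period:

-- Sketch: approximate count of ApplicationStatus pages currently in the buffer pool.
SELECT COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024.0 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p ON au.container_id = p.hobt_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.ApplicationStatus');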

Just one idea to get things moving.

Run Profiler to see if statistics are being updated around the same time. Or simply to see what else is going on.
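For example, something along these lines (the table name is taken from the trace above) shows when each statistics object on the table was last updated, which you could line up against the slow periods:

-- Sketch: last update time of every statistics object on the table.
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.ApplicationStatus');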

Also, please add the SQL query as well as the client code.

Ideas:

  • It sounds like your "5-20" rows could actually be way over that
  • With a bad plan / parameter sniffing you'd get consistently bad performance (see the sketch after this list)
  • How many writes happen on this table: enough to trigger a statistics update?
  • Could there be some datatype issue? (e.g. concatenating parameters and introducing a datatype conversion)
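To test the parameter sniffing idea, one rough sketch is to replay the traced call with OPTION (RECOMPILE) appended, so the slow parameter values get a freshly compiled plan, and then compare the reads (the column list stays elided, as in the trace):

-- Sketch only: force a recompile so these parameter values cannot reuse a plan
-- that was compiled (sniffed) for very different values.
exec sp_executesql
    N'select /* ...column list removed... */ *
      from ApplicationStatus
      where ApplicationGUID = @ApplicationGUID
        and ApplicationStatusCode = @ApplicationStatusCode
      option (recompile);',
    N'@ApplicationGUID uniqueidentifier, @ApplicationStatusCode bigint',
    @ApplicationGUID = 'ECEC33BC-3984-4DA4-A445-C43639BF7853',
    @ApplicationStatusCode = 10;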