What's generally considered the best method of doing this kind of query?

There is a database holding ten years' worth of laboratory data, and we want to pick out performance data for a number of tests. This query, for instance, will select the number of hours it has taken to perform a test, calculate an average turnaround time, and let us plot a sparkline of average TAT per day.

Say we have 100 test names: is it acceptable, in terms of performance, to iterate over the test names in a loop and fire this query off once per loop? Or is there a more efficient way?

  SELECT Result_Set.Date_Booked_In
  , avg(DATEDIFF('hh',Result_Set.Date_Time_Booked_In,Result_Set.Date_Time_Authorised)) as HrsIn
  , count(Date_Authorised_Index.Date_Authorised) as numbers
  , Date_Authorised_Index.Registration_Number
  , Date_Authorised_Index.Request_Row_ID
  , Date_Authorised_Index.Specimen_Number
  , Result_Set.Authorised_By
  , Result_Set.Namespace
  , Result_Set.Set_Code
  , Result_Set.Date_Time_Authorised
  , Request.Date_Time_Received
  , Request.Location
  FROM Date_Authorised_Index Date_Authorised_Index
  , Result_Set Result_Set
  , Request
  WHERE Date_Authorised_Index.Date_Authorised = Result_Set.Date_Authorised
  AND Date_Authorised_Index.Request_Row_ID = Request.Request_Row_ID
  AND Date_Authorised_Index.Request_Row_ID = Result_Set.Request_Row_ID
  AND (Date_Authorised_Index.Discipline = 'C') AND Result_Set.Set_Code = ?
  GROUP BY Result_Set.Date_Booked_In
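To make the loop-per-test pattern concrete, here is a minimal sketch of it in Python, using an in-memory SQLite database and a heavily simplified one-table stand-in for the real schema (all table and column names here are illustrative, not the actual lab system):

```python
import sqlite3

# Illustrative in-memory stand-in for the lab database; the real schema
# (Result_Set, Date_Authorised_Index, Request) is simplified to one table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE result_set (
        set_code   TEXT,
        booked_in  TEXT,
        hours_in   REAL   -- pre-computed turnaround hours, for brevity
    )
""")
conn.executemany(
    "INSERT INTO result_set VALUES (?, ?, ?)",
    [("K", "2010-01-01", 4.0), ("K", "2010-01-01", 6.0),
     ("NA", "2010-01-01", 2.0)],
)

test_codes = ["K", "NA"]  # in practice this list holds ~100 test names
results = {}
for code in test_codes:
    # One round trip per test name: the pattern the question asks about.
    rows = conn.execute(
        "SELECT booked_in, avg(hours_in) FROM result_set "
        "WHERE set_code = ? GROUP BY booked_in",
        (code,),
    ).fetchall()
    results[code] = rows

print(results["K"])   # [('2010-01-01', 5.0)]
```

Each iteration pays a full round trip and a fresh scan, which is exactly the cost the question is worried about with 100 codes.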

To begin with, I'd rewrite this query so it uses explicit JOIN syntax.
Also, even though MySQL doesn't force you to restate every non-aggregate column in the GROUP BY clause, that doesn't mean leaving them out is a good thing.
Unless Result_Set.Date_Booked_In uniquely identifies a row, you're selecting arbitrary values from a group of rows.
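SQLite (used here purely as a stand-in) behaves like MySQL in permitting bare non-aggregate columns under GROUP BY, so this small sketch shows why that is risky: the bare column's value comes from an arbitrary row of each group.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (grp TEXT, authorised_by TEXT, hours REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("a", "alice", 4.0), ("a", "bob", 6.0)])

# authorised_by is neither aggregated nor in the GROUP BY: the engine is
# free to return it from ANY row of the group, so its value is unpredictable.
grp, who, avg_hours = conn.execute(
    "SELECT grp, authorised_by, avg(hours) FROM t GROUP BY grp"
).fetchone()
print(avg_hours)   # 5.0 -- the aggregate is well defined
print(who)         # 'alice' or 'bob' -- whichever row the engine picked
```

The aggregate is deterministic; the bare column is not, which is why restating the non-aggregate columns (or aggregating them) is the safer habit.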

  SELECT rs.Date_Booked_In
  , avg(DATEDIFF('hh',rs.Date_Time_Booked_In,rs.Date_Time_Authorised)) as HrsIn
  , count(dai.Date_Authorised) as numbers
  , dai.Registration_Number
  , dai.Request_Row_ID
  , dai.Specimen_Number
  , rs.Authorised_By
  , rs.Namespace
  , rs.Set_Code
  , rs.Date_Time_Authorised
  , r.Date_Time_Received
  , r.Location
  FROM Date_Authorised_Index dai
INNER JOIN Result_Set rs ON (dai.Date_Authorised = rs.Date_Authorised
                         AND dai.Request_Row_ID = rs.Request_Row_ID)
INNER JOIN Request r ON (dai.Request_Row_ID = r.Request_Row_ID)
  WHERE (dai.Discipline = 'C') AND rs.Set_Code = ?
  GROUP BY rs.Date_Booked_In

If you want to select all hundred tests at once, just make a new table with the set_codes you want to select and join against that.
Make sure you index the field sc.set_code (or, better yet, make it the primary key).
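A sketch of that lookup-table approach, again using in-memory SQLite and a simplified schema (the Setcodes table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE result_set (set_code TEXT, hours REAL)")
conn.executemany("INSERT INTO result_set VALUES (?, ?)",
                 [("K", 4.0), ("NA", 2.0), ("CA", 9.0)])

# Lookup table of wanted codes; PRIMARY KEY gives us the index for free.
conn.execute("CREATE TABLE setcodes (set_code TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO setcodes VALUES (?)", [("K",), ("NA",)])

# One query covers all wanted tests via the extra join.
rows = conn.execute(
    "SELECT rs.set_code, avg(rs.hours) "
    "FROM result_set rs "
    "JOIN setcodes sc ON sc.set_code = rs.set_code "
    "GROUP BY rs.set_code"
).fetchall()
print(sorted(rows))   # [('K', 4.0), ('NA', 2.0)] -- 'CA' filtered out by the join
```

The join doubles as the filter: only codes present in the lookup table survive, and the whole result comes back in a single round trip.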

SELECT lots_of_columns
FROM table1 as dai
INNER JOIN table2 as rs ON (what you joined on before)
INNER JOIN table3 as r ON (same here)
INNER JOIN Setcodes as sc ON (sc.Set_code = rs.SetCode)  -- extra join
WHERE dai.discipline = 'C'
GROUP BY  rs.Date_Booked_In

Or use an `IN (...)` clause as below, although that will probably be slower than a join.

SELECT lots_of_columns
FROM table1 as dai
INNER JOIN table2 as rs ON (what you joined on before)
INNER JOIN table3 as r ON (same here)
WHERE dai.discipline = 'C' AND rs.Set_Code IN (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
GROUP BY  rs.Date_Booked_In
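Since the number of set codes varies, the placeholder list for the `IN (...)` form is usually built at call time rather than hard-coded. A hedged Python sketch, again on a simplified SQLite stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE result_set (set_code TEXT, hours REAL)")
conn.executemany("INSERT INTO result_set VALUES (?, ?)",
                 [("K", 4.0), ("NA", 2.0), ("CA", 9.0)])

wanted = ["K", "NA"]  # any number of test codes
# One '?' per code keeps the values parameterised (no string splicing
# of user data into the SQL itself).
placeholders = ", ".join("?" for _ in wanted)
sql = ("SELECT set_code, avg(hours) FROM result_set "
       f"WHERE set_code IN ({placeholders}) GROUP BY set_code")
rows = conn.execute(sql, wanted).fetchall()
print(sorted(rows))   # [('K', 4.0), ('NA', 2.0)]
```

Only the placeholder count is interpolated into the SQL text; the code values themselves still travel as bound parameters.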