I've got a table where each row has a start and stop date-time. The spans may be arbitrarily short or long.

I want to query the summed intersection of those rows with a given pair of start and stop date-times.

How would you do that in MySQL?

Or is it necessary to select the rows that intersect the query's start and stop times, then compute each row's overlap and sum it client-side?


To give an example, using milliseconds to make it clearer:

Some rows:

ROW  START  STOP
1    1010   1240
2     950   1040
3    1120   1121

And you want to know the total time these rows fall between 1030 and 1100.

Let's compute the overlap of each row:

ROW  INTERSECTION
1    70
2    10
3     0

So the sum in this example is 80.
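(Each overlap is max(0, min(stop, 1100) - max(start, 1030)); row 1, for instance, gives min(1240, 1100) - max(1010, 1030) = 1100 - 1030 = 70.)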

If your example is indeed meant to say 70 in the first row, then, assuming @range_start and @range_end as the condition parameters:

-- clip each row to the query range and sum the clipped lengths;
-- the WHERE clause keeps only rows that actually overlap the range
SELECT SUM( LEAST(@range_end, stop) - GREATEST(@range_start, start) )
FROM `Table`
WHERE @range_start < stop AND @range_end > start;

Using GREATEST/LEAST and the date functions, you should be able to get what you need directly, operating on the date type.
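For instance, a minimal sketch assuming start and stop are DATETIME columns, whole seconds are good enough, and your_table stands in for the real table name:

SET @range_start = CAST('2009-06-01 10:30:00' AS DATETIME);
SET @range_end   = CAST('2009-06-01 11:00:00' AS DATETIME);

-- TIMESTAMPDIFF(SECOND, a, b) returns b - a in whole seconds,
-- so this sums each row's clipped overlap with the query range
SELECT SUM( TIMESTAMPDIFF(SECOND,
                          GREATEST(@range_start, start),
                          LEAST(@range_end, stop)) ) AS overlap_seconds
FROM your_table
WHERE @range_start < stop AND @range_end > start;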

I'm afraid you're out of luck.

Since you don't know the number of rows you'll be "cumulatively intersecting", you need either a recursive solution or an aggregation operator.

The aggregation operator you'd need is not an option, because SQL lacks the data type it would have to operate on (that type being an interval type, as described in "Temporal Data and the Relational Model").

The recursive solution might be possible, but it is likely to be a challenge to write, hard for other developers to read, and it is also questionable whether the optimizer could turn such a query into an optimal data access strategy.

Or I misinterpreted your question.

There's a fairly interesting solution if you know the maximum time value you'll ever have. Create a table containing every number from one up to that maximum:

millisecond
-----------
1
2
3
...
1240

Call it time_dimension (this approach is often used in dimensional modelling in data warehousing).
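On MySQL 8+, one way to create and fill it is a recursive CTE -- just a sketch, and note that MySQL's default cte_max_recursion_depth is 1000, so it has to be raised to generate 1240 rows:

CREATE TABLE time_dimension (millisecond INT PRIMARY KEY);

SET SESSION cte_max_recursion_depth = 10000;  -- default is 1000

INSERT INTO time_dimension (millisecond)
WITH RECURSIVE ms (millisecond) AS (
  SELECT 1
  UNION ALL
  SELECT millisecond + 1 FROM ms WHERE millisecond < 1240
)
SELECT millisecond FROM ms;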

Then this:

SELECT 
  COUNT(*) 
FROM 
  your_data 
    -- treat each millisecond value m as the tick covering (m-1, m], so
    -- both comparisons are half-open; BETWEEN would overcount by one
    -- tick per row (82 instead of 80 in the example above)
    INNER JOIN time_dimension
      ON time_dimension.millisecond > your_data.start
      AND time_dimension.millisecond <= your_data.stop
WHERE 
  time_dimension.millisecond > 1030
  AND time_dimension.millisecond <= 1100;

...gives you the total number of milliseconds of running time between 1030 and 1100 (80 in the example above).

Of course, whether you can use this technique depends on whether you can safely predict the maximum number of milliseconds that will ever be in your data.

This is often used in data warehousing, and as I said it fits well with certain kinds of problems -- for instance, I have used it for insurance systems, where a total length of time between two dates was needed, and where the overall time range of the data was easy to estimate (from the earliest customer birth date to a date a few years in the future, beyond the end date of any policies being sold).

It may not work for your case, but I figured it was worth sharing as an interesting technique!

After you added the example, it's clear that I did indeed misinterpret your question.

You aren't "cumulatively intersecting rows".

The steps that will take you to a solution are:

Intersect each row's start and end points with the given start and end points. This should be possible using CASE expressions or something of that nature, something in the style of:

SELECT CASE WHEN startdate < givenstartdate THEN givenstartdate ELSE startdate END AS retainedstartdate, (likewise for enddate) AS retainedenddate FROM ... Take care of NULLs and that sort of thing as needed.

With retainedstartdate and retainedenddate, use a date function to compute the length of the retained interval (the overlap of the row with the given time range).

SELECT the SUM() of those.
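Putting those steps together, a minimal sketch in MySQL (the @givenstartdate/@givenenddate parameters, the your_table/startdate/enddate names, and the SECOND unit are all placeholders, not anything from the question):

-- the WHERE clause keeps only rows that overlap the given range,
-- so each clipped interval length is non-negative
SELECT SUM( TIMESTAMPDIFF(SECOND,
         CASE WHEN startdate < @givenstartdate THEN @givenstartdate ELSE startdate END,
         CASE WHEN enddate > @givenenddate THEN @givenenddate ELSE enddate END) ) AS total_overlap
FROM your_table
WHERE startdate < @givenenddate AND enddate > @givenstartdate;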