I am looking for a "best practice" method to handle incoming time series data.
One data point consists, for instance, of time, height, width, etc. for each "tick". Would it be smart to collect n data points in memory in a collection class and then "flush" the results to a database once the collection's limit is reached? Or should the data points be written directly to the database from the start, so that my application can run queries against it?
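The buffer-and-flush idea can be sketched roughly like this (a minimal illustration in Python with SQLite; the class name, column names, and the small limit are made up for the example):

```python
import sqlite3

class PointBuffer:
    """Hypothetical buffer: collects data points in memory and flushes
    them to the database in one batch once the limit is reached."""

    def __init__(self, conn, limit=500):
        self.conn = conn
        self.limit = limit
        self.points = []

    def add(self, t, height, width):
        self.points.append((t, height, width))
        if len(self.points) >= self.limit:
            self.flush()

    def flush(self):
        if not self.points:
            return
        # One batched INSERT instead of one round trip per point.
        self.conn.executemany(
            "INSERT INTO ticks (t, height, width) VALUES (?, ?, ?)",
            self.points,
        )
        self.conn.commit()
        self.points.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (t REAL, height REAL, width REAL)")
buf = PointBuffer(conn, limit=3)
for i in range(7):
    buf.add(i * 0.002, 1.0 + i, 2.0 + i)
buf.flush()  # flush the remainder
count = conn.execute("SELECT COUNT(*) FROM ticks").fetchone()[0]
print(count)  # → 7
```

The trade-off is that points sitting in the buffer are lost on a crash, which is exactly the in-memory-versus-direct-write question above.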
I understand this isn't much detail about my needs, so the question is: how fast is a pure database solution compared to a hybrid in-memory and database solution?
Say there are at most 500 data points per second to handle, and the data has to be processed in some way for every incoming point. With a pure database solution, one has to run a store query on every incoming point. I suppose this isn't efficient, but I don't know whether such a database is able to "listen" and keep up with that.
A nice feature for the database would be to push the results to subscribers. Is that possible with SQL Server?
Putting the "delivering to subscribers" requirement aside, don't fall into the trap of premature optimisation.
I'd try the simplest solution first, which is probably just writing the data into the database as it arrives. Then run stress tests. If the performance isn't up to scratch, find the bottlenecks and optimise them away.
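As a rough idea of what "simplest solution first, then measure" means, here is a tiny throughput check (illustrative only, using Python and an in-memory SQLite database; real numbers depend entirely on your server, schema, and driver):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (t REAL, height REAL, width REAL)")

N = 5000  # ten seconds' worth at 500 points per second
start = time.perf_counter()
for i in range(N):
    # The naive approach: one INSERT per incoming point.
    conn.execute("INSERT INTO ticks VALUES (?, ?, ?)", (i, 1.0, 2.0))
conn.commit()
elapsed = time.perf_counter() - start
count = conn.execute("SELECT COUNT(*) FROM ticks").fetchone()[0]
print(f"{N / elapsed:.0f} inserts/second")
```

If the measured rate comfortably exceeds your 500 points per second, there may be nothing to optimise.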
Turning to the "delivering to subscribers" requirement, this is not really something that relational database platforms are typically designed for (they're much more about storing data and exposing it for on-demand retrieval). A pub-sub type requirement is usually best solved with some kind of message bus. Perhaps have a look at something like NServiceBus.
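For illustration, the pub-sub shape looks roughly like this (a toy in-process sketch in Python; a real system would use a message bus product such as NServiceBus or a broker rather than this):

```python
from collections import defaultdict

class MessageBus:
    """Toy in-process pub-sub bus, for illustration only: subscribers
    register a handler per topic; publish fans the message out."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("ticks", received.append)
bus.publish("ticks", {"t": 0.002, "height": 1.0, "width": 2.0})
print(received)  # → [{'t': 0.002, 'height': 1.0, 'width': 2.0}]
```

A real bus adds the parts that matter here: durable queues, delivery across processes/machines, and retry on failure.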
If it's not multi-user, then buffering data points in memory with a collection class is definitely the winner.
If it's multi-user, then I would choose some kind of shared in-memory data structure on the server side
and persist it to the database periodically.
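A minimal sketch of such a shared structure (Python, illustrative only; the names are made up): producers append under a lock, and a periodic job drains the buffer and writes the batch to the database.

```python
import threading

class SharedPointStore:
    """Hypothetical shared server-side store: multiple producer threads
    append points; a lock keeps the buffer consistent; drain() hands the
    accumulated batch to whatever persists it to the database."""

    def __init__(self):
        self._lock = threading.Lock()
        self._points = []

    def add(self, point):
        with self._lock:
            self._points.append(point)

    def drain(self):
        # Swap the buffer out atomically so producers are never blocked
        # for the duration of the database write.
        with self._lock:
            batch, self._points = self._points, []
        return batch  # caller writes this batch to the database

store = SharedPointStore()
threads = [threading.Thread(target=store.add, args=((i, 1.0, 2.0),))
           for i in range(10)]
for th in threads:
    th.start()
for th in threads:
    th.join()
batch = store.drain()
print(len(batch))  # → 10
```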
The bigger question is how you plan to store this in SQL. I'd queue the data points in memory for some period (1 second?) and then write a single row to the database with a blob column, or an nvarchar column, containing all the data for that second. This means the database will scale further. The row could also contain some summary information about what happened in that second, which you can use when querying the data to reduce the load of your selects... Of course this wouldn't be feasible if you want to run direct queries against the raw data.
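As an illustrative sketch of that layout (Python with SQLite; the table and column names are invented for the example), one row per second carrying summary columns plus the raw payload:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tick_seconds (
    second      INTEGER PRIMARY KEY,
    n_points    INTEGER,   -- summary columns: cheap to query
    min_height  REAL,
    max_height  REAL,
    payload     TEXT       -- all raw points for that second, as JSON
)""")

def flush_second(second, points):
    """Write one row summarising a whole second of data points."""
    heights = [p["height"] for p in points]
    conn.execute(
        "INSERT INTO tick_seconds VALUES (?, ?, ?, ?, ?)",
        (second, len(points), min(heights), max(heights),
         json.dumps(points)),
    )
    conn.commit()

points = [{"t": 0.002 * i, "height": 1.0 + i, "width": 2.0}
          for i in range(500)]
flush_second(0, points)
row = conn.execute(
    "SELECT n_points, min_height, max_height FROM tick_seconds"
).fetchone()
print(row)  # → (500, 1.0, 500.0)
```

Queries that only need the summary never touch the payload; anything that needs the raw points has to deserialise the blob, which is the trade-off noted above.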
Everything depends on what you plan to do with the data...