I'm hoping Laurion Burchall reads this :-)

I need to insert millions of small records as quickly as possible.

Right now I am in a very tight loop where, for each record, I:

a) start a transaction  (JetBeginTransaction)
b) prepare an update (JetPrepareUpdate)
c) set the row's column values (JetSetColumns)
d) commit the transaction (JetCommitTransaction)
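For reference, a minimal sketch of that per-record loop using the native esent.h API (error handling and the instance/session/table setup are elided; `sesid`, `tableid`, and the `setcols` array are assumed to already exist). Note that `JetPrepareUpdate`/`JetSetColumns` must be followed by a `JetUpdate` call to actually insert the row:

```c
#include <windows.h>
#include <esent.h>

/* One record per transaction, as described in steps a)-d) above. */
void insert_one_record(JET_SESID sesid, JET_TABLEID tableid,
                       JET_SETCOLUMN *setcols, unsigned long ncols)
{
    JetBeginTransaction(sesid);                       /* a) */
    JetPrepareUpdate(sesid, tableid, JET_prepInsert); /* b) */
    JetSetColumns(sesid, tableid, setcols, ncols);    /* c) */
    JetUpdate(sesid, tableid, NULL, 0, NULL);         /*    finish the insert */
    JetCommitTransaction(sesid, 0);                   /* d) synchronous log flush */
}
```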

Right now this all runs in a tight loop on a single thread. The target machine has multiple processors, fast disks, and lots of free RAM.

I am looking for ways to improve performance.

As far as transactions go, I did some experiments and ran into trouble where errors were returned when I simply put too much data into a single transaction. I'd like to better understand what's happening there: do I have a bug, or is the size of a transaction capped, and if it is capped, can I raise the cap? I am only looking into this because I am speculating that a bigger transaction gives ESE a chance to do more caching in RAM, minimizing disk flushes. Is that a correct guess?

In general, how do I take advantage of multiple processors, lots of RAM, and nice disks? Do I open the database twice and go from there? I am not quite sure what happens with respect to thread safety and transactions. If I have two handles to the DB, each inside a transaction, will a write on one handle be visible to the second immediately, before the commit, or do I have to commit first?

Any tips are appreciated.

Here are the constraints:

a) I've got millions of records that need to be written into the DB as fast as possible
b) to fully generate the record for insertion there are two searches that need to occur within the same table (seeking keys)
c) This is a rebuild/regeneration of the DB - it either worked, or it didn't.
   If it didn't there is no going back; a fresh rebuild/regeneration is
   needed. I cannot restart mid-process, and without all the data none of
   the data is valuable. READ: having one big transaction is fine if it
   improves perf. I'd like ESE to cache, in RAM, if that helps perf.


For single-threaded performance the most important thing to look at is your transaction model.

If you have tried putting more data into a single transaction and it failed, you probably got a JET_errOutOfVersionStore. ESENT has to track undo information for all operations performed inside a transaction (to enable rollback) and that information is stored in the version store. The default size of the version store is quite small. You can increase it with the JET_paramMaxVerPages system parameter. A value of 1024 (64MB of version store) will enable quite large transactions. I would recommend doing 100-1000 insertions per transaction.
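A sketch of both suggestions together, assuming `insert_record()` wraps the JetPrepareUpdate/JetSetColumns/JetUpdate calls from the question. Raising JET_paramMaxVerPages must happen before JetInit is called on the instance; 1024 version pages at 64KB each gives the 64MB mentioned above:

```c
#include <windows.h>
#include <esent.h>

#define BATCH_SIZE 500  /* within the recommended 100-1000 insertions */

/* Assumed helper: does the prepare/set-columns/update for one record. */
extern void insert_record(JET_SESID sesid, JET_TABLEID tableid);

/* Call at startup, before JetInit(&instance). */
void configure_version_store(JET_INSTANCE *pinstance)
{
    /* 1024 pages * 64KB/page = 64MB of version store. */
    JetSetSystemParameter(pinstance, JET_sesidNil,
                          JET_paramMaxVerPages, 1024, NULL);
}

/* The insert loop, committing every BATCH_SIZE records instead of
   once per record. */
void bulk_insert(JET_SESID sesid, JET_TABLEID tableid, unsigned long nrecords)
{
    unsigned long i;

    JetBeginTransaction(sesid);
    for (i = 0; i < nrecords; i++) {
        insert_record(sesid, tableid);
        if ((i + 1) % BATCH_SIZE == 0) {
            JetCommitTransaction(sesid, 0);
            JetBeginTransaction(sesid);
        }
    }
    JetCommitTransaction(sesid, 0);  /* commit the final partial batch */
}
```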

When you call JetCommitTransaction, ESENT will flush the log to disk, generating a synchronous I/O. To avoid that, pass JET_bitCommitLazyFlush to JetCommitTransaction. Your transactions will still be atomic, just not durable in the case of a crash (things will be fine if you exit normally). It sounds like that should be fine for your use.
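The change is just the grbit on the commit call, shown here side by side:

```c
#include <windows.h>
#include <esent.h>

void commit_lazily(JET_SESID sesid)
{
    /* Durable commit (default): blocks until the log is flushed.
     * JetCommitTransaction(sesid, 0);
     */

    /* Lazy commit: returns without the synchronous log flush.
     * Atomic, but not durable if the process crashes; a normal
     * shutdown (JetTerm) still writes everything out. */
    JetCommitTransaction(sesid, JET_bitCommitLazyFlush);
}
```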

If you are inserting records in ascending order then you may be able to get away with a single-threaded application. If you can change your implementation to do sequential inserts you should -- they are much faster. For random inserts multiple threads can be helpful. To use multiple threads you simply create new sessions (JetBeginSession) and have them open the database (JetOpenDatabase). ESENT uses snapshot isolation (http://en.wikipedia.org/wiki/Snapshot_isolation) so a session cannot see modifications made by other sessions that are uncommitted or that committed after its transaction started. This differs from read-committed, where you can see changes once another session commits. You will need to think about how to divide up the work to deal with this.
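The per-thread setup can be sketched like this. Each worker gets its own session and opens the already-attached database itself; `g_instance`, the database filename, and `do_my_share_of_inserts()` (which should operate on a disjoint partition of the records) are assumptions for illustration:

```c
#include <windows.h>
#include <esent.h>

extern JET_INSTANCE g_instance;  /* assumed: initialized via JetInit */
extern void do_my_share_of_inserts(JET_SESID sesid, JET_DBID dbid);

/* Thread entry point: one session per thread, never shared across
   threads. */
DWORD WINAPI insert_worker(LPVOID arg)
{
    JET_SESID sesid;
    JET_DBID  dbid;
    (void)arg;

    JetBeginSession(g_instance, &sesid, NULL, NULL);
    JetOpenDatabase(sesid, "mydb.edb", NULL, &dbid, 0); /* assumed filename */

    do_my_share_of_inserts(sesid, dbid);

    JetCloseDatabase(sesid, dbid, 0);
    JetEndSession(sesid, 0);
    return 0;
}
```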