I have a large number of records (say around 10 to 100 million) that I want to be able to query.

This is for a research project; the database will be mostly read-only, and I only need one connection at a time. I'd like the queries to be reasonably fast.

Is SQLite a suitable choice for this purpose?

My experience with SQLite is that it can be quite slow on large recordsets, depending on how you structure your queries. If your data is denormalized and you can get away with querying a single table against its primary key, then it is acceptably fast; but if your data is fully normalized and your queries involve several joins, then it can be much slower than a client-server database.
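To make the distinction concrete, here is a minimal sketch using Python's built-in `sqlite3` module. The table and column names (`flat`, `person`, `city`) are made up for illustration; the point is that a primary-key lookup is answered straight from the table's B-tree, while the normalized version has to touch two tables per result.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Denormalized: everything in one table, keyed by an integer primary key.
cur.execute("CREATE TABLE flat (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# Normalized: the same data split across two tables joined by a foreign key.
cur.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, city_id INTEGER)")
cur.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, city TEXT)")

cur.executemany("INSERT INTO flat VALUES (?, ?, ?)",
                [(i, f"name{i}", f"city{i % 100}") for i in range(100_000)])
cur.executemany("INSERT INTO city VALUES (?, ?)",
                [(i, f"city{i}") for i in range(100)])
cur.executemany("INSERT INTO person VALUES (?, ?, ?)",
                [(i, f"name{i}", i % 100) for i in range(100_000)])
con.commit()

# Primary-key lookup: one B-tree search in one table.
flat_row = cur.execute("SELECT city FROM flat WHERE id = ?", (54321,)).fetchone()

# Join: SQLite must look up matching rows in both tables.
join_row = cur.execute("""SELECT city.city FROM person
                          JOIN city ON city.id = person.city_id
                          WHERE person.id = ?""", (54321,)).fetchone()

print(flat_row, join_row)
```

Both queries return the same answer here; the difference only becomes visible as row counts grow and joins multiply, which is exactly the scenario the question is asking about.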

SQLite's principal advantage is its small size and single-file nature, which makes it easy to distribute embedded within an application. As that doesn't appear to be a requirement for you, though, I think you would be better off choosing something else. SQL Server Express is good if you're on Windows; otherwise MySQL or Postgres would be a sensible choice.

As mentioned in the earlier posts, SQLite is an excellent SQL library, but it can run out of gas when the data set gets large. Berkeley DB recently introduced a SQL API that is completely compatible with SQLite. It was added to Berkeley DB in order to offer the best of both worlds to SQLite users -- the ubiquity, simplicity and ease of use of SQLite, with the concurrency, scalability and reliability of Berkeley DB.

The Berkeley DB SQL API is designed to be a drop-in replacement for SQLite applications, especially those that specifically need Berkeley DB features and scalability not available in native SQLite. You can read more about it in the Berkeley DB SQL API documentation.

Disclaimer: I am one of the Product Managers for Berkeley DB, so I am a little biased. But your use case is one of the reasons we worked with Dr. Hipp and the SQLite developers to combine the SQLite API with the Berkeley DB storage manager. It allows SQLite application developers to take their applications into new areas with added capabilities, while remaining compatible with their existing implementation.

Please let us know if you have any questions or if there's anything we can do to help. You'll find an active community of Berkeley DB developers on the OTN Forums.

Best of luck with your project.



SQLite isn't particularly fast once you get into the millions of records. Results will vary depending on what you put in there: the schema, the number of columns, and the indexes.
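Indexes are the biggest lever here. A quick way to see whether SQLite will scan every row or use an index is `EXPLAIN QUERY PLAN`; the sketch below uses a hypothetical `measurements` table, and the exact plan wording varies a little between SQLite versions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE measurements (id INTEGER PRIMARY KEY, sensor TEXT, value REAL)")

# Without an index, filtering on a non-key column forces a full table scan.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM measurements WHERE sensor = 'a'").fetchall()
print(plan_before)  # the detail column reads something like "SCAN measurements"

# With an index on the filtered column, SQLite can search instead of scanning.
cur.execute("CREATE INDEX idx_sensor ON measurements (sensor)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM measurements WHERE sensor = 'a'").fetchall()
print(plan_after)  # now it mentions "USING INDEX idx_sensor"
```

On tens of millions of rows, the difference between those two plans is the difference between milliseconds and minutes, so it's worth checking the plan for each of your typical queries.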

The advantage (particularly in your case) of SQLite is that it is so lightweight that trying it out with some of your data would probably be worth the time and effort. It's very straightforward, and its ideal use case really is single-user access.

I'd say try building it up with a representative amount of data (you can import from a CSV file at the command line, or use one of the many wrappers available). If the speed isn't acceptable, you may have to switch to something with more power, but of course a bit more setup too, like MySQL.
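The command-line route is the sqlite3 shell's `.import` command; the equivalent from a wrapper looks like the sketch below, using Python's `csv` and `sqlite3` modules. The CSV columns (`id`, `name`, `score`) are made up for the example; with a real multi-million-row file you would stream from the file object instead of a string, and `executemany` inside a single transaction keeps the bulk insert fast.

```python
import csv
import io
import sqlite3

# Stand-in for a real CSV file; for a file on disk, use open("data.csv", newline="").
csv_text = "id,name,score\n1,alice,9.5\n2,bob,7.25\n"

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, name TEXT, score REAL)")

reader = csv.reader(io.StringIO(csv_text))
next(reader)  # skip the header row

# One executemany inside one transaction: much faster than row-by-row commits.
cur.executemany("INSERT INTO results VALUES (?, ?, ?)", reader)
con.commit()

count = cur.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(count)  # 2
```

SQLite's column affinity converts the CSV's string values to the declared INTEGER/REAL types on insert, so no explicit casting is needed for well-formed numeric columns.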