I don't mind whether it's NoSQL or SQL based, as long as it uses int indexes (and stores them in RAM for fast searching) so I can find my data with simple queries on criteria like status, or any other common int field. The actual records can be stored on disk.
My first guess would be SQLite, but it slows down when handling many concurrent writes.
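For what it's worth, SQLite's behavior under frequent writes can often be improved with WAL journal mode and batched transactions. A minimal sketch using Python's stdlib `sqlite3` (the table, column names, and batch size here are made up for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database file for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
# WAL mode lets readers proceed while a writer holds the lock,
# which helps SQLite noticeably under frequent small writes.
conn.execute("PRAGMA journal_mode=WAL")
# synchronous=NORMAL is a common durability/speed trade-off with WAL.
conn.execute("PRAGMA synchronous=NORMAL")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, status INTEGER)")
conn.execute("CREATE INDEX idx_status ON items(status)")

with conn:  # one transaction per batch, not one per row
    conn.executemany(
        "INSERT INTO items (status) VALUES (?)",
        [(i % 3,) for i in range(1000)],
    )

count = conn.execute("SELECT COUNT(*) FROM items WHERE status = 0").fetchone()[0]
print(count)  # 334
conn.close()
```

Batching many inserts into one transaction matters as much as WAL here, since SQLite pays the fsync cost per transaction, not per row.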
Second, it must also be able to run in very small amounts of RAM, for VPSes with limited resources. That rules out MongoDB, because it expands to fill all available RAM (well, its disk cache does, really). I also can't use MySQL with InnoDB because it takes about 100MB of RAM just to load, and MyISAM doesn't support ACID.
Update: when I say small databases, I mean databases that use only 8-60MB of RAM. I realize the actual data increases this, but most of my datasets are under 1GB; for the smaller sites that's about 5MB of indexes that would need to be kept in RAM. So an ideal database would use about 30MB when running with a fully indexed dataset of around 1GB. Take this site, for instance: I doubt the entire Stack Overflow site takes more than 1GB to store.
Update: to clarify, an ideal setup would store all data on disk, but keep the column indexes in RAM (they're just ints, after all), holding the pointers to the data on disk. That would avoid two things: 1) keeping unneeded rows in memory, like Redis, and 2) keeping indexes on disk, which slows down searches (SQLite).
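The layout described here — records on disk, an int index in RAM holding pointers to them — can be sketched in a few lines. Everything below (`TinyStore`, the JSON-lines record format, the `status` field) is a hypothetical illustration, not a real library:

```python
import json
import os
import tempfile
from collections import defaultdict

class TinyStore:
    """Append-only records on disk; only an int index lives in RAM."""

    def __init__(self, path):
        self.path = path
        # status value -> byte offsets of matching records on disk
        self.index = defaultdict(list)

    def append(self, record):
        with open(self.path, "ab") as f:
            offset = f.tell()  # "ab" positions at end of file
            f.write((json.dumps(record) + "\n").encode("utf-8"))
        self.index[record["status"]].append(offset)

    def find_by_status(self, status):
        # Filter entirely in RAM; touch the disk only for the hits.
        results = []
        with open(self.path, "rb") as f:
            for off in self.index.get(status, []):
                f.seek(off)
                results.append(json.loads(f.readline()))
        return results

path = os.path.join(tempfile.mkdtemp(), "data.jsonl")
store = TinyStore(path)
store.append({"id": 1, "status": 0, "body": "open"})
store.append({"id": 2, "status": 1, "body": "closed"})
store.append({"id": 3, "status": 0, "body": "open"})
print([r["id"] for r in store.find_by_status(0)])  # [1, 3]
```

The index costs a few bytes per row regardless of record size, which is the property being asked for; a real engine would add durability (a log) and index persistence on top.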
A good example is MySQL, which can be configured to keep only the primary and secondary indexed columns in memory and all other data on disk. However, MySQL either uses 100MB of extra RAM just to load InnoDB, or you forgo ACID compliance and stick with MyISAM, which isn't transaction safe.
Again, the target is systems that are limited in RAM and can't handle more than a couple of MB of cached indexes, but that also need to allow frequent writes/updates of generally small datasets in a safe manner.
Update: apparently finding something that meets all of these needs is a bit much. So, starting with the most important features, let me list them in descending order of importance.
- Low memory usage
- Indexes (or something similar to emulate them)
- Handles concurrent writes
Expanding on #1, it's more important that data can be written than that reads are fast. It also means the amount of RAM should have no bearing on the amount of data that can be stored.
Expanding on #2, ideally (given how small they are) indexes should be stored in RAM, since indexes should just be int values that are compared to filter results before actually going to disk for the data.
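A quick back-of-the-envelope check that int indexes really are this small. The figures below are illustrative, using Python's packed `array` type rather than any particular engine's index format:

```python
from array import array

# One packed C int per row pointer: a million indexed rows cost
# only a few MB of RAM, in line with the budget discussed above.
row_pointers = array("i", range(1_000_000))
size_bytes = row_pointers.itemsize * len(row_pointers)
print(size_bytes)  # 4000000 on platforms where a C int is 4 bytes
```

Real B-tree indexes carry structural overhead on top of the raw keys, but the order of magnitude holds: a few MB of RAM comfortably indexes a ~1GB dataset.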
I'd say you have somewhat contradictory requirements:
In memory = data cache (that is, data and indexes). Covering indexes will require more RAM, yet you want a small RAM footprint.
Write volumes really depend on the underlying IO stack, e.g. write-ahead logging, which some implementations need for ACID.
What would be most important, and what is its priority relative to the rest?