I"m searching to operate PostgreSQL in RAM for performance enhancement. The database is not a lot more than 1GB and should not ever grow to a lot more than 5GB. Could it be worth doing? What are the benchmarks available? Could it be buggy?

My second major concern is: how easy is it to back things up when it's running purely in RAM? Is it just like using RAM as a tier 1 HD, or is it much more complicated?

Whether or not to hold your database in memory depends on its size and performance requirements, as well as how robust you want it to be with respect to writes. I assume you are writing to your database and that you want the data to persist in case of failure.

Personally, I would not bother with this optimisation until I ran into actual performance issues. It just seems risky to me.

If you are doing lots of reads and very few writes, a cache might serve your purpose; many ORMs come with one or more caching mechanisms.

From a performance perspective, clustering across a network to another DBMS that does all the disk writing seems a lot less efficient than simply taking a regular DBMS and tuning it to keep as much in RAM as you want.

It may be worthwhile if your database is I/O bound. If it's CPU-bound, a RAM drive will make no difference.

But first of all, you should make sure that your database is properly tuned; you can get huge performance gains that way without losing any guarantees. Even a RAM-based database will perform badly if it's not properly tuned. See the PostgreSQL wiki on this, mainly shared_buffers, effective_cache_size, checkpoint_*, and default_statistics_target.
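
That tuning happens in postgresql.conf. As a rough sketch, with placeholder values for a machine with around 4GB of RAM (adjust to your hardware and PostgreSQL version, these are not recommendations):

    # postgresql.conf -- illustrative starting points only
    shared_buffers = 1GB                # ~25% of RAM is a common rule of thumb
    effective_cache_size = 3GB          # planner's estimate of shared_buffers + OS cache
    checkpoint_completion_target = 0.9  # spread checkpoint I/O across the interval
    default_statistics_target = 100     # per-column statistics detail for the planner

After editing, reload with SELECT pg_reload_conf(); changing shared_buffers requires a server restart.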

Second, if you want to avoid syncing disk buffers on every commit (as codeka explained in a comment), disable the synchronous_commit configuration option. If your machine loses power, this can lose some of the most recent transactions, but your database will still be 100% consistent. In this mode, RAM is used to buffer all writes, including writes to the transaction log. So with very rare checkpoints and large shared_buffers and wal_buffers, it can actually approach speeds close to those of a RAM drive.
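
A sketch of that configuration, again with placeholder numbers:

    # postgresql.conf -- favour commit speed over durability of the last few transactions
    synchronous_commit = off    # commits return before the WAL is flushed to disk
    wal_buffers = 16MB          # buffer transaction log writes in RAM
    checkpoint_timeout = 30min  # make checkpoints rare

Note that synchronous_commit can also be switched off per session (SET synchronous_commit = off;), so you can limit the risk to workloads that can tolerate losing their most recent commits.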

Hardware can also make a huge difference. 15,000 RPM disks can, in practice, be 3x as fast as cheap drives for database workloads. RAID controllers with a battery-backed cache also make a difference.

If that's still not enough, then it may make sense to consider moving to volatile storage.

Actually... as long as you have enough memory available, your database will already be running fully in RAM. Your filesystem will completely buffer all the data, so it won't make much of a difference.
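
If you don't want to wait for the cache to warm up on its own, you can also pull tables into PostgreSQL's buffer cache explicitly. A minimal sketch, assuming the pg_prewarm contrib module is available (the table name is a placeholder):

    -- load a table into shared_buffers ahead of time
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('my_table');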

But... there is of course always a bit of overhead, so you could still try running everything from a RAM drive.
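
A minimal sketch of that approach, assuming Linux with tmpfs and a disposable cluster (the paths and sizes are placeholders, and everything on the mount vanishes on reboot):

    # create a RAM-backed filesystem and run a fresh cluster from it
    sudo mkdir -p /mnt/pgram
    sudo mount -t tmpfs -o size=6g tmpfs /mnt/pgram
    sudo chown postgres:postgres /mnt/pgram
    sudo -u postgres initdb -D /mnt/pgram/data
    sudo -u postgres pg_ctl -D /mnt/pgram/data start

An alternative is to keep the main cluster on disk and put only a tablespace on the tmpfs mount, but be aware that the cluster will complain about the missing tablespace after a reboot.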

As for backups, that works just like any other database. You can use the normal Postgres dump utilities to back up the system. Or, even better, let it replicate to another server as a backup.
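
For example, with pg_dump's custom format (the database name and paths are placeholders):

    # dump one database to a compressed, custom-format archive...
    pg_dump -Fc mydb > /backup/mydb.dump
    # ...and restore it on another machine
    pg_restore -d mydb /backup/mydb.dump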