There is a Rails project that uses PostgreSQL-specific queries, so it's impractical to use SQLite even just in development mode. The thing is, I'd like to track schema changes in the database and avoid running migrations by hand, and I'd also like to track db-data changes with git, so that I don't have to dump the db and load it on my small machine. So essentially I just want to do 'git pull' and find the application working with the new schema and data.

What are the possible approaches here? The only one that comes to my mind is to use a simple wrapper that takes an SQL query, checks whether it has any db-specific parts, and rewrites it for the development environment, so that we could still use SQLite. What else?

I'm not sure I understand all the ins and outs of your question - especially the comments about using SQLite versus PostgreSQL. If it's to be a multi-DBMS system, then testing with multiple systems is good; if it's to be a single-DBMS system, then dealing with multiple DBMSs is making life pointlessly hard.

Also, you talk about tracking the schema changes in the database... do you mean storing the information about schema changes separately from the DBMS's own system catalog, or do you mean that you want to track database schema changes using something outside the database, such as a VCS?

You also talk about tracking 'DB-data changes', which I take to mean 'the data in the tables in the database'. Again, I'm not clear whether you're thinking of some kind of dump of the data in the database that captures the differences between what was there, say, a day ago and what's there now, or something else.

These ambiguities may be why you didn't get a response for over 4 hours.

When you talk about a 'simple wrapper', you aren't talking about something I'd call simple. It has to parse arbitrary SQL, work out whether any of it is DBMS-specific, and then apply rewrite rules. That's a non-trivial undertaking. Getting the wrapper hooked in at the right places may be non-trivial too - it depends on the set of APIs you're using to access the DBMS, among other things.
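To make the scale of that problem concrete, here is a minimal sketch of such a wrapper in Ruby, using a few hypothetical regex rewrite rules for common PostgreSQL-isms (`ILIKE`, `::type` casts, boolean literals). The rule set and method name are inventions for illustration, not a real library:

```ruby
# Hypothetical rewrite rules: PostgreSQL-specific syntax -> SQLite-compatible SQL.
REWRITE_RULES = [
  [/\bILIKE\b/i, "LIKE"], # SQLite's LIKE is case-insensitive for ASCII by default
  [/::\w+/,      ""],     # strip PostgreSQL cast syntax such as ::text
  [/\bTRUE\b/,   "1"],    # older SQLite has no TRUE/FALSE literals
  [/\bFALSE\b/,  "0"]
].freeze

def rewrite_for_sqlite(sql)
  REWRITE_RULES.reduce(sql) { |s, (pattern, replacement)| s.gsub(pattern, replacement) }
end

puts rewrite_for_sqlite("SELECT * FROM users WHERE name ILIKE '%bob%' AND active = TRUE")
# => SELECT * FROM users WHERE name LIKE '%bob%' AND active = 1
```

A regex pass like this is exactly why the approach is fragile: it cannot tell SQL syntax from the contents of a string literal (`WHERE note = 'a TRUE story'` gets mangled), which is why arbitrary SQL really needs a parser.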

What else?

  • Use the same DBMS in both production and development?
  • Tracking just schema changes is non-trivial. You need to track the essence of the schema (such as table names, column names, etc.) and not the accidents (yeah, I was rereading Brooks' "No Silver Bullet" earlier), such as the TabID (which might vary without the schema being materially different). However, an analysis would tell you whether the schema is different.
  • Tracking the data changes, independently of schema changes, is also non-trivial. Generally, the volume of that data is large. You might be able to cope with a full archive or a full unload or export of the database - but ensuring the data is presented in the same sequence each time may require some care on your part. If you don't ensure the correct sequencing, the VCS will record huge changes merely because of ordering differences.
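The essence-versus-accidents point can be sketched briefly: reduce the schema to a canonical textual form (tables and columns, sorted) before handing it to the VCS, so diffs reflect material changes only. The input hash below is a stand-in for whatever you would actually read from information_schema or Rails' schema reflection:

```ruby
# Canonicalize a schema description: sort tables and columns so the textual
# form is stable regardless of the order the system catalog returns them in.
# The hash format is assumed for illustration.
def canonical_schema(schema)
  schema.keys.sort.map do |table|
    cols = schema[table].sort.map { |name, type| "  #{name} #{type}" }
    (["#{table}:"] + cols).join("\n")
  end.join("\n\n")
end

schema = {
  "users"  => { "id" => "integer", "email" => "text" },
  "orders" => { "id" => "integer", "total" => "numeric" }
}
puts canonical_schema(schema)
```

Internal identifiers like a TabID are deliberately left out of the canonical form, so they can vary without producing a diff.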

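The ordering pitfall with data dumps is also easy to demonstrate: a VCS-friendly export has to impose its own row and column order, because without an ORDER BY the database promises none. A minimal sketch, where the in-memory rows stand in for a real `SELECT *` result and `"id"` is an assumed primary key:

```ruby
# Dump rows deterministically: columns sorted by name, rows sorted by the
# primary key, so re-running the export yields byte-identical output unless
# the data itself changed.
def stable_dump(rows, key: "id")
  headers = rows.first.keys.sort
  lines = [headers.join(",")]
  rows.sort_by { |r| r[key] }.each do |row|
    lines << headers.map { |h| row[h] }.join(",")
  end
  lines.join("\n") + "\n"
end

rows = [
  { "id" => 2, "name" => "beta" },
  { "id" => 1, "name" => "alpha" }
]
puts stable_dump(rows)
# => id,name
#    1,alpha
#    2,beta
```

Without the `sort_by`, two dumps of identical data could differ line-for-line, and the VCS would record a huge spurious change.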
All of the above amounts to the dreaded "it depends" answer. It depends on:

  • Your DBMS
  • Your database size
  • The volatility of your schema
  • The volatility of your data

Fortunately, it only marginally depends on your VCS or platform.