I have a large amount of data available to me, and I want to capture and experiment with data that isn't currently being used in production. I don't want to add it straight into my existing data store, since that would undoubtedly wreak havoc on production. The obvious solution seems to be to make a copy of the production data and integrate it with whatever I want to experiment with (applications accessing this data, etc.), but I'm wondering if there is a better (less costly?) way to do this.
Both isolation and integration matter to me. I'd like to be able to keep lightweight/experimental data assets separate from the high-volume production data, but also be able to integrate them (relatively) easily if the experimental assets prove useful.
I'd rather start with a small slice of the data, create a duplicate of the production environment from it, and then work on the migration part.

If that proves successful in a reasonable amount of time, the same steps can be repeated for the rest.
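To make the "small slice first" idea concrete, here is a minimal sketch of what I have in mind: copy a random sample of a production table into a separate sandbox database, leaving production untouched. I'm using SQLite purely for illustration (my actual store may differ), and names like `copy_sample` are my own, not from any existing tool.

```python
import random
import sqlite3


def copy_sample(prod: sqlite3.Connection, sandbox: sqlite3.Connection,
                table: str, fraction: float, seed: int = 0) -> int:
    """Copy a random sample of rows from `table` in prod into sandbox.

    Production is only read, never written. Returns the number of
    rows copied.
    """
    # Recreate the table's schema in the sandbox database.
    (ddl,) = prod.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
        (table,),
    ).fetchone()
    sandbox.execute(ddl)

    # Sample roughly `fraction` of the rows, reproducibly via `seed`.
    rng = random.Random(seed)
    rows = [r for r in prod.execute(f"SELECT * FROM {table}")
            if rng.random() < fraction]

    if rows:
        placeholders = ",".join("?" * len(rows[0]))
        sandbox.executemany(
            f"INSERT INTO {table} VALUES ({placeholders})", rows)
    sandbox.commit()
    return len(rows)
```

The point of the sandbox being a separate database file (or server) is that experiments can't touch production; if an experimental asset works out, the same table schema makes migrating it back comparatively easy.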