I want to create a history table to track field changes across numerous tables in DB2.
I understand history is generally implemented by copying an entire table's structure and giving it a suffixed name (e.g. user --> user_history), then using a fairly simple trigger to copy the old record into the history table on an UPDATE.
However, for my application this would use an excessive amount of space. It doesn't seem advisable (in my experience, at least) to copy an entire record to another table every time a single field changes. So I thought I could have a generic 'history' table that tracks individual field changes:
CREATE TABLE history (
    history_id BIGINT GENERATED ALWAYS AS IDENTITY,
    record_id INTEGER NOT NULL,
    table_name VARCHAR(32) NOT NULL,
    field_name VARCHAR(64) NOT NULL,
    field_value VARCHAR(1024),
    change_time TIMESTAMP,
    PRIMARY KEY (history_id)
);
OK, so every table that I want to track has a single, auto-generated id field as the primary key, which would be inserted into the 'record_id' field. And the maximum VARCHAR size in the tables is 1024. Obviously, if a non-VARCHAR field changes, it would have to be cast to a VARCHAR before inserting the record into the history table.
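For example, I believe DB2's VARCHAR scalar function handles most of these casts; something like this (a sketch, where login_count is a hypothetical INTEGER column):

```sql
-- Cast a non-VARCHAR value (an INTEGER here) to VARCHAR before
-- inserting it into the generic history table.
INSERT INTO history (record_id, table_name, field_name, field_value, change_time)
VALUES (1, 'user', 'login_count', VARCHAR(42), CURRENT TIMESTAMP);
```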
Now, this may be a completely stupid way of doing things (hey, tell me why if it is), but it seems to me like a good way of tracking changes that need to be written rarely and kept for a long time.
Anyway, I need help writing the trigger that adds records to the history table on an update. Let's take, for example, a hypothetical user table:
CREATE TABLE user (
    user_id INTEGER GENERATED ALWAYS AS IDENTITY,
    username VARCHAR(32) NOT NULL,
    first_name VARCHAR(64) NOT NULL,
    last_name VARCHAR(64) NOT NULL,
    email_address VARCHAR(256) NOT NULL,
    PRIMARY KEY (user_id)
);
So, can anybody help me with a trigger on an update of the user table that inserts the changes into the history table? My guess is that some procedural SQL will be needed to loop through the fields of the old record, compare them with the fields of the new record, and if they don't match, add a new entry to the history table.
It would be preferable to use the same trigger body SQL for every table, regardless of its fields, if that's possible.
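The best I've come up with so far is a per-table trigger that compares each column explicitly (a sketch only; as far as I can tell, a DB2 SQL trigger can't loop over columns generically, so every column is hard-coded):

```sql
-- AFTER UPDATE trigger on user: write one history row per changed column.
CREATE TRIGGER user_history_upd
AFTER UPDATE ON user
REFERENCING OLD AS o NEW AS n
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    IF o.username <> n.username THEN
        INSERT INTO history (record_id, table_name, field_name, field_value, change_time)
        VALUES (o.user_id, 'user', 'username', o.username, CURRENT TIMESTAMP);
    END IF;
    IF o.first_name <> n.first_name THEN
        INSERT INTO history (record_id, table_name, field_name, field_value, change_time)
        VALUES (o.user_id, 'user', 'first_name', o.first_name, CURRENT TIMESTAMP);
    END IF;
    -- ...and so on for last_name and email_address...
END
```

But that means writing (and maintaining) one of these for every tracked table, which is exactly what I'd like to avoid.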
I don't think this is a good idea, as you generate much more overhead per value with a large table where several values change at once. But that depends on your application.
In addition, you should consider the practical value of such a history table. You have to pull a lot of rows together to even get a glimpse of the context of a changed value, and it requires you to code another application that performs this complex history logic for an end user. And for a DB admin it would be cumbersome to restore values from the history.
It might sound a little harsh, but that's not the intent. An experienced programmer in our shop had a similar idea with table journaling. He got it up and running, but it ate disk space like there was no tomorrow.
Just consider what your history table really needs to accomplish.
Have you thought about doing this as a two-step process? Implement a simple trigger that records the original and changed versions of the entire row. Then write a separate program that runs once a day to extract the changed fields as you describe above.
This makes the trigger simpler, safer, and faster, and you have more options for how to implement the post-processing step.
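For example, the first step could be as simple as this (a sketch, assuming a hypothetical user_audit table with the same columns as user plus a row-version flag and a timestamp):

```sql
-- AFTER UPDATE trigger: record the full old and new versions of the row.
-- The daily post-processing job can later diff the OLD/NEW pairs and
-- write only the changed fields to the generic history table.
CREATE TRIGGER user_audit_upd
AFTER UPDATE ON user
REFERENCING OLD AS o NEW AS n
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    INSERT INTO user_audit
    VALUES (o.user_id, o.username, o.first_name, o.last_name,
            o.email_address, 'OLD', CURRENT TIMESTAMP);
    INSERT INTO user_audit
    VALUES (n.user_id, n.username, n.first_name, n.last_name,
            n.email_address, 'NEW', CURRENT TIMESTAMP);
END
```

The trigger then never needs per-column logic, so it stays identical in shape for every table you track.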