I am developing a multi-user application which uses a (PostgreSQL) database to store its data. I wonder: how much logic should I shift into the database?

E.g. a user wants to save data he has just entered. Should the application simply send the data to the database, and the database decides whether the data is valid? Or should the application be the smart part in the chain and check whether the data is OK?

In the last (commercial) project I worked on, the database was very dumb. No constraints, no views, etc.; everything was ruled by the application. I think that's very bad, because whenever a certain table was accessed in the code, the same validity checks were repeated over and over again.

By shifting logic into the database (with functions, triggers and constraints), I think we could save a lot of code in the application (and a lot of potential errors). But I'm afraid that putting too much business logic into the database will come back like a boomerang and at some point become impossible to maintain.
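For illustration, a minimal sketch of the idea (the table, the connection string and the quantity rule are all made up): a single CHECK constraint in the schema replaces the same validity test otherwise repeated at every insert site in the application.

    import psycopg2  # assumes the psycopg2 driver is installed

    conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
    cur = conn.cursor()

    # One CHECK constraint replaces the "quantity must be positive" test
    # that would otherwise be copied into every code path that inserts.
    cur.execute("""
        CREATE TABLE order_items (
            id       serial PRIMARY KEY,
            quantity integer NOT NULL CHECK (quantity > 0)
        )
    """)
    conn.commit()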

Are there some real-life-proven guidelines to follow here?

If you don't need massive distributed scalability (think of companies with as much traffic as Amazon or Facebook, etc.), then the relational database model will probably be sufficient for your performance needs. In that case, using a relational model with primary keys, foreign keys, constraints plus transactions makes it much easier to maintain data integrity, and reduces the amount of reconciliation that needs to be done (and trust me, as soon as you stop using any of these things, you will need reconciliation -- even with them you likely will, due to bugs).
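As a minimal sketch of what this buys you (the table names and connection string are hypothetical): with primary and foreign keys plus NOT NULL and CHECK constraints, the database itself rejects orphaned or incomplete rows, so there is less to reconcile later.

    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE customers (
            id   serial PRIMARY KEY,
            name text NOT NULL
        );
        CREATE TABLE orders (
            id          serial PRIMARY KEY,
            customer_id integer NOT NULL REFERENCES customers (id),
            total       numeric NOT NULL CHECK (total >= 0)
        );
    """)
    conn.commit()

    # Inserting an order with an unknown customer_id now raises an
    # IntegrityError instead of silently leaving an orphaned row behind.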

However, most validation code is much easier to write in languages like C#, Java or Python than in a language like SQL, because that's the kind of thing they are designed for. This includes things like validating the formats of strings, dependencies between fields, etc. So I'd tend to do that in 'normal' code rather than in the database.
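A sketch of the kind of checks meant here, in Python (the rules themselves are invented examples): format validation and cross-field dependencies are one-liners in application code but awkward to express in SQL.

    import re

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def validate(form: dict) -> list[str]:
        errors = []
        if not EMAIL_RE.match(form.get("email", "")):
            errors.append("email: not a plausible address")
        # A dependency between fields (hypothetical business rule):
        # a discount code only applies to totals of 50 or more.
        if form.get("discount_code") and form.get("total", 0) < 50:
            errors.append("discount_code: requires a total >= 50")
        return errors

    print(validate({"email": "not-an-address", "total": 10}))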

This means the practical solution (and certainly the one we use) is to write each piece of code where it makes sense. Let the database handle data integrity, because that's what it's good at, and let the 'normal' code handle data validity, because that's what it's good at. There are a whole load of cases where this doesn't hold true, and where it makes sense to do things in different places, so you just have to be pragmatic and weigh it up on a case-by-case basis.

I find that you need to validate in both the front end (either the GUI client, if you have one, or the server) and the database.

The database can easily assert for nulls, foreign key constraints, etc., i.e. that the data is the right shape and linked up properly. Transactions will enforce atomic writes of this. It's the database's responsibility to contain/return data in the right shape.
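A minimal sketch of the transactional part (assuming hypothetical customers and orders tables with the usual key and check constraints): with psycopg2, using the connection as a context manager makes the two inserts atomic, so if any constraint fires, both are rolled back together.

    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN

    try:
        # "with conn" wraps everything in one transaction: the customer
        # and the order are both written, or neither is.
        with conn:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO customers (name) VALUES (%s) RETURNING id",
                    ("Alice",))
                customer_id = cur.fetchone()[0]
                cur.execute(
                    "INSERT INTO orders (customer_id, total) VALUES (%s, %s)",
                    (customer_id, 42))
    except psycopg2.IntegrityError as exc:
        print("rolled back:", exc)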

The server can perform more complex validations (e.g. does this look like an email address, does this look like a postcode, etc.) and then re-structure the input for insertion into the database (e.g. normalise it and create the appropriate entities for insertion into the tables).
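For example (the postcode rule and the field names are made up), the server might clean up and re-shape a flat form submission before it ever touches the tables:

    import re

    POSTCODE_RE = re.compile(r"^[0-9]{5}$")  # invented, US-style rule

    def normalise_postcode(raw: str) -> str:
        """Server-side cleanup before the value reaches the database."""
        code = raw.strip().replace(" ", "")
        if not POSTCODE_RE.match(code):
            raise ValueError(f"not a valid postcode: {raw!r}")
        return code

    def split_form(form: dict) -> tuple[dict, dict]:
        # Re-structure one flat input into the entities the tables expect.
        person = {"name": form["name"].strip()}
        address = {"street": form["street"].strip(),
                   "postcode": normalise_postcode(form["postcode"])}
        return person, address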

Where you put the emphasis on validation depends to some extent on your application. E.g. it's useful to validate a (say) postcode in a GUI client and give immediate feedback, but if your database is used by other applications (e.g. an application to bulk-load addresses), then your layer around the database needs to validate as well. Sometimes you end up providing validation in two different implementations (e.g. in the above, perhaps a Javascript front end and a Java DAO back end). I've never found a good strategic solution to this.
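One partial mitigation (my sketch, not a full answer; the pattern and the addresses table are invented) is to keep the rule in a single constant and generate the database constraint from it, so at least the server-side and database-side implementations cannot drift apart:

    import re

    POSTCODE_PATTERN = r"^[0-9]{5}$"  # single source of truth
    POSTCODE_RE = re.compile(POSTCODE_PATTERN)

    def server_side_valid(postcode: str) -> bool:
        return POSTCODE_RE.match(postcode) is not None

    # The same pattern, baked into the schema as a CHECK constraint
    # (PostgreSQL's ~ operator does POSIX regex matching).
    DDL = ("ALTER TABLE addresses ADD CONSTRAINT postcode_format "
           "CHECK (postcode ~ '" + POSTCODE_PATTERN + "')")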

Two cents: if you choose a smart database, remember not to stray into "too smart" territory. The database should not have to deal with inconsistencies that are inappropriate for its level of knowledge about the data.

Example: suppose you want to insert a valid (i.e. checked with a confirmation mail) email address into a field. The database could check whether the address actually conforms to a given regular expression, but asking the database to check whether the address is really valid (e.g. checking whether the domain exists, sending the mail and handling the response) would be a bit too much.
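A sketch of where the line might sit (the users table and the pattern are made up): the shape check lives in the schema, while deliverability stays with the application.

    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
    cur = conn.cursor()

    # The database can reasonably check the *shape* of the address...
    cur.execute(r"""
        ALTER TABLE users
            ADD CONSTRAINT email_format
            CHECK (email ~* '^[^@\s]+@[^@\s]+\.[^@\s]+$')
    """)
    conn.commit()

    # ...but whether the domain exists, or whether the confirmation mail
    # was ever answered, is application territory -- e.g. a "verified"
    # flag that only the mail-handling code is allowed to set.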

It's not meant to be a real-case example; it's just there to show that a smart database has limits to its smartness anyway: if a nonexistent email address gets into it, the data is still not valid, but for the database it's fine. As in the OSI model, everything should handle data at its own level of understanding: Ethernet does not care whether it's transporting ICMP or TCP, or whether they are valid or not.