Many web applications with a 3-tier architecture do all their processing in the application server and use the database only for persistence, just to have database independence. After paying a large amount for a database, doing all the processing (including batch) in the application server and never using the power of the database seems like a waste. I'm having difficulty convincing people that we should use the best of both worlds.
What "power" of the database are you not using in a 3-tier architecture? Presumably you exploit SQL fully, along with all the data-management, paging, caching, indexing, query-optimisation and locking capabilities.
I'd guess that the argument is about where what we might call "business logic" should be implemented: in the application server, or in database stored procedures.
I see two reasons for putting it in the application server:
1). Scalability. It's comparatively hard to scale up database engines when the DB gets too busy. Partitioning data across multiple databases is really tricky. So instead, pull the business logic up into the application server tier; now we can have many application server instances all executing business logic.
2). Maintainability. In principle, stored procedure code can be well written, modularised and reusable. In practice it seems much easier to write maintainable code in an OO language such as C# or Java. For whatever reason, reuse within stored procedures seems to happen by cut and paste, so over time the business logic becomes hard to maintain. I'd concede that with discipline this need not happen, but discipline seems to be in short supply right now.
We do need to make sure we truly exploit the database's query capabilities fully, for instance avoiding pulling large amounts of data across to the application server tier.
It depends on your application. You should set things up so that your database does the things databases are good at. An eight-table join across hundreds of millions of records isn't something you want to handle in your application tier. Nor is performing aggregate operations on millions of rows to emit small bits of summary information.
On the other hand, if you're just doing a lot of CRUD, you're not losing much by treating that big expensive database as a dumb repository. But simple data models that lend themselves to application-focused "processing" sometimes end up leading you down the road to creeping unforeseen issues. Design knots. You're processing recordsets in the application tier. Looking things up in ways that begin to approximate SQL joins. Eventually you painfully refactor these things back into the database tier, where they run orders of magnitude more efficiently...
So, it depends.
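The aggregate point above can be sketched in a few lines. This is a minimal, self-contained illustration using SQLite with a hypothetical `orders` table (the table and column names are made up for the example); the contrast is between one `GROUP BY` query in the database tier and pulling every row across to aggregate in the application tier.

```python
import sqlite3

# Hypothetical orders table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, 10.0), (1, 25.0), (2, 40.0), (2, 5.0), (2, 5.0)],
)

# Database tier: one aggregate query; only the summary rows cross the wire.
summary = conn.execute(
    "SELECT customer_id, SUM(amount) FROM orders "
    "GROUP BY customer_id ORDER BY customer_id"
).fetchall()

# Application tier: every row is pulled across, then aggregated by hand.
totals = {}
for customer_id, amount in conn.execute("SELECT customer_id, amount FROM orders"):
    totals[customer_id] = totals.get(customer_id, 0.0) + amount

print(summary)                  # [(1, 35.0), (2, 50.0)]
print(sorted(totals.items()))   # [(1, 35.0), (2, 50.0)]
```

With five rows both look the same; with millions of rows, the second version ships the whole table over the network just to throw most of it away.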
I've seen one application designed (by a pretty smart guy) with tables of the form:
id | one or two other indexed columns | big_chunk_of_serialised_data
Access to that from within the application is easy: there are methods that will load one (or a set) of objects, deserialising as necessary. And there are methods that will serialise an object into the database.
But inevitably (only in hindsight, sadly), there turn out to be plenty of cases where you want to query the DB in some way outside that application! This gets worked around in various ways: an ad-hoc query interface in the application (which adds several layers of indirection to getting at the data); reuse of certain parts of the application code; hand-written deserialisation code (sometimes in other languages); and simply having to do without the fields that live inside the serialised chunk.
I can readily imagine the same thing happening for any application: it's just handy to be able to access your data. Consequently I think I'd be pretty averse to storing serialised data in a real DB -- with possible exceptions where the saving outweighs the increase in complexity (one example being storing an array of 32-bit ints).
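The trade-off described above can be sketched concretely. This is an illustrative toy, not the original application's code: a hypothetical `things` table with one indexed column plus a serialised JSON chunk, where loading by id is trivial but any query on a field inside the chunk forces a full scan and deserialisation in the application tier.

```python
import json
import sqlite3

# id | indexed column | big chunk of serialised data (names are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE things (id INTEGER PRIMARY KEY, kind TEXT, data TEXT)")

def save(obj):
    # Serialise the whole object into a single column.
    conn.execute(
        "INSERT INTO things (id, kind, data) VALUES (?, ?, ?)",
        (obj["id"], obj["kind"], json.dumps(obj)),
    )

def load(obj_id):
    row = conn.execute("SELECT data FROM things WHERE id = ?", (obj_id,)).fetchone()
    return json.loads(row[0])

save({"id": 1, "kind": "widget", "colour": "red"})
save({"id": 2, "kind": "widget", "colour": "blue"})

# Easy from inside the application:
print(load(1)["colour"])  # red

# But "find all the red things" can't be an indexed SQL query: every
# row's blob must be pulled back and deserialised in the application.
reds = [obj for (d,) in conn.execute("SELECT data FROM things")
        if (obj := json.loads(d))["colour"] == "red"]
```

The `colour` field is invisible to SQL, which is exactly why the workarounds listed above (ad-hoc query interfaces, hand-written deserialisers) start to appear.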
No. They should be used for business-rule enforcement too.
Alas, the DBMS big dogs are either not competent enough or not willing to support this, making this ideal impossible and keeping their customers hostage to their major cash cows.
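Databases do already support some declarative rule enforcement, even if not as much as the answer above would like. A minimal sketch, using a SQLite `CHECK` constraint on a hypothetical `accounts` table: the rule lives in the schema, so it is enforced no matter which application writes the row.

```python
import sqlite3

# Hypothetical table; the "no negative balances" rule is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts ("
    " id INTEGER PRIMARY KEY,"
    " balance REAL NOT NULL CHECK (balance >= 0)"
    ")"
)

conn.execute("INSERT INTO accounts VALUES (1, 100.0)")

# The database itself rejects the rule-violating row.
try:
    conn.execute("INSERT INTO accounts VALUES (2, -50.0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Constraints and triggers cover simple invariants like this; the complaint above is about richer business rules that still have to live in application code.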