I want some input regarding how to design a database layer.

In my application I have a listing of T. The data in T comes from multiple database tables.

There are obviously multiple ways to do this. Two approaches I'm thinking about are:

chatty database layer, but cacheable:

List<SomeX> list = new List<SomeX>();
foreach(...) {
    list.Add(new SomeX() {
        prop1 = dataRow["someId1"],
        // one lookup per row: either a cache hit or another database round trip
        prop2 = GetSomeValueFromCacheOrDb(dataRow["someId2"])
    });
}

The issue I see with the above is that if we want a listing of 500 products, it could make 500 database requests, with all the network latency that brings. Another issue is that a customer might have been deleted after we fetched the list from the database but before we try to get their value from the cache/db, which means we'll get null problems that we have to handle by hand. The good thing is that it's highly cacheable.
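For reference, a minimal sketch of what GetSomeValueFromCacheOrDb could look like; the Dictionary cache and the LoadValueFromDb helper are placeholders I've assumed, not real code:

private static readonly Dictionary<int, string> Cache = new Dictionary<int, string>();

private static string GetSomeValueFromCacheOrDb(int someId2)
{
    string value;
    if (Cache.TryGetValue(someId2, out value))
        return value;                      // cache hit, no round trip

    value = LoadValueFromDb(someId2);      // hypothetical single-row query, one round trip per miss
    if (value == null)
        return null;                       // the row was deleted in the meantime; caller must handle this

    Cache[someId2] = value;                // highly cacheable, but can go stale
    return value;
}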

not chatty, but not cacheable:

List<SomeX> list = new List<SomeX>();
foreach(...) {
    list.Add(new SomeX() {
        prop1 = dataRow["someId1"],
        // value comes straight from the joined result set, no extra lookup
        prop2 = dataRow["someValue"]
    });
}

The issue I see with the above is that it's hard to cache, since potentially every customer has a unique list. Another problem is that it takes a lot of joins, which could result in many reads from the database. The good thing is that we know for sure the data is there after the query has run (inner join etc.).
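Just to illustrate the kind of single joined query that would feed the loop above; the table names, column names, and connectionString are made up for the example:

using System.Data;
using System.Data.SqlClient;

const string sql = @"
    SELECT p.SomeId1   AS someId1,
           c.SomeValue  AS someValue
    FROM   Products p
    INNER JOIN Customers c ON c.CustomerId = p.CustomerId";

using (var connection = new SqlConnection(connectionString))
using (var adapter = new SqlDataAdapter(sql, connection))
{
    var table = new DataTable();
    adapter.Fill(table);   // one round trip; every row already contains someValue
}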

not so chatty, but still cacheable:

Another option would be to first loop over the data rows, collect all the someId2 values that are needed, and then make one additional database request to fetch all of the SomeId2 values.
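A rough sketch of that third option, assuming table is the DataTable with the listing rows already loaded and LoadValuesFromDb is a hypothetical helper that fetches all the collected ids in one query (e.g. WHERE SomeId2 IN (...)):

// first pass: collect the distinct ids we still need
var ids = new HashSet<int>();
foreach (DataRow dataRow in table.Rows)
    ids.Add((int)dataRow["someId2"]);

// one extra round trip for all the missing values at once
Dictionary<int, string> values = LoadValuesFromDb(ids);

// second pass: build the listing from the row plus the batched lookup
var list = new List<SomeX>();
foreach (DataRow dataRow in table.Rows)
{
    string value;
    values.TryGetValue((int)dataRow["someId2"], out value);   // value stays null if the row was deleted
    list.Add(new SomeX
    {
        prop1 = dataRow["someId1"],
        prop2 = value
    });
}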

"The issue which i see using the above is when we would like a listing of 500 products, it might potentially make 500 database demands. With the network latency which."

True. It may also create unnecessary contention and consume server resources maintaining locks while you iterate over the query results.

"One other issue would be that the customers might have been erased as we got their email list in the database before we are attempting to have it from cache/db, meaning we'll have null-problems."

Now take that quote together with this one:

"The positive thing is the fact that it's highly cacheable."

That isn't true, because you've cached stale data. So strike the only advantage listed so far.

But to directly answer your question: the best design, which appears to be what you're asking about, is to use the database for what it's good at, namely enforcing ACID compliance and other constraints, most particularly PKs and FKs, but also returning aggregated results to cut down on round trips and wasted cycles on the application side.
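To make that concrete, fetching the whole listing in one round trip could look roughly like this; dbo.GetProductListing, @customerId, and the column names are invented for the sketch, whether the SQL behind it lives in a sproc or in the application:

using System.Data;
using System.Data.SqlClient;

var list = new List<SomeX>();

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetProductListing", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@customerId", customerId);

    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // the query returns the already-joined, already-aggregated rows
            list.Add(new SomeX
            {
                prop1 = reader["someId1"],
                prop2 = reader["someValue"]
            });
        }
    }
}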

That means you either put SQL in your application code, which has been ruled Infinitely Bad Taste by the Code Thought Police, or go with sprocs. Either one works. Putting the code in the application makes it more maintainable, but you won't be invited to the more elegant OOP parties.