Inside my company, there is a legacy database with assorted tables and, consequently, many, many fields. Many of the fields appear to have large limits (e.g. NVARCHAR(MAX)) that are never reached. Does arbitrarily making fields their maximum width, or two to three times larger than what is typically stored, adversely affect performance? How should one balance performance against field sizing? Is it possible to balance the two?

There are two parts to this question:

Does using NVARCHAR over VARCHAR hurt performance? Yes. Storing data in Unicode fields doubles the storage requirement: the data stored in those fields is twice the size it needs to be (at least until SQL Server 2008 R2 came out, which includes Unicode compression). Your table scans will take twice as long, and only half as much data can be held in memory in the buffer cache.
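The doubling is easy to verify with `DATALENGTH`, which returns the number of bytes a value occupies. A minimal sketch (runs in any scratch database; the variable names are illustrative):

```sql
DECLARE @v VARCHAR(100)  = 'hello world';
DECLARE @n NVARCHAR(100) = N'hello world';

-- DATALENGTH returns bytes, not characters:
-- the VARCHAR value takes 11 bytes, the NVARCHAR value 22.
SELECT DATALENGTH(@v) AS varchar_bytes,
       DATALENGTH(@n) AS nvarchar_bytes;
```

The same 2x ratio applies on disk and in the buffer cache for every row stored in an `N`-prefixed column.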

Does using MAX hurt performance? Not directly, but when you use VARCHAR(MAX), NVARCHAR(MAX), and those kinds of fields, and you need to index the table, you won't be able to rebuild those indexes online in SQL Server 2005/2008/R2. (Denali brings some improvements for tables with MAX fields, so some of those indexes can be rebuilt online.)
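A sketch of the restriction, assuming a hypothetical `dbo.Notes` table on SQL Server 2005/2008/R2 (table and column names are mine, not from the question):

```sql
CREATE TABLE dbo.Notes
(
    NoteID INT IDENTITY PRIMARY KEY,
    Body   NVARCHAR(MAX) NOT NULL
);

-- The clustered index implicitly contains every column, including the
-- MAX column, so on 2005/2008/R2 the online rebuild is rejected with
-- an error saying the index contains a LOB data type:
ALTER INDEX ALL ON dbo.Notes REBUILD WITH (ONLINE = ON);

-- The offline rebuild succeeds, but blocks access to the table
-- for the duration:
ALTER INDEX ALL ON dbo.Notes REBUILD WITH (ONLINE = OFF);
```

In practice this means maintenance windows get harder to schedule once MAX columns creep into large, indexed tables.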

Also, the query optimizer estimates how many rows fit in a page based on declared column widths; if you have lots of varchar fields that are declared much larger than necessary, SQL Server can internally estimate the wrong number of rows.
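One way to see this, as a sketch (table names are hypothetical): SQL Server generally estimates a variable-length column at roughly half its declared maximum, so two tables holding identical data but declared with different widths get very different row-size estimates and memory grants.

```sql
-- Identical data, different declared widths.
CREATE TABLE dbo.Narrow (Val VARCHAR(100));
CREATE TABLE dbo.Wide   (Val VARCHAR(8000));

INSERT INTO dbo.Narrow VALUES (REPLICATE('x', 50));
INSERT INTO dbo.Wide   VALUES (REPLICATE('x', 50));

-- Run each with the actual execution plan enabled and compare the
-- Estimated Row Size on the Sort operator: the Wide query is sized
-- as if each row were thousands of bytes, the Narrow one as tens,
-- so the Wide sort reserves a far larger memory grant.
SELECT Val FROM dbo.Narrow ORDER BY Val;
SELECT Val FROM dbo.Wide   ORDER BY Val;
```

Oversized grants waste workspace memory and can throttle concurrency, which is the hidden cost of "just make everything MAX."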

For performance, the answer is no in most cases. I believe you should evaluate performance at the application level: collect the data, gather the requirements, then do some analysis. The bottleneck might be caused by application code, SQL, or schema design.