A fellow programmer of mine has a strange requirement from his team leader: he insists on creating varchar fields with a length of 16*2^n.

What is the reason for such a restriction?

I can guess that short strings (under 128 chars, for instance) are stored directly within the table record, and from this point of view the restriction would help to align fields within the record, while longer strings are stored in the database "heap" and only a reference to the string is kept in the table record.

Is that actually the case?

Does this requirement have a reasonable background?

BTW, the DBMS is MS SQL Server 2008.

A completely pointless restriction as far as I can tell. Assuming the standard FixedVar format (as opposed to the formats used with row/page compression or sparse columns), and assuming you are talking about varchar(1-8000) columns:

All varchar data is stored at the end of the row in a variable-length section (or in off-row pages if it can't fit in row). The amount of space it consumes in that section (and whether it ends up off row) is determined entirely by the length of the actual data, not by the column declaration.
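
To see that concretely, here is a minimal sketch (the table and column names are invented) storing the same value in a narrow and a wide varchar column; both report the same number of bytes:

    -- The bytes consumed by a varchar value depend on the data, not the declaration.
    CREATE TABLE dbo.VarcharDemo
    (
        Narrow varchar(32)   NULL,
        Wide   varchar(8000) NULL
    );

    INSERT INTO dbo.VarcharDemo (Narrow, Wide)
    VALUES ('hello', 'hello');

    -- Both columns report 5 bytes for the same 5-character value.
    SELECT DATALENGTH(Narrow) AS NarrowBytes,
           DATALENGTH(Wide)   AS WideBytes
    FROM dbo.VarcharDemo;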

SQL Server does use the length declared in the column definition when allocating memory (e.g. for sort operations). The assumption it makes in that case is that varchar columns will be filled to 50% of their declared size on average, so this is a much better thing to think about when choosing a size.
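
If you want to observe that effect, a rough sketch along these lines (all object names are made up) should do it: sorting the same 5-character strings from a varchar(4000) column requests a much larger memory grant than from a varchar(50) column, because the estimate is based on half the declared length:

    CREATE TABLE dbo.GrantNarrow (Val varchar(50));
    CREATE TABLE dbo.GrantWide   (Val varchar(4000));

    -- Generate identical sample rows in both tables.
    INSERT INTO dbo.GrantNarrow (Val)
    SELECT TOP (100000) 'abcde'
    FROM sys.all_columns a CROSS JOIN sys.all_columns b;

    INSERT INTO dbo.GrantWide (Val)
    SELECT Val FROM dbo.GrantNarrow;

    -- Compare the Memory Grant property of the actual execution plans
    -- (or check sys.dm_exec_query_memory_grants while these run).
    SELECT Val FROM dbo.GrantNarrow ORDER BY Val;
    SELECT Val FROM dbo.GrantWide   ORDER BY Val;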

It is always best to declare columns at the size that fits the data being stored. It's part of how the database maintains integrity. For instance, suppose you're storing email addresses. If your column size is the size of the maximum allowable email address, then you won't be able to store bad data that is larger than that. That's a good thing. Some people want to make everything nvarchar(max) or varchar(max); however, that just causes problems, indexing problems among them.
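
As a quick illustration of the indexing problem (table and column names are hypothetical), a reasonably sized column can be a key column in an index while a varchar(max) column cannot:

    CREATE TABLE dbo.Subscriber
    (
        EmailOk  varchar(254) NOT NULL,
        EmailMax varchar(max) NOT NULL
    );

    -- Succeeds.
    CREATE INDEX IX_Subscriber_EmailOk ON dbo.Subscriber (EmailOk);

    -- Fails: varchar(max) is invalid for use as a key column in an index.
    CREATE INDEX IX_Subscriber_EmailMax ON dbo.Subscriber (EmailMax);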

Personally, I'd go to the person who made this requirement and ask for the reasoning. Then I would present my reasons why it may not be a good idea. I would never just blindly implement something like this. Before pushing back on a requirement like this, I'd first do some research into how SQL Server organizes data on disk, so I could show the impact the requirement is likely to have on performance. I might even be surprised to find out that the requirement makes sense, but I doubt it at this point.

I have come across this practice before, but after researching it a little I don't think there is a practical reason for making varchar lengths multiples of 16. I believe this requirement probably comes from trying to optimize the space used on each page. In SQL Server, pages are fixed at 8 KB each. Rows are stored in pages, so perhaps the thinking is that you could conserve space on the pages if the size of each row divided evenly into 8 KB (a more detailed description of how SQL Server stores data can be found here). However, since the amount of space used by a varchar field is determined by its actual content, I don't see how using lengths in multiples of 16, or any other scheme, will help you optimize the amount of space used by each row on the page. The length of the varchar fields should simply be set to whatever the business requirements dictate.
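
If you want to verify that for yourself, a small sketch along these lines (names invented) shows that average record size and page count are driven by the stored content, so a declaration padded to a multiple of 16 packs into 8 KB pages exactly like any other:

    CREATE TABLE dbo.PageDemo16  (Id int IDENTITY PRIMARY KEY, Val varchar(32));
    CREATE TABLE dbo.PageDemoOdd (Id int IDENTITY PRIMARY KEY, Val varchar(37));

    -- Identical content in both tables.
    INSERT INTO dbo.PageDemo16 (Val)
    SELECT TOP (10000) 'twelve chars'
    FROM sys.all_columns a CROSS JOIN sys.all_columns b;

    INSERT INTO dbo.PageDemoOdd (Val)
    SELECT Val FROM dbo.PageDemo16;

    -- avg_record_size_in_bytes and page_count come out the same for both tables.
    SELECT OBJECT_NAME(s.object_id) AS TableName,
           s.avg_record_size_in_bytes,
           s.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.PageDemo16'), NULL, NULL, 'DETAILED') AS s
    WHERE s.index_level = 0
    UNION ALL
    SELECT OBJECT_NAME(s.object_id),
           s.avg_record_size_in_bytes,
           s.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.PageDemoOdd'), NULL, NULL, 'DETAILED') AS s
    WHERE s.index_level = 0;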

Additionally, this question covers similar ground and the conclusion appears to be the same:
Database column sizes for character based data