Referring to the Postgres Documentation on Character Types, I'm unclear on the point of specifying a length for character varying (varchar) types. Assume:
- the size of the string does not matter to the application
- you do not care that somebody stores the maximum size in the database
- you have unlimited hard disk space
The documentation says:
The storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string, which includes the space padding in the case of character. Longer strings have 4 bytes of overhead instead of 1. Long strings are compressed by the system automatically, so the physical requirement on disk might be less. Very long values are also stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different.)
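To see the per-value overhead described above for yourself, Postgres's built-in pg_column_size() function reports the number of bytes used to store a value. A rough sketch (the exact numbers depend on your encoding and on TOAST/compression decisions, so I won't claim specific outputs):

```sql
-- pg_column_size() reports bytes used to store a value, header included.
-- Short strings carry a 1-byte header on disk; longer ones a 4-byte header;
-- very long values may be compressed and/or moved to a TOAST table.
SELECT pg_column_size('abc'::varchar)                AS short_string,
       pg_column_size(repeat('x', 200)::varchar)     AS longer_string,
       pg_column_size(repeat('x', 100000)::varchar)  AS long_string;
```

Note that repeat('x', 100000) is highly compressible, so the last column will come out far smaller than 100000 bytes — which illustrates the "physical requirement on disk might be less" point.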
This talks about the size of the string, not the size of the field (i.e. it sounds like it will always compress a large string in a large varchar field, but not a small string in a large varchar field?)
I ask this because it would be much simpler (and lazier) to specify a much larger size so you never have to worry about a string being too big. For example, if I specify varchar(50) for a place name I'll get places that have more characters (e.g. Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch), but if I specify varchar(100) or varchar(500), I'm less likely to get this problem.
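For illustration, this is what happens in a Postgres session when a value exceeds the declared limit — the insert is rejected outright, not silently truncated (table name made up for the example):

```sql
CREATE TABLE places (name varchar(50));

-- The Welsh place name is 58 characters, which exceeds the declared
-- limit of 50, so Postgres raises an error instead of truncating.
INSERT INTO places (name)
VALUES ('Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch');
-- ERROR:  value too long for type character varying(50)
```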
So would you get a performance hit between varchar(500) and (arbitrarily) varchar(5000000) or text, if your largest string were, say, 400 characters long?
Also, out of interest, if anybody has the answer to this AND knows the answer for other databases, please include that too.
I've searched around, but not found a sufficiently technical explanation.
My understanding is that having constraints is useful for data integrity, so I use column sizes both to validate the data items at the lower layer and to better describe the data model.
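If validation rather than storage is the goal, one common Postgres pattern is an unconstrained text column plus a CHECK constraint — a sketch of the idea (table name and limit are made up for illustration):

```sql
-- text imposes no length limit; the CHECK constraint does the validating.
-- Unlike varchar(n), the limit can later be changed by dropping and
-- re-adding the constraint, without altering the column's type.
CREATE TABLE places (
    name text NOT NULL,
    CONSTRAINT name_len CHECK (char_length(name) <= 500)
);
```

This keeps the integrity check in the schema while sidestepping the question of picking the "right" n for varchar.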
Some links on the matter:
My understanding is that this is a legacy of older databases whose storage wasn't as flexible as Postgres's. Some used fixed-length structures to make it easy to find particular records and, since SQL is a somewhat standardized language, that legacy is still visible even where it doesn't provide any practical benefit.
Thus, your "make it large" approach should be a perfectly reasonable one with Postgres, but it may not transfer well to other, stricter RDBMS systems.
The documentation explains this:
If character varying is used without length specifier, the type accepts strings of any size. The latter is a PostgreSQL extension.
The SQL standard requires a length specification for all its types. This is probably mainly for legacy reasons. Among PostgreSQL users, the preference tends to be to omit the length specification, but if you want to write portable code, you have to include it (and pick an arbitrary size, in many cases).
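Concretely, both of these declarations are accepted by Postgres, while only the first is standard SQL:

```sql
CREATE TABLE t1 (s character varying(100));  -- standard SQL: length specified
CREATE TABLE t2 (s character varying);       -- PostgreSQL extension: any length
```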