I am creating a quick side project that requires a users table, and I would like users to be able to store profile data. I was already reaching for the ASP.NET profile provider when I realized that users will only ever have one profile.

I realize that frequently changing data will impact performance on things like indexes, but how frequent is too frequent?

If I have one profile change per month per user, for say 1000 users, is that a lot?

Or are we talking more like users changing profile data every hour?

I realize this isn't an exact science, but I am trying to gauge where the threshold starts to become a problem, and since my users' profile data will probably rarely change, whether I should bother with the extra work now or just wait a couple of years for it to become an issue.

Does the profile data really need to be indexed? Or are you just going to be retrieving it based on the USER_ID of the table or some other indexed USER column? If the profile data isn't indexed, which seems likely to me, then there are no performance impacts on the other indexes on the table.

The only real reason I can think of to worry about putting profile data in the table is if there is a lot of profile data compared to the data needed to define a user, and if the USER table needs to be full-scanned for some reason. In that case, increasing the size of the table would negatively affect the performance of the table scan. Assuming you don't have a use case where it regularly makes sense to do a full scan of the USERS table, and given that the table will only have 1000 rows, that is probably not a big deal.
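To make the two points above concrete, here is a small sketch using SQLite (the table and column names `users`, `user_id`, `profile_data` are illustrative, not from the question). A lookup on the indexed key stays an index search no matter how wide the profile column is, while a filter on the unindexed profile column forces a scan of every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        user_id      INTEGER PRIMARY KEY,  -- indexed automatically
        username     TEXT NOT NULL,
        profile_data TEXT                  -- deliberately unindexed
    )
""")
conn.executemany(
    "INSERT INTO users (username, profile_data) VALUES (?, ?)",
    [(f"user{i}", "x" * 500) for i in range(1000)],
)

# Lookup by the indexed key: the planner does an index SEARCH,
# so the width of profile_data is irrelevant here.
by_id = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE user_id = 42"
).fetchone()

# Filter on the unindexed profile column: the planner must SCAN
# every row, and wider rows make that scan more expensive.
by_profile = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE profile_data LIKE '%x%'"
).fetchone()

print(by_id[-1])       # e.g. "SEARCH users USING INTEGER PRIMARY KEY (rowid=?)"
print(by_profile[-1])  # e.g. "SCAN users"
```

The exact plan text varies between SQLite versions, but the SEARCH-vs-SCAN distinction is the point: only the full scan cares about row size.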

One thing to consider is how adding a large text column to a table will affect the layout of the rows. Some databases store large columns inline with the other fixed-size columns; this makes the rows variable-sized, which means more work for the database when it has to pull a row off disk. Other databases (such as PostgreSQL) store large text columns separately from the fixed-size columns; this leads to fixed-size rows with fast access during table scans and so on, but an extra bit of work is needed to pull out the text columns.
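If your database stores large columns inline, you can get the same effect by hand: move the bulky text into a 1:1 side table keyed by the user id, so the main USERS rows stay narrow and cheap to scan, and pay the extra join only when you actually need the profile. A minimal sketch in SQLite (all names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Narrow rows: cheap to full-scan.
    CREATE TABLE users (
        user_id  INTEGER PRIMARY KEY,
        username TEXT NOT NULL
    );

    -- The bulky text lives in a 1:1 side table.
    CREATE TABLE user_profiles (
        user_id      INTEGER PRIMARY KEY REFERENCES users(user_id),
        profile_data TEXT
    );
""")
conn.execute("INSERT INTO users (user_id, username) VALUES (1, 'alice')")
conn.execute(
    "INSERT INTO user_profiles (user_id, profile_data) VALUES (1, ?)",
    ("a long blob of profile text",),
)

# Join in the profile only when it is actually needed.
row = conn.execute("""
    SELECT u.username, p.profile_data
    FROM users u JOIN user_profiles p ON p.user_id = u.user_id
    WHERE u.user_id = 1
""").fetchone()
print(row)  # ('alice', 'a long blob of profile text')
```

This is an application-level version of what PostgreSQL's out-of-line storage does for you automatically; whether it is worth the join depends on how big the profile text really gets.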

1000 users isn't much in database terms, so there is probably nothing to worry about one way or the other. OTOH, quick side projects have a nasty habit of turning into real mission-critical projects when you aren't looking, so doing it right from the start may be a good idea.

I think Justin Cave has covered the index issue well enough.

As long as you structure your data access properly (i.e. all access to your user table goes through one isolated pile of code), changing your data schema for users won't be significant work anyway.