Say there's a website with 100,000 users, and each one has up to 1,000 unique strings attached to them, so there are at most 100,000,000 strings in total. Would it be better to have one table where every string is one record along with its owner's id? So you end up with one table of 100,000,000 records with 2 fields (text and user id).

Or have 100,000 tables, one table per user, where the table's name is the user's id, and then 1,000 records in each table with only one field (the text)?

Or, rather than storing the strings in a database (there would be a character limit about the size of an SMS message), just store links to text files, so there are 100,000,000 text files in a directory, each file has a unique name (random numbers and/or letters) and contains one of the strings? (Or where each user has a directory and their strings are in that directory?)

Which would be the most efficient option, the filesystem or the database, and then which variant of each would be the most efficient?

(This is purely theoretical in my case, but what does a site like Twitter do?)

(By efficiency I mean using the least amount of resources and time.)

Or have 100,000 tables

For the love of $DEITY, no! That will result in horrible code - it isn't what databases are designed for.

You should have one table with 100,000,000 records. Database servers are built to handle large tables, and you can use indexes, partitioning, etc. to improve performance if required.
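A minimal sketch of that single-table layout, using Python's built-in sqlite3 as a stand-in for whatever server you'd actually run; the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a real database server

# One table holds every string; user_id ties each row to its owner.
conn.execute("""
    CREATE TABLE user_strings (
        user_id INTEGER NOT NULL,
        text    TEXT    NOT NULL
    )
""")

# The index is what keeps per-user lookups fast even at 100M rows:
# the database seeks straight to one user's rows instead of scanning.
conn.execute("CREATE INDEX idx_user_strings_user ON user_strings (user_id)")

conn.executemany(
    "INSERT INTO user_strings (user_id, text) VALUES (?, ?)",
    [(42, "first string"), (42, "second string"), (7, "someone else's string")],
)

# Fetch all strings for one user - a single indexed query, no per-user table.
for (text,) in conn.execute(
    "SELECT text FROM user_strings WHERE user_id = ?", (42,)
):
    print(text)
```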

Option #1

It would be simpler to keep one table with a user id and the text. It would not be more efficient to create a table for each user.

Though in practice you'd want something like a Mongo sharded cluster rather than a single server running MySQL.
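A hedged sketch of what that could look like with pymongo, assuming a sharded cluster already exists and a mongos router is reachable at localhost:27017; the database and collection names ("app", "strings") are invented for the example:

```python
from pymongo import MongoClient

# Assumes an existing sharded cluster with a mongos router on localhost.
client = MongoClient("mongodb://localhost:27017")

# Enable sharding for the database, then shard the collection on a
# hashed user_id so each user's strings land on a predictable shard.
client.admin.command("enableSharding", "app")
client.admin.command("shardCollection", "app.strings",
                     key={"user_id": "hashed"})

strings = client.app.strings
strings.insert_one({"user_id": 42, "text": "hello"})

# Queries that include the shard key are routed to a single shard.
for doc in strings.find({"user_id": 42}):
    print(doc["text"])
```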

You'd have one table, with an index on the USER_ID.

For speed, you can partition the table, replicate it, use caching, cloud, sharding, ... (a toy sharding sketch follows).
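One way to picture "partition / shard": route each user's rows to one of several physical databases by hashing the user id. A toy sketch, where the shard count is an arbitrary placeholder and in-memory SQLite databases stand in for real servers:

```python
import sqlite3

# Toy application-level sharding: N separate databases stand in for
# N physical servers; NUM_SHARDS is a placeholder, not a recommendation.
NUM_SHARDS = 4
shards = [sqlite3.connect(":memory:") for _ in range(NUM_SHARDS)]
for db in shards:
    db.execute("CREATE TABLE user_strings (user_id INTEGER, text TEXT)")
    db.execute("CREATE INDEX idx_user ON user_strings (user_id)")

def shard_for(user_id: int) -> sqlite3.Connection:
    # Hashing the key spreads users evenly across shards.
    return shards[user_id % NUM_SHARDS]

def add_string(user_id: int, text: str) -> None:
    shard_for(user_id).execute(
        "INSERT INTO user_strings VALUES (?, ?)", (user_id, text)
    )

def strings_for(user_id: int) -> list:
    cur = shard_for(user_id).execute(
        "SELECT text FROM user_strings WHERE user_id = ?", (user_id,)
    )
    return [row[0] for row in cur]

add_string(42, "hello")
print(strings_for(42))  # ['hello']
```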

Please consider NoSQL databases: http://nosql-database.org/

Definitely one table, and fetch records by key. The OS will crawl through a directory structure of 100,000 file names just to look one up... the directory management alone will KILL your performance (at the OS level).

It depends on how much activity the server needs to handle.

A couple of months ago we built a system that indexed ~20 million Medline article abstracts, each of which is longer than your Twitter message. We put the whole thing into one Lucene index that was ~40GB large. Even though we had bad hardware (2 GB RAM and no SSD drives - poor interns) we could run searches for ~3 million terms against the database within a couple of days.

A single table (or Lucene index) should be the way to go.
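For the Lucene-style approach, here's a minimal sketch using Whoosh, a pure-Python inverted-index library standing in for Lucene itself (the schema, field names, and index directory are made up for the example):

```python
import os
from whoosh.index import create_in
from whoosh.fields import Schema, ID, TEXT
from whoosh.qparser import QueryParser

# One inverted index over all users' strings, Lucene-style.
schema = Schema(user_id=ID(stored=True), text=TEXT(stored=True))
os.makedirs("string_index", exist_ok=True)
ix = create_in("string_index", schema)

writer = ix.writer()
writer.add_document(user_id="42", text="the quick brown fox")
writer.add_document(user_id="7", text="a slow grey badger")
writer.commit()

# Full-text search over the whole corpus in one query.
with ix.searcher() as searcher:
    query = QueryParser("text", ix.schema).parse("quick")
    for hit in searcher.search(query):
        print(hit["user_id"], hit["text"])
```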