I am working on a project for a school in which a particular module handles the attendance system. I am using a LAMP (PHP 5.2+, MySQL 5+) stack for development. The school strength is about 1500 students and the total number of working days each year is about 250. Also, I have to keep records for five years before they can be removed.

The table structure is

    studentId varchar(12)
    date      date
    fn        varchar(1)  -- forenoon
    af        varchar(1)  -- afternoon
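In MySQL DDL, this is roughly what I have in mind (the composite primary key is one option I'm considering, and the 'P'/'A' flag values are just placeholders):

    CREATE TABLE attendance (
        studentId VARCHAR(12) NOT NULL,
        date      DATE        NOT NULL,
        fn        VARCHAR(1)  NOT NULL,  -- forenoon flag, e.g. 'P' / 'A' (placeholder values)
        af        VARCHAR(1)  NOT NULL,  -- afternoon flag
        PRIMARY KEY (studentId, date)    -- one row per student per day
    );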

Basically, I could simply use just one table, which means 1,875,000 records for the 5-year period. Now, rather than such a humongous database, I considered creating a table for every class (not section). So, considering that there are 12 classes, I'll have 12 tables, meaning roughly 156,000 records per table, which is workable.

Is this the proper way to do it? Or are there better ways?

What you are doing is known as premature optimisation. This is a common mistake.

You're better off keeping your database structure as close to reality as possible; if, in future, there arises a need for optimisation or speed improvement, you can do it then.

From experience, and looking at your example, the single-table solution looks fine.

A few points.

  • Two million records is not a large table.
  • Having a separate table per class is certainly not normalized.

You have not really provided enough information regarding links to other tables and what else, if anything, this table will store. But you should be starting with 3NF for all tables and only departing from that if you find performance problems.
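For example, a 3NF starting point might keep student details out of the attendance table entirely. The names and columns below are illustrative, not taken from the question:

    -- Student data lives in its own table; class is an attribute,
    -- not a reason to create twelve attendance tables.
    CREATE TABLE student (
        studentId VARCHAR(12)  NOT NULL PRIMARY KEY,
        name      VARCHAR(100) NOT NULL,
        class     VARCHAR(10)  NOT NULL
    );

    -- Attendance rows then reference students by key
    -- (the constraint is enforced by InnoDB, not MyISAM).
    ALTER TABLE attendance
        ADD FOREIGN KEY (studentId) REFERENCES student (studentId);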

As long as you index your table columns correctly, there should not be a big problem with the first (single-table) option.
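For instance, assuming the composite primary key from the question, a single extra index would cover date-based lookups (the index name is made up):

    -- Per-student lookups are already served by the (studentId, date) key;
    -- this index covers "whole school on day X" queries.
    CREATE INDEX idx_attendance_date ON attendance (date);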

I'd disagree with the idea of splitting it up into the 12 classes, because you've got no guarantee that that's the way it will stay (classes added, classes merged, etc.).

Mucking up your database normalization for a perceived efficiency benefit is something you should consider only in extreme circumstances (if ever).

I suggest that there's no need to split this table up. If you create appropriate indexes for any selective queries you may need to perform, the system should be able to find the required rows very quickly. For analytic queries that involve all rows, two million such records should only take a second or two to scan, which I imagine wouldn't present a serious problem.
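As a sketch, the two kinds of queries might look like this; the student id and the 'A' (absent) value are invented for illustration:

    -- Selective: one student's term record, served by the primary key.
    SELECT date, fn, af
    FROM attendance
    WHERE studentId = 'STU-0042'
      AND date BETWEEN '2009-06-01' AND '2009-09-30';

    -- Analytic: absences per day across the school; scans many rows.
    SELECT date,
           SUM(fn = 'A') AS forenoon_absent,
           SUM(af = 'A') AS afternoon_absent
    FROM attendance
    GROUP BY date;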

MySQL now also supports partitioning of data as an optional feature. Partitioning is similar to your proposal to split the table up, but it's done at the physical level, so it's not visible to users or developers using your schema. This may be a useful approach if a single-table implementation turns out to be not fast enough. The MySQL 5.4 reference manual provides an introduction to partitioning.
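A rough sketch of a range-partitioned version (MySQL requires the partitioning column to appear in every unique key, which the composite primary key satisfies; the year boundaries are arbitrary):

    -- One physical partition per year; queries still see a single table.
    CREATE TABLE attendance (
        studentId VARCHAR(12) NOT NULL,
        date      DATE        NOT NULL,
        fn        VARCHAR(1)  NOT NULL,
        af        VARCHAR(1)  NOT NULL,
        PRIMARY KEY (studentId, date)
    )
    PARTITION BY RANGE (YEAR(date)) (
        PARTITION p2009 VALUES LESS THAN (2010),
        PARTITION p2010 VALUES LESS THAN (2011),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );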

Checksum,

I echo Michiel's opinion that this is premature optimisation.

What you can essentially do later on to improve performance is make use of the database's archiving and partitioning features so that your database reads stay efficient. I can also suggest creating indexes on this table. Anyway, I don't believe a million records is big; databases today are capable of handling such volumes. Also, you will only come across the performance problems three years from now.
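For the five-year retention rule from the question, the archiving step could be as simple as the following sketch (the archive table name is made up):

    -- Move rows past the retention window into an archive table.
    CREATE TABLE attendance_archive LIKE attendance;

    INSERT INTO attendance_archive
    SELECT * FROM attendance
    WHERE date < DATE_SUB(CURDATE(), INTERVAL 5 YEAR);

    DELETE FROM attendance
    WHERE date < DATE_SUB(CURDATE(), INTERVAL 5 YEAR);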

So go ahead and write code instead of worrying about what could fail!