I am thinking about logging all site/user actions to the database and would really like some input on this. The log could be used for more important things such as throttling (login attempts, etc.), customer service, general maintenance, and so on.
Is this okay? I imagine it depends on the amount of traffic, but would this cause any issues with the constant stream of inserts? (I am considering using InnoDB for the FK constraints.)
Otherwise, what kind of schema would you suggest to make it flexible enough to support different types of actions from both registered and anonymous users?
I am considering something similar to:
CREATE TABLE `logs` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `action` varchar(128) COLLATE utf8_bin NOT NULL,
  `user_id` bigint(20) unsigned DEFAULT NULL,
  `value` varchar(128) COLLATE utf8_bin DEFAULT NULL,
  `ip` varchar(40) COLLATE utf8_bin NOT NULL,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `action` (`action`,`user_id`),
  CONSTRAINT `logs_ibfk_1` FOREIGN KEY (`action`) REFERENCES `logs_actions` (`name`) ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

CREATE TABLE `logs_actions` (
  `id` int(5) NOT NULL AUTO_INCREMENT,
  `name` varchar(128) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
Would this be a good idea?
- Use MyISAM tables for logging; they allow concurrent SELECT & INSERT queries, so table-level locking will not interfere with these kinds of queries.
- In MySQL, UTF-8 columns require up to 3 bytes per character, so a column able to hold 128 UTF-8 characters can actually store 128*3 = 384 bytes. Since that is more than 255, these columns need 2 bytes for the length prefix instead of 1 byte (which is probably what you expected).
- Use an INT column type for the ip column - it will save a lot of storage and can significantly reduce retrieval time.
- Try to batch the text columns into a single value column (perhaps named queryString, which represents the action & value of the user on the page).
Having an index with this column order:
KEY `action` (`action`,`user_id`)
is bad and should be avoided, since the text column appears first.
- I recommend learning how to optimize schema & queries for MySQL with this great book:
High Performance MySQL: Optimization, Backups, Replication, and More, Second Edition, by Baron Schwartz et al. Copyright 2008 O'Reilly Media, Inc., 9780596101718.
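Pulling these suggestions together, a revised schema might look roughly like this (a sketch only; the queryString name and the MyISAM choice follow the suggestions above, and column sizes are assumptions):

```sql
-- Hypothetical revision: INT for ip, a single queryString text column,
-- user_id placed before the text column in the index, MyISAM engine.
CREATE TABLE `logs` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned DEFAULT NULL,
  `queryString` varchar(128) COLLATE utf8_bin DEFAULT NULL,
  `ip` int(10) unsigned NOT NULL,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `user_action` (`user_id`,`queryString`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
```

With the integer user_id leading the index, lookups such as "all actions by user X" can use the index efficiently without scanning on a text prefix.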
user_id bigint(20), what! Are you a developer at Facebook? ;-) Wouldn't a 4-byte INT be sufficient? See MySQL Numeric Types.
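As a sketch of that change (column definition taken from the question's schema; an unsigned 4-byte INT still allows about 4.29 billion distinct users):

```sql
-- Hypothetical: shrink user_id from BIGINT (8 bytes) to INT UNSIGNED (4 bytes).
ALTER TABLE `logs` MODIFY `user_id` int(10) unsigned DEFAULT NULL;
```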
I'd drop the logs_actions table; since you will have to code the application against specific values anyway, you will have to control this value on insert.
Also, consider dropping the FK (at least the cascade) if you want to reduce the overhead a little.
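Concretely, that could be done like this (constraint and table names taken from the schema in the question; a sketch, assuming you keep `action` as a plain varchar managed in application code):

```sql
-- Hypothetical: remove the FK and the lookup table to cut insert overhead.
ALTER TABLE `logs` DROP FOREIGN KEY `logs_ibfk_1`;
DROP TABLE `logs_actions`;
```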
I think a MyISAM or ARCHIVE table is more appropriate for logging, since you don't need transactions or concurrent access to the table. If you aren't going to delete data from the table, MyISAM will allow concurrent inserts, so you can avoid locking the whole table.
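For the ARCHIVE option, a minimal sketch might look like this (table and column names are assumptions based on the question; note that ARCHIVE tables are compressed, support only INSERT and SELECT, and do not support indexes, which fits append-only logging):

```sql
-- Hypothetical append-only log table using the ARCHIVE engine.
CREATE TABLE `logs_archive` (
  `action` varchar(128) NOT NULL,
  `user_id` int(10) unsigned DEFAULT NULL,
  `ip` int(10) unsigned NOT NULL,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=ARCHIVE DEFAULT CHARSET=utf8;
```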
Using foreign keys will slow inserts into the table, so if you decide to use InnoDB, avoid them.
Regarding the table schema:
For storing the IP you should choose the INT type and use the INET_ATON function. See http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_inet-aton
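For example (column names taken from the question's schema; note that INET_ATON/INET_NTOA only handle IPv4 addresses, so the varchar(40) IPv6 headroom in the original schema would be lost):

```sql
-- Store the IP as an unsigned 32-bit integer on insert...
INSERT INTO logs (action, ip) VALUES ('login', INET_ATON('192.168.0.1'));
-- INET_ATON('192.168.0.1') yields 3232235521

-- ...and convert it back to dotted-quad form when reading.
SELECT action, INET_NTOA(ip) AS ip FROM logs;
```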