I need to make a copy of a database with roughly 40 InnoDB tables and around 1.5 GB of data using mysqldump and MySQL 5.1.

What are the best parameters (i.e. --single-transaction) that will result in the fastest dump and load of the data?

Also, when loading the data into the second DB, is it quicker to:

1) pipe the results directly to the second MySQL server instance and use the --compress option

or

2) load it from a text file (i.e. mysql < my_sql_dump.sql)

Pipe it directly to the other instance, to avoid disk overhead. Don't bother with --compress unless you are running over a slow network, since on a fast LAN or loopback the network overhead doesn't matter.
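As a minimal sketch of option 1 (host, user, and database names below are placeholders; add --compress to the mysql client only if the link is slow):

    # dump on the source host and pipe straight into the target server,
    # avoiding an intermediate dump file on disk
    mysqldump source_db | mysql -h target-host.example.com -u dbuser -p target_db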

Taking a dump quickly from a quiesced database:

Using the "-T" option with mysqldump results in lots of .sql and .txt files in the specified directory. This is ~50% faster for dumping large tables than a single .sql file with INSERT statements (takes 1/3 less wall-clock time).
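A minimal sketch of a "-T" dump, assuming a scratch directory the mysqld process can write to (the server itself writes the .txt files via SELECT ... INTO OUTFILE, so the path must exist on the server host); directory and database names are placeholders:

    # produces one .sql (table definition) and one .txt (tab-separated data) file per table
    mkdir /tmp/dumpdir
    chmod 777 /tmp/dumpdir             # mysqld must be able to write here
    mysqldump --tab=/tmp/dumpdir mydb  # --tab is the long form of -T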

Additionally, there is a huge benefit when restoring if you can load multiple tables in parallel and saturate multiple cores. On an 8-core box, this could be as much as an 8X difference in wall-clock time to restore the dump, on top of the efficiency improvements provided by "-T". Because "-T" causes each table to be stored in a separate file, loading them in parallel is easier than splitting apart a huge .sql file.
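As a sketch of the parallel restore: create the tables from the .sql files first, then let mysqlimport load the .txt files with several threads (--use-threads needs MySQL 5.1.7 or later); paths, database name, and thread count are placeholders:

    cd /tmp/dumpdir
    # create the table definitions serially
    for f in *.sql; do mysql mydb < "$f"; done
    # load the tab-separated data files in parallel, roughly one thread per core
    mysqlimport --use-threads=8 --local mydb *.txt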

Taking the ideas above to their logical extreme, you could write a script to dump a database widely in parallel. That's exactly what the Maatkit mk-parallel-dump (see http://www.maatkit.org/doc/mk-parallel-dump.html) and mk-parallel-restore tools are: perl scripts that make multiple calls to the underlying mysqldump program. However, when I tried to use these, I had trouble getting the restore to complete without duplicate key errors that didn't occur with vanilla dumps, so keep in mind that your mileage may vary.

Taking a dump while the server is LIVE:

The --single-transaction switch is very useful for taking a dump of a live database without having to quiesce it, or taking a dump of a slave database without having to stop slaving.

Sadly, -T is not compatible with --single-transaction, so you only get one or the other.
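A minimal sketch of a live dump, with placeholder database and file names:

    # consistent InnoDB snapshot without locking out writers
    mysqldump --single-transaction mydb > my_sql_dump.sql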

When taking a dump to build out a new host, netcat (nc) is a very useful tool. Run "nc -l 7878 > mysql-dump.sql" on one host to start listening for the dump. Then run "mysqldump $OPTS | nc myhost.mydomain.com 7878" on the server you are dumping from. This reduces contention for the disk spindles on the master from writing the dump to disk, and saves you the step of having to transfer the dump file after it finishes. Caveats - obviously, you need enough network bandwidth not to slow things down unbearably, and if the TCP session breaks, you have to start all over, but for most dumps this is not a major concern.
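Laid out as the two steps (host name, port, and database name are placeholders):

    # on the new host: listen and write the incoming dump to disk
    nc -l 7878 > mysql-dump.sql

    # on the host being dumped: stream the dump over the network
    mysqldump --single-transaction mydb | nc myhost.mydomain.com 7878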

Usually, taking the dump is much faster than restoring it. There is still room for a tool that takes the incoming monolithic dump file and breaks it into multiple pieces to be loaded in parallel. To my knowledge, such a tool does not yet exist.
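A rough sketch of the idea, not a finished tool: split the dump at the per-table comment markers that mysqldump emits, then load the pieces in parallel. The marker text, file names, and degree of parallelism here are assumptions, and the dump header piece (SET statements, etc.) is simply skipped:

    # cut the monolithic dump into one piece per table (GNU csplit)
    csplit -s -z my_sql_dump.sql '/^-- Table structure for table/' '{*}'
    # xx00 holds the dump header; load the per-table pieces in parallel
    ls xx[0-9][0-9] | tail -n +2 | xargs -P 8 -I{} sh -c 'mysql mydb < "{}"'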

Finally, note that (from http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html):

Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.

Thus, specifying the parameters listed above has no effect, despite how commonly they are included in tutorials. Of those parameters, "--quick" is one of the most important, and can significantly speed up large queries when used sensibly (it skips caching the entire result set in mysqld before transmitting the first row).