I do data mining and my work involves loading and unloading 1GB+ database dump files into MySQL. I'm wondering: is there any other free database engine that works much better than MySQL on huge databases? Is PostgreSQL better in terms of performance?

I only use basic SQL statements, so speed is the only factor by which I'd choose a database.

It's unlikely that switching to another database engine will give you an enormous boost in performance. The slowdown you mention is more likely to be related to your schema design and data access patterns. Maybe you could provide more details about that? For instance, is the data stored as a time series? Are records written once sequentially, or inserted/updated/deleted randomly?

As long as you drop indexes before inserting huge amounts of data, there shouldn't be much difference between those two.
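For example, here's a minimal sketch of that pattern in MySQL; the table, index, and file names are made up for illustration:

```sql
-- Hypothetical table/index/file names, illustrating drop-then-load.
ALTER TABLE measurements DROP INDEX idx_measurements_ts;

-- Bulk-load the dump without paying index maintenance per row.
LOAD DATA INFILE '/tmp/measurements.csv'
INTO TABLE measurements
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Rebuild the index once, after all rows are in.
CREATE INDEX idx_measurements_ts ON measurements (ts);
```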

HDF is the storage choice of NASA's Earth Observing System, for example. It isn't exactly a database in the traditional sense, and it has its own eccentricities, but in terms of pure performance it's hard to beat.

I'm using PostgreSQL on my current project and have to dump/restore databases pretty frequently. It takes under 20 minutes to restore a 400MB compressed dump. You may want to give it a try, although some server configuration parameters need to be tweaked to match your hardware configuration. These parameters include, but are not limited to, the following (illustrative settings after the list):

  • shared_buffers
  • work_mem
  • temp_buffers
  • maintenance_work_mem
  • commit_delay
  • effective_cache_size
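To make that concrete, here is a hedged sketch of what tuning those might look like in postgresql.conf; the values below are placeholders, not recommendations, and need to be sized to your actual RAM and workload:

```
# postgresql.conf -- illustrative values only; tune to your hardware
shared_buffers = 512MB          # database's main buffer cache
work_mem = 64MB                 # memory per sort/hash operation
temp_buffers = 32MB             # per-session temporary-table buffers
maintenance_work_mem = 256MB    # helps index rebuilds during restores
commit_delay = 1000             # microseconds to wait to group commits
effective_cache_size = 2GB      # planner's estimate of the OS file cache
```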

If your data mining tool will support it, consider working from flat file sources. This saves you most of your import/export work. It does have some caveats, though (there's a small Python sketch after this list):

  • You may need to get proficient with a scripting language like Perl or Python to do the data munging (assuming you're not already familiar with one).

  • You may need to expand the memory on your machine or move to a 64-bit platform if you need more memory.

  • Your data mining tool may not support working from flat files in this way, in which case you're buggered.
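To illustrate the munging step, a minimal Python sketch; the file names and the cleanup rule are invented, since real munging depends entirely on your data:

```python
import csv

# Invented file names and filter rule, just to show the shape of the
# step: read the raw dump, clean it, and write a flat file the mining
# tool can consume directly -- no database import/export round trip.
with open("raw_dump.csv", newline="") as src, \
     open("clean.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        # Drop malformed rows and normalize the value column.
        if len(row) != 3 or not row[2].strip():
            continue
        row[2] = row[2].strip().lower()
        writer.writerow(row)
```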

Modern disks - even SATA ones - will pull 100MB/sec or so off the disk in sequential reads. That means something could inhale a 1GB file fairly quickly - about ten seconds at that rate.

Alternatively, you could try putting SSDs in your machine and see if that improves the performance of your DBMS.

Your question is too vague to answer usefully. "Performance" means many different things to different people. I can comment on how MySQL and PostgreSQL compare in a few areas that might be important, but without more information it's hard to say which of these actually matter to you. I've written at much greater length on this subject at Why PostgreSQL Instead of MySQL: Comparing Reliability and Speed. Which is faster certainly depends on what you're doing.

Is the problem that loading data into the database is too slow? That's an area PostgreSQL doesn't do particularly well at; the COPY command in Postgres isn't a particularly fast bulk-loading mechanism.
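For reference, the command in question looks like this (table and path are hypothetical); even so, it's still far faster than loading the same rows with individual INSERT statements:

```sql
-- Hypothetical table and server-side file path.
COPY measurements FROM '/tmp/measurements.csv' WITH CSV;
```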

Is the problem that queries run too slowly? If so, how complicated are they? On complicated queries, the PostgreSQL optimizer can do a better job than the one in MySQL, especially if there are many table joins involved. Small, simple queries tend to run faster in MySQL because it isn't doing as much thinking about how to execute the query before it starts; smarter execution costs a bit of overhead.
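If you want to see how much planning is actually going on for one of your queries, PostgreSQL's EXPLAIN ANALYZE runs the query and reports the plan the optimizer picked; the tables in this sketch are hypothetical:

```sql
-- Hypothetical three-table join; the output shows the chosen join
-- order and methods along with actual row counts and timings.
EXPLAIN ANALYZE
SELECT c.name, SUM(o.total)
FROM customers c
JOIN orders o       ON o.customer_id = c.id
JOIN order_items oi ON oi.order_id   = o.id
GROUP BY c.name;
```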

How many clients are involved? MySQL can do a good job with a small number of clients; at higher client counts, the locking mechanism in PostgreSQL might do a better job.

Do you care about transactional integrity? If not, it's easier to turn more of those features off in MySQL, which gives it a significant speed advantage compared to PostgreSQL.
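One hedged example of the kind of knob this means in MySQL (assuming InnoDB tables): relaxing the flush-per-commit rule trades durability for speed, so only do this if losing roughly the last second of transactions in a crash is acceptable:

```sql
-- With the value 2, the InnoDB log is written at each commit but only
-- flushed to disk about once per second, so an OS crash can lose up
-- to roughly the last second of committed transactions.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```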