We have a client who needs to set up N local databases, each containing one site's data, plus a master corporate database containing the union of all N databases. Changes in an individual site database must be propagated to the master database, and changes in the master database must be propagated to the appropriate individual site database.

We have used MySQL replication before for a client who needed two databases kept simultaneously up to date; that was bidirectional replication. If we tried the same approach here, we would end up with all N local databases equal to the master database, which is not what we want. Not only should each individual site be unable to see the other sites' data, but sending that data N times over instead of just once is likely an enormous waste of bandwidth.

What are my options for achieving this star topology with MySQL? I know we can replicate only certain tables, but is there a way to filter the replication by rows?

Are there tools that would help, or competing RDBMSes that would be better to look at?

I have done this before, and AFAIK this is the simplest way. You should look into using Microsoft SQL Server merge replication with row filtering. The row filter can be set up on a column that states which individual site a row should go to.

For example, your table might look like this:

    ID_column   column2   destination
    12345       'data'    'site1'

You would then set up the merge replication subscriber for site1 to filter on column 'destination' with value 'site1'.

This documentation will probably help:

Filtering Published Data for Merge Replication. There is also an article on MSDN called "Improving Merge Replication Performance" which may help, and you will also need to learn the basics of setting up publishers and subscribers in SQL Server merge replication.
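As a sketch of the idea (the publication, article, and table names here are made up): each subscriber is configured so that HOST_NAME() reports its site code ('site1', 'site2', ...), and a parameterized row filter on the article makes each site receive only its own rows.

```sql
-- Hypothetical names; the @subset_filterclause is the key part.
-- Each subscriber overrides HOST_NAME() with its site code, so it
-- only pulls rows whose destination column matches that code.
EXEC sp_addmergearticle
    @publication         = N'StarPublication',
    @article             = N'SiteData',
    @source_object       = N'SiteData',
    @subset_filterclause = N'[destination] = HOST_NAME()';
```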

Best of luck!

SymmetricDS would work for this. It is web-enabled, database-independent data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software is designed to scale to a large number of databases, work across low-bandwidth connections, and withstand periods of network outage.

We have used it to synchronize 1000+ MySQL store databases to an Oracle corporate database.
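To give a flavor of how the per-site filtering is configured (identifiers here are illustrative, not from our actual setup): SymmetricDS routes rows with entries in its configuration tables, and a "column" router sends each row only to the node whose external id matches a column value.

```sql
-- Sketch of a SymmetricDS column-match router: rows flow from the
-- 'corp' node group to a 'site' node only when the row's DESTINATION
-- column equals that node's external id.
INSERT INTO sym_router
    (router_id, source_node_group_id, target_node_group_id,
     router_type, router_expression, create_time, last_update_time)
VALUES
    ('corp_to_site', 'corp', 'site',
     'column', 'DESTINATION = :EXTERNAL_ID',
     current_timestamp, current_timestamp);
```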

Short answer: no, you need to redesign.

Long answer: yes, but it is pretty crazy and will be a real pain to set up and manage.

One way would be to round-robin the master database's replication among the sites: use a script to replicate from one site for, say, 30 seconds, record how far it got, and then move on to the next site. You may want to look at [cde] and friends to limit what is replicated.
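The options alluded to are presumably MySQL's replication filter settings; a my.cnf sketch (database and table names are made up) might look like:

```ini
# Replication filters on the replicating server: only statements for
# the named database/tables are applied from the relay log.
[mysqld]
replicate-do-db         = corporate
replicate-wild-do-table = corporate.site%
```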

Another option, which I am not sure would work, is to have N MySQL instances in the main office, each replicating from one of the site offices, and then use the FEDERATED storage engine to present a common view from the main database into the per-site slaves. The site slaves can replicate from the main database and pick up whichever changes they need.
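A FEDERATED table on the main database pointing at one of the per-site slaves might be declared like this (host, credentials, and table names are illustrative):

```sql
-- The table has no local data; reads and writes are forwarded over
-- the CONNECTION string to the remote per-site slave.
CREATE TABLE site1_data (
    id          INT NOT NULL,
    column2     VARCHAR(64),
    destination VARCHAR(16),
    PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://repl_user:secret@site1-slave:3306/sitedb/site_data';
```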

If you need unidirectional replication, then use multiple copies of the site databases replicated at the center of the star, plus a custom "bridge" application to move the data on into the final master database.
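A minimal sketch of such a bridge, using in-memory sqlite3 stand-ins for the real databases (all table and column names here are made up): it copies each site's rows into the master, and pushes master rows back out only to the site named in their destination column.

```python
import sqlite3

def make_db():
    """Create a stand-in database with one replicated table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE site_data (id INTEGER PRIMARY KEY,"
               " column2 TEXT, destination TEXT)")
    return db

def bridge_to_master(site_dbs, master):
    # Upsert every site's rows into the master (union of all sites).
    for db in site_dbs.values():
        for row in db.execute("SELECT id, column2, destination FROM site_data"):
            master.execute("INSERT OR REPLACE INTO site_data VALUES (?,?,?)", row)

def bridge_to_sites(master, site_dbs):
    # Push each master row only to the site named in its destination column.
    for row in master.execute("SELECT id, column2, destination FROM site_data"):
        site_dbs[row[2]].execute(
            "INSERT OR REPLACE INTO site_data VALUES (?,?,?)", row)

if __name__ == "__main__":
    master = make_db()
    sites = {"site1": make_db(), "site2": make_db()}
    sites["site1"].execute("INSERT INTO site_data VALUES (1, 'a', 'site1')")
    master.execute("INSERT INTO site_data VALUES (2, 'b', 'site2')")
    bridge_to_master(sites, master)
    bridge_to_sites(master, sites)
    # Master now holds the union; site2 sees only its own row.
    print(sorted(r[0] for r in master.execute("SELECT id FROM site_data")))
    print([r[0] for r in sites["site2"].execute("SELECT id FROM site_data")])
```

A real bridge would of course read binlogs or timestamps instead of full table scans, but the routing logic is the same.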

Just a random pointer: Oracle Lite supports this. I evaluated it once for a similar task, but it requires software installed on all the clients, which was not an option for us. A rough architecture overview can be found here.

It may be worth a look at mk-table-sync from Maatkit, which lets you sync tables with an optional --where clause.