Currently we have one master MySQL server that connects every hour to 100 remote mobile devices [automobiles] over a 3G connection [not very reliable: a couple of cars get disconnected daily while the sync is in progress]. The sync is performed by a .NET Windows service tool: after checking the remote MySQL status, the master starts to perform the sync. Sometimes the sync payload is about 6-8 MB. The sync is done for one table only, using a non-transactional approach.

The MySQL server version in use is 4.1.22.

Questions:

  1. Is it useful to make the sync transactional, given that only one table is being synced? Or is it not useful at all?

  2. The sync data is loaded into the remote machine using the MySQL statement:

    LOAD DATA LOCAL INFILE

    The file format is CSV. How can I send the data in a compressed format, without developing a tool that has to reside on the remote device? (One option is sketched right after this list.)

  3. Is it good practice or architecture, in the sync domain, to deploy a remote application that performs the sync after receiving the data, or should it be done directly by the master? I mean, a tool residing on the remote machine will be hard to update or fix when new requirements appear. On the other hand, it saves a lot of bandwidth for the sync operation, and it eliminates the errors that can arise from a live master sync when a disconnection occurs while the sync is in progress. If this approach is recommended, then only compressed data would be sent; using some kind of checksum I would verify that the complete data arrived, otherwise the request would be restarted.
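Regarding question 2: the MySQL client/server protocol itself supports compression (the same feature behind the command-line client's --compress flag), so the file sent by LOAD DATA LOCAL INFILE can travel compressed without any software being deployed on the remote device. Below is a minimal sketch, in Python for brevity (other connectors, including Connector/NET, expose the same protocol option); the host, credentials, table, and file names are placeholder assumptions:

    # Enable protocol compression so the CSV payload of LOAD DATA LOCAL INFILE
    # is compressed on the wire; nothing extra runs on the remote device.
    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(
        host="remote-vehicle-db",   # hypothetical remote MySQL host
        user="sync_user",
        password="secret",
        database="telemetry",
        compress=True,              # CLIENT_COMPRESS protocol flag
        allow_local_infile=True,    # required for LOAD DATA LOCAL INFILE
    )
    cur = conn.cursor()
    cur.execute(
        "LOAD DATA LOCAL INFILE 'sync_payload.csv' "
        "INTO TABLE vehicle_data "
        "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
    )
    conn.commit()
    conn.close()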

Please share your ideas and experience.

thanks,

First of all, I'd change the approach from a server-initiated sync to a client-initiated sync. A many-to-one approach will scale much more easily than your current one-to-many setup. My comments above give a few good examples of the required client-to-server syncing.

Next, turn on transactional record entry. There's no reason not to have it. This will guarantee the data gets entered in a timely, all-or-nothing manner, and it can also provide much more 'meta-data' (for example, which clients are slow to update, etc.).
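For illustration, here is a minimal sketch of a transactional load, assuming the target table has been converted to InnoDB (MyISAM, the usual default on MySQL 4.1, silently ignores transactions); the host, table, and column names are placeholders:

    # Load the hourly CSV inside one transaction: if the 3G link drops
    # mid-load, the rollback leaves the table exactly as it was before.
    import csv
    import mysql.connector

    conn = mysql.connector.connect(host="remote-vehicle-db", user="sync_user",
                                   password="secret", database="telemetry")
    conn.start_transaction()
    try:
        cur = conn.cursor()
        with open("sync_payload.csv", newline="") as f:
            rows = [tuple(r) for r in csv.reader(f)]  # assumes 3 columns per row
        cur.executemany(
            "INSERT INTO vehicle_data (vehicle_id, recorded_at, reading) "
            "VALUES (%s, %s, %s)",
            rows,
        )
        conn.commit()      # all rows become visible at once
    except Exception:
        conn.rollback()    # discard any partial data from a dropped connection
        raise
    finally:
        conn.close()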

Lastly, you can 'enhance' this uploading by taking another look at it. If you implement some kind of service on the server side that accepts a POST from the client, you'd be able to send the data to the server side with no issues. It would work much like 'uploading' a file to a server. Once your 6-8 MB file is 'uploaded', it is then put into the database. The great thing about this is that if your server is Apache (or in your case an IIS server), you can have every client uploading data simultaneously with hardly any problems. At that point, inserting into the MySQL server would take almost no time and your process would carry on without a problem.
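Here is a sketch of the client side of that idea: gzip the CSV, POST it with a checksum, and restart the request if the upload fails. The URL, header name, and retry policy are illustrative assumptions, not an existing API:

    # Client-initiated upload over the flaky 3G link: compress, checksum, retry.
    import gzip
    import hashlib
    import requests  # pip install requests

    UPLOAD_URL = "https://sync.example.com/upload"  # hypothetical server endpoint

    with open("sync_payload.csv", "rb") as f:
        raw = f.read()
    body = gzip.compress(raw)                       # a 6-8 MB CSV compresses well

    for attempt in range(3):
        try:
            resp = requests.post(
                UPLOAD_URL,
                data=body,
                headers={
                    "Content-Encoding": "gzip",
                    "X-Payload-MD5": hashlib.md5(raw).hexdigest(),
                },
                timeout=120,
            )
        except requests.RequestException:
            continue                                # dropped link: retry from scratch
        if resp.status_code == 200:
            break                                   # server verified the checksum
    else:
        raise RuntimeError("upload failed after 3 attempts")

On the server side, the receiving script would decompress the body, recompute the MD5 against the header, and only then run the INSERT or LOAD DATA locally against MySQL, so the database never sees a half-finished transfer.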

This is how I'd handle your situation...

Thank you for this post.

What tool/approach would you recommend to do this sync between two Linux machines in a similar architecture, but with these 2 assumptions?

Assumption 1: the client is a read-only system with its own database that is synced every day with a database of the same structure on the server.

Assumption 2: the client is a read-write system with its own database that is synced every day with a database of the same structure on the server.

Is a CVS server a good answer for this? Commits and checkouts of /var/lib/mysql/[database]?

Note: I don't want master-to-slave replication because I don't want a real-time sync.

What's the best answer for this?

Thanks

Eduardo