I have tried using Rolling Curl, Epi Curl, and the other PHP multi-curl solutions that are out there, and it typically takes 180 seconds to send post requests to just 40 sites and receive data back from them (and I'm talking about receiving just small little success/fail strings). That's dog slow!

It only performs well with a single post request, which takes around 3-6 seconds, and I don't even know if that is good, because I see others talking about getting 1-second responses, which is crazy.

I have also tried using proc_open to run Linux shell commands (curl, wget), but that's slow too, and not server friendly.

What I'm basically trying to build is a WordPress plugin that is able to manage multiple WordPress sites and do mass upgrades, remote publishing, blogroll management, etc. I know there's a service out there called managewp.com, but I don't want to use it because I want to keep the sites I manage private and develop my own solution. What I notice about them is that their request/response is ridiculously fast, and I'm just puzzled at how they're able to do that, especially with hundreds of sites.

So can someone please shed some light on how I could make these post requests faster?

Edit

I have been doing some thinking and I asked myself, "What's so important about fetching the response? It's not like the requests that get sent don't get processed correctly; they all do, 99% of the time!"

So I was thinking: maybe I can just send all the requests without fetching the responses. And if I really want to do some monitoring of those processes and how they went, I can have those child sites send a post request back with the status of how the process went, have the master site add it to a database table, and have an AJAX request poll for status updates every ten seconds or so. How does that sound?
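Something like the sketch below is what I have in mind for the fire-and-forget part: write the raw request to a socket and close it without reading the reply. The host, path, and one-second connect timeout are made-up placeholders, not anything final.

// Rough fire-and-forget sketch: send a POST and close the socket
// without waiting for the response. Host/path are made-up examples.
function fire_and_forget($host, $path, array $postdata) {
    $body = http_build_query($postdata);
    $fp = @fsockopen($host, 80, $errno, $errstr, 1); // 1s connect timeout
    if (!$fp) {
        return false; // host unreachable, skip it
    }
    $request  = "POST {$path} HTTP/1.1\r\n";
    $request .= "Host: {$host}\r\n";
    $request .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $request .= "Content-Length: " . strlen($body) . "\r\n";
    $request .= "Connection: Close\r\n\r\n";
    $request .= $body;
    fwrite($fp, $request);
    fclose($fp); // close immediately instead of reading the reply
    return true;
}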

I'm currently working on a project that downloads hundreds of URLs at a time with PHP and curl_multi. Do batches of up to 250 URLs and play with CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT to refine your code's speed.
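To give that some shape, here is a bare-bones batch fetcher with those two options set; the 5 and 15 second values are just starting points to tune, nothing special:

// Minimal curl_multi batch sketch; timeout values are placeholders to tune.
function fetch_batch(array $urls) {
    $mh = curl_multi_init();
    $handles = array();
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); // give up on dead hosts fast
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);       // cap total time per request
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }
    // Drive all transfers in parallel until they finish or time out.
    $running = null;
    do {
        curl_multi_exec($mh, $running);
        if (curl_multi_select($mh) === -1) {
            usleep(100000); // select failed; back off briefly
        }
    } while ($running > 0);
    $results = array();
    foreach ($handles as $url => $ch) {
        $results[$url] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    return $results;
}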

I have a cURL class (2500+ lines) handling all the cURL magic, including multi and direct-to-file downloads. 250 URLs / 15-25 seconds using decent timeouts. (But I'm not sharing it for free...)

PS: Downloading that many URLs would require using temporary files as cURL download targets rather than memory. Just a thought...
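For example, pointing a handle at a temp file instead of memory looks roughly like this (the URL is a placeholder):

// Stream a download straight to a temp file instead of holding it in memory.
$path = tempnam(sys_get_temp_dir(), 'dl_');
$fp = fopen($path, 'w');
$ch = curl_init('http://www.example.com/bigfile.zip'); // placeholder URL
curl_setopt($ch, CURLOPT_FILE, $fp); // write the body to the file handle
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_TIMEOUT, 60);
curl_exec($ch);
curl_close($ch);
fclose($fp);
// The response body now lives at $path, not in PHP's memory.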

cURL takes about 0.6-0.8 seconds per request.

So for around 500 websites it could take from 300 to 400 seconds.

You can run this through a loop.

$ch = curl_init(); // Init cURL

curl_setopt($ch, CURLOPT_URL, "http://www.example.com/post.php"); // Post location
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // 1 = return the response as a string, 0 = output it directly
curl_setopt($ch, CURLOPT_POST, true); // This is POST

// Our data
$postdata = array(
    'name1' => 'value1',
    'name2' => 'value2',
    'name3' => 'value3',
    'name4' => 'value4'
);

curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); // Add the data to the request

$o = curl_exec($ch); // Execute the request

curl_close($ch); // Finish the request. Close it.
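To hit many sites, reuse the same handle in a loop and swap the URL each pass. Note this is still sequential, one site after another; the $sites list below is a made-up example:

$sites = array(
    'http://site1.example.com/post.php',
    'http://site2.example.com/post.php',
);

$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); // same data for every site

$responses = array();
foreach ($sites as $url) {
    curl_setopt($ch, CURLOPT_URL, $url); // swap the target each pass
    $responses[$url] = curl_exec($ch);   // still one site at a time
}

curl_close($ch);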

This depends on your connection speed. From a datacenter it should be fine; if you're testing from a home connection it might give not-so-great results.