I was trying to read a web page on the same site using PHP. I found this good discussion and decided to use the cURL method recommended there:

function get_web_page( $url )
{
    $options = array(
        CURLOPT_RETURNTRANSFER => true,     // return web page
        CURLOPT_HEADER         => false,    // don't return headers
        CURLOPT_FOLLOWLOCATION => true,     // follow redirects
        CURLOPT_ENCODING       => "",       // handle all encodings
        CURLOPT_AUTOREFERER    => true,     // set referer on redirect
        CURLOPT_CONNECTTIMEOUT => 120,      // timeout on connect
        CURLOPT_TIMEOUT        => 120,      // timeout on response
        CURLOPT_MAXREDIRS      => 10,       // stop after 10 redirects
    );

    $ch      = curl_init( $url );
    curl_setopt_array( $ch, $options );
    $content = curl_exec( $ch );
    $err     = curl_errno( $ch );
    $errmsg  = curl_error( $ch );
    $header  = curl_getinfo( $ch );
    curl_close( $ch );

    $header['errno']   = $err;
    $header['errmsg']  = $errmsg;
    $header['content'] = $content;
    return $header;
}

//Now get the webpage
$data = get_web_page( "https://www.google.com/" );

//Display the data (optional)
echo "<pre>" . $data['content'] . "</pre>";
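Before displaying `$data['content']`, it is worth checking the error fields the function returns. A minimal sketch of that check; the `$data` array below is a hand-made stand-in shaped like `get_web_page()`'s return value, so no actual fetch happens here:

```php
<?php
// Hypothetical result shaped like get_web_page()'s return value,
// filled in by hand so the check can be shown without a network call.
$data = array(
    'errno'     => 0,                  // curl_errno(): 0 means no transport error
    'errmsg'    => '',                 // curl_error() message, empty on success
    'http_code' => 200,                // set by curl_getinfo()
    'content'   => '<html>...</html>',
);

if ( $data['errno'] !== 0 ) {
    // Transport-level failure (DNS, connect, timeout, ...)
    echo "cURL error {$data['errno']}: {$data['errmsg']}\n";
} elseif ( $data['http_code'] >= 400 ) {
    // The server answered, but with an error status.
    echo "HTTP error {$data['http_code']}\n";
} else {
    echo "<pre>" . $data['content'] . "</pre>";
}
```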

So, in my case, I called get_web_page like this:

$target_url = "http://" . $_SERVER['SERVER_NAME'] . "/press-release/index.html";           
$page = get_web_page($target_url);

The one thing I couldn't fathom is that it worked on all my test servers except one. I have verified that cURL is available on the server in question. Also, setting `$target_url = "http://www.google.com"` worked fine. So, I'm pretty positive the cause is not related to the cURL library.

Could it be that some servers block themselves from being "indexed" by this kind of script? Or maybe I simply missed something here?

Thanks in advance.

Answers:

$target_url = "http://" . $_SERVER['SERVER_NAME'] . "/press-release/index.html"

I'm not sure the above expression actually returns the right URL for you;
this may be the cause of your whole problem.

Will it be because some servers block themselves from being "indexed" by this kind of script?

Yes, it could be.
But I don't have the answer, because you didn't give the implementation details.
It's your site, so you should be able to check.

In general, I'd say this is a bad idea:
if you're trying to access another page on the same domain,
you can simply use file_get_contents(PATH_TO_FILE.'/press-release/index.html');
(judging by the .html extension, I assume it's a static page)
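That file_get_contents suggestion can be sketched like this. The __DIR__-relative path is an assumption; point it at wherever the static file actually lives under your document root:

```php
<?php
// Read the static page straight from the filesystem -- no HTTP round trip,
// no DNS resolution, no web server involved at all.
// The path below is hypothetical; adjust it to your real layout.
$path = __DIR__ . '/press-release/index.html';

if ( is_readable( $path ) ) {
    $content = file_get_contents( $path );
    echo $content;
} else {
    echo "File not found: " . $path . "\n";
}
```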

If that page requires some PHP processing,
well, you just need to prepare all the necessary variables ... then require the file.
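The "prepare the variables, then require" idea looks roughly like this. The file name and variable names are made up for illustration; use whatever the included script actually expects:

```php
<?php
// Variables the included script expects -- names are hypothetical.
$release_id = 42;
$show_date  = true;

// require executes the file in the current scope, so the included
// script can read $release_id and $show_date directly, and anything
// it echoes goes straight into this page's output.
require __DIR__ . '/press-release/index.php';
```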

Use HTTP_HOST instead of SERVER_NAME. They are not quite the same.
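The difference matters because HTTP_HOST comes from the client's Host header (and can include a port), while SERVER_NAME comes from the server configuration. A minimal sketch, with the superglobals filled in by hand since there is no real request here:

```php
<?php
// Simulated request values -- in a real request, PHP fills these in.
$_SERVER['HTTP_HOST']   = 'www.example.com:8080'; // from the client's Host header
$_SERVER['SERVER_NAME'] = 'example.com';          // from the vhost configuration

// Building a self-referencing URL from each:
$via_host = 'http://' . $_SERVER['HTTP_HOST'] . '/press-release/index.html';
$via_name = 'http://' . $_SERVER['SERVER_NAME'] . '/press-release/index.html';

echo $via_host . "\n"; // keeps the host and port the client actually used
echo $via_name . "\n"; // may differ from what the client requested
```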

It turned out that there was no problem with the above script. Moreover, $target_url = "http://" . $_SERVER['SERVER_NAME'] . "/press-release/index.html"; returned the intended value (as @ajreal asked about in his answer).

The issue was actually due to the way the IP (of the target page) was being resolved, which makes the answer to this question related to neither PHP nor Apache: when I ran the script on the server under test, the returned IP address wasn't accessible. Please refer to this more detailed explanation / discussion.
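A quick way to check how the server itself resolves the target host is gethostbyname(). The URL below is a placeholder; substitute your actual target:

```php
<?php
// gethostbyname() returns the IPv4 address on success, or the input
// string unchanged when resolution fails.
$host = parse_url('http://www.example.com/press-release/index.html', PHP_URL_HOST);
$ip   = gethostbyname($host);

if ($ip === $host) {
    echo "Could not resolve $host on this machine\n";
} else {
    echo "$host resolves to $ip on this machine\n";
}
```

If the printed IP differs from what other machines resolve (or isn't reachable from this server), the problem is in name resolution, not in the PHP script.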

One takeaway: please try curl -v from the command line; it might give you useful clues.
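If the command line isn't handy, the same verbose trace can be captured from PHP via CURLOPT_VERBOSE and CURLOPT_STDERR (a sketch; the URL is a placeholder):

```php
<?php
$ch = curl_init('http://www.example.com/');

// Send the verbose transfer log (DNS lookup, connect attempts, request
// and response headers) to a memory stream instead of the terminal.
$trace = fopen('php://temp', 'w+');
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_STDERR, $trace);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);

curl_exec($ch);
curl_close($ch);

rewind($trace);
echo stream_get_contents($trace); // same kind of output as `curl -v`
```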