I am trying to implement a simple program in Python that reads web pages and saves them to files. There are about 2000 pages of messages, numbered incrementally, but some numbers are missing.

The site is password protected, and I am using the same username and password I normally use to access it manually. I am using some code examples with cookie handling that I found on the official Python site, but when I try them, the site I am attempting to copy replies:

"Your browser isn't accepting our snacks. To see this site, please set your browser preferences to simply accept snacks. (Code )"

Clearly there is a problem with cookies, and perhaps I am not handling the username and password correctly. Any suggestions about the following code?

import urllib2
import cookielib
import string
import urllib
def cook():
    url="http://www.URL.com/message/"
    cj = cookielib.LWPCookieJar()               # jar that should hold the site's cookies
    authinfo = urllib2.HTTPBasicAuthHandler()   # HTTP basic authentication handler
    realm = "http://www.URL.com"
    username = "ID"
    password = "PSWD"
    host = "http://www.URL.com/message/"
    authinfo.add_password(realm, host, username, password)
    # Build an opener that uses both the cookie jar and the auth handler,
    # and install it as the default for urllib2.urlopen.
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj), authinfo)
    urllib2.install_opener(opener)

    # Create request object
    txheaders = { 'User-agent' : "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)" }
    try:
        req = urllib2.Request(url, None, txheaders)
        cj.add_cookie_header(req)
        f = urllib2.urlopen(req)

    except IOError, e:
        print "Failed to open", url
        if hasattr(e, 'code'):
            print "Error code:", e.code

    else:

        print f

cook()
url="http://www.URL.com/message/"
urllib.urlretrieve(url + '1', 'filename')
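
One thing worth noting is that urllib.urlretrieve uses its own opener inside the urllib module, so the cookie jar and auth handler installed by cook() never apply to that download. Below is a minimal sketch of fetching the numbered pages through urllib2 itself instead; the fetch_messages name, the message_%d.html filename pattern, the 1..2000 range, and the assumption that missing numbers come back as HTTP errors are illustrative placeholders, not part of the original code.

import urllib2

def fetch_messages(base_url, first, last):
    # Fetch every numbered page through urllib2 so the opener installed by
    # cook() (cookie processor + basic auth handler) is used for each request.
    for number in range(first, last + 1):
        page_url = base_url + str(number)
        try:
            response = urllib2.urlopen(page_url)
        except urllib2.HTTPError, e:
            # Missing message numbers presumably come back as HTTP errors;
            # skip them and keep going.
            print "Skipping", page_url, "- error", e.code
            continue
        data = response.read()
        response.close()
        out = open("message_%d.html" % number, "wb")
        out.write(data)
        out.close()

# Usage, replacing the urllib.urlretrieve call above:
cook()
fetch_messages("http://www.URL.com/message/", 1, 2000)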

Have a look at Bolacha; it is a wrapper around httplib2 that handles cookies and other stuff...
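
I have not checked Bolacha's exact API, but as a rough illustration of the httplib2 layer it wraps, the sketch below does the basic-auth login and carries the cookie forward by hand; the URL and credentials are the same placeholders used in the question.

import httplib2

h = httplib2.Http()
# Same basic-auth credentials as in the question (placeholders).
h.add_credentials("ID", "PSWD")

# First request: the server should answer with a Set-Cookie header.
response, content = h.request("http://www.URL.com/message/", "GET")
cookie = response.get("set-cookie", "")

# Plain httplib2 keeps no cookie jar, so later requests must send the
# cookie back explicitly; a wrapper like Bolacha automates this step.
response, content = h.request("http://www.URL.com/message/1", "GET",
                              headers={"Cookie": cookie})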