I frequently write little Python scripts to iterate through all rows of a DB table, for instance to send an email to some or all customers.

I usually do it like this:

conn = MySQLdb.connect(host=hst, user=usr, passwd=pw, db=db)
cursor = conn.cursor()
cursor.execute("SELECT * FROM tbl_subscriber;")
subscribers = cursor.fetchall()

for subscriber in subscribers:
    # ... do something with each row, e.g. send the email ...
I wonder whether there is a better way to do this, because my code may load thousands of rows into memory.

I thought it could be done better with LIMIT. Maybe something like this:

"SELECT * FROM tbl_subscriber LIMIT %d,%d;" % (actualLimit,steps)    

What is the best way to do it? How would you do it?
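A minimal sketch of the LIMIT-based batching idea (the helper name `iter_in_batches` and the `FakeCursor` class are mine; the stand-in cursor only exists so the sketch runs without a live database - with a real MySQLdb cursor you would pass that instead, and use parameter substitution rather than string formatting):

```python
def iter_in_batches(cursor, step=1000):
    # Hypothetical helper: page through the table with LIMIT offset, step,
    # using parameter substitution (%s) instead of string formatting.
    offset = 0
    while True:
        cursor.execute(
            "SELECT * FROM tbl_subscriber LIMIT %s, %s", (offset, step))
        rows = cursor.fetchall()
        if not rows:
            break
        for row in rows:
            yield row
        offset += step

# Stand-in for a DB-API cursor so the sketch runs without a database.
class FakeCursor:
    def __init__(self, rows):
        self._rows, self._batch = rows, []
    def execute(self, sql, params):
        offset, step = params
        self._batch = self._rows[offset:offset + step]
    def fetchall(self):
        return self._batch

cursor = FakeCursor([(i, "subscriber%d" % i) for i in range(5)])
fetched = list(iter_in_batches(cursor, step=2))
```

Note that LIMIT with a growing offset gets slower on large tables, since MySQL still has to scan past all the skipped rows on each query.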

Unless you have BLOBs in there, thousands of rows shouldn't be a problem. Do you know that they are?

Also, why bring shame on yourself and your entire family by doing something like

"SELECT * FROM tbl_subscriber LIMIT %d,%d;" % (actualLimit,steps)

when the cursor can make the substitution for you in a way that avoids SQL injection?

c.execute("SELECT * FROM tbl_subscriber LIMIT %s,%s;", (actualLimit, steps))

First of all, maybe you don't need "SELECT * FROM ...";

it may be enough to fetch just the columns you actually use, e.g. "SELECT email FROM ...";

that will reduce memory usage anyway :)

Do you have actual memory problems? When iterating over a cursor, results are fetched one at a time (your DB-API implementation may choose to prefetch results, but then it may provide a function to set the number of prefetched rows).

Most MySQL connectors based on libmysqlclient will buffer all the results in client memory by default for performance reasons (on the assumption that you won't be reading large result sets).

When you need to read a large result in MySQLdb, you can use an SSCursor to avoid loading the entire result set into client memory:


SSCursor - A "server-side" cursor. Like Cursor but uses CursorUseResultMixIn. Use only if you are dealing with potentially large result sets.

This can introduce complications you have to be careful about. If you don't read all the results from the cursor, a second query will raise a ProgrammingError:

>>> import MySQLdb
>>> import MySQLdb.cursors
>>> conn = MySQLdb.connect(read_default_file='~/.my.cnf')
>>> curs = conn.cursor(MySQLdb.cursors.SSCursor)
>>> curs.execute('SELECT * FROM big_table')
>>> curs.fetchone()
(1L, '2c57b425f0de896fcf5b2e2f28c93f66')
>>> curs.execute('SELECT NOW()')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 173, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
_mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now")

So you always have to read everything from the cursor (and potentially multiple result sets) before issuing another query - MySQLdb won't do it for you.
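For illustration, "read everything first" can be wrapped in a small helper (`drain` is a name I made up; the stand-in cursor below only exists so the sketch runs without MySQL - with a real connection you would call it on the SSCursor before the next execute):

```python
def drain(cursor):
    # Fetch any rows left in the current result set...
    cursor.fetchall()
    # ...then advance through and fetch any remaining result sets,
    # so a following execute() won't hit "Commands out of sync".
    while cursor.nextset():
        cursor.fetchall()

# Stand-in cursor holding two result sets, so the sketch runs without MySQL.
class TwoSetCursor:
    def __init__(self):
        self._sets = [[(1,), (2,)], [(3,)]]
        self._current = 0
    def fetchall(self):
        rows = self._sets[self._current]
        self._sets[self._current] = []
        return rows
    def nextset(self):
        if self._current + 1 < len(self._sets):
            self._current += 1
            return True
        return None

curs = TwoSetCursor()
drain(curs)
```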

You don't have to modify the query; you can use the fetchmany method of cursors. Here is how I do it:

def fetchsome(cursor, some=1000):
    while True:
        rows = cursor.fetchmany(some)
        if not rows:
            break
        for row in rows:
            yield row

This way you can "SELECT * FROM tbl_customer" but only fetch some rows at a time.
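To make the pattern concrete, here is the generator again together with a stand-in cursor (`FakeCursor` is mine, just so the example runs without a database; with MySQLdb you would call `cursor.execute("SELECT * FROM tbl_customer")` first and pass the real cursor):

```python
def fetchsome(cursor, some=1000):
    # Yield rows one by one, pulling them from the cursor
    # in batches of `some` via the DB-API fetchmany method.
    while True:
        rows = cursor.fetchmany(some)
        if not rows:
            break
        for row in rows:
            yield row

# Stand-in for a DB-API cursor so the example runs without a database.
class FakeCursor:
    def __init__(self, rows):
        self._rows = list(rows)
    def fetchmany(self, size):
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch

cursor = FakeCursor([(i, "user%d@example.com" % i) for i in range(5)])
emails = [row[1] for row in fetchsome(cursor, some=2)]
```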