Is there any way to get sqlalchemy to perform a bulk insert rather than inserting each individual object? i.e.,

doing:

INSERT INTO `foo` (`bar`) VALUES (1), (2), (3)

instead of:

INSERT INTO `foo` (`bar`) VALUES (1)

INSERT INTO `foo` (`bar`) VALUES (2)

INSERT INTO `foo` (`bar`) VALUES (3)

I have just converted some code to use sqlalchemy rather than raw SQL and although it is now much nicer to work with, it seems to be slower now (up to a factor of 10). I'm wondering if this is the reason.

Maybe I could be using sessions more efficiently. At the moment I have autoCommit=False and do a session.commit() after I've added some stuff. Although this seems to cause the data to go stale if the DB is changed elsewhere - even if I do a new query I still get old results back?

Thanks for your help!

As far as I know, there is no way to get the ORM to issue bulk inserts. I believe the underlying reason is that SQLAlchemy needs to keep track of each object's identity (i.e., new primary keys), and bulk inserts interfere with that. For example, assuming your foo table contains an id column and is mapped to a Foo class:

x = Foo(bar=1)
print x.id
# None
session.add(x)
session.flush()
# BEGIN
# INSERT INTO foo (bar) VALUES(1)
# COMMIT
print x.id
# 1

Since SQLAlchemy picked up the value for x.id without issuing another query, we can infer that it got the value directly from the INSERT statement. If you don't need subsequent access to the created objects via the same instances, you can skip the ORM layer for your insert:

Foo.__table__.insert().execute([{'bar': 1}, {'bar': 2}, {'bar': 3}])
# INSERT INTO foo (bar) VALUES ((1,), (2,), (3,))

SQLAlchemy can't match these new rows with any existing objects, so you'll have to query them anew for any subsequent operations.
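On recent SQLAlchemy versions (1.4+, an assumption here - the answer above uses the old implicit-execution API, which has since been removed), the same table-level bulk insert goes through an explicit connection: passing a list of dicts to execute() makes SQLAlchemy send all the rows in a single executemany call rather than one statement per object. A minimal sketch against an in-memory SQLite database:

```python
# Sketch of a Core-level bulk insert on SQLAlchemy 1.4+,
# using an in-memory SQLite database for illustration.
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, select

engine = create_engine("sqlite://")
metadata = MetaData()
foo = Table(
    "foo", metadata,
    Column("id", Integer, primary_key=True),
    Column("bar", Integer),
)
metadata.create_all(engine)

# A list of parameter dicts triggers executemany: all three rows are
# sent to the database in one call, bypassing the ORM's identity tracking.
with engine.begin() as conn:
    conn.execute(foo.insert(), [{"bar": 1}, {"bar": 2}, {"bar": 3}])

# The rows exist in the database, but no ORM instances are attached to them;
# they have to be queried anew for any further work.
with engine.connect() as conn:
    rows = conn.execute(select(foo.c.bar).order_by(foo.c.bar)).fetchall()
print([r[0] for r in rows])
```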

As far as stale data is concerned, it's helpful to know that the session has no built-in way to know when the database is changed outside of the session. In order to access externally modified data through existing instances, the instances need to be marked as expired. This happens by default on session.commit(), but can be done manually by calling session.expire_all() or session.expire(instance). An example (SQL omitted):

x = Foo(bar=1)
session.add(x)
session.commit()
print x.bar
# 1
foo.update().execute(bar=42)
print x.bar
# 1
session.expire(x)
print x.bar
# 42

session.commit() expires x, so the first print statement implicitly opens a new transaction and re-queries x's attributes. If you comment out the first print statement, you'll notice that the second one now picks up the correct value, because the new query isn't emitted until after the update.

This makes sense from the point of view of transactional isolation - you should only pick up external modifications between transactions. If this is causing you trouble, I'd suggest clarifying or re-thinking your application's transaction boundaries instead of immediately reaching for session.expire_all().
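When only one or two instances are affected, session.refresh(instance) is a narrower tool than expiring: where expire marks attributes to be lazily re-loaded on next access, refresh re-SELECTs that one row immediately. A runnable sketch of the same stale-data scenario as above (assumptions: SQLAlchemy 1.4+, an in-memory SQLite database, and a minimal hypothetical Foo mapping standing in for whatever the real one looks like):

```python
# Sketch: reading an out-of-session UPDATE via session.refresh(),
# mirroring the expire() example above.
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bar = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
x = Foo(bar=1)
session.add(x)
session.commit()
print(x.bar)  # commit expired x, so this re-loads it in a new transaction

# Change the row without going through the session.
with engine.begin() as conn:
    conn.execute(Foo.__table__.update().values(bar=42))

stale = x.bar           # attribute is cached on the instance: still the old value
session.refresh(x)      # immediately re-SELECTs just this row
current = x.bar         # now reflects the external UPDATE
print(stale, current)
```

The same caveat applies: refresh() issues its SELECT inside the session's current transaction, so what it sees still depends on your database's isolation level.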