When the application requests a result set identical to one that was recently fetched, how might the ORM track which results are stale and which can safely be reused, without consuming a lot of memory or adding excessive architectural complexity?

Cache invalidation is a notoriously hard problem. The basic case you describe is most easily handled by the database's query cache (frequent requests will keep your query in cache). Once the caching strategy gets more complicated than that, most of the gains come from manually controlling the cache and its expiration with a separate key-value cache store.
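As a rough illustration of manually controlling a key-value cache with expiration, here is a cache-aside sketch in Python. A plain dict stands in for an external store such as memcached or Redis, and the names (`QueryCache`, `fetch_users`, the `"users:active"` key) are made up for the example:

```python
import time

class QueryCache:
    """Minimal key-value cache with per-entry expiration (TTL).
    A dict stands in for an external store like memcached or Redis."""
    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:  # expired: evict and report a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl=60):
        self._store[key] = (time.monotonic() + ttl, value)

    def invalidate(self, key):
        self._store.pop(key, None)

cache = QueryCache()

def fetch_users(run_query):
    """Cache-aside: check the cache first, fall back to the database."""
    key = "users:active"
    rows = cache.get(key)
    if rows is None:
        rows = run_query()           # the expensive database hit
        cache.set(key, rows, ttl=30)
    return rows
```

The manual-control part is that writes to the underlying table call `cache.invalidate("users:active")` themselves, rather than relying on the database to notice.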

If this kind of access pattern is the norm for your application and you're open to trying something new, CouchDB's map-reduce views might be a good fit.

Beyond basic memoization, I tend to view caching at the ORM level as a fairly finicky and poor plan.
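For reference, "basic memoization" at this level can be as simple as caching the result of a query function per argument. A sketch using Python's standard `functools.lru_cache` (the function name and its body are illustrative, not a real ORM call):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def user_by_id(user_id):
    # Stand-in for an expensive ORM query; in a real app this would hit the DB.
    return {"id": user_id, "name": f"user-{user_id}"}
```

Repeated calls with the same `user_id` return the cached dict without re-running the query; `user_by_id.cache_clear()` is the blunt invalidation hammer when the underlying data changes.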

When I have to determine whether the local data is synchronized with the (remote) server, I keep track of the transactions.

So before "refreshing" the local data I "query the transaction history" and, if no transaction has touched the relevant (remote) data since the last "refresh", the local copy is still in sync.
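The approach above can be sketched as follows. This is a toy model, not a real API: `SyncTracker`, `FakeServer`, and the per-dataset transaction id are all invented for illustration, with the server bumping a counter on every write so the client can compare counters instead of re-fetching:

```python
class SyncTracker:
    """Refresh local data only when the server's transaction counter
    has advanced since the last sync. All names here are illustrative."""
    def __init__(self, server):
        self.server = server
        self.last_seen_txn = {}  # dataset -> txn id at last refresh
        self.local = {}          # dataset -> locally cached rows

    def refresh(self, dataset):
        current = self.server.latest_txn(dataset)
        if self.last_seen_txn.get(dataset) == current:
            return self.local[dataset]      # still in sync: reuse local copy
        rows = self.server.fetch(dataset)   # stale: re-fetch from the server
        self.local[dataset] = rows
        self.last_seen_txn[dataset] = current
        return rows

class FakeServer:
    """Toy remote server: bumps a transaction id on every write."""
    def __init__(self):
        self.txn = 0
        self.data = {}
        self.fetches = 0  # counts full fetches, to show they are avoided

    def write(self, dataset, rows):
        self.txn += 1
        self.data[dataset] = rows

    def latest_txn(self, dataset):
        return self.txn

    def fetch(self, dataset):
        self.fetches += 1
        return list(self.data[dataset])
```

Checking the transaction counter is one cheap round-trip; the full fetch happens only when something actually changed.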

But I'm not sure whether that really "reduces the complexity".