I've been working on a website in Django, served via FastCGI, set up with an autoinstaller and a custom templating system.

As I have it set up now, each View is an instance of a class that is bound to a template file at load time, not at execution time. That is, the class is bound to its template by a decorator:

@include("page/page.xtag")                    # bind template to view class
class Page(Base):
    def main(self):                           # main end-point to retrieve the page
        blah = get_some_stuff()
        return self.template.main(data=blah)  # evaluate the template with some data
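For illustration, here is a minimal sketch of what a decorator like `include` might look like; `compile_template` and the `Template` class are hypothetical stand-ins for the custom templating system's loader, not the real code:

```python
def compile_template(path):
    # Hypothetical stand-in for the custom templating system's loader;
    # the real one would parse the .xtag file at this point.
    class Template:
        def __init__(self, path):
            self.path = path
        def main(self, **data):
            return "rendered %s with %r" % (self.path, data)
    return Template(path)

def include(path):
    # Class decorator: compile the template once, when the module is
    # imported, and attach it to the view class. Because this runs at
    # load time, template edits only show up after a process restart.
    def bind(cls):
        cls.template = compile_template(path)
        return cls
    return bind
```

The key point is that `compile_template` runs when the decorated class is defined, i.e. when the module is first imported into the persistent process.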

One thing I've noticed is that since FCGI doesn't spawn a new process and reload all the modules/classes on every request, changes to a template don't show up on the site until after I force a restart (e.g. by editing/saving a Python file).

The pages also contain a lot of data stored in .txt files in the filesystem. For example, I load large snippets of code from separate files rather than leaving them in the template (where they clutter things up) or in the database (where they are awkward to edit). Knowing that the process is persistent, I built an ad-hoc memcache by saving the text I load in a static dictionary in one of my classes:

import os

class XLoad:
    rawCache = {}  # {name: (mtime, text)}

    @staticmethod
    def loadRaw(source):
        latestTime = os.stat(source).st_mtime
        cached = XLoad.rawCache.get(source)
        if cached is not None and latestTime <= cached[0]:
            # if the cached copy of the file is up to date, use it
            return cached[1]
        else:
            # otherwise read it from disk, put it in the cache, and use that
            with open(source) as f:
                text = f.read()
            XLoad.rawCache[source] = (latestTime, text)
            return text

This sped everything up substantially, since the 24 or so code snippets I had been loading one by one from the filesystem were now coming straight from the process's memory. Every time I forced a restart it would be slow for one request while the cache filled up, then blazing fast again.

My question is: what exactly determines how/when the process gets restarted, the classes and modules reloaded, and the data in my static dictionary cleared? Does it depend on my installation of Python, or Django, or Apache, or FastCGI? Is it deterministic, based on time, on number of requests, on load, or pseudo-random? And is it safe to do this sort of in-memory caching (which really is very easy and convenient!), or should I look into some proper way of caching these file reads?
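As an aside: if you do want something less ad hoc without reaching for memcached, the same mtime-keyed idea can be expressed with the standard library alone. A sketch (the function names here are mine, purely illustrative):

```python
import functools
import os

@functools.lru_cache(maxsize=None)
def _read(source, mtime):
    # mtime is part of the cache key, so an edited file (newer mtime)
    # misses the cache and is re-read from disk automatically.
    with open(source) as f:
        return f.read()

def load_raw(source):
    return _read(source, os.stat(source).st_mtime)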

It sounds like you already know this.

  1. When you edit a Python file.
  2. When you restart the server.
  3. When there's a nonrecoverable error. Also known as "only when it has to".

Caching like this is fine -- you're doing it whenever you store anything in a variable. Since the data is read-only, how could it not be safe? Try not to write changes to a file right after you've restarted the server, but the worst thing that could happen is that one page view gets messed up.

There's an easy way to verify all of this -- logging. Have your decorators log when they're called, and log whenever you have to load a file from disk.