I'm managing a Rails site that currently uses SQLite3.

About once every 500 requests, I receive a

ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:...

What's the best way to fix this that would be non-invasive to my code?

I am using SQLite right now because you can keep the DB in source control, which makes copying natural, and you can push changes out very quickly. However, it's clearly not really set up for concurrent access. I'll migrate to MySQL eventually.

You mentioned that this is a Rails site. Rails lets you set the SQLite retry timeout in your database.yml config file:

  adapter: sqlite3
  database: db/mysite_prod.sqlite3
  timeout: 10000

The timeout value is specified in milliseconds. Increasing it to 10 or 15 seconds should decrease the number of BusyExceptions you see in your log.

This is just a short-term solution, though. If your site needs true concurrency then you'll have to migrate to another db engine.

By default, sqlite returns immediately with a blocked, busy error if the database is busy and locked. You can ask it to wait and keep trying for a while before giving up. This usually fixes the problem, unless you have thousands of threads accessing your db, in which case I agree sqlite would be inappropriate.

    // set SQLite to wait and retry for up to 100 ms if the database is locked

    sqlite3_busy_timeout( db, 100 );
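The same knob is exposed by most bindings. As a sketch (not from the answers above), Python's stdlib `sqlite3` module sets this busy timeout via the `timeout` argument to `connect()`, and the effect is easy to see with two connections contending for the write lock:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

# isolation_level=None puts the connection in autocommit mode,
# so transactions are controlled explicitly below.
writer = sqlite3.connect(path, timeout=5, isolation_level=None)
writer.execute("CREATE TABLE hits (n INTEGER)")
writer.execute("BEGIN IMMEDIATE")            # take the write lock
writer.execute("INSERT INTO hits VALUES (1)")

# A connection with a tiny busy timeout gives up almost immediately.
reader = sqlite3.connect(path, timeout=0.05, isolation_level=None)
try:
    reader.execute("BEGIN IMMEDIATE")        # needs the same lock
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)

writer.execute("COMMIT")                     # release the lock
reader.execute("BEGIN IMMEDIATE")            # now it succeeds
reader.execute("ROLLBACK")
```

With a larger `timeout`, the second connection would simply wait for the first to commit instead of raising.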

Just for the record. In one application with Rails 2.3.8 we found that Rails was ignoring the "timeout" option Rifkin Habsburg suggested.

After more investigation we found a possibly related bug in the Rails issue tracker, and after still more digging we found the solution (tested with Rails 2.3.8):

Edit this ActiveRecord file: activerecord-2.3.8/lib/active_record/connection_adapters/sqlite_adapter.rb

Replace this:

  def begin_db_transaction #:nodoc:
    catch_schema_changes { @connection.transaction }
  end

With this:

  def begin_db_transaction #:nodoc:
    catch_schema_changes { @connection.transaction(:immediate) }
  end
And that's all! We haven't noticed any performance drop, and now the app handles many more requests without breaking (it waits for the timeout). Sqlite is nice!

Source: this link

-- Open the database
db = sqlite3.open("filename")

-- Ten attempts are made to proceed, if the database is locked
function my_busy_handler(attempts_made)
  if attempts_made < 10 then
    return true
  else
    return false
  end
end

-- Set the new busy handler
db:set_busy_handler(my_busy_handler)

-- Use the database
db:exec(...)
Sqlite allows other processes to wait until the current one is finished.
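Python's stdlib `sqlite3` module doesn't expose `sqlite3_busy_handler` directly, but the same give-up-after-ten-attempts policy can be approximated with an application-level retry loop. A sketch; `execute_with_retry` is a made-up helper, not part of any library:

```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), max_attempts=10):
    """Retry a statement when the database is locked, giving up
    after max_attempts tries -- the same policy as a busy handler
    that allows ten attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == max_attempts:
                raise
            time.sleep(0.01 * attempt)  # back off a little, then retry

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
execute_with_retry(conn, "INSERT INTO t VALUES (?)", (42,))
row = execute_with_retry(conn, "SELECT x FROM t").fetchone()
```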

I use this line to connect when I know I may have multiple processes trying to access the Sqlite DB:

conn = sqlite3.connect('filename', isolation_level = 'exclusive')

According to the Python Sqlite Documentation:

You can control which kind of BEGIN statements pysqlite implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.

All of these answers are true, but they don't address the question, which is likely: why does my Rails app occasionally raise a SQLite3::BusyException in production?

@Shalmanese: what is the production hosting environment like? Is it on a shared host? Is the directory that contains the sqlite database on an NFS share? (Likely, on a shared host).

This problem likely has to do with the phenomena of file locking with NFS shares and SQLite's lack of concurrency.

What table is being accessed when the lock is encountered?

Do you have long-running transactions?

Can you figure out which requests were still being processed when the lock was encountered?