I've got a desktop application that stores its data inside a local H2 database. I'm using Squeryl to interface with the database.
The database is extremely small (some 10 kB). Still, I'm seeing severe performance problems and extensive disk IO. Since I'm only reading from the DB, I was expecting the entire data set to be cached; I even set the cache size explicitly (to a value way greater than the total DB size). I also tried disabling locking, with no result.
My program makes lots of small queries against the database: essentially I've got a Swing
TableModel which issues a query for each table cell (each column of every row). I'm wrapping all of those calls in a Squeryl transaction.
I have created a profile using JVisualVM and I suspect the following call tree shows the issue. The topmost method is a read access from my code.
How do I fix this, or what am I doing wrong? Somehow I expected that I should be able to make many small calls to a DB that's small enough to be held in under 1 MB of memory. Why is disk IO happening at all, and how can I avoid it?
Looking at the screenshot, it appears you're selecting from the DB in the
getValueAt() method of your TableModel (the method name
getRowAt() at the top of the call stack is what leads me to this assumption).
If my assumption is correct, then this is your main problem:
getValueAt() is called by the JTable's repaint() machinery constantly (probably several times a second), so it should be as fast as possible.
You should fetch the data for your JTable in a single SQL query and then save the result in a data structure (e.g. an ArrayList or something like that).
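As a minimal sketch of that idea (the class and column names here are hypothetical, not from your code): fetch all rows once, keep them in a list, and let getValueAt() do nothing but an in-memory lookup.

```java
import java.util.ArrayList;
import java.util.List;
import javax.swing.table.AbstractTableModel;

// Cached table model: all rows are loaded once (e.g. from a single SELECT)
// and kept in memory, so getValueAt() never touches the database.
public class CachedTableModel extends AbstractTableModel {
    private final String[] columnNames;
    private final List<Object[]> rows = new ArrayList<>();

    public CachedTableModel(String[] columnNames, List<Object[]> data) {
        this.columnNames = columnNames;
        this.rows.addAll(data); // snapshot of the query result
    }

    @Override public int getRowCount() { return rows.size(); }

    @Override public int getColumnCount() { return columnNames.length; }

    @Override public String getColumnName(int col) { return columnNames[col]; }

    // Called repeatedly by JTable while painting -- now just a cheap lookup.
    @Override public Object getValueAt(int row, int col) {
        return rows.get(row)[col];
    }
}
```

If the data can change, reload the list in one query and call fireTableDataChanged() rather than querying per cell.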
I don't know Squeryl, but I doubt you need to wrap every SELECT in a transaction. From the stack trace it appears this causes massive writes in H2. Did you try running the SELECTs without explicitly opening (and closing) a transaction every time?
The answer was quite simple in the end. I'll quote the FAQ.
Delayed Database Closing
Usually, a database is closed when the last connection to it is closed. In some situations this slows down the application, for example when it is not possible to keep at least one connection open. The automatic closing of a database can be delayed or disabled with the SQL statement
SET DB_CLOSE_DELAY <seconds>. The parameter
<seconds> specifies the number of seconds to keep a database open after the last connection to it was closed. The following statement will keep a database open for 10 seconds after the last connection was closed:
SET DB_CLOSE_DELAY 10
The value -1 means the database is not closed automatically. The value
0 is the default and means the database is closed when the last connection is closed. This setting is persistent and can be set by an administrator only. It is possible to set the value in the database URL:
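According to the H2 documentation, the delay can be appended to the JDBC URL as a connection setting; for a database named test in the user's home directory that would look like:

```
jdbc:h2:~/test;DB_CLOSE_DELAY=10
```

With DB_CLOSE_DELAY=-1 the database (and its cache) stays open for the lifetime of the JVM, which is what resolved the repeated open/close disk IO here.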