I'm using Apache Derby to store a large number of rows, on the order of tens of millions. Every time I run a load phase, I insert up to 2 million more rows into the table. The table has a UUID as its primary key and a single constraint to a UUID in one other table. The insert takes hours! Why? I had created INDEXes on all of the tables, but I have since removed these because I believe Derby automatically creates an index for every table with a primary key. I'm using a batch update with a prepared statement, as shown (in very simple form) below:

final PreparedStatement addStatement = connection.prepareStatement(...);
int entryCount = 0;
for (final T entry : entries) {
    // parameters are set on addStatement here (elided)
    addStatement.addBatch();
    entryCount++;
    if (entryCount % 1000 == 0) {
        addStatement.executeBatch();
        addStatement.clearBatch();
        entryCount = 0;
    }
}
addStatement.executeBatch(); // flush the final partial batch
addStatement.close();

Here are the results:

05/01/12 12:42:48 Creating 2051469 HE Peaks in DB Table APP.ST_HE_PEAK_TABLE
05/01/12 12:44:18 Progress: Written (10%) 205146/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 12:46:51 Progress: Written (20%) 410292/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 12:50:46 Progress: Written (30%) 615438/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 12:56:46 Progress: Written (40%) 820584/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 13:04:29 Progress: Written (50%) 1025730/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 13:13:19 Progress: Written (60%) 1230876/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 13:22:54 Progress: Written (70%) 1436022/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 13:34:53 Progress: Written (80%) 1641168/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 13:47:02 Progress: Written (90%) 1846314/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE
05/01/12 13:58:09 Completed: Written (100%) 2051469/2051469 entries to DB Table APP.ST_HE_PEAK_TABLE - Time Taken:01:15:21

As I insert more and more rows, the process gets slower and slower (probably because of the INDEX). The DB model I have right now serves my purposes well and I'm reluctant to change it. Am I doing something wrong? ... or expecting too much? Is there any way to improve the insert speed?

Have you tried turning off autocommit mode? From http://db.apache.org/derby/docs/dev/tuning/tuningderby.pdf:

Inserts can be painfully slow in autocommit mode because each commit involves an update of the log on the disk for each INSERT statement. The commit will not return until a physical disk write has been executed. To speed things up:

  • Run in autocommit false mode, execute a number of inserts in a single transaction, and then explicitly issue a commit.
  • If your application allows an initial load into the table, you can use the import procedures to insert data into a table. Derby will not log the individual inserts when loading into an empty table using these interfaces. See the Derby Reference Manual and the Derby Server and Administration Guide for more information on the import procedures.
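The first bullet can be sketched in JDBC roughly as follows. This is a minimal sketch, not your actual code: the column name `PEAK_ID`, the single-column insert, and the `BATCH_SIZE` of 1000 are assumptions for illustration. The key points are `setAutoCommit(false)` before the loop and a single `commit()` at the end, so the log is flushed once per load instead of once per row.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.UUID;

public class BulkInsert {

    // Hypothetical SQL for illustration; substitute your real columns.
    private static final String INSERT_SQL =
        "INSERT INTO APP.ST_HE_PEAK_TABLE (PEAK_ID) VALUES (?)";
    private static final int BATCH_SIZE = 1000;

    // Inserts all entries in ONE transaction: autocommit is switched off,
    // batches are flushed every BATCH_SIZE rows, and a single commit is
    // issued at the end instead of one commit per INSERT.
    static void insertAll(Connection connection, List<UUID> entries)
            throws SQLException {
        final boolean oldAutoCommit = connection.getAutoCommit();
        connection.setAutoCommit(false);
        try (PreparedStatement ps = connection.prepareStatement(INSERT_SQL)) {
            int pending = 0;
            for (UUID entry : entries) {
                ps.setString(1, entry.toString());
                ps.addBatch();
                if (++pending % BATCH_SIZE == 0) {
                    ps.executeBatch();
                }
            }
            ps.executeBatch();   // flush the final partial batch
            connection.commit(); // single log flush for the whole load
        } catch (SQLException e) {
            connection.rollback();
            throw e;
        } finally {
            connection.setAutoCommit(oldAutoCommit);
        }
    }

    // How many executeBatch() calls the loop above performs:
    // one per full batch, plus the final flush after the loop.
    static int batchCalls(int rows, int batchSize) {
        return rows / batchSize + 1;
    }

    public static void main(String[] args) {
        // For the 2,051,469-row load in the question: 2051 full
        // batches of 1000, plus one final flush of 469 rows.
        System.out.println(batchCalls(2_051_469, BATCH_SIZE));
    }
}
```

For the second bullet (loading into an empty table), Derby ships the `SYSCS_UTIL.SYSCS_IMPORT_TABLE` system procedure, which reads rows from a delimited file and skips per-row logging in the empty-table case; see the Derby Reference Manual for its exact arguments.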