I recently moved a lot of tables from an existing database into a new database because the original database was getting too big. After doing this I noticed a dramatic reduction in the performance of my queries when running them against the new database.

The process I used to create the new database was:

  1. Generate table CREATE scripts using SQL Server's automatic script
    generator.
  2. Run the CREATE TABLE scripts.
  3. Insert all the data into the new database using INSERT INTO with a
    SELECT from the existing database (a sketch of this step is shown after
    this list).
  4. Run all the ALTER scripts to create the foreign keys and any indexes.
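
For reference, step 3 usually looks something like the following; the database and table names here are only placeholders for whatever yours actually are.

    -- Copy one table's rows from the old database into the new one
    -- (NewDb, OldDb and dbo.Orders are placeholder names)
    INSERT INTO NewDb.dbo.Orders (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM   OldDb.dbo.Orders;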

Does anyone have ideas about potential problems with my process, or some key step I'm missing that could be causing this performance problem?

Thanks.

First, at a bare minimum, I'd make sure that Auto Create Statistics is enabled; you may also want to set Auto Update Statistics to true.
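
These are the settings I mean; NewDb is just a placeholder for your database name.

    -- Enable automatic creation and updating of statistics on the new database
    ALTER DATABASE NewDb SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE NewDb SET AUTO_UPDATE_STATISTICS ON;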

Next, I'd update the statistics by running

sp_updatestats

or

UPDATE STATISTICS
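
For example (the table name below is only a placeholder):

    -- Refresh statistics for every table in the current database
    EXEC sp_updatestats;

    -- Or target a single table with a full scan
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;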

Also keep in mind that the first time you hit the queries they will be slower because nothing will be cached in RAM yet. The second hit should be considerably faster.

Did you script the indexes from the tables in the original database? Missing indexes could certainly account for the poor performance.
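
A quick sanity check is to count the indexes per table in each database and compare the two result sets; this is just a rough sketch using the catalog views.

    -- Run this in the old database and in the new database, then diff the output
    SELECT t.name AS table_name,
           COUNT(i.index_id) AS index_count
    FROM   sys.tables t
    LEFT JOIN sys.indexes i
           ON i.object_id = t.object_id
          AND i.index_id > 0          -- index_id 0 is the heap, skip it
    GROUP BY t.name
    ORDER BY t.name;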

Have you tried looking at the execution plans on each server when running these queries? That should let you easily see whether they are doing something different, e.g. table scanning because of a missing index, poor statistics, etc.
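
Besides the graphical plans, capturing the I/O and timing statistics for the same query on both servers is an easy way to compare; the SELECT below is only a stand-in for one of your slow queries.

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Run the slow query here (placeholder query shown)
    SELECT COUNT(*) FROM dbo.Orders;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;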

Are both DBs sitting on the same box with their files on the same drive arrays?

Do you know what about those queries got slower? New access plans? Same plans but they perform slower? Do they execute slower, or are they suspended more? Did all the queries get slower or just some? And finally, how do you know, i.e. exactly what did you measure and how?
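
If you have not measured anything yet, one rough way to see where the time is going on the new server is the query stats DMV; note that it only covers plans still in the cache.

    -- Top queries by total elapsed time (times are in microseconds)
    SELECT TOP (20)
           qs.execution_count,
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1, 200) AS query_text
    FROM   sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY qs.total_elapsed_time DESC;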

Some of the usual suspects might be:

  • The new storage is much slower (.mdf on a slow disk, or on a busy disk).
  • You changed the data structure during the move (i.e. some indexes did not get ported).
  • You changed the data size (i.e. compression options), resulting in more pages for the same data.
  • Did anything else change at the same time, such as new application code?
  • By extending the data size (you do not mention removing the old tables) you are now thrashing the buffer pool (did the page life expectancy decrease in the performance counters? see the query after this list).
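
To check that last point, the page life expectancy counter is also exposed through a DMV, so something like this should show it (the value is in seconds):

    SELECT object_name, counter_name, cntr_value AS page_life_expectancy_sec
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Page life expectancy'
      AND  object_name LIKE '%Buffer Manager%';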

Just had similar issues - were there any triggers or fulltext indexes on any of your tables?
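
If you want to check quickly, something like this should list them (just a sketch against the catalog views):

    -- Triggers in the current database
    SELECT name AS trigger_name, OBJECT_NAME(parent_id) AS table_name
    FROM   sys.triggers;

    -- Tables with fulltext indexes
    SELECT OBJECT_NAME(object_id) AS table_name
    FROM   sys.fulltext_indexes;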

Josh

Look at how you set the initial size and growth options. If you did not provide enough space to begin with, or if you are growing by 1 MB at a time, that could be a cause of the performance issues.
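
You can check the current settings with sys.database_files and, if needed, bump the growth increment to something more reasonable; NewDb and NewDb_Data below are placeholder names.

    -- Current size and growth settings for the files of the current database
    SELECT name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + '%'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM   sys.database_files;

    -- Example: switch to a larger fixed growth increment
    ALTER DATABASE NewDb
    MODIFY FILE (NAME = NewDb_Data, FILEGROWTH = 256MB);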