Obviously the answer probably depends on the size and tier of the application, but I'm curious about people's experience choosing where the disks used by the database should live.

Here are the options:

  • JBOD - (just a bunch of disks) Traditional internal disks - fast but not very expandable
  • NAS - Slow but cheap and expandable, probably ideal for backups
  • DAS - A good compromise, but generally accessible from only one or two machines
  • SAN - Expensive but excellent

How much should you worry about choosing a 15k drive over a 10k or 7200 RPM one?
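For a rough sense of what spindle speed buys you on random IO, average rotational latency is the time for half a revolution. A quick back-of-envelope calculation (ignoring seek time and interface speed, which matter too):

```python
# Rough figures only: average rotational latency is half a revolution,
# so latency_ms = 0.5 * 60_000 / rpm.  This is why 15k drives help
# random-access database workloads.
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency (half a revolution), in milliseconds."""
    return 0.5 * 60_000 / rpm

for rpm in (7200, 10_000, 15_000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
```

So going from 7200 to 15k roughly halves rotational latency (about 4.2 ms down to 2 ms) on every random read.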

What's your preferred RAID level?
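To frame the RAID-level question, here is a simplified sketch (not vendor-specific; real arrays vary) of how the common levels trade usable capacity for redundancy on n identical disks:

```python
# Simplified model: fraction of raw capacity that is usable per RAID
# level, for n identical disks.  Ignores hot spares and vendor quirks.
def raid_usable_fraction(level: str, n_disks: int) -> float:
    if level == "0":       # striping, no redundancy
        return 1.0
    if level in ("1", "10"):  # mirroring / striped mirrors (2-way)
        return 0.5
    if level == "5":       # one disk's worth of parity
        return (n_disks - 1) / n_disks
    if level == "6":       # two disks' worth of parity
        return (n_disks - 2) / n_disks
    raise ValueError(f"unknown RAID level {level!r}")

for level, n in (("0", 8), ("1", 2), ("5", 8), ("6", 8), ("10", 8)):
    print(f"RAID-{level:>2} on {n} disks: "
          f"{raid_usable_fraction(level, n):.0%} usable")
```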

BTW, the first option is often known as JBOD -- just a bunch of disks. I've taken the liberty of editing your post to reflect that. Great question!

The biggest performance boost you can get is by partitioning tables/indexes onto different disks. The first step is to put indexes on one disk and data on another. Then you should consider which tables/indexes are used together, and put them on separate disks ("spindles") wherever possible.
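As a concrete illustration of that split (PostgreSQL-style tablespace DDL; the mount points, table and index names here are all made up for the example, and other databases use filegroups or similar mechanisms), a small generator might look like:

```python
# Hypothetical sketch: emit PostgreSQL-style DDL that pins data and
# indexes to tablespaces on separate physical disks.  All names and
# paths below are illustrative, not from a real system.
def tablespace_ddl(layout: dict) -> list:
    """layout maps tablespace name -> mount point on its own spindle."""
    return [f"CREATE TABLESPACE {name} LOCATION '{path}';"
            for name, path in layout.items()]

ddl = tablespace_ddl({
    "data_ts": "/mnt/disk1/pgdata",    # tables on one spindle...
    "index_ts": "/mnt/disk2/pgindex",  # ...indexes on another
})
ddl.append("CREATE TABLE orders (id int) TABLESPACE data_ts;")
ddl.append("CREATE INDEX orders_id_idx ON orders (id) TABLESPACE index_ts;")
print("\n".join(ddl))
```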

Although SAS-based DAS will probably be fastest for a single DB server (ideally with 15krpm 2.5" SFF disks in a RAID 10 configuration), for many systems you lose many of the advantages a SAN can bring. For that reason I'd always build databases with dual FC (4 or 8Gbps fibre links) connections into dual SAN switches, attached to a dual-controller SAN array. Not only will this setup be extremely fast indeed, it will also open up the option to use the various snapshot techniques these boxes offer. These can enable 'live-live' DB replication between sites for DR, instant database restoration, and easy capacity expansion/reduction with no impact on the server(s) themselves. Hope this helps; let me know if I can add anything more.
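For a ballpark on what those fibre links deliver: FC generations through 8GFC use 8b/10b line encoding, so each nominal Gbit/s of link speed carries roughly 100 MB/s of payload. A quick sketch:

```python
# Back-of-envelope payload bandwidth for the dual-FC setup described
# above.  1/2/4/8 Gbit FC use 8b/10b encoding, so each nominal Gbit/s
# of link rate carries roughly 100 MB/s of payload.
def fc_payload_mb_s(nominal_gbps: int, links: int = 1) -> int:
    return nominal_gbps * 100 * links

print(fc_payload_mb_s(4, links=2))  # dual 4Gbps links -> ~800 MB/s
print(fc_payload_mb_s(8, links=2))  # dual 8Gbps links -> ~1600 MB/s
```

Whether you actually see the aggregate figure depends on multipathing policy (active/active vs active/passive) and the array controllers keeping up.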

That will depend on the use you're putting the drives to. Some sample applications might be:

  • Robust storage of a modest quantity of data with modest traffic (such as a home network with assorted por^H^H^Hmedia files on it): one mirrored pair (RAID 1) of disks, separate from the system disk of the machine they're installed in. This will allow you to rebuild the machine or perform major surgery without affecting the data volume. RAID 1 means the data can survive the failure of a single disk.
  • A video editing system that requires fast streaming but not necessarily 100% reliability: a direct-attach RAID-0 (striped) on fibre channel disks with 'V' firmware (a Seagate-ism, but they make most such parts). Fibre Channel is a packet-based protocol, whereas with SCSI two devices reserve the whole bus. FC works better under load.
  • Transactional application: logs on a mirrored pair and data on one or more RAID-5/6, RAID-10 or RAID-50/60 volumes. On a SAN, or any controller configuration with battery-backed write-back caching, the controller can optimise the disk writes. DB logs are mostly sequential access, whereas the data volumes are mostly random access. The random seek activity will disturb the logging activity, so you'll get a performance benefit from keeping the logging disks relatively quiet and free of competing traffic.
  • Large data warehouse fact table: a number of mirrored pairs (RAID 1) on JBODs with as many host channels into the server as you can afford. Spread the fact table partitions across the mirrored pairs. Striped disks with a typical array firmware setup will often only get you one (say) 64k stripe per revolution of the disk, which comes to maybe 5 or 10MB/sec per disk on a 10K drive. DW workloads have a more streaming data access pattern than a transactional application. Using the mirrored pairs means the disks can potentially stream data at something much closer to their maximum transfer rate. This can be an order of magnitude faster.
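The one-stripe-per-revolution figure in the last point checks out with simple arithmetic:

```python
# Checking the claim above: if array firmware only fetches one 64k
# stripe from a disk per revolution, a 10K RPM disk delivers on the
# order of 10 MB/s, far below its sequential transfer rate.
def streaming_mb_per_s(rpm: int, stripe_kb: int) -> float:
    revs_per_sec = rpm / 60
    return revs_per_sec * stripe_kb / 1024  # MB/s

print(f"{streaming_mb_per_s(10_000, 64):.1f} MB/s")  # ~10.4 MB/s
```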

Just to get this started: I'm using a Dell MD3000 direct-attached storage array, connected via redundant HBA cards. It has 9x146Gb 15K drives, arranged in 4 RAID 1 arrays with 1 hot spare waiting. Total data footprint is approaching 200Gb. I'm not thrilled with the IO performance, but it's getting the job done.
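For what it's worth, that layout works out as follows (a quick sanity check, assuming all nine drives are identical):

```python
# Sanity check on the MD3000 layout described: 9 x 146GB 15K drives,
# arranged as 4 RAID-1 mirrored pairs plus 1 hot spare.
drives, drive_gb = 9, 146
pairs, spares = 4, 1
assert pairs * 2 + spares == drives   # every drive is accounted for

usable_gb = pairs * drive_gb  # each mirrored pair yields one drive's capacity
print(usable_gb)              # 584 GB usable for a ~200 GB footprint
```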

We have a database cluster attached to a NAS, also with redundant HBAs. The NAS volumes are RAID-10. According to our storage-meister, for databases the higher the RPM the better.