I am building a very sensitive application for a client that must have a 99.9999999999% uptime guarantee.

It is a Rails application with a MySQL database. I'm considering hosting it on EngineYard because of its reduced maintenance needs and ease of operation.

Heroku doesn't appear to be an ideal solution because of its uptime problems.

EC2 might also be a good choice, but it may take too much work to set up and look after.

My real question is: how do I build a redundant system using EngineYard, Heroku, EC2, or any other Rails host you might suggest? Should I have two instances in different places around the world being replicated? Please advise the best way.

Regards.

Everybody wants 100% uptime, but achieving it is virtually impossible. Since downtime can be caused by any of the links in the chain, and there are usually dozens, achieving such a high standard means buying gold-plated everything. Basically, you will have to spend a lot of money. The difference between 99% uptime, which means your site is unavailable for roughly 87 hours a year, and 99.9% uptime, where it's under 9 hours a year, is considerable, and the jump to 99.99% is even bigger, where the tolerance is about 53 minutes a year.
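The arithmetic behind those "nines" is easy to check yourself; here's a small Ruby sketch that converts an uptime percentage into an annual downtime budget (using a 365-day year):

```ruby
# Downtime budget per year for a given uptime percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(uptime_percent)
  MINUTES_PER_YEAR * (100.0 - uptime_percent) / 100.0
end

[99.0, 99.9, 99.99, 99.999].each do |pct|
  minutes = downtime_minutes_per_year(pct)
  puts format("%.3f%% uptime allows %.1f minutes (~%.1f hours) of downtime per year",
              pct, minutes, minutes / 60.0)
end
```

Note that a twelve-nines guarantee like the one in the question allows roughly 0.03 milliseconds of downtime per year, which should make clear why no provider will sign it.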

Going beyond 99.99% is simply not practical. Nobody will sign a guarantee like this unless they are being dishonest, the agreement is so loaded down with caveats as to be unenforceable, or they don't mind doling out heavy credits all the time. Amazon EC2's SLA is 99.99%, for example.

The metrics I have seen collected on a provider like Linode show uptimes of around 99.97% to 99.99%. Occasionally you will see datacenters advertising 100% uptime, but this is at the network level only and does not account for intermittent internal glitches that could knock your server offline.

Choosing a managed hosting provider like Engine Yard may be the answer for you, since it can minimize your exposure to random events, but it will not get you such a high uptime by itself. They are excellent at maintaining the system layer, but their ability to fix or work around bugs in your application is very limited, and they are subject to the same intermittent networking problems on EC2 as anyone else.

There are two kinds of reliability you should be concerning yourself with. One is availability, which is purely a measure of how likely a client is to be able to use the application. The other is data integrity, which is a measure of how likely data is to be preserved given any number of disaster scenarios.

Most people will accept that an application may be down now and then for short periods of time, but people won't accept that data can go missing every once in a while.

It is not hard to achieve a "99.9999999999%" data retention rate, but you will have to plan out your backup, replication, and recovery strategy in detail, and you will need to exercise your systems regularly to verify they are working as designed.
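To see why twelve nines of data retention is attainable while twelve nines of uptime is not, a rough model helps: if each backup copy has some independent chance of being lost in a year, the probability of losing *all* copies shrinks exponentially with the number of copies. The 1% loss rate below is a hypothetical figure for illustration, and the independence assumption is generous (correlated failures are the real enemy), but the shape of the curve is the point:

```ruby
# Illustrative model: probability of retaining data given `copies`
# independent backups, each with the same annual loss probability.
def durability(loss_probability_per_copy, copies)
  1.0 - loss_probability_per_copy**copies
end

# With a (hypothetical) 1% annual loss rate per independent copy:
[1, 2, 3, 6].each do |n|
  puts format("%d copies -> %.10f%% durability", n, durability(0.01, n) * 100)
end
```

Six independent 99%-reliable copies already put you in twelve-nines territory on paper, which is why the hard part is not the math but keeping the copies genuinely independent and regularly testing restores.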

While you have very little control over the often patchy routing on the internet in general, the defect rate in your server's hardware, the power in your data center, and so forth, you have a great deal of control over your backup strategy.

EY uses a company called Terremark for their hosting, which is some pretty serious hosting infrastructure. Of the options you listed, I'd go with them.

For uptime, you should look at master/slave replication of your data and automatic failover, and you should build in redundancy wherever you can. High availability is quite an involved subject, and it has more to do with IT than dev, so I would suggest asking where to start over at serverfault.com.
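The failover idea can be sketched in a few lines of Ruby. This is only a toy illustration of the pattern (try the primary, fall back to replicas in order); in practice this role is played by a load balancer, a proxy such as HAProxy, or the database driver itself, not hand-rolled application code, and the endpoint names below are made up:

```ruby
# Minimal failover sketch: yield each endpoint in order until one succeeds.
class Failover
  def initialize(primary, replicas)
    @endpoints = [primary, *replicas]
  end

  def with_connection
    last_error = nil
    @endpoints.each do |endpoint|
      begin
        return yield(endpoint)
      rescue StandardError => e
        last_error = e  # this endpoint is down; try the next one
      end
    end
    raise last_error  # every endpoint failed
  end
end

pool = Failover.new("primary.db.example.com", ["replica1.db.example.com"])
host = pool.with_connection do |endpoint|
  raise "connection refused" if endpoint.start_with?("primary")  # simulate an outage
  endpoint
end
puts host  # => replica1.db.example.com
```

The hard problems this glosses over, and the reason to ask the serverfault crowd, are replication lag, split-brain during failover, and promoting a slave to master without losing writes.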