To handle high traffic, I plan to scale out and run my web application (WordPress-based) on several EC2 instances (I am very new to AWS). The instances have to work on a single set of shared data (images, videos, ...).
I'm considering using S3 as the storage for this shared data.
My questions are:
If I use S3, do I have to write extra code in my application to upload and retrieve data to/from S3? Or is there some magic way to mount S3 on the EC2 instances, so that they can access S3 as if it were local storage?
I have heard that S3 is a little slow because it is accessed through web services (when users upload files, it takes extra time to push them on to S3). Is there a better method for storing shared data?
I have read some documents about the scaling of Amazon EC2, but none of them mentions how to handle shared data. Any assistance is highly appreciated. Thanks.
There's no native facility to 'mount' an S3 bucket as storage for an EC2 instance, although there are several third-party applications that provide mechanisms to make S3 storage available as a virtual drive or repository. Many of them offer a preset amount of free storage and then a tiered charging scheme for larger amounts - Google for 'S3 storage interface' and have a look.
Whether you write code to use S3 through the API or use an interface layer, there will always be some latency between your application and the storage. That's a fact of physics, and there is nothing you can do to remove the delay, because the S3 repository isn't co-located with the EC2 cluster - so you will never achieve 'local' storage access speeds.
An alternative might be to use EBS, which is close to your EC2 instance - it has different characteristics from S3 (for example, it doesn't offer edge locations for regionally-localized access) but is much faster for application use, since it sits within the EC2 cluster and is mounted as local storage.