A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with Cache-Control headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
Answer: A
Explanation:
Why Use Amazon S3 Transfer Acceleration?
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
You have customers that upload to a centralized bucket from all over the world.
You transfer gigabytes to terabytes of data on a regular basis across continents.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
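As a rough illustration of how this answer would be applied, the sketch below enables Transfer Acceleration on a bucket and then performs an upload and a download through the accelerate endpoint using boto3. The bucket name "example-bucket" and the file names are placeholders, not part of the question.

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time configuration: turn on Transfer Acceleration for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that routes requests through the s3-accelerate endpoint
# instead of the bucket's regional endpoint.
s3_accel = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)

# Large transfers now travel over AWS edge locations to and from the
# bucket, reducing latency for geographically distant users.
s3_accel.upload_file("large-dataset.bin", "example-bucket", "uploads/large-dataset.bin")
s3_accel.download_file("example-bucket", "uploads/large-dataset.bin", "local-copy.bin")
```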
How should I choose between S3 Transfer Acceleration and Amazon CloudFront’s PUT/POST?
S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance.
https://aws.amazon.com/s3/faqs/#s3ta