| URL | aws.amazon.com/s3/ |
| --- | --- |
| Type of site | File hosting service |
| Registration | Required |
| Available language(s) | English |
| Owner | Amazon.com |
| Launched | March 14, 2006 |
| Current status | Active |
Amazon S3 (Simple Storage Service) is an online storage web service offered by Amazon Web Services. Amazon S3 provides storage through web services interfaces (REST, SOAP, and BitTorrent).[1] Amazon launched S3, its first publicly available web service, in the United States in March 2006[2] and in Europe in November 2007.[3]
At its inception, Amazon charged end users US$0.15 per gigabyte-month, with additional charges for bandwidth used in sending and receiving data, and a per-request (get or put) charge.[4] As of November 1, 2008, pricing moved to tiers where end users storing more than 50 terabytes receive discounted pricing.[5] Amazon claims that S3 uses the same scalable storage infrastructure that Amazon.com uses to run its own global e-commerce network.[6]
Amazon S3 is reported to store more than 449 billion objects as of July 2011.[7] This is up from 102 billion objects as of March 2010,[8] 64 billion objects in August 2009,[9] 52 billion in March 2009,[10] 29 billion in October 2008,[5] 14 billion in January 2008, and 10 billion in October 2007.[11] S3 uses include web hosting, image hosting, and storage for backup systems. S3 comes with a 99.9% monthly uptime guarantee,[12] which equates to approximately 43 minutes of downtime per month.[13]
Details of S3's design are not made public by Amazon. According to Amazon, S3's design aims to provide scalability, high availability, and low latency at commodity costs.
S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.[14]
S3 stores arbitrary objects (computer files) up to 5 terabytes in size, each accompanied by up to 2 kilobytes of metadata. Objects are organized into buckets (each owned by an Amazon Web Services, or AWS, account) and identified within each bucket by a unique, user-assigned key. Amazon Machine Images (AMIs) modified in the Elastic Compute Cloud (EC2) can be exported to S3 as bundles.[15]
Buckets and objects can be created, listed, and retrieved using either a REST-style HTTP interface or a SOAP interface. Additionally, objects can be downloaded using the HTTP GET interface and the BitTorrent protocol.
Requests are authorized using an access control list associated with each bucket and object.
Bucket names and keys are chosen so that objects are addressable using HTTP URLs:
http://s3.amazonaws.com/bucket/key
http://bucket.s3.amazonaws.com/key
http://bucket/key
(where bucket is a DNS CNAME record pointing to bucket.s3.amazonaws.com). Because objects are accessible by unmodified HTTP clients, S3 can replace significant existing (static) web hosting infrastructure.[16] The Amazon AWS authentication mechanism allows the bucket owner to create an authenticated URL with time-bounded validity: someone can construct a URL that can be handed off to a third party, granting access for a period such as the next 30 minutes or the next 24 hours.
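A time-limited URL of this kind can be sketched in Python using the query-string form of the era's HMAC-SHA1 request signing; the bucket name, key, and credentials below are hypothetical placeholders, not real values:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def signed_s3_url(bucket, key, access_key_id, secret_key, expires_in=1800):
    """Build a time-limited, query-string-authenticated S3 URL
    (a sketch of the HMAC-SHA1 query-string signing scheme of the era)."""
    expires = int(time.time()) + expires_in
    # String to sign: HTTP verb, empty Content-MD5 and Content-Type,
    # the expiry timestamp, and the canonicalized resource path.
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return ("http://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, key, access_key_id, expires, quote(signature, safe="")))

# Hypothetical credentials: anyone holding the resulting URL can GET the
# object until the Expires timestamp passes.
url = signed_s3_url("example-bucket", "photos/cat.jpg",
                    "AKIDEXAMPLE", "secretEXAMPLE")
```

The server recomputes the same signature from the query parameters and the bucket owner's secret key, so the URL proves authorization without exposing the key itself.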
Every item in a bucket can also be served as a BitTorrent feed: the S3 store acts as a seed host for a torrent, and any BitTorrent client can retrieve the file. This drastically reduces the bandwidth cost of downloading popular objects. While BitTorrent does reduce bandwidth, AWS does not provide native bandwidth limiting, so users have no automated cost control; this can lead free-tier or small hobbyist users to amass unexpectedly large bills. AWS representatives previously stated that such a feature was on the design table from 2006 to 2010,[17] but have recently stated that the feature is no longer in development.[18]
A bucket can be configured to save HTTP log information to a sibling bucket; this can be used in later data mining operations. This feature is currently still in beta.
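As a sketch, enabling such logging amounts to applying an XML configuration to the source bucket's logging subresource; the target bucket and prefix names below are hypothetical:

```xml
<!-- Illustrative logging configuration; "my-log-bucket" and
     "access-logs/" are placeholder names -->
<BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <LoggingEnabled>
    <TargetBucket>my-log-bucket</TargetBucket>
    <TargetPrefix>access-logs/</TargetPrefix>
  </LoggingEnabled>
</BucketLoggingStatus>
```

Each access to the source bucket is then recorded as a log object under the given prefix in the sibling bucket, where it can be processed later.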
As of February 18, 2011, Amazon S3 provides options to host static websites with index document and error document support.[19] This support was added in response to user requests dating to at least 2006.[20] For example, suppose that Amazon S3 were configured with CNAME records to host http://subdomain.example.com/. Previously, a visitor to this URL would find only an XML-formatted list of objects rather than a general landing page (e.g., index.html) suitable for casual visitors. Now, however, websites hosted on S3 may designate a default page to display, as well as a page to display in the event of a partially invalid URL. However, the CNAME specification only allows a subdomain to be hosted this way, not a second-level domain: subdomain.example.com can be hosted, but not example.com. One may use an ANAME record pointing to the S3 server, but this method is not documented by Amazon.
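A minimal website configuration of this kind might look like the following XML fragment, applied to the bucket's website subresource; the document names index.html and error.html are illustrative choices, not requirements:

```xml
<!-- Illustrative static-website configuration for a bucket -->
<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <!-- Object served when a visitor requests the root or a "directory" URL -->
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <!-- Object served when the requested key does not exist -->
  <ErrorDocument>
    <Key>error.html</Key>
  </ErrorDocument>
</WebsiteConfiguration>
```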
Photo hosting service SmugMug has used S3 since April 2006. They experienced a number of initial outages and slowdowns,[21] but after one year they described it as being "considerably more reliable than our own internal storage" and claimed to have saved almost $1 million in storage costs.[22]
A Filesystem in Userspace (FUSE) implementation for Unix-like operating systems (Linux, etc.) lets EC2-hosted Xen images mount an S3 bucket as a file system. Because the semantics of the S3 file system are not those of a POSIX file system, the mounted file system may not behave entirely as expected.
Apache Hadoop file systems can be hosted on S3, since S3 satisfies Hadoop's file system requirements. As a result, Hadoop can be used to run MapReduce algorithms on EC2 servers, reading data from and writing results back to S3.
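As a sketch, Hadoop of this era could be pointed at S3 through its "native" S3 file system (the s3n scheme) in core-site.xml; the bucket name and credential values below are placeholders:

```xml
<!-- Illustrative core-site.xml fragment; "example-bucket" and the
     credential values are placeholders -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>s3n://example-bucket</value>
  </property>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>AKIDEXAMPLE</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>secretEXAMPLE</value>
  </property>
</configuration>
```

With this in place, MapReduce jobs can address input and output paths directly as s3n:// URLs instead of HDFS paths.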
Dropbox,[23] Zmanda, and Ubuntu One are some of the many online backup and synchronization services that use S3 as their storage and transfer facility.
Minecraft hosts game updates and player skins on the S3 servers.[24]
Tumblr, Formspring and Posterous images are hosted on the S3 servers.