To store data efficiently while maintaining high availability, you can also select from several storage classes in S3. The following storage classes are available (a short example of choosing a class at upload time follows the list):
S3 Standard: Provides general-purpose online storage with 99.99 percent availability and 99.999999999 percent durability (aka “11 nines”).
S3 Infrequent Access: Provides the same performance as S3 Standard but is up to 40 percent cheaper, with a 99.9 percent availability SLA and the same “11 nines” of durability.
S3 Intelligent Tiering: Automatically determines whether to keep an object in S3 Standard or S3 Infrequent Access based on the object’s access pattern. Intelligent tiering optimizes the placement of each object and thus automatically reduces the cost of storing large amounts of data on S3.
S3 One Zone-Infrequent Access: Provides a cheaper tier that stores data in only one availability zone and delivers an additional 25 percent savings over S3 Infrequent Access. It has the same durability with 99.5 percent availability.
S3 Glacier: Costs less than one-fifth of the price of S3 Standard and is designed for archiving and long-term storage. Restore times are between 1 minute and 7 hours, depending on the retrieval option you choose.
S3 Glacier Deep Archive: Costs about one-quarter of the price of Glacier and is the cheapest storage solution at about $1 per terabyte per month. It is intended for very long-term storage and has the longest restore times, of up to 12 hours.
S3 on Outposts: Delivers S3 object storage on premises through an AWS Outposts deployment of the S3 service.
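The storage class is selected per object at upload time. As a minimal sketch using the boto3 SDK (the bucket and key names here are placeholders, not part of the original example), an object could be written directly into the Infrequent Access class like this:

    import boto3

    s3 = boto3.client("s3")

    # Upload an object directly into the Infrequent Access storage class.
    # Bucket, key, and file names are hypothetical and for illustration only.
    with open("annual-report.pdf", "rb") as data:
        s3.put_object(
            Bucket="my-example-bucket",
            Key="reports/annual-report.pdf",
            Body=data,
            StorageClass="STANDARD_IA",  # e.g. GLACIER, DEEP_ARCHIVE, INTELLIGENT_TIERING
        )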
S3 also supports lifecycle management and expiration of objects. Lifecycle rules can move objects between storage tiers based on criteria that you define (a configuration sketch follows the list). For example:
Any object older than 60 days is migrated to S3 Infrequent Access (S3 IA).
After 120 days the object is moved to Glacier.
After one year the object is deleted.
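A minimal sketch of this rule set, assuming the boto3 SDK and the same hypothetical bucket name, might look like the following:

    import boto3

    s3 = boto3.client("s3")

    # Transition objects to Infrequent Access after 60 days, to Glacier after
    # 120 days, and delete them after one year. The empty prefix applies the
    # rule to every object in the bucket.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-and-expire",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [
                        {"Days": 60, "StorageClass": "STANDARD_IA"},
                        {"Days": 120, "StorageClass": "GLACIER"},
                    ],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )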
The bucket also acts as a security endpoint for the data because you can apply a bucket policy to it and granularly control access to the bucket and the objects within it. To control access, you can also use access control lists (ACLs) at the bucket level or for each object. However, ACLs only allow coarse-grained permissions such as read or read/write to be assigned to the bucket or object. This is why a bucket policy is the preferred method of managing security in S3.
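For illustration, the following sketch grants a coarse-grained, canned ACL on a single object with boto3; the bucket and key names are hypothetical, and a public-read grant only takes effect if public access is not blocked on the bucket (described next):

    import boto3

    s3 = boto3.client("s3")

    # Apply a canned ACL to one object. ACLs can only express broad
    # permissions such as read or read/write; they cannot restrict
    # individual API actions the way a bucket policy can.
    s3.put_object_acl(
        Bucket="my-example-bucket",
        Key="reports/annual-report.pdf",
        ACL="public-read",
    )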
All new buckets are created with a Block Public Access setting that prevents any public access from being granted via policies or ACLs. To enable public access, you first need to remove the Block Public Access setting on the bucket.
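As a sketch, again assuming boto3 and a hypothetical bucket name, the setting can be inspected and removed like this:

    import boto3

    s3 = boto3.client("s3")

    # Inspect the bucket's current Block Public Access configuration.
    print(s3.get_public_access_block(Bucket="my-example-bucket"))

    # Remove the block entirely so that public policies or ACLs can take
    # effect. In practice you would usually relax only the specific flags
    # you need via put_public_access_block rather than deleting the block.
    s3.delete_public_access_block(Bucket="my-example-bucket")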
A bucket policy is a JSON-formatted document with the same structure as an inline IAM policy. It allows you to granularly control each API action against the bucket or its objects. The ability to control each API call means that you can, for example, allow users to read objects (the GetObject API) but not list them (the ListObjects API). Similarly, you can create a write-only policy, and so on. The owner of a bucket (the user who created it) always retains full control. Figure 5.2 illustrates the policy evaluation logic in AWS.
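The following sketch shows what such a read-only policy might look like when applied with boto3; the bucket name, account ID, and user name are placeholders:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Allow a specific IAM user to download objects (s3:GetObject) while
    # granting no permission to list the bucket's contents, which would
    # require s3:ListBucket on the bucket ARN.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowGetButNotList",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-user"},
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-example-bucket/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))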