Amazon S3 Basics

Ben Dunn
3 min read · Oct 26, 2020

If you’re like me, you might have noticed Amazon Web Services coming up on just about every job posting out there. That’s because AWS is arguably the dominant force in cloud computing. With enticing pay-as-you-go pricing, unrivaled data durability, and near-global availability, AWS is your new best friend when it comes to data storage. If you’re looking for a point of entry, look no further than S3, AWS’s Simple Storage Service.

S3 is an object-based storage service: you can store and retrieve any amount of data from just about anywhere. Not only can you count on 99.999999999% (eleven nines) durability, but 99.99% availability as well. S3 organizes everything into objects and buckets. An object is any piece of data up to 5 TB in size, identified by a unique key; a bucket is the receptacle that holds your objects.
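
To make that concrete, here’s a minimal sketch using boto3, the AWS SDK for Python. The bucket name and file paths are placeholders I made up for illustration, not anything from a real project.

```python
# Minimal S3 sketch with boto3; bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

# Create a bucket (names must be globally unique; outside us-east-1 you
# also need a CreateBucketConfiguration with a LocationConstraint).
s3.create_bucket(Bucket="my-example-bucket")

# Upload a local file as an object; the key is the object's unique name
# within the bucket.
s3.upload_file("contacts.xlsx", "my-example-bucket", "project/contacts.xlsx")

# Download it again later.
s3.download_file("my-example-bucket", "project/contacts.xlsx", "contacts-copy.xlsx")
```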

You can choose a storage class based on how quickly you need access to your data, and the pricing reflects that need. Storage classes range from the instantly available S3 Standard all the way to the ultra-cheap cold-storage archives of S3 Glacier. This can be extremely cost-effective for data that becomes less needed over time. For example, say I need a spreadsheet of contact information for an ongoing project. I know I’ll be done with the project in 30 days, so initially I keep the spreadsheet in a Standard bucket where I can get it immediately on a daily basis, but using lifecycle management transition actions I have it automatically moved to the cheaper S3 Glacier after 30 days. I might want to see the spreadsheet again later, but I only need instant access during the work phase of the project; after that, it’s not an issue if retrieval takes a while. It’s the modern version of moving a box of last year’s files from my office to the sub-basement. Using lifecycle management I can also set expiration actions to automatically delete unneeded data after a set period of time.

S3 also offers versioning. You can keep every version of an object and roll back to, or restore, an earlier one, much like you would with Git.
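
Here’s roughly what that lifecycle setup and versioning look like in boto3. Again, the bucket name, prefix, rule ID, and retention periods are just assumptions for the sake of the example.

```python
# Sketch of the lifecycle and versioning settings described above.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder name

# Transition objects under the "project/" prefix to Glacier after 30 days,
# and expire (delete) them entirely after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "project/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)

# Turn on versioning so earlier versions of an object can be restored.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
```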

You may be wondering how S3 can boast such high availability with the whole wide world out there. Not only does AWS have a massive global presence, but S3 has features built for exactly this problem. Cross-Region Replication automatically copies your data between buckets in different regions. That means I could work in tandem from Denver with a partner in Mumbai, and instead of my Mumbai partner dealing with long latency, we can stay on the same page despite the distance. It gets better: with S3 Transfer Acceleration you can make fast, secure long-range transfers of data. By routing transfers through Amazon CloudFront’s vast network of global edge locations, you can get your client the data they need from virtually anywhere.
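
If you’re curious what that setup looks like, here’s a rough boto3 sketch. The bucket names, IAM role ARN, and account ID are placeholders, and note that Cross-Region Replication also requires versioning to be enabled on both the source and destination buckets.

```python
# Rough sketch of Cross-Region Replication and Transfer Acceleration.
# Bucket names and the IAM role ARN below are made up for illustration.
import boto3

s3 = boto3.client("s3")

# Replicate new objects from a source bucket to a bucket in another region.
s3.put_bucket_replication(
    Bucket="denver-project-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",
        "Rules": [
            {
                "ID": "replicate-to-mumbai",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::mumbai-project-bucket"},
            }
        ],
    },
)

# Enable Transfer Acceleration; uploads then go through the accelerate endpoint.
s3.put_bucket_accelerate_configuration(
    Bucket="denver-project-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)
```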

AWS is a household name in the developer world, and hopefully this has helped you take your first steps toward understanding its flagship storage service, S3. Huge companies whose products you likely use daily, like Netflix, Reddit, and Mojang, all use S3 in some capacity. So get out there and store some data!
