Optimizing Costs: AWS Object Storage Tiers and Lifecycle Rules

I’m currently using the AWS free tier, so I’m allowed 5 GB of standard storage for the next 12 months. I’m not close to my limit, but at my rate of usage, I’ll reach it in the latter part of 2016. To reduce my costs, I’ll move my hosted mp3 files to a lower tier of storage (for those who haven’t been keeping track, I’m in the process of porting my old blog to AWS).

Currently, AWS offers four different tiers of object storage (really three, since S3 – Standard and S3 – Reduced Redundancy Storage are kind of the same):

  • S3 – Standard
  • S3 – Reduced Redundancy Storage
  • S3 – Standard Infrequent Access
  • Glacier

For my purposes – hosting a low-traffic, mostly static website – S3 – IA is a perfectly suitable choice. Latency and throughput are the same as Standard, but it comes with a lower per-GB storage price and a per-GB retrieval fee. The trade-off: lower availability (99.9% – still more than good enough for a personal blog).

I’m keeping my files on Standard for now, but will implement S3 Object Lifecycle rules in the next year (before I have to start paying for my storage). It’s pretty simple to implement – you just use the console to create a rule that designates a folder and a duration before objects migrate to S3 – IA.

[Screenshot: S3 bucket properties]
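
For anyone who’d rather script it than click through the console, this is roughly what the equivalent rule looks like via the Node.js SDK – just a sketch, with a placeholder bucket name and prefix:

```javascript
// Sketch: transition objects under a prefix to Standard – IA after 30 days.
// 'my-blog-bucket' and 'audio/' are placeholders.
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' });

var params = {
  Bucket: 'my-blog-bucket',
  LifecycleConfiguration: {
    Rules: [{
      ID: 'audio-to-ia',
      Prefix: 'audio/',          // the "folder" the rule applies to
      Status: 'Enabled',
      Transitions: [{ Days: 30, StorageClass: 'STANDARD_IA' }]
    }]
  }
};

s3.putBucketLifecycleConfiguration(params, function (err) {
  if (err) console.error(err);
  else console.log('Lifecycle rule created');
});
```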

I don’t like that you can’t specify files by suffix (mp3) in the Lifecycle rules GUI – but I’m sure a simple script can be written and run to move only the mp3 files (a rough sketch follows). And yes, I could put the files on S3 – IA when I first create them, but I think 30-60 days on Standard storage makes sense before moving the objects to a lower storage tier.
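
Something like this would do it – list the objects under a prefix, keep only the .mp3 keys, and copy each one onto itself with the Standard – IA storage class. Bucket and prefix are placeholders, and it ignores paginated listings for brevity:

```javascript
// Sketch: move only .mp3 objects to Standard – IA by copying each object
// onto itself with a new storage class. Bucket/prefix are placeholders.
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' });

var BUCKET = 'my-blog-bucket';

s3.listObjects({ Bucket: BUCKET, Prefix: 'audio/' }, function (err, data) {
  if (err) return console.error(err);

  data.Contents
    .filter(function (obj) { return /\.mp3$/i.test(obj.Key); })
    .forEach(function (obj) {
      s3.copyObject({
        Bucket: BUCKET,
        Key: obj.Key,
        CopySource: BUCKET + '/' + obj.Key,
        StorageClass: 'STANDARD_IA',
        MetadataDirective: 'COPY'
      }, function (copyErr) {
        if (copyErr) console.error(obj.Key, copyErr);
        else console.log('Moved to IA:', obj.Key);
      });
    });
});
```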

Final note – S3 – RRS is a valid option, too. It costs less than Standard (2.4 cents/GB versus 3 cents/GB [US East pricing]) and comes with 99.99% availability. My only problem is that it’s less durable than Standard and IA… so for my purposes, I’m OK sacrificing availability (rather than durability) for a lower cost.

Using Amazon Lambda

As I mentioned last week, I’m in the process of using Lambda with Elastic Transcoder to automate conversion of m4a files into mp3 files. I spent the weekend writing a few scripts in node.js; here’s what I’ve learned so far:

  1. A Lambda function is a pretty simple thing to write (see the sketch after this list).
    1. There are tons of examples to reference
    2. You can write your functions in 3 languages – JavaScript (node.js), Java, and Python
  2. Lambda plays nicely with most AWS services
    1. I’m interacting with CloudWatch, SNS and S3… no problems, except for…
  3. IAM/Security can trip you up if you’re not careful
    1. 25% of my debugging time was spent figuring out what permissions I needed to add (without giving “wide-open” permissions to my IAM role).
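
To illustrate point 1, here’s a stripped-down sketch of the kind of function involved – an S3-triggered handler that kicks off an Elastic Transcoder job for each new m4a. This isn’t my exact script; the pipeline ID and preset ID are placeholders you’d look up in the Elastic Transcoder console:

```javascript
// Sketch: S3 put event -> create an Elastic Transcoder job that outputs an mp3.
// PIPELINE_ID and MP3_PRESET_ID are placeholders.
var AWS = require('aws-sdk');
var transcoder = new AWS.ElasticTranscoder({ region: 'us-east-1' });

var PIPELINE_ID = '<your-pipeline-id>';
var MP3_PRESET_ID = '<mp3-system-preset-id>';

exports.handler = function (event, context) {
  // One record per uploaded object; keys arrive URL-encoded
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  if (!/\.m4a$/i.test(key)) {
    return context.succeed('Not an m4a, skipping: ' + key);
  }

  transcoder.createJob({
    PipelineId: PIPELINE_ID,
    Input: { Key: key },
    Outputs: [{ Key: key.replace(/\.m4a$/i, '.mp3'), PresetId: MP3_PRESET_ID }]
  }, function (err, data) {
    if (err) return context.fail(err);
    context.succeed('Started job ' + data.Job.Id + ' for ' + key);
  });
};
```

On point 3, the execution role for a function like this needs roughly the CloudWatch Logs actions (logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents) plus elastictranscoder:CreateJob – and s3:GetObject or sns:Publish only if the function touches those services directly – scoped to specific resources rather than “*”.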
