Optimizing Costs: AWS Object Storage Tiers and Lifecycle Rules

I’m currently using the AWS free tier, which allows me 5 GB of standard storage for the next 12 months. I’m not close to my limit, but at my current rate of usage I’ll reach it in the latter part of 2016. To reduce my costs, I’ll move my hosted mp3 files to a lower tier of storage (for those who haven’t been keeping track, I’m in the process of porting my old blog to AWS).

AWS currently offers four tiers of object storage (really three, since S3 – Standard and S3 – Reduced Redundancy Storage are largely the same):

  • S3 – Standard
  • S3 – Reduced Redundancy Storage
  • S3 – Standard Infrequent Access
  • Glacier

For my purposes – hosting a low-traffic, mostly static website – S3 – IA is a perfectly suitable choice. Latency and throughput are the same as Standard, and it comes with a lower per-GB storage price (plus a per-GB retrieval fee). The trade-off: lower availability (99.9% – still more than good enough for a personal blog).

I’m keeping my files on Standard for now, but will implement S3 Object Lifecycle rules in the next year (before I have to start paying for my storage). It’s pretty simple to implement – you just use the console to create a rule (designate a folder and a duration before migrating objects to S3 – IA).
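As a sketch, here’s what that console rule looks like expressed as the lifecycle configuration the S3 API accepts. The bucket name, folder prefix, and day count below are just examples, not my actual setup:

```python
# Build the lifecycle configuration a console rule corresponds to:
# transition objects under a prefix to Standard-IA after N days.
def lifecycle_to_ia(prefix, days):
    """Rule that moves objects under `prefix` to STANDARD_IA after `days` days."""
    return {
        "Rules": [{
            "ID": "move-%s-to-ia" % prefix.strip("/"),
            "Prefix": prefix,          # lifecycle rules filter by prefix
            "Status": "Enabled",
            "Transitions": [{"Days": days,
                             "StorageClass": "STANDARD_IA"}],
        }]
    }

config = lifecycle_to_ia("media/", 30)
# With boto3 (not run here), the rule is applied to the bucket like so:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-blog-media",  # hypothetical bucket name
#     LifecycleConfiguration=config)
```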


I don’t like that you can’t select files by suffix (mp3) in the Lifecycle rules GUI – but a simple script can be written to move only the mp3 files. And yes, I could put the files on S3 – IA when I first create them, but I think 30–60 days on Standard storage makes sense before moving the objects to a lower tier.
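Since the GUI only filters by prefix, that script boils down to two steps: pick out the keys ending in .mp3, then re-copy each object onto itself with a new storage class. A sketch (the bucket name is hypothetical; the boto3 calls are left as comments):

```python
# Suffix-based selection that the Lifecycle GUI doesn't offer.
def mp3_keys(keys):
    """Keep only the keys that end in .mp3 (case-insensitive)."""
    return [k for k in keys if k.lower().endswith(".mp3")]

# With boto3 (not run here), each matching object is copied in place
# with a new storage class:
# s3 = boto3.client("s3")
# for key in mp3_keys(all_keys):
#     s3.copy_object(Bucket="my-blog-media", Key=key,
#                    CopySource={"Bucket": "my-blog-media", "Key": key},
#                    StorageClass="STANDARD_IA",
#                    MetadataDirective="COPY")
```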

Final note – S3 – RRS is a valid option, too. It costs less than Standard (2.4 cents/GB versus 3 cents/GB [US East pricing]) and comes with 99.99% availability. My only problem is that it is less durable than Standard and IA… so for my purposes, I’d rather sacrifice availability than durability for a lower cost.
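For scale, a back-of-the-envelope monthly cost at the US East prices quoted above (per GB-month; request charges and IA retrieval fees ignored):

```python
# Rough monthly storage cost: size in GB times the per-GB-month price.
def monthly_cost(gb, price_per_gb_month):
    return round(gb * price_per_gb_month, 2)

# For the full 5 GB free-tier allotment:
#   Standard: monthly_cost(5, 0.030)
#   RRS:      monthly_cost(5, 0.024)
```

At blog scale the absolute dollars are tiny either way – the exercise is more about learning the tiers than the savings.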

Using Amazon Lambda

As I mentioned last week, I’m in the process of using Lambda with Elastic Transcoder to automate conversion of m4a files into mp3 files. I spent the weekend writing a few scripts in Node.js; here’s what I’ve learned so far:

  1. A Lambda function is a pretty simple thing to write.
    1. There are tons of examples to reference
    2. You can write your functions in three languages – JavaScript (Node.js), Java, and Python
  2. Lambda plays nicely with most AWS services
    1. I’m interacting with CloudWatch, SNS and S3… no problems, except for…
  3. IAM/Security can trip you up if you’re not careful
    1. 25% of my debugging time was spent figuring out which permissions I needed to add (without giving “wide-open” permissions to my IAM role).
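To give a feel for how little code a Lambda function needs, here’s a minimal sketch in Python (one of the three supported languages). It pulls the bucket and key out of the S3 upload event; the Elastic Transcoder call is left as a comment, and PIPELINE_ID and the mp3 preset id are placeholders you’d look up in the console:

```python
# Minimal Lambda handler sketch: react to an S3 ObjectCreated event
# and (in comments) hand the uploaded file to Elastic Transcoder.
def extract_upload(event):
    """Pull the bucket name and object key out of an S3 event record."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def handler(event, context):
    bucket, key = extract_upload(event)
    # With boto3 (not run here), the transcode job would be submitted as:
    # boto3.client("elastictranscoder").create_job(
    #     PipelineId=PIPELINE_ID,   # placeholder: pipeline created in console
    #     Input={"Key": key},
    #     Outputs=[{"Key": key.rsplit(".", 1)[0] + ".mp3",
    #               "PresetId": MP3_PRESET_ID}])  # placeholder: mp3 system preset
    return {"bucket": bucket, "key": key}
```

The IAM role behind this handler only needs read access to the source bucket and permission to create transcoder jobs – which is exactly the kind of scoping that ate a quarter of my debugging time.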


Using Amazon Elastic Transcoder

I’m a big fan of music and I manage my 5,000 songs using iTunes. iTunes has its share of problems, but overall it’s a pretty good media management application – especially if you’ve bought into the Apple ecosystem (which I have).

My problem is that most of my files are in the m4a format. That’s fine for my personal use, but it’s a problem when I want to share a file with someone else (who may not be part of the Apple ecosystem). iTunes allows you to convert a music file into mp3 format, but you have to remember to remove the duplicate file. Again, this really isn’t a problem… but there’s got to be a better way to do batch conversions of files.

This became a bigger issue when I started re-publishing old posts to my blog. Many of those posts have links to since-expired media files – which means I need to replace the links with valid files. I’m using the “WP Offload S3” plugin – which uploads my media to an S3 bucket. I’m also using the “zbPlayer” plugin to make my files playable within the post. Problem is: zbPlayer only works with mp3 files.
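A batch conversion mostly comes down to mapping each .m4a key to the .mp3 key the transcoder should write (and skipping everything else). A sketch of that naming step – the function name is mine, not part of any plugin:

```python
# Map an uploaded .m4a object key to its target .mp3 key.
def mp3_key_for(key):
    """Return the .mp3 key for an .m4a key, or None for other files."""
    base, _, ext = key.rpartition(".")
    if ext.lower() != "m4a":
        return None  # not an m4a file; nothing to transcode
    return base + ".mp3"
```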


Getting Started with my Blog on AWS

I work as a technology consultant, and it’s been some time since I’ve been hands-on (these days, I typically architect solutions on paper and then work with technology architects, designers, and developers to implement the solution). This blog will give me the opportunity to get my hands dirty in a live environment. Along the way, I’ll document what I’ve done, so others can follow (or use it for their own purposes).

Step 1: Take AWS training. The best place to get instant hands-on experience is aws.qwiklabs.com. The website provides everything you need to learn about AWS and to build working examples.

Step 2: Sign up for a free account. And yes, free means free, for the most part (more on that later).

Step 3: Build a LAMP server. The instructions here (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html) are easy peasy.

Step 4: Install WordPress. Again, super simple instructions (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hosting-wordpress.html).

Hello world!

Welcome to my blog. It’s still a work in progress, and its purpose will change over time… but for now, it serves the following purposes:

  1. Exercising the “right hemisphere” of my brain, as I share my thoughts on music, technology, books, hobbies, etc….
  2. Gives me a place to rehost posts from my old blog
  3. Helps me to grow my experience with AWS and other technologies that I’m becoming familiar with. Today, I’m hosting my own WordPress instance on a LAMP server installed on an AWS EC2 micro instance. I’m using S3 to host the images and media. Over time, I hope to use AWS (or scripting in PHP) to
    1. write a program (or use AWS media technologies or Lambda) to convert m4a files to mp3 (it’s easier to share my music in mp3 format)
    2. change the WP-S3 plugin to use S3 Infrequent Access storage (no need for the redundancy that Standard provides)
    3. convert the DB from MySQL to Amazon DynamoDB (or another NoSQL db).

So, thanks for checking out my blog and feel free to leave a comment!