Which instances are available in my region?

Thursday, March 10, 2016

Have you wondered what the best way is to quickly list which instances are available in an AWS region? When the t2.nano was added to the Sydney region this week, I thought exactly this.

The first place you would look is the pricing page, where you select your region. However, this only lists the latest instance types; you have to click through to another page to see the older generations. As a technical person you think, "Surely this can be done from the CLI?"

The first port of call would be the trusty AWS CLI, which I just love; it's so handy. However, there is no command which lets you extract the instance types. The closest you can get is to list the currently available reserved instances, for example:

aws ec2 describe-reserved-instances-offerings --query "ReservedInstancesOfferings[?AvailabilityZone=='ap-southeast-2a'] [InstanceType]" --output text --region "ap-southeast-2" | sort -u
The problem with this command is that there may be an instance type which does not have an RI on offer. When I tried this, t2.nano and hi1.4xlarge were not available as RIs in Sydney. So the CLI is probably not the best solution here.

The only other possibility is the relatively new pricing file. This was created to provide a programmatic interface to the pricing data, instead of the nasty scraping hacks people previously performed. The pricing file for EC2 lists all of the instances available and looks something like this:
{
  "formatVersion" : "v1.0",
  "disclaimer" : "This pricing list [...]",
  "offerCode" : "AmazonEC2",
  "version" : "20160126001708",
  "publicationDate" : "2016-01-26T00:17:08Z",
  "products" : {
    "DQ578CGN99KG6ECF" : {
      "sku" : "DQ578CGN99KG6ECF",
      "productFamily" : "Compute Instance",
      "attributes" : {
        "servicecode" : "AmazonEC2",
        "location" : "US East (N. Virginia)",
        "locationType" : "AWS Region",
        "instanceType" : "hs1.8xlarge",
        "currentGeneration" : "No",
        "instanceFamily" : "Storage optimized",
        "vcpu" : "17",
        "physicalProcessor" : "Intel Xeon E5-2650",
        "clockSpeed" : "2 GHz",
        "memory" : "117 GiB",
        "storage" : "24 x 2000",
        "networkPerformance" : "10 Gigabit",
        "processorArchitecture" : "64-bit",
        "tenancy" : "Shared",
        "operatingSystem" : "Windows",
        "licenseModel" : "License Included",
        "usagetype" : "BoxUsage:hs1.8xlarge",
        "operation" : "RunInstances:0002",
        "preInstalledSw" : "NA"
      }
    },
So with some jq magic to manipulate the JSON data we can extract only the instance flavors. Notice I am filtering for the region I am interested in.
curl -s  https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.json | jq -r '.products[].attributes | select(.location == "Asia Pacific (Sydney)" and .tenancy == "Shared") | .instanceType' | sort -u
c1.medium
c1.xlarge
c3.2xlarge
c3.4xlarge
c3.8xlarge
c3.large
c3.xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
c4.large
c4.xlarge
d2.2xlarge
d2.4xlarge
d2.8xlarge
d2.xlarge
g2.2xlarge
g2.8xlarge
hi1.4xlarge
hs1.8xlarge
i2.2xlarge
i2.4xlarge
i2.8xlarge
i2.xlarge
m1.large
m1.medium
m1.small
m1.xlarge
m2.2xlarge
m2.4xlarge
m2.xlarge
m3.2xlarge
m3.large
m3.medium
m3.xlarge
m4.10xlarge
m4.2xlarge
m4.4xlarge
m4.large
m4.xlarge
r3.2xlarge
r3.4xlarge
r3.8xlarge
r3.large
r3.xlarge
t1.micro
t2.large
t2.medium
t2.micro
t2.nano
t2.small
If you are just after the instance families, a little more manipulation will get that too.
curl -s  https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.json | jq -r '.products[].attributes | select(.location == "Asia Pacific (Sydney)" and .tenancy == "Shared") | .instanceType' | sort -u | cut -f1 -d. | sort -u
c1
c3
c4
d2
g2
hi1
hs1
i2
m1
m2
m3
m4
r3
t1
t2
Note that you have to use the text descriptors of the region names, not codes such as "ap-southeast-2". If you want to list all the region descriptors, you can do this:
curl -s  https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.json | jq -r '.products[].attributes.location | select(. != null)' | sort -u
Asia Pacific (Seoul)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
AWS GovCloud (US)
EU (Frankfurt)
EU (Ireland)
South America (Sao Paulo)
US East (N. Virginia)
US West (N. California)
US West (Oregon)
There you go, a nice and easy way to pull the instance types for a region. Well, it's easy if you cut and paste; it's a relatively long command.
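
If you find yourself running it often, you could wrap it up in a little shell function (a minimal sketch; the function name and the Sydney default are my own choices):

instance_types() {
  # List EC2 instance types for a region descriptor (defaults to Sydney).
  curl -s https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.json \
    | jq -r --arg loc "${1:-Asia Pacific (Sydney)}" \
        '.products[].attributes | select(.location == $loc and .tenancy == "Shared") | .instanceType' \
    | sort -u
}

Then instance_types "EU (Ireland)" gives you the list for any region.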

Enjoy.

Rodos

P.S. Thanks to the Solution Architects in the Sydney team at AWS who thought of using describe-reserved-instances-offerings in the CLI; it turns out it is not as fruitful as the pricing file though.

Killer interview question on success

Saturday, February 27, 2016

I do a lot of interviewing, close to 400 interviews over my three years working at Amazon. I am also in the process of becoming one of those mysterious "bar raisers", which you can read a description of in this WSJ article. Whilst I feel I still have a lot to learn about interviewing, I find it a fascinating topic.

Previously I wrote some advice on interviews, but figured I should write some more. Lifehacker.com.au likes to write articles on Killer Interview Questions, and there was one this month that I thought was a good one.

One good question to ask is “Who succeeds in this position?” or, to phrase it carefully, “How would you define success for this position?”
What a great question. You are probably thinking, "Great, with the answer to this question I can just tell them what they want to hear! For the rest of the interview, or for subsequent interviews, I will reinforce my alignment to these." If that's what you are thinking, you have missed a great insight.

What insight should you gain from this question? The insight you will gain is whether you want to work for this company. Often people forget that interviews are a two-way thing. The interviewers are trying to understand you, and whether you will be able to perform the role and be a good cultural fit for the company. Likewise, you are trying to figure out if this role and company are something that you are willing to commit a portion of your life to. The answer to this question is going to give you good insight into the role and how to be successful in it. If the characteristics expressed do not align with your personal goals and desires, or with how you want to work, then you really need to consider whether this is the right job for you.

For example, what if part of the answer was, "To be successful in this role you really need to be curious and have a deep desire to learn new things and experiment on your own with new technologies and concepts. The people who have done very well in this role are like sponges when it comes to technology." That might sound fantastic to you and describe just how you like to operate in your personal and work life. On the other hand it might make you really uncomfortable. Maybe you just finished your MBA after previously doing years of university and you really want to find opportunities to practise your learning rather than embarking on something that is going to require you to learn and develop lots of new knowledge. Maybe you are the type of person that likes formal training and structure and experimenting on your own is just not your thing.

Another example: what if part of the answer was, "Success is easily defined here: if you don't hit your monthly quota, you get zero commission. If you miss three in a row, you will be moved onto a performance plan. We are a results driven company and there is no easier way to measure success than hitting your quota." Does that sound fantastic to you? Is that an environment where you think you will thrive? Some people might love that environment; they are results driven and like clear measurements. They have a track record of results, so they know they can achieve the task, and they like it when everyone around them is held to the same bar. Of course many people, including me, would not find such an environment a cultural fit. This answer would give me some good insight to determine if this was a role I would be good at and enjoy.

Remember the questions you ask in the interview are important. As advised in the previously mentioned article, have some good questions prepared. But it's important to know why you are asking the question and what you are going to do with the answer. You want to gain insight into the role and the company. Some questions are better at achieving this than others.

Happy interviewing!

Rodos

P.S. Shameless plug. Remember Amazon is always hiring. See http://amazon.jobs/ for open roles in Australia. If you apply for a role in Solution Architecture you may end up having an interview with me! Wouldn't that be fun!

Using an architectural review for improving site reliability

Tuesday, June 16, 2015

I stumbled across another AWS blogger, Eric Hammond, who blogs at https://alestic.com.

One of the recent things which Eric has done is his Unreliable Town Clock (UTC), which you can use to schedule the triggering of AWS Lambda functions. It's a cool idea.

Eric certainly knows what he is doing; he not only launched a service, he sat down and ensured "this service is as reliable as I can reasonably make it". No wonder he is an AWS Community Hero!

Of course reliability is only one of the elements of an architectural review of an AWS environment. You should cover off such things as Security, Availability, Scalability and Cost Efficiency. Eric has covered some of this. Check out what he has done to ensure UTC is always up and running; there are some great tips in there.

What if you wanted to do an architectural review of your own AWS environment? How would you go about that? What questions would you ask? What things require focus? Maybe post in the comments. Saying "I will call my friendly AWS Solution Architect" is cheating, although it's a great idea.

Two items that will really help you get started with a review are these whitepapers.


What would you do beyond this? Here are some very small things I would investigate.

  • Auditing. Are CloudTrail, Config and VPC Flow Logs all turned on? It's hard to do debugging or forensics on something in the past when you were not capturing the data. Is all the activity from the instance logged to CloudWatch Logs?
  • What dependencies are there that might stop a failed instance being redeployed? That autoscaling group may relaunch an instance if it fails. What AMI is it using? Is it your own AMI sitting in the account, or are you launching from a public one? What if the public one goes away because a new one is released? How is the code deployed into that AMI? Is it baked in, coming from S3, or does it need to download software from GitHub, and what if it can't?
  • Monitoring. There are four metrics in CloudWatch for SNS. Are there any alarms that could be created to provide an alert on failure? What if the number of published messages dropped below a certain rate? An alarm like that could replace what Eric is using Cronitor.io for. You can even create those alarms with CloudFormation! See the sketch after this list.
  • Turning on MFA is always a great idea.
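
As an example of the monitoring idea, here is roughly what such an alarm could look like from the CLI (a sketch only; the topic name, threshold and ARNs are made up):

# Hypothetical alarm: alert if fewer than 1 message is published
# to the topic over 15 minutes. All names and ARNs are placeholders.
aws cloudwatch put-metric-alarm \
  --alarm-name utc-published-low \
  --namespace AWS/SNS \
  --metric-name NumberOfMessagesPublished \
  --dimensions Name=TopicName,Value=unreliable-town-clock-topic \
  --statistic Sum --period 900 --evaluation-periods 1 \
  --threshold 1 --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:alert-me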

This is the simplest of examples. For your typical system there are hundreds of review items to assess. But you get the idea.

Doing an architectural review is something you should do periodically in your AWS environment. As AWS keeps releasing new features, there are frequently new things you can do to improve your setup.

If only everyone was like Eric! Also, anyone who builds everything in CloudFormation is a winner in my book!

Rodos

Shortcuts in the AWS Console

Monday, June 15, 2015

Here is something that I did not know you could do for ages: shortcuts inside the AWS console that appear on the top bar.

See this animated GIF for how to add them and then use them. I think the Edit button used to be a lot less obvious.


It's very handy to have the links for your most frequently accessed services always there.

Enjoy.

Rodos

A quick first look at AWS VPC Flow Logs

Thursday, June 11, 2015

I woke up this morning to yet another new AWS feature, VPC Flow Logs, as described by Jeff Barr.

Jeff did a great job of providing an overview so make sure you read that before continuing.

It's really interesting to think what you can do with network flow logs. A lot of Enterprise customers ask for this so they can perform various security activities. Many of those security activities are really not needed in the new world of Cloud; however, there are some valid ones that you may want to consider. There are also some good reasons to have flows available so you can perform some troubleshooting of your Security Groups or NACLs.

I suggest people turn them on, capture the data and set a retention period on the destination CloudWatch Logs group, say 3 days up to 6 months. The data is then there if you need it, just like CloudTrail data. It's too late after the fact!
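
Turning them on and setting the retention only takes a couple of CLI calls. A minimal sketch (the VPC ID, role ARN and log group name are placeholders):

# Create flow logs for a VPC, delivering to a CloudWatch Logs group.
aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-12345678 \
  --traffic-type ALL --log-group-name my-vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role

# Expire the data after 6 months (180 days).
aws logs put-retention-policy --log-group-name my-vpc-flow-logs \
  --retention-in-days 180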

A great little use case would be some general visualization of network flows on a dashboard. It's not real time, but it's going to give you a general indication. You could analyze the amount of traffic by category, such as incoming, outgoing, cross-AZ and within-AZ (by reverse engineering the subnet ranges). You could even track it down to traffic to AWS regional services such as S3. You may want to track these patterns over time, looking for trends. You could also look at top talker hosts, internally or externally. I suspect it will be of interest to people at first, and then it will be a colorful screen to show visitors. After all, AWS handles all that heavy lifting of operating and scaling the networking.

Many will be interested in monitoring rejected traffic and, if they see a lot of it starting, wondering if there is something else going on they should look at or take precautions against. Generally you probably don't care; nothing to see here, it's just dropped traffic.

It will be great to see what AWS Partners do in the visualization space; I sense some eye candy coming.

I quickly turned VPC Flow Logs on in my account this morning.

Here is my CloudWatch console showing the Log Groups.



Notice I have set the expiry at 6 months. You can see below that when I look at my Log Group, each of my Elastic Network Interfaces (ENIs) is shown.


I have 4 ENIs. Some of those are for my WorkSpaces instances, which is cool.

If I look at the instance I launched this morning by clicking on eni-981db9fc-all, here is the data displayed.


Notice how I have applied a filter. Nice, hey? Here is what that filter looks like in that text box.

[version, accountid, interfaceid, srcaddr, dstaddr, srcport, dstport=23, protocol, packets, bytes, start, end, action=REJECT, logstatus]

Notice that by putting the field names, separated by commas, between brackets you can parse out the text. This is a general feature of CloudWatch Logs. The field list is in the VPC Flow Logs documentation.

There are lots of filters you can apply; here you can see I am just checking for matching values of a destination port of 23 (telnet) and where the action was to reject the packets. You can see all of those machines which have attempted to telnet into my little server. That's why it has a correctly configured Security Group!

There is documentation in CloudWatch for the filter pattern syntax. It supports both string and numeric conditional fields. For string fields, you can use = or != operators with an asterisk (*). For numeric fields, you can use the >, <, >=, <=, =, and != operators.

If someone asks you which hosts are communicating with the database at the moment, you can quickly jump into the console and answer it by looking at traffic on the right port.
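
For example, a filter like this would show accepted traffic to a MySQL database (port 3306 is my assumption here; substitute whatever port your database actually listens on):

[version, accountid, interfaceid, srcaddr, dstaddr, srcport, dstport=3306, protocol, packets, bytes, start, end, action=ACCEPT, logstatus]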

The other nice thing you can do is create a metric on this filter to pull out the data. Here is one that creates a metric on the number of bytes accepted as SSH traffic into the ENI.
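
I did this in the console, but from the CLI it would look roughly like this (a sketch; the log group, filter and metric names are my own):

# Hypothetical metric filter: extract the bytes field for accepted
# SSH (port 22) traffic as a custom metric. Names are placeholders.
# Note the single quotes so the shell does not expand $bytes.
aws logs put-metric-filter --log-group-name my-vpc-flow-logs \
  --filter-name ssh-bytes-accepted \
  --filter-pattern '[version, accountid, interfaceid, srcaddr, dstaddr, srcport, dstport=22, protocol, packets, bytes, start, end, action=ACCEPT, logstatus]' \
  --metric-transformations 'metricName=SSHBytesAccepted,metricNamespace=FlowLogs,metricValue=$bytes'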

I created a few of these for my machine; here is the metrics display after I pushed some data its way. I am using the sum function to get the sum of bytes.


During this time period there were a few rejected telnet sessions, some SSH traffic and lots of general traffic. If you can write a filter on it, you can graph it.

Of course this only gets you so far. You have to know the ENI etc. 

You will probably want to extract all of the data into something easier. If you want to roll your own, a good way would be to create a Subscription on the whole Log Group (see http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/Subscriptions.html) and push all the data to a Kinesis stream (it will handle the scale). How do you get data out of Kinesis? Well, you use Lambda functions of course; see http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-kinesis-events-adminuser.html. Your Lambda function could dump it to S3, and from there you load it into Redshift (which can be automated too) or start writing some EMR jobs. Now that's the power of AWS.
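
Setting up the subscription itself is a single call. A sketch (the stream and role ARNs are placeholders, and the Kinesis stream and IAM role must already exist):

# Subscribe the whole Log Group to a Kinesis stream; an empty
# filter pattern matches every event.
aws logs put-subscription-filter --log-group-name my-vpc-flow-logs \
  --filter-name all-flows-to-kinesis \
  --filter-pattern "" \
  --destination-arn arn:aws:kinesis:us-east-1:123456789012:stream/flow-logs-stream \
  --role-arn arn:aws:iam::123456789012:role/cwl-to-kinesis-role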

Hope that little bit of a first look helps you understand a bit more about VPC Flow Logs. I am really interested to see what people are going to do with it. The main uses will be those occasional operations or forensic events.

Enjoy.

Rodos

P.S. Remember, I might work for AWS but these posts are my own ramblings late at night. It's the geek speaking.
