Updating Spending Analytics
The spending analytics in Ylastic are currently built by downloading and processing the raw CSV data from the AWS Usage activity page. It works quite well, but it is a rather tedious process. You can use IAM user credentials to download the data, but we want to reduce this to something even simpler. We are experimenting with the programmatic billing access that AWS recently added, which lets each AWS account request that billing CSV files be dumped into an S3 bucket of your choice. This makes it a lot easier to download and process the data, and to keep up with the changes that AWS makes to pricing. It also makes it a breeze to add new services. We are going to roll this out to replace the current implementation within a few weeks. Here are a few sample screenshots of a page built from this data.
AWS provides detailed documentation on how to enable programmatic billing access for each AWS account here. We will have more updates on the blog soon with all of the charts built from this data.
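For the curious, here is roughly what consuming those billing files boils down to. This is a minimal sketch using today's boto3 SDK; the bucket name, file key, and CSV column names are illustrative assumptions, not our actual implementation:

```python
# Sketch: pull a monthly billing CSV that AWS drops into your designated
# S3 bucket, and total the cost per service to drive a spending chart.
import csv
import io

import boto3  # modern AWS SDK, used here for illustration

s3 = boto3.client("s3")

BUCKET = "my-billing-bucket"                      # hypothetical bucket name
KEY = "123456789012-aws-billing-csv-2013-01.csv"  # account id + billing period

body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read().decode("utf-8")
rows = csv.DictReader(io.StringIO(body))

# Column names follow the monthly billing report as we recall it;
# treat them as assumptions and adjust for your report type.
totals = {}
for row in rows:
    service = row.get("ProductCode") or "(other)"
    cost = float(row.get("TotalCost") or 0)
    totals[service] = totals.get(service, 0.0) + cost

for service, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{service:>24}  ${cost:,.2f}")
```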
Ylastic now supports management for DynamoDB. Manage all your tables in single or multiple regions on the same page. The read and write capacity units consumed in the last 20 minutes are displayed in sparkline graphs right next to each DynamoDB table in the listing, giving you a quick overview of your DynamoDB environment.
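Under the hood, a sparkline like that is just a small CloudWatch query. Here is a hedged sketch with boto3 (the table name and region are placeholders, not our actual code):

```python
# Fetch ConsumedReadCapacityUnits for one table over the last 20 minutes;
# the resulting per-minute series is what a sparkline would plot.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedReadCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "my-table"}],  # hypothetical
    StartTime=now - timedelta(minutes=20),
    EndTime=now,
    Period=60,          # one datapoint per minute
    Statistics=["Sum"],
)

points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
print([p["Sum"] for p in points])
```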
Explore the items in your tables with the built-in viewer.
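The viewer is ultimately backed by the DynamoDB scan API; a minimal sketch, assuming a hypothetical table name:

```python
# Page through a table's items the way an item viewer would.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("my-table")  # placeholder table name

resp = table.scan(Limit=25)  # first page of up to 25 items
for item in resp["Items"]:
    print(item)

# If there are more items, continue from LastEvaluatedKey.
if "LastEvaluatedKey" in resp:
    resp = table.scan(Limit=25, ExclusiveStartKey=resp["LastEvaluatedKey"])
```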
An audit trail is maintained to display the history of changes made to your DynamoDB environment from Ylastic, including the name of the user making the change as well as the IP address from which the change was made.
View CloudWatch charts for each table by clicking either of the sparkline graphs. You can change time periods, dimensions, etc., and refresh to view all of the CloudWatch data for the table.
We are in the process of hooking DynamoDB into our monitoring. The next release will add the ability to monitor and alert via email, voice, and SMS :)
Selecting Snapshots for Backup
It can be a bit daunting when you have to back up snapshots, or to use the AWS parlance, copy snapshots to other AWS regions, especially when you have a lot of them. We have been grappling with how to make it easier and simpler for our users to pick the snapshots they want to copy to other regions on a schedule for disaster recovery purposes. You can now select the snapshots to copy in a Ylastic scheduled task in four different ways:
- Select all snapshots in the source region - every single one of them.
- Select all snapshots whose tag value contains a specified string.
- Select all snapshots whose tag name contains a specified string.
- Select all snapshots whose tag name contains a specified string and whose tag value contains another specified string.
This scheme lets you cast anything from a big net that captures everything down to something fine-grained enough to pick up a single snapshot; a rough sketch of these modes in code follows below. Manage your backups in AWS the easy way!
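If you want to picture the selection logic, here is a hedged boto3 sketch of the four modes. The client-side substring matching is illustrative, not our exact implementation:

```python
# Select snapshots in a region by substring matches on tag names/values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical source region

def select_snapshots(name_contains=None, value_contains=None):
    """Return snapshot IDs for the four selection modes:

    neither argument     -> every snapshot in the region
    value_contains only  -> some tag value contains the string
    name_contains only   -> some tag name contains the string
    both arguments       -> a single tag matches both substrings
    """
    selected = []
    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if name_contains is None and value_contains is None:
                selected.append(snap["SnapshotId"])  # the big net
                continue
            for tag in snap.get("Tags", []):
                name_ok = name_contains is None or name_contains in tag["Key"]
                value_ok = value_contains is None or value_contains in tag["Value"]
                if name_ok and value_ok:
                    selected.append(snap["SnapshotId"])
                    break
    return selected
```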
Copying Snapshots Between Regions on a Schedule
You can now schedule a task in Ylastic to copy selected snapshots to one or more regions on a schedule of your choice.
Select a source region and specify the strings to match for a tag name and tag value, and the task will select all snapshots in the source region that meet those criteria. You can also specify exactly how many copies of a snapshot you want in each of the regions. Older backups are pruned automatically, leaving only the latest number of backups that you want in each region.
Here is a list of the backups being created with the above task.
All tags from the source snapshot are preserved and added to each of the new snapshots.
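Here is a hedged sketch of what one run of such a task amounts to, using boto3. The regions, the retention count, and the description-based bookkeeping are illustrative assumptions:

```python
# Copy a snapshot to a target region, re-apply its tags to the copy,
# then prune old copies beyond the retention count.
import boto3

SOURCE_REGION = "us-east-1"       # placeholders
TARGET_REGION = "ap-southeast-2"
KEEP = 3                          # copies to retain per snapshot

src = boto3.client("ec2", region_name=SOURCE_REGION)
dst = boto3.client("ec2", region_name=TARGET_REGION)

def backup_snapshot(snapshot_id):
    snap = src.describe_snapshots(SnapshotIds=[snapshot_id])["Snapshots"][0]

    # CopySnapshot is called in the destination region.
    copy = dst.copy_snapshot(
        SourceRegion=SOURCE_REGION,
        SourceSnapshotId=snapshot_id,
        Description=f"backup of {snapshot_id}",
    )

    # Preserve the source snapshot's tags on the new copy.
    if snap.get("Tags"):
        dst.create_tags(Resources=[copy["SnapshotId"]], Tags=snap["Tags"])

    # Keep only the newest KEEP copies of this snapshot.
    copies = dst.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "description", "Values": [f"backup of {snapshot_id}"]}],
    )["Snapshots"]
    copies.sort(key=lambda s: s["StartTime"], reverse=True)
    for old in copies[KEEP:]:
        dst.delete_snapshot(SnapshotId=old["SnapshotId"])
```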
Backups the easy, easy way :)
Ylastic now provides management and monitoring support for the new AWS region in Sydney, along with migration of AMIs, scheduling, and more.
Windows AMI migration for Sydney on the way :)
Autoscaling in Amazon Virtual Private Cloud
Ylastic can now configure autoscaling for VPCs. You can use all of the scaling goodies such as policies, scheduled actions, recurring actions, etc. for your instances running inside a VPC. The setup takes just a couple of API calls, as sketched after the steps below:
- Create a launch configuration that uses the security groups of your choice in your VPC.
- Create an autoscaling group with the above launch configuration and specify the VPC subnet to use.
- Kick back and watch auto scaling take over :-)
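In API terms, the first two steps map to two calls. A minimal boto3 sketch, with placeholder AMI, security group, and subnet IDs:

```python
# Step 1 and 2 of VPC autoscaling: a launch configuration that uses a
# VPC security group, and a group pinned to a subnet via VPCZoneIdentifier.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="vpc-web-lc",
    ImageId="ami-12345678",                    # placeholder AMI
    InstanceType="m1.small",                   # placeholder instance type
    SecurityGroups=["sg-0123456789abcdef0"],   # VPC security group ID
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="vpc-web-asg",
    LaunchConfigurationName="vpc-web-lc",
    MinSize=1,
    MaxSize=4,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # comma-separated subnet IDs
)
# Step 3: kick back; autoscaling now launches instances inside the subnet.
```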
View all the changes being made to your VPC autoscaling group in its audit trail.
Want to change the scaling group to use a different subnet in your VPC? You can do that too.
Switch over to the VPCs page if you like and view all the instances (both normal and auto-scaled) that are currently inside your VPC, along with their CPU utilization and all the other CloudWatch metric charts.
Manage your AWS cloud, the easy way :-)
Route53 Latency-Based Routing
AWS recently enhanced Route53 with the ability to do latency-based routing, which serves user requests from the EC2 region with the lowest network latency. You create a latency resource record, and when Route53 receives a query for the domain, it selects the resource record for the EC2 region that will have the lowest latency for the requesting user. It really is as simple as that. Ylastic now supports managing these latency records. In the example below, we have a load balancer in the US East (Virginia) region and one in the US West (California) region.
You can create a latency record for all seven AWS regions from one screen if you like. How does AWS figure out the latencies in order to make the routing decisions? AWS apparently gathers latency measurements between most /24 subnets on the internet and the different AWS regions in order to create the dataset that is the basis of latency-based routing. The technology underpinning this is also used by CloudFront, the AWS CDN product.
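For reference, creating a pair of latency records like the ones above comes down to a single change batch against the Route53 API. A hedged boto3 sketch, in which the zone IDs, ELB names, and domain are placeholders:

```python
# Create one latency alias record per region for the same name;
# Route53 answers each query with the lowest-latency match.
import boto3

route53 = boto3.client("route53")

def latency_record(region, set_id, elb_dns, elb_zone_id):
    return {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "SetIdentifier": set_id,   # must be unique per latency record
            "Region": region,          # region this record answers for
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,  # the ELB's own hosted zone ID
                "DNSName": elb_dns,
                "EvaluateTargetHealth": False,
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # your hosted zone (placeholder)
    ChangeBatch={"Changes": [
        latency_record("us-east-1", "virginia",
                       "east-lb.us-east-1.elb.amazonaws.com", "ZELBEAST"),
        latency_record("us-west-1", "california",
                       "west-lb.us-west-1.elb.amazonaws.com", "ZELBWEST"),
    ]},
)
```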
You can also use IP addresses/Elastic IPs instead of ELBs when creating these records. This is a really nice addition to Route53, and one much requested by users. Enjoy :-)
Scheduling the AWS Account Advisor
You can now schedule the AWS Account Advisor in Ylastic to run on a time period of your choice. You can set it up to run checks against your account, say once a week, and alert you via email if any flags are raised by the check. How easy is it to set up?
You will get an email if there are any warnings from the advisor run.
Simplify your AWS cloud management!
Simple Backups for EBS Instances
At Ylastic, we have been looking at backup management and ways to make it easier and simpler to both create backups and manage them without getting lost looking through tons of AMIs. Introducing simplified EBS instance backup management: select an instance, view all of its backups, launch new instances from any of the backups, and even schedule backups to happen on a time period of your choice.
Backups can be created in two different ways. The first is on-demand: click a button and fill out a few fields.
The second is automated: set up a scheduled task to create backups on a schedule of your choice. Automate backups for multiple instances with a single task by specifying a string to match in the name tag for instances. Ylastic will also save you storage costs by ensuring that only the specified number of latest backups are kept, pruning the older ones. The task shown below will back up all EBS instances in Virginia whose name contains Lorax at 1:00 AM every day, and keep only the latest ten backups.
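For a feel of what one scheduled run does, here is a rough boto3 sketch. The naming scheme and retention logic are illustrative assumptions, not our exact implementation:

```python
# Image an EBS-backed instance, then keep only the newest N backups.
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
KEEP = 10  # retention count from the task definition

def backup_instance(instance_id, label):
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    ec2.create_image(
        InstanceId=instance_id,
        Name=f"backup-{label}-{stamp}",
        NoReboot=True,  # image without stopping the instance
    )

    # Prune older backups beyond the retention count.
    images = ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "name", "Values": [f"backup-{label}-*"]}],
    )["Images"]
    images.sort(key=lambda i: i["CreationDate"], reverse=True)
    for old in images[KEEP:]:
        # A fuller version would also delete the AMI's backing snapshots.
        ec2.deregister_image(ImageId=old["ImageId"])
```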
EBS instance backup management is a feature available in the Ylastic Plus version. AWS cloud management made easier :-)
Simplifying CloudFormation Stack Management
Ylastic just released several enhancements for CloudFormation stack management. Our initial implementation of the UI was done when CloudFormation was first released, and it is now a rather large and complex service that lets you reference and use resources from many different services in the AWS stable. The increase in complexity leads to a quick proliferation of the number of resources that comprise your stack, and correspondingly calls for a simpler way to get your head around what is in your stack. You launch this wonderful stack that creates instances, databases, Route53 resource records, autoscaling groups, and so on. How do you actually view all of these resources that are part of your stack? No, we do not mean a rather large, incomprehensible table that lists just the physical IDs for resources, which is completely worthless in terms of getting any work done. And no, we do not mean navigating to 20 different pages to view them. We mean a single place to view your resources and additional information for each resource. Here we go ...
All the resources in a stack are displayed on this page, separated by the service that they are part of. Additional, meaningful information for each resource is also presented. For example, if you are viewing the instances that comprise your stack, you can view other info for each instance such as the AMI ID, zone, uptime, CloudWatch data, etc.
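The grouping itself is straightforward if you want to reproduce it: list the stack's resources and bucket them by the service named in each resource type. A minimal boto3 sketch, with a hypothetical stack name:

```python
# List a stack's resources and group them by service, taken from the
# middle segment of the resource type (AWS::EC2::Instance -> EC2).
from collections import defaultdict

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

resources = cloudformation.describe_stack_resources(
    StackName="my-stack"  # placeholder stack name
)["StackResources"]

by_service = defaultdict(list)
for res in resources:
    service = res["ResourceType"].split("::")[1]
    by_service[service].append(res["PhysicalResourceId"])

for service, ids in sorted(by_service.items()):
    print(service, ids)
```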
There is also an easier way to view the JSON template definition associated with the stack. We added a nicely formatted representation of your stack template. You can even expand and collapse various sections in the template as you like.
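If you want a similar formatted view outside Ylastic, it is essentially one API call plus pretty-printing. A small boto3 sketch, with a placeholder stack name:

```python
# Fetch a stack's template and print it nicely indented.
import json

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

body = cloudformation.get_template(StackName="my-stack")["TemplateBody"]
# TemplateBody may come back already parsed (dict) or as a raw string.
template = body if isinstance(body, dict) else json.loads(body)
print(json.dumps(template, indent=2, sort_keys=True))
```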
Stacks have a cost associated with them, like most resources in AWS. We like knowing how much money we are spending on AWS, and knowing the expenses being incurred for a stack is very, very useful. We now display the estimated costs for each stack, computed for two different time periods:
- Estimated month to date cost for running the stack.
- Estimated cost to run the stack for the whole year.
Finally, another thing that we use a lot when working with stacks: the ability to view CloudWatch charts for a resource. Each resource such as an instance, ELB, or volume will display a little sparkline graph of its CPU utilization or a similar metric for the last 20 minutes. Click on the sparkline to display detailed CloudWatch charts for the selected resource.
Manage your AWS cloud the easier way :-)