Ylastic now has an updated dashboard that gives you a better overview of your AWS environment - the number of instances, databases, volumes, total volume size, scheduled tasks, monitors, and the estimated charges for the month. Click on any of the panels to go to the corresponding page for that resource in Ylastic. If you are a Ylastic Plus user, just click on the estimated charges panel to navigate to the spending analytics page. The resources are totaled up across ALL regions that you select to manage in Ylastic. Want to see the overview for another AWS account? Just select the account you want from the drop-down, and the dashboard refreshes with the overview for the newly selected account.
Manage your AWS cloud the easy way :)
We switched over to using the AWS programmatic billing API, which delivers spending data to an S3 bucket of your choice. It has made things a lot easier for us, both in terms of processing and ease of use. We no longer need a username and password for logging in to the AWS usage activity page. From the user perspective, all you have to do is go to the AWS Billing Preferences page, check the Programmatic Access checkbox, specify the S3 bucket name, and check the Detailed Billing Report checkbox.
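For the curious, pulling one of those reports back out of the bucket is only a few lines of code. This is a rough sketch using boto3; the bucket name and account ID are placeholders, and the key pattern is the detailed-billing naming convention as we understand it, so double-check it against your own bucket listing.

```python
def billing_report_key(account_id, year, month):
    # Detailed billing reports appear in the bucket under a key like
    # <account-id>-aws-billing-detailed-line-items-YYYY-MM.csv.zip
    # (naming convention assumed; verify against your bucket).
    return "%s-aws-billing-detailed-line-items-%04d-%02d.csv.zip" % (
        account_id, year, month)

def download_report(bucket, account_id, year, month, dest):
    # Needs AWS credentials configured; defined here but not executed.
    import boto3
    s3 = boto3.client("s3")
    s3.download_file(bucket, billing_report_key(account_id, year, month), dest)
```

A scheduled job can call `download_report` once a day and hand the unzipped CSV to whatever does the aggregation.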
Set up a scheduled task in Ylastic to process the spending data at an interval of your choice.
The data gets downloaded and processed. Navigate to the spending page and view the charts. If you want to view the analytics for a different AWS account, just select it from the drop-down at the top right. Here is the spending for EC2 for last year - 2013.
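Charts like these boil down to a simple group-and-sum over the billing CSV. Here is a minimal sketch using a tiny synthetic sample; the column names (`ProductName`, `UnBlendedCost`) are assumptions about the detailed report layout, so adjust them to match your actual file.

```python
import csv
import io
from collections import defaultdict

def total_by(report_text, group_col, cost_col):
    # Sum a cost column grouped by another column, e.g. spending per
    # service (ProductName) or per availability zone.
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report_text)):
        if row.get(cost_col):  # skip summary rows with no cost value
            totals[row[group_col]] += float(row[cost_col])
    return dict(totals)

# Tiny synthetic sample; a real detailed billing report has many more
# columns, and these column names are assumptions.
sample = """ProductName,AvailabilityZone,UnBlendedCost
Amazon Elastic Compute Cloud,us-east-1a,10.50
Amazon Elastic Compute Cloud,us-west-1a,4.25
Amazon Simple Storage Service,,1.10
"""
print(total_by(sample, "ProductName", "UnBlendedCost"))
```

Group by `AvailabilityZone` instead of `ProductName` and you get the per-region breakdown shown in the pie chart below.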
You can view your spending broken down by region. Hovering over any slice on the pie chart will display the exact dollar amount spent.
Here is the spending chart for all the services for last year - 2013.
You can view all of the spending analytics inside Ylastic. There is no separate console, no separate applications - a single place for managing your AWS resources :)
The spending analytics in Ylastic are currently built by downloading and processing the raw CSV data from the AWS usage activity page. Even though it works quite well, it is a rather tedious process. You can use IAM user credentials to download the data, but we want to reduce this to something even simpler. We are experimenting with the programmatic billing access that AWS added, which gives each AWS account the ability to request that billing CSV files be dropped into the S3 bucket of your choice. This makes it a lot easier to download and process the data, and to keep up with the changes that AWS makes to its pricing. It also makes it a breeze to add new services. We are going to roll this out to replace the current implementation within a few weeks. Here are a few sample screenshots of a page built from this data.
AWS provides detailed documentation on how to enable each AWS account for programmatic billing access here. We will post more updates on the blog soon with all of the charts.
Ylastic now supports management for DynamoDB. Manage all your tables, in a single region or across multiple regions, on the same page. The read and write capacity units consumed in the last 20 minutes are displayed in sparkline graphs right next to each table in the listing, giving you a quick overview of your DynamoDB environment.
Explore the items in your tables with the built-in viewer.
An audit trail is maintained to display the history of changes made to your DynamoDB environment from Ylastic, including the name of the user making the change as well as the IP address from which the change was made.
View CloudWatch charts for each table by clicking either of the sparkline graphs. You can change time periods, dimensions, etc., and refresh to view all of the CloudWatch data for the table.
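Those sparklines and charts come from CloudWatch, which publishes consumed-capacity metrics for every table. Here is a sketch of the query for the last 20 minutes of data using boto3; the table name is a placeholder, and `fetch_sparkline` needs AWS credentials, so it is defined but not called here.

```python
from datetime import datetime, timedelta, timezone

def consumed_capacity_query(table_name, metric="ConsumedReadCapacityUnits",
                            minutes=20):
    # Build the GetMetricStatistics arguments for one table's
    # consumed capacity over the trailing window, one point per minute.
    now = datetime.now(timezone.utc)
    return dict(
        Namespace="AWS/DynamoDB",
        MetricName=metric,
        Dimensions=[{"Name": "TableName", "Value": table_name}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=60,
        Statistics=["Sum"],
    )

def fetch_sparkline(table_name):
    # Needs AWS credentials configured; defined here but not executed.
    import boto3
    cw = boto3.client("cloudwatch")
    resp = cw.get_metric_statistics(**consumed_capacity_query(table_name))
    # Datapoints come back unordered; sort before plotting.
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
```

Swap the metric to `ConsumedWriteCapacityUnits` for the write-side sparkline.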
We are in the process of hooking DynamoDB into our monitoring. The next release will have the ability to monitor and alert via email, voice, or SMS :)
It can be a bit daunting to back up snapshots - or, to use the AWS parlance, copy snapshots - to other AWS regions, especially when you have a lot of them. We have been grappling with how to make it easier and simpler for our users to pick the snapshots they want to copy to other regions on a schedule for disaster recovery purposes. You can now select the snapshots to copy in a Ylastic scheduled task in four different ways:
- Select all snapshots in the source region - every single one of them.
- Select all snapshots whose tag value contains a specified string.
- Select all snapshots whose tag name contains a specified string.
- Select all snapshots whose tag name contains a specified string and whose tag value contains a specified string.
This scheme lets you select snapshots for backup with anything from a big net that captures everything down to something really fine-grained that picks up a single snapshot. Manage your backups in AWS the easy way!
You can now schedule a task in Ylastic to copy snapshots of your choice to one or more regions, on a schedule of your choice.
Select a source region and specify the strings to match for a tag name and tag value, and the task will select all snapshots in the source region that meet those criteria. You can also specify exactly how many copies of a snapshot you want in each of the regions. Older backups are removed, leaving only the latest copies, up to the number that you want, in each region.
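If you are scripting something similar yourself, the selection logic is just substring matching against each snapshot's tags. Here is a minimal sketch of our reading of the four modes above, using boto3 with placeholder regions; the copy function needs AWS credentials, so it is defined but not called.

```python
def snapshot_matches(tags, name_contains=None, value_contains=None):
    # tags: list of {"Key": ..., "Value": ...} dicts as returned by
    # describe_snapshots. No filters -> select everything (mode 1);
    # otherwise a single tag must satisfy every filter given (modes 2-4).
    if name_contains is None and value_contains is None:
        return True
    for t in tags:
        if name_contains is not None and name_contains not in t["Key"]:
            continue
        if value_contains is not None and value_contains not in t["Value"]:
            continue
        return True
    return False

def copy_matching(src_region, dst_region, name_contains=None,
                  value_contains=None):
    # Needs AWS credentials configured; defined here but not executed.
    import boto3
    src = boto3.client("ec2", region_name=src_region)
    dst = boto3.client("ec2", region_name=dst_region)
    for s in src.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if snapshot_matches(s.get("Tags", []), name_contains, value_contains):
            # copy_snapshot is called on the destination-region client.
            dst.copy_snapshot(SourceRegion=src_region,
                              SourceSnapshotId=s["SnapshotId"],
                              Description="scheduled cross-region backup copy")
```

Retention (keeping only the latest N copies per region) would then be a sort-by-start-time and delete of the oldest extras in each destination region.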
Here is a list of the backups being created with the above task.
All tags from the source snapshot are preserved and added to each of the new snapshots.
Backups the easy, easy way :)
Management and monitoring support for the new AWS region in Sydney, along with migration of AMIs, scheduling and more.
Windows AMI migration for Sydney on the way :)
Ylastic can now configure autoscaling for VPCs. You can use all of the scaling goodies such as policies, scheduled actions, recurring actions, etc., for your instances running inside a VPC.
- Create a launch configuration that uses the security groups of your choice in your VPC.
- Create an autoscaling group with the above launch configuration and specify the VPC subnet to use.
- Kick back and watch auto scaling take over :-)
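For reference, the first two steps above can be sketched against the Auto Scaling API with boto3. The subnet IDs, security group IDs, and AMI below are placeholders; the key detail is `VPCZoneIdentifier`, which is what places the group's instances inside the VPC.

```python
def vpc_asg_params(name, launch_config, subnet_ids, min_size=1, max_size=4):
    # VPCZoneIdentifier is a comma-separated list of VPC subnet IDs;
    # setting it tells Auto Scaling to launch into those subnets.
    return dict(
        AutoScalingGroupName=name,
        LaunchConfigurationName=launch_config,
        MinSize=min_size,
        MaxSize=max_size,
        VPCZoneIdentifier=",".join(subnet_ids),
    )

def create_vpc_asg(params, security_group_ids, image_id,
                   instance_type="m1.small"):
    # Needs AWS credentials configured; defined here but not executed.
    import boto3
    asg = boto3.client("autoscaling")
    asg.create_launch_configuration(
        LaunchConfigurationName=params["LaunchConfigurationName"],
        ImageId=image_id,
        InstanceType=instance_type,
        SecurityGroups=security_group_ids,  # VPC security group IDs
    )
    asg.create_auto_scaling_group(**params)
```

After that, scaling policies and scheduled actions attach to the group exactly as they would outside a VPC.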
View all the changes being made to your VPC autoscaling group in its audit trail.
You can also change the scaling group to use a different subnet in your VPC.
Switch over to the VPCs page if you like and view all the instances (both normal and auto-scaled) that are currently inside your VPC, along with their CPU utilization and all the other CloudWatch metric charts.
Manage your AWS cloud, the easy way :-)
AWS recently enhanced Route53 with the ability to do latency-based routing, which serves user requests from the EC2 region with the lowest network latency for the user. You create a latency resource record, and when Route53 receives a query for the domain, it selects the resource record for the EC2 region that will have the lowest latency for the requesting user. It really is as simple as that. Ylastic now supports managing these latency records. In the example below, we have a load balancer in the US East (Virginia) region and one in the US West (California) region.
You can create a latency record for all seven AWS regions, if you like, from one screen. How does AWS figure out the latencies in order to make the routing decisions? AWS apparently gathers latency measurements between most /24 subnets on the internet and the different AWS regions in order to create the dataset that is the basis of latency-based routing. The technology underpinning this is also used by CloudFront, the AWS CDN product.
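Under the hood, a latency record set is an ordinary record plus a `Region` and a unique `SetIdentifier`. Here is a sketch of the change batch for one region using boto3; the zone, record names, and ELB DNS value are placeholders, and a CNAME to the ELB DNS name is used here for simplicity (an alias record is the other option).

```python
def latency_record_change(zone_name, region, set_id, elb_dns, ttl=60):
    # One latency record per region; Route53 answers with the set whose
    # Region has the lowest measured latency for the requesting resolver.
    # SetIdentifier must be unique across the latency records for a name.
    return {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": zone_name,
            "Type": "CNAME",
            "TTL": ttl,
            "SetIdentifier": set_id,
            "Region": region,
            "ResourceRecords": [{"Value": elb_dns}],
        },
    }

def apply_changes(zone_id, changes):
    # Needs AWS credentials configured; defined here but not executed.
    import boto3
    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": changes},
    )
```

Build one change per region (us-east-1, us-west-1, and so on) and submit them in a single batch.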
You can also use IP addresses/Elastic IPs instead of ELBs for creating these records. This is a really nice addition to Route53, and much requested by the users. Enjoy :-)
You can now schedule the AWS account advisor in Ylastic to run on a time period of your choice. So you can set it up to run checks against your account, say once a week, and alert you via email if any flags are raised by the check. How easy is it to set up?
You will get an email if there are any warnings from the advisor run.
Simplify your AWS cloud management!