Amazon QuickSight Now Supports Audit Logging with AWS CloudTrail

Amazon QuickSight democratizes business intelligence, making it easier and cheaper for you to provide advanced business analytics capabilities to everyone in your organization. Amazon QuickSight also enables you to understand your business better and helps you make data-driven decisions more quickly. However, determining who has access to which data in your organization can still be an administrative challenge.

Today, we are happy to announce that Amazon QuickSight now supports AWS CloudTrail, which enables you to log Amazon QuickSight events across your account. Amazon QuickSight administrators can now quickly and accurately answer questions such as who changed an analysis last or who connected to a sensitive database. CloudTrail support in Amazon QuickSight gives your administrators better governance, auditing, and risk management of your company’s Amazon QuickSight usage.
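
For example, here's a minimal boto3 sketch (not from the original post) that uses the CloudTrail lookup API to list recent Amazon QuickSight events; the event source name follows the standard service.amazonaws.com convention:

import boto3

cloudtrail = boto3.client('cloudtrail')

# Look up recent QuickSight API events recorded by CloudTrail.
# EventSource is a standard CloudTrail lookup attribute.
response = cloudtrail.lookup_events(
    LookupAttributes=[{
        'AttributeKey': 'EventSource',
        'AttributeValue': 'quicksight.amazonaws.com'
    }],
    MaxResults=50
)

for event in response['Events']:
    print(event['EventTime'], event['EventName'], event.get('Username', 'unknown'))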

To learn more, see the full AWS Big Data Blog post.

– Craig

Source: AWS Security


Announcing the Availability of Hardware Multi-Factor Authentication in the AWS GovCloud (US) Region

Hardware multi-factor authentication (MFA) is now available in the AWS GovCloud (US) Region to help strengthen data security while giving you control over token keys that have access to your data. MFA is a best practice that adds an extra layer of protection on top of users’ user names and passwords.

These token keys, which are specific to the AWS GovCloud (US) Region, are distributed by SurePassID, a third-party digital security company, and implement the Initiative for Open Authentication Time-Based One-Time Password (OATH TOTP) standard. SurePassID tokens are available for purchase online.
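
After you receive a token, you associate it with an IAM user. Here's a minimal boto3 sketch (the user name, serial number, and codes are placeholders):

import boto3

iam = boto3.client('iam')

# Associate a hardware MFA device with an IAM user. The serial number is
# printed on the token; the two codes are consecutive one-time passwords
# generated by the device.
iam.enable_mfa_device(
    UserName='alice',                  # placeholder user name
    SerialNumber='GAHT12345678',       # placeholder token serial number
    AuthenticationCode1='123456',      # first OTP from the token
    AuthenticationCode2='654321'       # next consecutive OTP
)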

For more information about hardware MFA in the AWS GovCloud (US) Region, see the AWS Public Sector Blog post.

– Craig

Source: AWS Security


More Than One Dozen AWS Cloud Services Receive Department of Defense Impact Level 4 Provisional Authorizations in the AWS GovCloud (US) Region

Today, I am pleased to announce that the AWS GovCloud (US) Region has received Defense Information Systems Agency Impact Level 4 (IL4) Provisional Authorization (PA) for more than one dozen new services. The IL4 PA enables Department of Defense (DoD) customers to operate their mission-critical and regulated workloads in the AWS GovCloud (US) Region, with data up to the DoD Cloud Computing Security Requirements Guide IL4.

The new AWS services added to the authorization include advanced database, low-cost storage, data warehouse, security, and configuration automation solutions that will help organizations with IL4 workloads increase the productivity and security of their data in the AWS Cloud. For example, with AWS CloudFormation you can deploy AWS resources by automating configuration processes. AWS Key Management Service enables you to create and control the encryption keys that you use to encrypt your data. With Amazon Redshift, you can analyze all your data by using your existing business intelligence tools and automate common administrative tasks to manage, monitor, and scale your data warehouse.
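
As a quick illustration of the AWS KMS workflow, here's a minimal boto3 sketch that creates a key and uses it to encrypt and decrypt a small payload (a generic example, not specific to GovCloud):

import boto3

kms = boto3.client('kms')

# Create a customer-managed key, then use it to encrypt and decrypt data.
key = kms.create_key(Description='example key for IL4 workload data')
key_id = key['KeyMetadata']['KeyId']

ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b'sensitive data')['CiphertextBlob']
plaintext = kms.decrypt(CiphertextBlob=ciphertext)['Plaintext']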

For a list of frequently asked questions, see AWS DoD Compliance page. For more information about AWS security and compliance, see the AWS Security Center and the AWS Compliance Center.

– Chad

Source: AWS Security


AWS and the General Data Protection Regulation (GDPR)

Just over a year ago, the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR is the biggest change in data protection laws in Europe since the 1995 introduction of the European Union (EU) Data Protection Directive, also known as Directive 95/46/EC. The GDPR aims to strengthen the security and protection of personal data in the EU and will replace the Directive and all local laws relating to it.

AWS welcomes the arrival of the GDPR. The new, robust requirements raise the bar for data protection, security, and compliance, and will push the industry to follow the most stringent controls, helping to make everyone more secure. I am happy to announce today that all AWS services will comply with the GDPR when it becomes enforceable on May 25, 2018.

In this blog post, I explain the work AWS is doing to help customers with the GDPR as part of our continued commitment to help ensure they can comply with EU Data Protection requirements.

What has AWS been doing?

AWS continually maintains a high bar for security and compliance across all of our regions around the world. This has always been our highest priority—truly “job zero.” The AWS Cloud infrastructure has been architected to offer customers the most powerful, flexible, and secure cloud-computing environment available today. AWS also gives you a number of services and tools to enable you to build GDPR-compliant infrastructure on top of AWS.

One tool we give you is a Data Processing Agreement (DPA). I’m happy to announce today that we have a DPA that will meet the requirements of the GDPR. This GDPR DPA is available now to all AWS customers to help you prepare for May 25, 2018, when the GDPR becomes enforceable. For additional information about the new GDPR DPA or to obtain a copy, contact your AWS account manager.

In addition to account managers, we have teams of compliance experts, data protection specialists, and security experts working with customers across Europe to answer their questions and help them prepare for running workloads in the AWS Cloud after the GDPR comes into force. To further answer customers’ questions, we have updated our EU Data Protection website. This website includes information about what the GDPR is, the changes it brings to organizations operating in the EU, the services AWS offers to help you comply with the GDPR, and advice about how you can prepare.

Another topic we cover on the EU Data Protection website is AWS’s compliance with the CISPE Code of Conduct. The CISPE Code of Conduct helps cloud customers ensure that their cloud infrastructure provider is using appropriate data protection standards to protect their data in a manner consistent with the GDPR. AWS has declared that Amazon EC2, Amazon S3, Amazon RDS, AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon Elastic Block Store (Amazon EBS) are fully compliant with the CISPE Code of Conduct. This declaration provides customers with assurances that they fully control their data in a safe, secure, and compliant environment when they use AWS. For more information about AWS’s compliance with the CISPE Code of Conduct, go to the CISPE website.

As well as giving customers a number of tools and services to build GDPR-compliant environments, AWS has achieved a number of internationally recognized certifications and accreditations. In the process, AWS has demonstrated compliance with third-party assurance frameworks such as ISO 27017 for cloud security, ISO 27018 for cloud privacy, PCI DSS Level 1, and SOC 1, SOC 2, and SOC 3. AWS also helps customers meet local security standards such as BSI’s Common Cloud Computing Controls Catalogue (C5) that is important in Germany. We will continue to pursue certifications and accreditations that are important to AWS customers.

What can you do?

Although the GDPR will not be enforceable until May 25, 2018, we are encouraging our customers and partners to start preparing now. If you have already implemented a high bar for compliance, security, and data privacy, the move to GDPR should be simple. However, if you have yet to start your journey to GDPR compliance, we urge you to start reviewing your security, compliance, and data protection processes now to ensure a smooth transition in May 2018.

You should consider the following key points in preparation for GDPR compliance:

  • Territorial reach – Determining whether the GDPR applies to your organization’s activities is essential to ensuring your organization’s ability to satisfy its compliance obligations.
  • Data subject rights – The GDPR enhances the rights of data subjects in a number of ways. You will need to make sure you can accommodate the rights of data subjects if you are processing their personal data.
  • Data breach notifications – If you are a data controller, you must report data breaches to the data protection authorities without undue delay and in any event within 72 hours of you becoming aware of a data breach.
  • Data protection officer (DPO) – You may need to appoint a DPO who will manage data security and other issues related to the processing of personal data.
  • Data protection impact assessment (DPIA) – You may need to conduct and, in some circumstances, you might be required to file with the supervisory authority a DPIA for your processing activities.
  • Data processing agreement (DPA) – You may need a DPA that will meet the requirements of the GDPR, particularly if personal data is transferred outside the European Economic Area.

AWS offers a wide range of services and features to help customers meet requirements of the GDPR, including services for access controls, monitoring, logging, and encryption. For more information about these services and features, see EU Data Protection.

At AWS, security, data protection, and compliance are our top priorities, and we will continue to work vigilantly to ensure that our customers are able to enjoy the benefits of AWS securely, compliantly, and without disruption in Europe and around the world. As we head toward May 2018, we will share more news and resources with you to help you comply with the GDPR.

– Steve

Source: AWS Security


EC2 F1 Instances with FPGAs – Now Generally Available

We launched the Developer Preview of the FPGA-equipped F1 instances at AWS re:Invent. The response to the announcement was quick and overwhelming! We received over 2,000 requests, and were able to give over 200 developers access to the Hardware Development Kit (HDK) and the actual F1 instances.

In the post that I wrote for re:Invent, I told you that:

This highly parallelized model is ideal for building custom accelerators to process compute-intensive problems. Properly programmed, an FPGA has the potential to provide a 30x speedup to many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms and applications.

During the preview, partners and developers have been working on all sorts of exciting tools, services, and applications. I’ll tell you more about them in just a moment.

Now Generally Available
Today we are making the F1 instances generally available in the US East (Northern Virginia) Region, with plans to bring them to other regions before too long.

We continued to add features and functions during the preview, while also making the development tools more efficient and easier to use. Here’s a summary:

Developer Community – We launched the AWS FPGA Development Forum to provide a place for FPGA developers to hang out and to communicate with us and with each other.

HDK and SDK – We published the EC2 FPGA Hardware Development Kit (HDK) and Software Development Kit (SDK) to GitHub, and made many improvements in response to feedback that we received during the preview.

The improvements include support for VHDL (in addition to Verilog), an improved virtual lab environment (Virtual JTAG, Virtual LED, and Virtual DipSwitch), AWS libraries for FPGA management and the FPGA runtime, and support for OpenCL including the AWS OpenCL runtime library.

FPGA Developer AMI – This Marketplace AMI contains a full set of FPGA development tools including an RTL compiler and simulator, along with Xilinx SDAccel for OpenCL development, all tuned for use on C4, M4, and R4 instances.

FPGAs At Work
Here’s a sampling of the impressive work that our partners have been doing with F1 instances:

Edico Genome is deploying their DRAGEN Bio-IT Platform on F1 instances, with the expectation that it will provide whole-genome sequencing that runs in real time.

Ryft offers the Ryft Cloud, an accelerator for data analytics and machine learning that extends Elastic Stack. It sources data from Amazon Kinesis, Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and local instance storage, and uses massive bitwise parallelism to drive performance. The product supports high-level JDBC, ODBC, and REST interfaces along with low-level C, C++, Java, and Python APIs (see the Ryft API page for more information).

Reconfigure.io launched a cloud-based service that allows you to program FPGAs using the Go programming language. You can build, test, and deploy your code from within their cloud-based environment while taking advantage of concurrency-oriented language features such as goroutines (lightweight threads), channels, and selects.

NGCodec ported their RealityCodec video encoder to the F1 and used it to produce broadcast-quality video at 80 frames per second. Their solution can encode up to 32 independent video streams on a single F1 instance (read their new post, You Deserve Better than Grainy Giraffes, to learn more).

FPGAs In School & Research
Research groups and graduate classes at top-tier universities contacted us via AWS Educate and were eager to gain access to F1 instances.

UCLA’s CS133 class (Parallel and Distributed Computing) is setting up an F1-based FPGA lab that will be operational within 3 or 4 weeks. According to UCLA Chancellor’s Professor Jason Cong, they are expanding multiple research projects to cover F1 including FPGA performance debugging, machine learning acceleration, Spark to FPGA compilation, and systolic array compilation.

Last month we announced that we are collaborating with the National Science Foundation (NSF) to foster innovation in big data research (read AWS Collaborates With the National Science Foundation to Foster Innovation to learn more and to find out how to apply for a grant).

FPGAs in the AWS Marketplace
As I shared in my original post, we have built a complete end-to-end solution that lets developers build FPGA-powered applications and services and list them in the AWS Marketplace. I can’t wait to see what kinds of cool things show up there!


Source: EC2


New – Tag EC2 Instances & EBS Volumes on Creation

Way back in 2010, we launched Resource Tagging for EC2 instances and other EC2 resources. Since that launch, we have raised the allowable number of tags per resource from 10 to 50, and we have made tags more useful with the introduction of resource groups and a tag editor. Our customers use tags to track ownership, drive their cost accounting processes, implement compliance protocols, and to control access to resources via IAM policies.

The AWS tagging model provides separate functions for resource creation and resource tagging. While this is flexible and has worked well for many of our users, it does result in a small time window where the resources exist in an untagged state. Using two separate functions means that resource creation could succeed only for tagging to fail, again leaving resources in an untagged state.

Today we are making tagging more flexible and more useful, with four new features:

Tag on Creation – You can now specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources.

Enforced Tag Usage – You can now write IAM policies that mandate the use of specific tags on EC2 instances or EBS volumes.

Resource-Level Permissions – By popular request, the CreateTags and DeleteTags functions now support IAM’s resource-level permissions.

Enforced Volume Encryption – You can now write IAM policies that mandate the use of encryption for newly created EBS volumes.

Tag on Creation
You now have the ability to specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources (if the call creates both instances and volumes, you can specify distinct tags for the instance and for each volume). The resource creation and the tagging are performed atomically; both must succeed in order for the operation (RunInstances, CreateVolume, and other functions that create resources) to succeed. You no longer need to build tagging scripts that run after instances or volumes have been created.

Here’s how you specify tags when you launch an EC2 instance (the CostCenter and SaveSnapshotFlag tags are also set on any EBS volumes created when the instance is launched):
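
Here's a minimal boto3 sketch of an equivalent launch (the AMI ID, instance type, and tag values are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Launch an instance and tag both the instance and its EBS volumes as part
# of the same call; creation and tagging succeed or fail together.
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',   # placeholder AMI ID
    InstanceType='m4.large',           # placeholder instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {'Key': 'CostCenter', 'Value': '115'},
                {'Key': 'SaveSnapshotFlag', 'Value': 'true'}
            ]
        },
        {
            'ResourceType': 'volume',
            'Tags': [
                {'Key': 'CostCenter', 'Value': '115'},
                {'Key': 'SaveSnapshotFlag', 'Value': 'true'}
            ]
        }
    ]
)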

To learn more, read Using Tags.

Resource-Level Permissions
CreateTags and DeleteTags now support IAM’s resource-level permissions, as requested by many customers. This gives you additional control over the tag keys and values on existing resources.

Also, RunInstances and CreateVolume now support additional resource-level permissions. This allows you to exercise control over the users and groups that can tag resources on creation.

To learn more, see Example Policies for Working with the AWS CLI or an AWS SDK.

Enforced Tag Usage
You can now write IAM policies that enforce the use of specific tags. For example, you could write a policy that blocks the deletion of tags named Owner or Account. Or, you could write a “Deny” policy that disallows the creation of new tags for specific existing resources. You could also use an IAM policy to enforce the use of Department and CostCenter tags to help you achieve more accurate cost allocation reporting. In order to implement stronger compliance and security policies, you could also restrict access to DeleteTags if the resource is not tagged with the user’s name. The ability to enforce tag usage gives you precise control over access to resources, ownership, and cost allocation.

Here’s a statement that requires the use of costcenter and stack tags (with values of “115” and “prod,” respectively) for all newly created volumes:

"Statement": [
      "Sid": "AllowCreateTaggedVolumes",
      "Effect": "Allow",
      "Action": "ec2:CreateVolume",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/costcenter": "115",
          "aws:RequestTag/stack": "prod"
         "ForAllValues:StringEquals": {
             "aws:TagKeys": ["costcenter","stack"]
       "Effect": "Allow",
       "Action": [
       "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
       "Condition": {
         "StringEquals": {
             "ec2:CreateAction" : "CreateVolume"

Enforced Volume Encryption
Using the additional IAM resource-level permissions now supported by RunInstances and CreateVolume, you can write IAM policies that mandate the use of encryption for any EBS boot or data volume that is created. You can use this to comply with regulatory requirements, enforce enterprise security policies, and protect your data in accordance with applicable auditing requirements.

Here’s a sample statement that you can incorporate into an IAM policy for RunInstances and CreateVolume to enforce EBS volume encryption:

"Statement": [
            "Effect": "Deny",
            "Action": [
            "Resource": [
            "Condition": {
                "Bool": {
                    "ec2:Encrypted": "false"

To learn more and to see some sample policies, take a look at Example Policies for Working with the AWS CLI or an AWS SDK and IAM Policies for Amazon EC2.

Available Now
As you can see, the combination of tagging and the new resource-level permissions on the resource creation and tag manipulation functions gives you the ability to track and control access to your EC2 resources.

This new feature is available now in all regions except AWS GovCloud (US) and China (Beijing). You can start using it today from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the AWS APIs.

We are planning to add support for additional EC2 resource types over time; stay tuned for more information!


Source: EC2


New – Instance Size Flexibility for EC2 Reserved Instances

Reserved Instances allow AWS customers to receive a significant discount on their EC2 usage (up to 75% when compared to On-Demand pricing), along with capacity reservation when the RIs are purchased for use in a specific Availability Zone (AZ).

Late last year we made Reserved Instances more flexible with the launch of Regional RIs that apply the discount to any AZ in the Region, along with Convertible RIs that allow you to change the instance family and other parameters associated with a Reserved Instance. Both types of RIs reduce your management overhead and provide you with additional options. When you use Regional RIs you can launch an instance without having to worry about launching in the AZ that is eligible for the RI discount. When you use Convertible RIs you can ensure that your RIs remain well-fitted to your usage, even as your choice of instance types and sizes varies over time.

Instance Size Flexibility
Effective March 1, your existing Regional RIs are even more flexible! All Regional Linux/UNIX RIs with shared tenancy now apply to all sizes of instances within an instance family and AWS region, even if you are using them across multiple accounts via Consolidated Billing. This will further reduce the time that you spend managing your RIs and will let you be even more creative and innovative with your use of compute resources.

All new and existing RIs are sized according to a normalization factor that is based on the instance size:

Instance Size    Normalization Factor
nano             0.25
micro            0.5
small            1
medium           2
large            4
xlarge           8
2xlarge          16
4xlarge          32
8xlarge          64
10xlarge         80
16xlarge         128
32xlarge         256

Let’s say you already own an RI for a c4.8xlarge. This RI now applies to any usage of a Linux/UNIX C4 instance with shared tenancy in the region. This could be:

  • One c4.8xlarge instance.
  • Two c4.4xlarge instances.
  • Four c4.2xlarge instances.
  • Sixteen c4.large instances.

It also includes other combinations such as one c4.4xlarge and eight c4.large instances.

If you own an RI that is smaller than the instance that you are running, you will be charged the pro-rated, On-Demand price for the excess. This means that you could buy an RI for a c4.4xlarge, use that instance most of the time, but scale up to a c4.8xlarge instance on occasion. We’ll do the math and you’ll pay only half of the On-Demand, per-hour price for the larger instance (as always, our goal is to give you access to compute power at the lowest possible cost). If you own an RI for a large instance and run a smaller instance, the RI price will apply to the smaller instance. However, the unused portion of the reservation will not accumulate.
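
Here's a small Python sketch of the proration arithmetic, using the normalization factors from the table above (the helper function is illustrative, not an AWS API):

# Normalization factors from the table above.
FACTORS = {'large': 4, 'xlarge': 8, '2xlarge': 16, '4xlarge': 32, '8xlarge': 64}

def ri_coverage(ri_size, running_size):
    # Fraction of the running instance's hourly usage covered by one RI;
    # capped at 1.0, since a single RI covers at most one full instance-hour.
    return min(1.0, FACTORS[ri_size] / float(FACTORS[running_size]))

# One c4.4xlarge RI applied to a c4.8xlarge instance covers half of each hour;
# the other half is billed at the pro-rated On-Demand rate.
print(ri_coverage('4xlarge', '8xlarge'))   # 0.5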

Now Available
This new-found flexibility is available now and will be applied automatically to your Regional Linux/UNIX RIs with shared tenancy, without any effort on your part.


Source: EC2


Now Available – I3 Instances for Demanding, I/O Intensive Applications

On the first day of AWS re:Invent I published an EC2 Instance Update and promised to share additional information with you as soon as I had it.

Today I am happy to be able to let you know that we are making six sizes of our new I3 instances available in fifteen AWS regions! Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block size and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency, including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches. When compared to the I2 instances, I3 instances deliver storage that is less expensive and more dense, with the ability to deliver substantially more IOPS and more network bandwidth per CPU core.

The Specs
Here are the instance sizes and the associated specs:

Instance Name    vCPU Count    Memory       Instance Storage (NVMe SSD)    Price/Hour
i3.large         2             15.25 GiB    0.475 TB                       $0.15
i3.xlarge        4             30.5 GiB     0.950 TB                       $0.31
i3.2xlarge       8             61 GiB       1.9 TB                         $0.62
i3.4xlarge       16            122 GiB      3.8 TB (2 disks)               $1.25
i3.8xlarge       32            244 GiB      7.6 TB (4 disks)               $2.50
i3.16xlarge      64            488 GiB      15.2 TB (8 disks)              $4.99

The prices shown are for On-Demand instances in the US East (Northern Virginia) Region; see the EC2 pricing page for more information.

I3 instances are available in On-Demand, Reserved, and Spot form in the US East (Northern Virginia), US West (Oregon), US West (Northern California), US East (Ohio), Canada (Central), South America (São Paulo), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Sydney), and AWS GovCloud (US) Regions. You can also use them as Dedicated Hosts and as Dedicated Instances.

These instances support Hardware Virtualization (HVM) AMIs only, and must be run within a Virtual Private Cloud. In order to benefit from the performance made possible by the NVMe storage, you must run one of the following operating systems:

  • Amazon Linux AMI
  • CentOS – 7.0 or better
  • Ubuntu – 16.04 or 16.10
  • SUSE 12
  • SUSE 11 with SP3
  • Windows Server 2008 R2, 2012 R2, and 2016

The I3 instances offer up to 8 NVMe SSDs. In order to achieve the best possible throughput and to get as many IOPS as possible, you can stripe multiple volumes together, or spread the I/O workload across them in another way.

Each vCPU (Virtual CPU) is a hardware hyperthread on an Intel E5-2686 v4 (Broadwell) processor running at 2.3 GHz. The processor supports the AVX2 instructions, along with Turbo Boost and NUMA.

Go For Launch
The I3 instances are available today in fifteen AWS regions and you can start to use them right now.



Source: EC2


Amazon EBS Update – New Elastic Volumes Change Everything

It is always interesting to speak with our customers and to learn how the dynamic nature of their business and their applications drives their block storage requirements. These needs change over time, creating the need to modify existing volumes to add capacity or to change performance characteristics. Today’s 24×7 operating model leaves no room for downtime; as a result, customers want to make changes without going offline or otherwise impacting operations.

Over the years, we have introduced new EBS offerings that support an ever-widening set of use cases. For example, we introduced two new volume types in 2016 – Throughput Optimized HDD (st1) and Cold HDD (sc1). Our customers want to use these volume types as storage tiers, modifying the volume type to save money or to change the performance characteristics, without impacting operations.

In other words, our customers want their EBS volumes to be even more elastic!

New Elastic Volumes
Today we are launching a new EBS feature we call Elastic Volumes and making it available for all current-generation EBS volumes attached to current-generation EC2 instances. You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.

This new feature will greatly simplify (or even eliminate) many of your planning, tuning, and space management chores. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

You can address the following scenarios (and many more that you can come up with on your own) using Elastic Volumes:

Changing Workloads – You set up your infrastructure in a rush and used the General Purpose SSD volumes for your block storage. After gaining some experience you figure out that the Throughput Optimized volumes are a better fit, and simply change the type of the volume.

Spiking Demand – You are running a relational database on a Provisioned IOPS volume that is set to handle a moderate amount of traffic during the month, with a 10x spike in traffic during the final three days of each month due to month-end processing. You can use Elastic Volumes to dial up the provisioning in order to handle the spike, and then dial it down afterward.

Increasing Storage – You provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity. You increase the size of the volume and expand the file system to match, with no downtime, and in a fully automated fashion.

Using Elastic Volumes
You can manage all of this from the AWS Management Console, via API calls, or from the AWS Command Line Interface (CLI).

To make a change from the Console, simply select the volume and choose Modify Volume from the Action menu:

Then make any desired changes to the volume type, size, and Provisioned IOPS (if appropriate). Here I am changing my 75 GiB General Purpose (gp2) volume into a 400 GiB Provisioned IOPS volume, with 20,000 IOPS:

When I click on Modify I confirm my intent, and click on Yes:

The volume’s state reflects the progress of the operation (modifying, optimizing, or complete):

The next step is to expand the file system so that it can take advantage of the additional storage space. To learn how to do that, read Expanding the Storage Space of an EBS Volume on Linux or Expanding the Storage Space of an EBS Volume on Windows. You can expand the file system as soon as the state transitions to optimizing (typically a few seconds after you start the operation). The new configuration is in effect at this point, although optimization may continue for up to 24 hours. Billing for the new configuration begins as soon as the state turns to optimizing (there’s no charge for the modification itself).
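
You can make the same change programmatically; here's a minimal boto3 sketch (the volume ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')

# Grow a gp2 volume to 400 GiB and convert it to Provisioned IOPS (io1)
# with 20,000 IOPS, while it remains attached and in use.
ec2.modify_volume(
    VolumeId='vol-0123456789abcdef0',   # placeholder volume ID
    Size=400,
    VolumeType='io1',
    Iops=20000
)

# Poll the modification state (modifying -> optimizing -> completed).
mods = ec2.describe_volumes_modifications(
    VolumeIds=['vol-0123456789abcdef0'])['VolumesModifications']
print(mods[0]['ModificationState'])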

Automatic Elastic Volume Operations
While manual changes are fine, there’s plenty of potential for automation. Here are a couple of ideas:

Right-Sizing – Use a CloudWatch alarm to watch for a volume that is running at or near its IOPS limit. Initiate a workflow and approval process that could provision additional IOPS or change the type of the volume. Or, publish a “free space” metric to CloudWatch and use a similar approval process to resize the volume and the filesystem.

Cost Reduction – Use metrics or schedules to reduce IOPS or to change the type of a volume. Last week I spoke with a security auditor at a university. He collects tens of gigabytes of log files from all over campus each day and retains them for 60 days. Most of the files are never read, and those that are can be scanned at a leisurely pace. They could address this use case by creating a fresh General Purpose volume each day, writing the logs to it at high speed, and then changing the type to Throughput Optimized.

As I mentioned earlier, you need to resize the file system in order to be able to access the newly provisioned space on the volume. In order to show you how to automate this process, my colleagues built a sample that makes use of CloudWatch Events, AWS Lambda, EC2 Systems Manager, and some PowerShell scripting. The rule matches the modifyVolume event emitted by EBS and invokes the logEvents Lambda function:

The function locates the volume, confirms that it is attached to an instance that is managed by EC2 Systems Manager, and then adds a “maintenance tag” to the instance:

from __future__ import print_function
import os

import boto3

ec2 = boto3.client('ec2')
ssm = boto3.client('ssm')
tags = {'Key': os.environ['Tagname'], 'Value': os.environ['Tagvalue']}

def find_viable_window():
    # Check whether the tag key/value pair matches a target in an existing
    # Maintenance Window.
    wins = ssm.describe_maintenance_windows()['WindowIdentities']
    for win in wins:
        tgts = ssm.describe_maintenance_window_targets(WindowId=win['WindowId'])['Targets']
        for tgt in tgts:
            t = tgt['Targets'][0]['Key'].split(':')
            v = tgt['Targets'][0]['Values'][0]
            if len(t) > 1 and t[0] == 'tag' and t[1] == tags['Key'] and v == tags['Value']:
                return True
    return False

def lambda_handler(event, context):
    volume = event['resources'][0].split('/')[1]
    if event['detail']['result'] == 'completed':
        # Find the instance (if any) that the modified volume is attached to.
        attach = ec2.describe_volumes(VolumeIds=[volume])['Volumes'][0]['Attachments']
        if attach:
            instance = attach[0]['InstanceId']
            info_filter = {'key': 'InstanceIds', 'valueSet': [instance]}
            info = ssm.describe_instance_information(
                InstanceInformationFilterList=[info_filter])['InstanceInformationList']
            if info:
                # Tag the instance so that the maintenance window picks it up.
                ec2.create_tags(Resources=[instance], Tags=[tags])
                if not find_viable_window():
                    print("WARNING: the proposed tags {0} : {1} are not a valid target "
                          "in any maintenance window\n"
                          "The changes will not be automatically applied".format(
                              tags['Key'], tags['Value']))
                print("{0} Instance {1} has been tagged for maintenance".format(
                    info[0]['PlatformName'], instance))
            else:
                raise Exception('Instance ' + instance + ' is not managed through SSM')
        else:
            raise Exception('Volume ' + volume + ' not currently attached to an instance')
    else:
        print("Change to the Volume {0} is not yet completed; "
              "instance will not be tagged for maintenance".format(volume))

Available Today
The Elastic Volumes feature is available today and you can start using it right now!

To learn about some important special cases and a few limitations on instance types, read Considerations When Modifying EBS Volumes.


PS – If you would like to design and build cool, game-changing storage services like EBS, take a look at our EBS Jobs page!


Source: EC2


AWS IPv6 Update – Global Support Spanning 15 Regions & Multiple AWS Services

We’ve been working to add IPv6 support to many different parts of AWS over the last couple of years, starting with Elastic Load Balancing, AWS IoT, AWS Direct Connect, Amazon Route 53, Amazon CloudFront, AWS WAF, and S3 Transfer Acceleration, all building up to last month’s announcement of IPv6 support for EC2 instances in Virtual Private Clouds (initially available for use in the US East (Ohio) Region).

Today I am happy to share the news that IPv6 support for EC2 instances in VPCs is now available in a total of fifteen regions, along with Application Load Balancer support for IPv6 in nine of those regions.

You can now build and deploy applications that can use IPv6 addresses to communicate with servers, object storage, load balancers, and content distribution services. In accord with the latest guidelines for IPv6 support from Apple and other vendors, your mobile applications can now make use of IPv6 addresses when they communicate with AWS.

IPv6 Now in 15 Regions
IPv6 support for EC2 instances in new and existing VPCs is now available in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), South America (São Paulo), Canada (Central), EU (Ireland), EU (Frankfurt), EU (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Mumbai), and AWS GovCloud (US) Regions and you can start using it today!

You can enable IPv6 from the AWS Management Console when you create a new VPC:
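
Here's a minimal boto3 sketch of the equivalent API call; AWS assigns the IPv6 CIDR block for you:

import boto3

ec2 = boto3.client('ec2')

# Create a VPC with an Amazon-provided IPv6 CIDR block.
vpc = ec2.create_vpc(
    CidrBlock='10.0.0.0/16',
    AmazonProvidedIpv6CidrBlock=True
)
print(vpc['Vpc']['Ipv6CidrBlockAssociationSet'])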

Application Load Balancer
Application Load Balancers in the US East (Northern Virginia), US West (Northern California), US West (Oregon), South America (São Paulo), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and AWS GovCloud (US) Regions now support IPv6 in dual-stack mode, making them accessible via IPv4 or IPv6 (we expect to add support for the remaining regions within a few weeks).

Simply enable the dualstack option when you configure the ALB and then make sure that your security groups allow or deny IPv6 traffic in accord with your requirements. Here’s how you select the dualstack option:

You can also enable this option by running the set-ip-address-type command or by making a call to the SetIpAddressType function. To learn more about this new feature, read the Load Balancer Address Type documentation.
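
Here's a minimal boto3 sketch of that call (the load balancer ARN is a placeholder):

import boto3

elbv2 = boto3.client('elbv2')

# Switch an existing Application Load Balancer to dual-stack (IPv4 + IPv6).
elbv2.set_ip_address_type(
    LoadBalancerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:'
                    'loadbalancer/app/my-alb/50dc6c495c0c9188',   # placeholder ARN
    IpAddressType='dualstack'
)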

IPv6 Recap
Here are the IPv6 launches that we made in the run-up to the launch of IPv6 support for EC2 instances in VPCs:

CloudFront, WAF, and S3 Transfer Acceleration – This launch let you enable IPv6 support for individual CloudFront distributions. Newly created distributions supported IPv6 by default and existing distributions could be upgraded with a couple of clicks (if you are using Route 53 alias records, you also need to add an AAAA record to the domain). With IPv6 support enabled, the new addresses will show up in the CloudFront Access Logs. The launch also let you use AWS WAF to inspect requests that arrive via IPv4 or IPv6 addresses and to use a new, dual-stack endpoint for S3 Transfer Acceleration.

Route 53 – This launch added support for DNS queries over IPv6 (support for the requisite AAAA records was already in place). A subsequent launch added support for Health Checks of IPv6 Endpoints, allowing you to monitor the health of the endpoints and to arrange for DNS failover.

IoT – This product launch included IPv6 support for message exchange between devices and AWS IoT.

S3 – This launch added support for access to S3 buckets via dual-stack endpoints.

Elastic Load Balancing – This launch added publicly routable IPv6 addresses for Elastic Load Balancers.

Direct Connect – This launch added support for single and dual-stack configurations on public and private VIFs (virtual interfaces).



Source: EC2
