Amazon EC2 Update – X1e Instances in Five More Sizes and a Stronger SLA

Earlier this year we launched the x1e.32xlarge instances in four AWS Regions with 4 TB of memory. Today, two months after that launch, customers are using these instances to run high-performance relational and NoSQL databases, in-memory databases, and other enterprise applications that are able to take advantage of large amounts of memory.

Five More Sizes of X1e
I am happy to announce that we are extending the memory-optimized X1e family with five additional instance sizes. Here’s the lineup:

| Model | vCPUs | Memory (GiB) | SSD Storage (GB) | Networking Performance |
| --- | --- | --- | --- | --- |
| x1e.xlarge | 4 | 122 | 120 | Up to 10 Gbps |
| x1e.2xlarge | 8 | 244 | 240 | Up to 10 Gbps |
| x1e.4xlarge | 16 | 488 | 480 | Up to 10 Gbps |
| x1e.8xlarge | 32 | 976 | 960 | Up to 10 Gbps |
| x1e.16xlarge | 64 | 1,952 | 1,920 | 10 Gbps |
| x1e.32xlarge | 128 | 3,904 | 3,840 | 25 Gbps |

The instances are powered by quad socket Intel® Xeon® E7 8880 processors running at 2.3 GHz, with large L3 caches and plenty of memory bandwidth. ENA networking and EBS optimization are standard, with up to 14 Gbps of dedicated throughput (depending on instance size) to EBS.

As part of today’s launch we are also making all sizes of X1e available in the Asia Pacific (Sydney) Region. This means that you can now launch them in On-Demand and Reserved Instance form in the US East (Northern Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions.
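If you prefer to launch these instances programmatically rather than from the console, here is a minimal boto3 sketch; the AMI ID and key pair are placeholders, and any size from the table above can be substituted for the instance type:

```python
import boto3

# A rough sketch: launch a single x1e.xlarge On-Demand instance in the newly
# added Asia Pacific (Sydney) Region. The AMI ID and key pair are placeholders.
ec2 = boto3.client("ec2", region_name="ap-southeast-2")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder AMI ID
    InstanceType="x1e.xlarge",     # any size from the table above works here
    KeyName="my-key-pair",         # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```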

Stronger EC2 SLA
I also have another piece of good news!

Effective immediately, we are increasing the Service Level Agreement (SLA) for both EC2 and EBS to 99.99%, for all Regions and all AWS customers. This change was made possible by our continuous investment in infrastructure and quality of service, along with our focus on operational excellence.

Jeff;



Use the New Visual Editor to Create and Modify Your AWS IAM Policies

Today, AWS Identity and Access Management (IAM) made it easier for you to create and modify your IAM policies by using a point-and-click visual editor in the IAM console. The new visual editor guides you through granting permissions for IAM policies without requiring you to write policies in JSON (although you can still author and edit policies in JSON, if you prefer). This update to the IAM console makes it easier to grant least privilege for the AWS service actions you select by listing all the supported resource types and request conditions you can specify. Policy summaries identify unrecognized services and actions and permissions errors when you import existing policies, and now you can use the visual editor to correct them. In this blog post, I give a brief overview of policy concepts and show you how to create a new policy by using the visual editor.

IAM policy concepts

You use IAM policies to define permissions for your IAM entities (groups, users, and roles). Policies are composed of one or more statements that include the following elements:

  • Effect: Determines if a policy statement allows or explicitly denies access.
  • Action: Defines AWS service actions in a policy (these typically map to individual AWS APIs).
  • Resource: Defines the AWS resources to which actions can apply. The defined resources must be supported by the actions defined in the Action element for permissions to be granted.
  • Condition: Defines when a permission is allowed or denied. The conditions defined in a policy must be supported by the actions defined in the Action element for the permission to be granted.

To grant permissions, you attach policies to groups, users, or roles. Now that I have reviewed the elements of a policy, I will demonstrate how to create an IAM policy with the visual editor.

How to create an IAM policy with the visual editor

Let’s say my human resources (HR) recruiter, Casey, needs to review files located in an Amazon S3 bucket for all the product manager (PM) candidates our HR team has interviewed in 2017. To grant this access, I will create and attach a policy to Casey that grants list and limited read access to all folders that begin with PM_Candidate in the pmrecruiting2017 S3 bucket. To create this new policy, I navigate to the Policies page in the IAM console and choose Create policy. Note that I could also use the visual editor to modify existing policies by choosing Import existing policy; however, for Casey, I will create a new policy.

Image of the "Create policy" button

On the Visual editor tab, I see a section that includes Service, Actions, Resources, and Request Conditions.

Image of the "Visual editor" tab

Select a service

To grant S3 permissions, I choose Select a service, type S3 in the search box, and choose S3 from the list.

Image of choosing "S3"

Select actions

After selecting S3, I can define actions for Casey by using one of four options:

  1. Filter actions in the service by using the search box.
  2. Type actions by choosing Add action next to Manual actions. For example, I can type List* to grant all S3 actions that begin with List.
  3. Choose access levels from List, Read, Write, Permissions management, and Tagging.
  4. Select individual actions by expanding each access level.

In the following screenshot, I choose options 3 and 4, and choose List and s3:GetObject from the Read access level.

Screenshot of options in the "Select actions" section

We introduced access levels when we launched policy summaries earlier in 2017. Access levels give you a way to categorize actions and help you understand the permissions in a policy. The following table gives you a quick overview of access levels.

| Access level | Description | Example actions |
| --- | --- | --- |
| List | Actions that allow you to see a list of resources | s3:ListBucket, s3:ListAllMyBuckets |
| Read | Actions that allow you to read the content in resources | s3:GetObject, s3:GetBucketTagging |
| Write | Actions that allow you to create, delete, or modify resources | s3:PutObject, s3:DeleteBucket |
| Permissions management | Actions that allow you to grant or modify permissions to resources | s3:PutBucketPolicy |
| Tagging | Actions that allow you to create, delete, or modify tags | s3:PutBucketTagging, s3:DeleteObjectVersionTagging |

Note: Some services support authorization based on tags.

Note: By default, all actions you choose will be allowed. To deny actions, choose Switch to deny permissions in the upper right corner of the Actions section.

As shown in the preceding screenshot, if I choose the question mark icon next to GetObject, I can see the description and supported resources and conditions for this action, which can help me scope permissions.

Screenshot of GetObject

The visual editor makes it easy to decide which actions I should select by providing in an integrated documentation panel the action description, supported resources or conditions, and any required actions for every AWS service action. Some AWS service actions have required actions, which are other AWS service actions that need to be granted in a policy for an action to run. For example, the AWS Directory Service action, ds:CreateDirectory, requires seven Amazon EC2 actions to be able to create a Directory Service directory.

Choose resources

In the Resources section, I can choose the resources on which actions can be taken. I choose Resources and see two ways that I can define or select resources:

  1. Define specific resources
  2. Select all resources

Specific is the default option, and only the applicable resources are presented based on the service and actions I chose previously. Because I want to grant Casey access to some objects in a specific bucket, I choose Specific and choose Add ARN under bucket.

Screenshot of Resources section

In the pop-up, I type the bucket name, pmrecruiting2017, and choose Add to specify the S3 bucket resource.

Screenshot of specifying the S3 bucket resource

To specify the objects, I choose Add ARN under object and grant Casey access to all objects starting with PM_Candidate in the pmrecruiting2017 bucket. The visual editor helps you build your Amazon Resource Name (ARN) and validates that it is structured correctly. For AWS services that are AWS Region specific, the visual editor prompts for AWS Region and account number.

The visual editor displays all applicable resources in the Resources section based on the actions I choose. For Casey, I defined an S3 bucket and object in the Resources section. In this example, when the visual editor creates the policy, it creates three statements. The first statement includes all actions that require a wildcard (*) for the Resource element because these actions do not support resource-level permissions. The second statement includes all S3 actions that support an S3 bucket. The third statement includes all actions that support an S3 object resource. The visual editor generates policy syntax for you based on supported permissions in AWS services.

Specify request conditions

For additional security, I specify a condition to restrict access to the S3 bucket from inside our internal network. To do this, I choose Specify request conditions in the Request Conditions section, and choose the Source IP check box. A condition is composed of a condition key, an operator, and a value. I choose aws:SourceIp for my Key so that I can control from where the S3 files can be accessed. By default, IpAddress is the Operator, and I set the Value to my internal network.

Screenshot of "Request conditions" section

To add other conditions, choose Add condition and choose Save changes after choosing the key, operator, and value.

After specifying my request condition, I am now able to review all the elements of these S3 permissions.

Screenshot of S3 permissions
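To make the three-statement structure described above concrete, here is an illustrative policy document expressed as a Python dictionary. It only approximates what the visual editor produces for this example; the exact action list will differ, and the source IP range is a placeholder for the internal network:

```python
import json

# Illustrative only: roughly the shape of the three statements the visual editor
# generates for this example. The 203.0.113.0/24 CIDR is a placeholder.
source_ip_condition = {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # 1. Actions that only accept a wildcard resource
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "*",
            "Condition": source_ip_condition,
        },
        {   # 2. Bucket-level actions scoped to the recruiting bucket
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::pmrecruiting2017",
            "Condition": source_ip_condition,
        },
        {   # 3. Object-level actions limited to the PM_Candidate prefix
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::pmrecruiting2017/PM_Candidate*",
            "Condition": source_ip_condition,
        },
    ],
}

print(json.dumps(policy_document, indent=2))
```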

Next, I can choose to grant permissions for another service by choosing Add new permissions (bottom left of preceding screenshot), or I can review and create this new policy. Because I have granted all the permissions Casey needs, I choose Review policy. I type a name and a description, and I review the policy summary before choosing Create policy. 

Now that I have created the policy, I attach it to Casey by choosing the Attached entities tab of the policy I just created. I choose Attach and choose Casey. I then choose Attach policy. Casey should now be able to access the interview files she needs to review.
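The same create-and-attach flow can also be scripted. Here is a hedged boto3 sketch that reuses the policy_document dictionary from the previous example and a hypothetical policy name:

```python
import json

import boto3

iam = boto3.client("iam")

# policy_document is the dictionary from the sketch above; the name is hypothetical
created = iam.create_policy(
    PolicyName="PMRecruiting2017Access",
    PolicyDocument=json.dumps(policy_document),
    Description="List and limited read access to PM candidate files",
)

# Attach the new managed policy to the IAM user from the example
iam.attach_user_policy(UserName="Casey", PolicyArn=created["Policy"]["Arn"])
```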

Summary

The visual editor makes it easier to create and modify your IAM policies by guiding you through each element of the policy. The visual editor helps you define resources and request conditions so that you can grant least privilege and generate policies. To start using the visual editor, sign in to the IAM console, navigate to the Policies page, and choose Create policy.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy



New – AWS OpsWorks for Puppet Enterprise

At last year’s AWS re:Invent we launched AWS OpsWorks for Chef Automate, which enabled customers to get their own Chef Automate server, managed by AWS. Building on customer feedback, we’re excited to bring Puppet Enterprise to OpsWorks today.

Puppet Enterprise allows you to automate provisioning, configuring, and managing instances through a puppet-agent deployed on each managed node. You can define a configuration once and apply it to thousands of nodes with automatic rollback and drift detection. AWS OpsWorks for Puppet Enterprise eliminates the need to maintain your own Puppet masters while working seamlessly with your existing Puppet manifests.

OpsWorks for Puppet Enterprise will manage the Puppet master server for you and take care of operational tasks like installation, upgrades, and backups. It also simplifies node registration and offers a useful starter kit for bootstrapping your nodes. More details below.

Creating a Managed Puppet Master

Creating a Puppet master in OpsWorks is simple. First navigate to the OpsWorks console Puppet section and click “Create Puppet Enterprise Server”.

On this first part of the setup you configure the region and EC2 instance type for your Puppet master. A c4.large can support up to 450 nodes while a c4.2xlarge can support 1600+ nodes. Your Puppet Enterprise server will be provisioned with the newest version of Amazon Linux (2017.09) and the most current version of Puppet Enterprise (2017.3.2).

On the next screen of the setup you can optionally configure an SSH key for connecting to your Puppet master. This is useful if you’ll be making any major customizations, but as a general practice it’s better to interact with Puppet through the client tools rather than directly on the instance itself.

Also on this page, you can set up an r10k repo to pull dynamic configurations.

In the advanced settings page you can select the usual deployment options around VPCs, security groups, IAM roles, and instance profiles. If you choose to have OpsWorks create the instance security group for you, do note that it will be open by default so it’s important to restrict access to this later.

Two components to pay attention to on this page are the maintenance window and backup configurations. When new minor versions of Puppet software become available, system maintenance is designed to update the minor version of Puppet Enterprise on your Puppet master automatically, as soon as it passes AWS testing. AWS performs extensive testing to verify that Puppet upgrades are production-ready and will deploy without disrupting existing customer environments. Automated backups allow you to store durable backups of your Puppet master in S3 and to restore from those backups at anytime. You can adjust the backup frequency and retention based on your business needs.
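The console flow above can also be driven through the OpsWorks CM API. The sketch below uses boto3; the account number, role ARNs, key pair, and r10k remote are placeholders, and you should check the OpsWorks for Puppet Enterprise documentation for the full set of required engine attributes (such as the admin password):

```python
import boto3

opsworks_cm = boto3.client("opsworkscm", region_name="us-east-1")

response = opsworks_cm.create_server(
    ServerName="my-puppet-master",
    Engine="Puppet",
    EngineModel="Monolithic",
    EngineVersion="2017",          # engine version label at launch; confirm current value
    InstanceType="c4.large",       # supports up to 450 nodes
    InstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-cm-ec2-role",  # placeholder
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-cm-service-role",              # placeholder
    KeyPair="my-key-pair",         # optional SSH key for the master
    EngineAttributes=[
        # Assumed attribute name for the optional r10k remote; see the docs for
        # the full list (an admin password attribute is also required).
        {"Name": "PUPPET_R10K_REMOTE", "Value": "git@github.com:example/control-repo.git"},
    ],
    BackupRetentionCount=10,                 # keep ten automated backups
    PreferredMaintenanceWindow="Mon:08:00",  # weekly maintenance window (UTC)
    PreferredBackupWindow="08:00",           # daily backup window (UTC)
)
print(response["Server"]["Status"])
```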

Using AWS OpsWorks for Puppet Enterprise

While your Puppet master is provisioning there are two helpful information boxes provided in the console.

You can download your sign-in credentials as well as sample userdata for installing the puppet-agent onto your Windows and Linux nodes. An important note here is that you’re able to manage your on-premises nodes as well, provided they have connectivity to your Puppet master.

Once your Puppet master is fully provisioned you can access the Puppet Enterprise http console and use Puppet as you normally would.

Useful Details

AWS OpsWorks for Puppet Enterprise is priced in Node Hours for your managed nodes. Prices start at $0.017 per node-hour and decrease with the number of nodes – you can see the full pricing page here. You’ll also pay for the underlying resources required to run your Puppet master. At launch, AWS OpsWorks for Puppet Enterprise is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions. Of course, everything you’ve seen in the console can also be accomplished through the AWS SDKs and CLI. You can get more information in the Getting Started Guide.

Randall



Swarm Orchestration in Docker Enterprise Edition

swarm orchestration

At DockerCon Europe, we announced that the next release of Docker Enterprise Edition (Docker EE) would include Kubernetes integration. We’re really excited about bringing Kubernetes to our customer base and continuing to increase our involvement within the community. But it’s equally important for us to note that Swarm orchestration is not going away. Swarm forms an integral cluster management component of the Docker EE platform; in addition, Swarm will operate side-by-side with Kubernetes in a Docker EE cluster, allowing customers to select, based on their needs, the most suitable orchestration tool at application deployment time.

Here are just a few reasons that Swarm is integral to the Docker EE solution:

  1. Support our existing customers and ecosystem integrations
  2. Provide customers the flexibility in choice of orchestrators
  3. Provide secure and highly available clustering architecture

Existing Customers and Ecosystem Integrations

Docker now has hundreds of Docker EE customers who have standardized on Swarm orchestration. In fact, at our Customer Summit during DockerCon, all of the customers stated that they intend to continue using Swarm even with the Kubernetes announcement. Having both Swarm and Kubernetes options available is definitely a plus for some of these customers if they want to add additional choice, but we remain committed to our customers who continue to build and extend their Swarm environments.

In addition, as a platform solution, it is critical that Docker EE works with other IT solutions in the wider ecosystem, such as solutions for monitoring and logging, or infrastructure for storage and networking. As a result of this, we have a “batteries included but swappable” strategy that encourages integration with 3rd party tools through our certification process. Many of these integrations have been certified for Swarm, and we are committed to engaging with technology partners creating these integrations and our customers who benefit from these joint solutions.

Simple and Powerful Orchestration

For new customers or those just getting started with containers, Swarm provides a gentle learning curve and powerful capabilities that make it easy to get started with production-ready container deployments. Here are some of the areas of Swarm orchestration that may be attractive to those new to containers or looking for a straightforward solution:

Simple and straightforward application deployment. You can get a highly available Swarm-based application up and running via a quick `docker service create` CLI command or the Docker EE UI’s service creation wizard. Swarm uses sensible defaults to speed up application deployment, while more advanced users can customize these settings as parameters in the API, CLI, or UI.
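For readers who script against the Docker API, here is a rough equivalent of that quick service creation using the Docker SDK for Python; the image, service name, and published port are placeholders:

```python
import docker

# A rough plain-Swarm equivalent of `docker service create` using the Docker SDK
# for Python. The image, service name, and published port are placeholders.
client = docker.from_env()

service = client.services.create(
    "nginx:latest",                                              # image
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),     # three replicas
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),   # publish 8080 -> 80
)
print(service.id)
```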

Batteries included networking. Docker Swarm includes capabilities for integrated VIP-based service discovery and load balancing, and Docker EE in particular makes hostname routing to applications easy through the use of labels and a batteries-included proxy service. Networks are first class objects in the API, CLI, and UI, so it’s easy to segment applications by just attaching services to each virtual network. And finally, adding network encryption is as simple as clicking a checkbox or adding a CLI flag during network creation.
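As a sketch of what that looks like outside the UI, the following uses the Docker SDK for Python to create an encrypted overlay network and attach a service to it. The names are placeholders, and the `encrypted` driver option is assumed to mirror the `docker network create -o encrypted` CLI flag:

```python
import docker

# Sketch: an attachable overlay network with encryption turned on, then a
# service attached to it by name.
client = docker.from_env()

network = client.networks.create(
    "app-net",
    driver="overlay",
    attachable=True,
    options={"encrypted": "true"},   # driver option; assumed to match the CLI flag
)

cache = client.services.create("redis:alpine", name="cache", networks=["app-net"])
```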

Production support for multiple infrastructures. Out of the box, Docker EE-based Swarm provides generally available production support for a number of infrastructures, including Linux on x86, Linux on IBM Power and Z mainframes, and Microsoft Windows Server 2016.

Simple yet powerful cluster management capabilities. Docker EE leverages Swarm for its powerful node management capabilities and to make setting up out-of-the-box highly available production clusters a snap, not just for Swarm workloads but for Kubernetes workloads as well. Let’s dive into this in more detail.

Secure and Highly Available Clustering

Regardless of whether you end up using Swarm or Kubernetes for application orchestration, Docker EE leverages the clustering capabilities of Swarm to provide a secure and highly available, enterprise-ready container environment.

Cluster Creation and Adding Nodes

With Docker Swarm, creating a cluster is as simple as running `docker swarm init` on the first node. Docker EE builds upon this functionality to provide a simple one-line install command to bring the cluster manager up and running. Adding new nodes to the cluster is as simple as copying and pasting the `docker swarm join` command with the provided token from the Docker EE UI into each node, and Docker EE automatically downloads and configures the components necessary to get the node linked up to the cluster. In both cases Docker EE automatically installs Kubernetes components, providing an easy way to get a Kubernetes-ready cluster up and running in any environment.
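The plain-Swarm version of that bootstrap can also be scripted with the Docker SDK for Python. This is a hedged sketch: the addresses are placeholders, and each client is assumed to point at the Docker daemon on the corresponding node:

```python
import docker

# On the first node: initialize the swarm (equivalent to `docker swarm init`)
manager = docker.from_env()
manager.swarm.init(advertise_addr="10.0.0.10")   # placeholder manager address

# Read the worker join token to hand out to new nodes
worker_token = manager.swarm.attrs["JoinTokens"]["Worker"]

# On each additional node: join the cluster (equivalent to `docker swarm join`).
# The base_url assumes that node's Docker API is reachable remotely; otherwise
# run docker.from_env() locally on that machine.
worker = docker.DockerClient(base_url="tcp://10.0.0.11:2375")  # placeholder
worker.swarm.join(remote_addrs=["10.0.0.10:2377"], join_token=worker_token)
```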

High Availability

High Availability (in the form of redundant cluster managers) is a necessity for any production enterprise environment. Creating a highly available Kubernetes cluster can be a daunting task, involving extending the distributed key value store, installing new components, and manually replicating certificates across new nodes. Docker EE leverages Swarm clustering to make high availability as simple as promoting worker nodes to managers via a single UI click or API call. Behind the scenes, Docker EE ensures that the right system components (orchestration controllers, certificates, distributed key value store, user authentication/authorization containers, etc.)  are installed and replicated across the manager nodes. With just a few simple clicks, Docker EE uses Swarm to give you a highly available Kube cluster right out of the box.
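In plain Swarm terms, the promotion that Docker EE exposes as a single click looks roughly like this with the Docker SDK for Python; the node name is a placeholder:

```python
import docker

client = docker.from_env()           # must be pointed at an existing manager

# Promote a worker to manager (equivalent to `docker node promote worker-2`)
node = client.nodes.get("worker-2")  # placeholder node name
spec = node.attrs["Spec"]            # current spec: Role, Availability, Labels
spec["Role"] = "manager"
node.update(spec)
```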

Secure Clustering with Auto Certificate Rotation

A core principle for Docker is creating a zero-configuration, secure-by-default, out-of-the-box experience for the Docker platform. Docker Swarm clustering comes with mutually authenticated TLS, providing authentication, authorization and encryption to the communications of every node. Every node joining the cluster will automatically be issued a new certificate with a randomly generated ID, and these certificates can be automatically rotated.

Conclusion

The next release of Docker EE will allow customers to run Swarm or Kubernetes orchestration side-by-side, giving our customers the freedom to run the best orchestration solution for their needs. We are excited to deliver an enterprise-grade container platform that extends the value of Docker EE to Kubernetes workloads while still continuing to support Swarm, our customers and our ecosystem.

If you’re interested in the upcoming Beta, be sure to sign up at https://www.docker.com/kubernetes.  

Additional Resources:



Docker Enterprise Edition Certified as Kubernetes Conformant

Yesterday, the Cloud Native Computing Foundation® (CNCF®) announced that Docker Enterprise Edition (Docker EE) passed the Certified Kubernetes™ conformance program. Based on the upcoming release of Docker EE that was demonstrated onstage at DockerCon Europe, this certification guarantees that all Kubernetes APIs function as specified to deliver a consistent and portable Kubernetes experience within Docker EE.

Docker EE Certified Kubernetes

In addition to a fully conformant, unmodified Kubernetes experience, users of Docker EE will also have access to the advanced capabilities of the Docker platform including simple and powerful clustering, a secure end-to-end supply chain, and validation to run on all major Linux distributions, Windows, IBM mainframe, and several leading public clouds. By integrating Kubernetes into Docker EE, we simplify and advance the management of Kubernetes for enterprise IT.

Conformance Testing

Docker EE was certified against Kubernetes v1.8 by passing a test suite overseen by SIG Architecture. The test gives end users confidence that Docker EE delivers a high level of common functionality with the main Kubernetes distribution, and these results are validated by the CNCF.

As a Certified Kubernetes platform, we are also promising to release new versions of Docker EE with Kubernetes to ensure that customers can take advantage of the rapid pace of ongoing development of the Kubernetes project.

Join the Beta

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, sign up today for our upcoming Beta: https://www.docker.com/kubernetes

Learn how you can bring a secure, enterprise container platform that supports both Docker Swarm and Kubernetes side-by-side:



Integrating Docker EE Into Société Générale’s Existing Enterprise IT Systems

Société Générale is a 153-year-old French multinational bank that believes technology and innovation are key to enriching the customer experience and advancing economic development. A few years ago, the bank started a project to define their next-generation application platform that would help them get 80% of their applications running in the cloud by 2020. Société Générale chose Docker Enterprise Edition (Docker EE) to be the foundation of their application platform and began working with it 15 months ago. This year at DockerCon Europe, Stephan Dechoux, DevOps architect, and Thomas Boussardon, Middleware Specialist, shared their journey integrating Docker EE into Société Générale’s IT systems.

You can watch their breakout session here:

A New Platform For Today and Tomorrow

Société Générale has a diverse application portfolio that includes many different types of applications, including legacy monolithic apps, SOA, distributed apps, and REST APIs. The bank is also a global organization with teams and data centers around the world. A primary goal was to deliver a new application platform to improve time-to-market and lower costs, while accelerating innovation. Initially, Société Générale considered off-the-shelf PaaS solutions, but realized that these were better suited for greenfield applications and that they needed to immediately support the thousands of applications already in existence.

On the other hand, the platform also needed to support Société Générale’s future projects. For example, while they have a sizeable private cloud, they wanted to make sure that the solution could be extensible to public clouds like Azure or AWS and support emerging technologies and future use cases such as  machine learning and blockchain technologies.

Société Générale came to the conclusion that Docker EE would support both their existing apps as well as new apps while enabling them to integrate their preferred systems and leverage their existing infrastructure.

A Step-by-Step Plan for Success

Docker EE is the foundation of the Société Générale application platform, delivering infrastructure independence and a common way to support diverse applications. To successfully roll out and integrate Docker EE to their existing IT systems, they planned a step-by-step approach.

They started by defining the existing technology and application stacks that were in use and prioritizing the integration of Docker EE with their core CI/CD pipeline tools, including Jenkins, GitHub, and Nexus.

Docker build

With these pieces in place, Société Générale could begin migrating stateful applications to the platform. To support these stateful, primarily legacy applications, Docker EE was integrated with existing NetApp storage using the new volume plug-ins. Société Générale began running the first set of applications in Docker EE approximately 9 months ago.

Stateful containers

With these apps running in Docker EE, Thomas and Stephan added logging and monitoring solutions, not only to see what was happening inside the containers but also to centralize the information in a single dashboard.

Today, 15 months after they began this project, they are in Phase 3. There are 10 applications running in production on the Docker EE platform with another 50 in development. There are approximately 400 developers working with the platform, and it has generated enthusiasm across the organization with different teams all wanting to work on it.

Docker EE

A Future-Proof Platform

Now that existing legacy applications are easier to update and maintain, Société Générale is looking to bring more microservices into the picture and expand the platform to support new cloud-native applications and emerging technologies. Over the next few months, Société Générale will also be working across multiple clouds (private and public) with the plan to operate everything, as Stephan put it, in “a big giant cluster.”

To Stephan and Thomas, the success of Docker EE in Société Générale came from prioritizing both the integration points to the platform as well as the initial selection of key applications. By starting small, the bank could build on their successes and gather more momentum behind the platform. With their step-by-step approach, they were able to add more capabilities over time while educating more teams on the platform. Today, Docker EE is the platform of choice for the entire organization.

Additional Resources:



THE ENTERPRISE IT CHECKLIST FOR DOCKER OPERATIONS

At Docker, we believe the best insights come from the developers and IT pros using the Docker platform every day. Since the launch of Docker Enterprise Edition, we learned three things from our customers.

  1. First, a top goal in enterprise IT is to deliver value to customers (internal business units or external clients)…and to do so fast.
  2. Second, most enterprises believe that Docker is at the center of their IT platform.
  3. Finally, most enterprises’ biggest challenge is moving their containerized applications to production in time to prove value.

My DockerCon talk focused on addressing the third item, which seems to be a critical one for many of our customers.

In our recent customer engagements, we’ve seen a pattern of common challenges when designing and deploying Docker in an enterprise environment. Particularly, customers are struggling to find best practices to speed up their move to production. To address some of these common challenges, we put together a production readiness checklist (https://github.com/nicolaka/checklist) for Docker Enterprise Edition. This list was discussed thoroughly during my DockerCon EU 2017 session. Here’s a video of that talk:

I cover 10 key topics (shown below) that a typical enterprise should work through when deploying Docker in production. For each topic, I go over guidelines and best practices that we learned from past deployments, and from using Docker Enterprise Edition ourselves at Docker to run our SaaS applications like Docker Hub and Store.

Docker Operations

Although each customer has a unique set of goals and its own challenges in reaching them, at Docker we aim to produce a set of helpful resources in the form of checklists, reference architectures, and technical solutions to help you with your journey to run Docker in production. Head over to success.docker.com to see for yourself!

To learn more about Docker Enterprise Edition:



Updated AWS SOC Reports Are Now Available with 19 Additional Services in Scope

AICPA SOC logo

Newly updated reports are available for AWS System and Organization Control Report 1 (SOC 1), formerly called AWS Service Organization Control Report 1, and AWS SOC 2: Security, Availability, & Confidentiality Report. You can download both reports for free and on demand in the AWS Management Console through AWS Artifact. The updated AWS SOC 3: Security, Availability, & Confidentiality Report also was just released. All three reports cover April 1, 2017, through September 30, 2017.

With the addition of the following 19 services, AWS now supports 51 SOC-compliant AWS services and is committed to increasing the number:

  • Amazon API Gateway
  • Amazon Cloud Directory
  • Amazon CloudFront
  • Amazon Cognito
  • Amazon Connect
  • AWS Directory Service for Microsoft Active Directory
  • Amazon EC2 Container Registry
  • Amazon EC2 Container Service
  • Amazon EC2 Systems Manager
  • Amazon Inspector
  • AWS IoT Platform
  • Amazon Kinesis Streams
  • AWS Lambda
  • AWS Lambda@Edge
  • AWS Managed Services
  • Amazon S3 Transfer Acceleration
  • AWS Shield
  • AWS Step Functions
  • AWS WAF

With this release, we also are introducing a separate spreadsheet, eliminating the need to extract the information from multiple PDFs.

If you are not yet an AWS customer, contact AWS Compliance to access the SOC Reports.

– Chad



Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

 

Anh Ho Viet

Anh Ho Viet is the founder of AWS Vietnam User Group, Co-founder & CEO of OSAM, an AWS Consulting Partner in Vietnam, an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM goes beyond being a cloud service provider; the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has reached more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh joined as lead content creator of a program called “Cloud Computing Lectures for Universities” which includes translating AWS documentation & news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.

 

Thorsten Höger

Thorsten Höger is CEO and Cloud consultant at Taimos, where he is advising customers on how to use AWS. Being a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before becoming self-employed, Thorsten worked as a developer and CTO of Germany’s first private bank running on AWS. With his colleagues, he migrated the core banking system to the AWS platform in 2013. Since then, he has organized the AWS user group in Stuttgart and is a frequent speaker at Meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten maintains or contributes to several projects on GitHub, like test frameworks for AWS Lambda and Amazon Alexa, and developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.

 

Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high concurrency web architecture. Before she helped run BootDev, she was working at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events including AWS Tech Meetups and happy hours, gathering AWS talent together to communicate the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event forecasts, and event reports to Weibo, Twitter, Meetup.com, and WeChat (which now has over 2000 official account followers).

 

Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, an AWS Cloud and open source focused company (the company started with an open source motto). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013, and in 2014 he trained a dedicated cloud team and began offering full support for AWS cloud services as an AWS Standard Consulting Partner. He always works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014 and as of now 12 Meetups have been conducted, focusing on various AWS technologies. The Meetup has quickly grown to include over 2000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, with over 1500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks around AWS introduction and awareness, at various organizations, as well as at colleges and universities. He has also been active in working with startups, presenting AWS services overviews and discussing how startups can benefit the most from using AWS services.

Nilesh is also a trainer in Red Hat Linux technologies and AWS Cloud technologies.

 

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.



Amazon ElastiCache Update – Online Resizing for Redis Clusters

Amazon ElastiCache makes it easy for you to set up a fast, in-memory data store and cache. With support for the two most popular open source offerings (Redis and Memcached), ElastiCache supports the demanding needs of game leaderboards, in-memory analytics, and large-scale messaging.

Today I would like to tell you about an important addition to Amazon ElastiCache for Redis. You can already create clusters with up to 15 shards, each responsible for storing keys and values for a specific set of slots (each cluster has exactly 16,384 slots). A single cluster can expand to store 3.55 terabytes of in-memory data while supporting up to 20 million reads and 4.5 million writes per second.

Now with Online Resizing
You can now adjust the number of shards in a running ElastiCache for Redis cluster while the cluster remains online and responding to requests. This gives you the power to respond to changes in traffic and data volume without having to take the cluster offline or to start with an empty cache. You can also rebalance a running cluster to uniformly redistribute slot space without changing the number of shards.

When you initiate a resharding or rebalancing operation, ElastiCache for Redis starts by preparing a plan that will result in an even distribution of slots across the shards in the cluster. Then it transfers slots across shards, moving many in parallel for efficiency. This all happens while the cluster continues to respond to requests, with a modest impact on write throughput for writes to a slot that is in motion. The migration rate depends on the instance type, network speed, and read/write traffic to the slots, and is generally about 1 gigabyte per minute.

The resharding and rebalancing operations apply to Redis clusters that were created with Cluster Mode enabled:

Resharding a Cluster
In general, you will know that it is time to expand a cluster via resharding when it starts to face significant memory pressure or when individual nodes are becoming bottlenecks. You can watch the cluster’s CloudWatch metrics to identify each situation:

Memory Pressure – FreeableMemory, SwapUsage, BytesUsedForCache.

CPU Bottleneck – CPUUtilization, CurrConnections, NewConnections.

Network Bottleneck – NetworkBytesIn, NetworkBytesOut.

You can use CloudWatch Dashboards to monitor these metrics, and CloudWatch Alarms to automate the resharding process.
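As an example of automating that monitoring, here is a hedged boto3 sketch that creates an alarm on low FreeableMemory for one node of the cluster; the node ID, threshold, and SNS topic are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average FreeableMemory for one node stays low; the node ID,
# threshold, and SNS topic ARN are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="redis-low-freeable-memory",
    Namespace="AWS/ElastiCache",
    MetricName="FreeableMemory",
    Dimensions=[{"Name": "CacheClusterId", "Value": "my-redis-cluster-0001-001"}],
    Statistic="Average",
    Period=300,                           # five-minute periods
    EvaluationPeriods=3,
    Threshold=100 * 1024 * 1024,          # 100 MiB, illustrative only
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```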

To reshard a Redis cluster from the ElastiCache Dashboard, click on the cluster to visit the detail page, and then click on the Add shards button:

Enter the number of shards to add and (optionally) the desired Availability Zones, then click on Add:

The status of the cluster will change to modifying and the resharding process will begin. It can take anywhere from a few minutes to several hours, as indicated above. You can track the progress on the detail page for the cluster:

You can see the slots moving from shard to shard:

You can also watch the Events for the cluster:

During the resharding you should avoid the use of the KEYS and SMEMBERS commands, as well as compute-intensive Lua scripts in order to moderate the load on the cluster shards. You should avoid the FLUSHDB and FLUSHALL commands entirely; using them will interrupt and then abort the resharding process.

The status of each shard will return to available when the process is complete:

The same process takes place when you delete shards.
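You can also trigger resharding through the API instead of the console. Here is a minimal boto3 sketch, assuming a cluster-mode-enabled replication group named my-redis-cluster:

```python
import boto3

elasticache = boto3.client("elasticache")

# Reshard online by setting the desired total number of shards (node groups).
# The replication group ID and target count are placeholders.
elasticache.modify_replication_group_shard_configuration(
    ReplicationGroupId="my-redis-cluster",
    NodeGroupCount=6,          # total shards after the resharding completes
    ApplyImmediately=True,
)
```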

Rebalancing Slots
You can perform this operation by heading to the cluster’s detail page and clicking on Rebalance Slot Distribution:

Things to Know
Here are a couple of things to keep in mind about this new feature:

Engine Version – Your cluster must be running version 3.2.10 of the Redis engine.

Migration Size – Slots that contain items that are larger than 256 megabytes after serialization are not migrated.

Cluster Endpoint – The cluster endpoint does not change as a result of a resharding or rebalancing.

Available Now
This feature is available now and you can start using it today.

Jeff;

 

