Plan Your DockerCon 2018 Conference Experience: Everything an Enterprise Architect Needs to Know

DockerCon San Francisco 2018 is just around the corner, and we're here to help Enterprise Architects learn what Docker can do for them. DockerCon isn't just for developers; we want to help you find the sessions and experiences we've created with this role in mind:

In 2015, I attended DockerCon for the first time. I was sitting in a chair and listening to the amazing stories and ideas presented by speakers at the conference, which set off a chain of events that led to today. I feel privileged, and am really looking forward to being on stage and sharing our transformational journey to inspire the people who would sit in that chair. 
Alex Iankoulski, Principal Software Architect, Baker Hughes GE

As you build your DockerCon agenda this year, the first thing to notice is the "Journey" theme that will help guide you through the conference. Whether you're just "Getting Started" with containerization or you want to hear about "Innovation" using the Docker container platform for data science, AI, machine learning, and IoT, we have sessions that will be just right for you.

You will also find content we've curated specifically for Architects. At DockerCon US we have:

  • 56 breakout sessions geared towards Architects. You obviously won't be able to see them all live, but here are a few highlights:
    • Building a Center of Excellence panel discussion with MetLife, PayPal, and Splunk
    • Building Your Production Tech Stack for the Docker container platform
    • Production SecOps with Kubernetes in Docker
  • Get matched with the peers and thought leaders you want to learn from most, using our Hallway Track app
  • Explore the Expo Hall and meet with our partners to see how other technologies can be integrated with the Docker platform

I want to give a special mention to the Hallway Track, because it's different from what you'll find at your typical tech conference. Networking is a key benefit of tech conferences, but random interactions don't always give you what you need. Our Hallway Track is where you can learn directly from your peers and subject matter experts, 1-on-1 or in a small group. Have a burning question you haven't been able to get answered? Look in the Hallway Track to see if somebody else wants to discuss it, too. If not, create a request to find somebody to meet with. It's like your own custom breakout session!

Whether this is your first DockerCon or your fifth, the conference is the best place to see and meet with all of the leaders in the container industry. You’ll learn about the latest trends and technologies and how they fit together as well as gain insight into how other customers are using the Docker platform in their own container journeys.

Start planning your DockerCon experience today using the links below and we’ll see you in San Francisco!


The post Plan Your DockerCon 2018 Conference Experience: Everything an Enterprise Architect Needs to Know appeared first on Docker Blog.

Source: Docker


AWS GDPR Data Processing Addendum – Now Part of Service Terms

Today, we’re happy to announce that the AWS GDPR Data Processing Addendum (GDPR DPA) is now part of our online Service Terms. This means all AWS customers globally can rely on the terms of the AWS GDPR DPA, which will apply automatically from May 25, 2018, whenever they use AWS services to process personal data under the GDPR. The AWS GDPR DPA also includes EU Model Clauses, which were approved by the European Union (EU) data protection authorities, known as the Article 29 Working Party. This means that AWS customers wishing to transfer personal data from the European Economic Area (EEA) to other countries can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA.

As we approach the GDPR enforcement date this week, this announcement is an important GDPR compliance component for us, our customers, and our partners. All customers that are using cloud services to process personal data will need a data processing agreement in place with their cloud services provider if they are to comply with the GDPR. As early as April 2017, AWS announced a GDPR-ready DPA for its customers, making it available over a year before the May 25, 2018 enforcement date. Now, with the DPA terms included in our online Service Terms, there is no extra engagement needed by our customers and partners to be compliant with the GDPR requirement for data processing terms.

The AWS GDPR DPA also provides our customers with a number of other important assurances, such as the following:

  • AWS will process customer data only in accordance with customer instructions.
  • AWS has implemented and will maintain robust technical and organizational measures for the AWS network.
  • AWS will notify its customers of a security incident without undue delay after becoming aware of the security incident.
  • AWS will make available certificates issued in relation to the ISO 27001 certification, the ISO 27017 certification, and the ISO 27018 certification to further help customers and partners in their own GDPR compliance activities.

Customers who have already signed an offline version of the AWS GDPR DPA can continue to rely on that GDPR DPA. By incorporating our GDPR DPA into the AWS Service Terms, we are simply extending the terms of our GDPR DPA to all customers globally who will require it under GDPR.

AWS GDPR DPA is only part of the story, however. We are continuing to work alongside our customers and partners to help them on their journey towards GDPR compliance.

If you have any questions about the GDPR or the AWS GDPR DPA, please contact your account representative, or visit the AWS GDPR Center.


Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Source: AWS Security


EC2 Instance Update – C5 Instances with Local NVMe Storage (C5d)

As you can see from my EC2 Instance History post, we add new instance types on a regular and frequent basis. Driven by increasingly powerful processors and designed to address an ever-widening set of use cases, the size and diversity of this list reflects the equally diverse group of EC2 customers!

Near the bottom of that list you will find the new compute-intensive C5 instances. With a 25% to 50% improvement in price-performance over the C4 instances, the C5 instances are designed for applications like batch and log processing, distributed and/or real-time analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. Some of these applications can benefit from access to high-speed, ultra-low latency local storage. For example, video encoding, image manipulation, and other forms of media processing often necessitate large amounts of I/O to temporary storage. While the input and output files are valuable assets and are typically stored as Amazon Simple Storage Service (S3) objects, the intermediate files are expendable. Similarly, batch and log processing runs in a race-to-idle model, flushing volatile data to disk as fast as possible in order to make full use of compute resources.

New C5d Instances with Local Storage
In order to meet this need, we are introducing C5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for the applications that I described above, as well as others that you will undoubtedly dream up! Here are the specs:

Instance Name    vCPUs    RAM        Local Storage            EBS Bandwidth      Network Bandwidth
c5d.large        2        4 GiB      1 x 50 GB NVMe SSD       Up to 2.25 Gbps    Up to 10 Gbps
c5d.xlarge       4        8 GiB      1 x 100 GB NVMe SSD      Up to 2.25 Gbps    Up to 10 Gbps
c5d.2xlarge      8        16 GiB     1 x 225 GB NVMe SSD      Up to 2.25 Gbps    Up to 10 Gbps
c5d.4xlarge      16       32 GiB     1 x 450 GB NVMe SSD      2.25 Gbps          Up to 10 Gbps
c5d.9xlarge      36       72 GiB     1 x 900 GB NVMe SSD      4.5 Gbps           10 Gbps
c5d.18xlarge     72       144 GiB    2 x 900 GB NVMe SSD      9 Gbps             25 Gbps

Other than the addition of local storage, the C5 and C5d share the same specs. Both are powered by 3.0 GHz Intel Xeon Platinum 8000-series processors, optimized for EC2 and with full control over C-states on the two largest sizes, giving you the ability to run two cores at up to 3.5 GHz using Intel Turbo Boost Technology.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.
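Putting the naming behavior above into practice, here is a minimal sketch of how a startup script might enumerate the local NVMe devices after boot. The glob pattern reflects the /dev/nvme* naming described earlier; on an instance with EBS volumes also exposed via NVMe, you would need an extra step (for example, inspecting the device's vendor data) to tell instance store volumes apart.

```python
import glob

def local_nvme_devices():
    """List NVMe block devices visible to the guest OS (whole disks
    only; partitions such as /dev/nvme0n1p1 are excluded)."""
    return sorted(glob.glob("/dev/nvme[0-9]n[0-9]"))

# On a c5d.18xlarge this would typically show the two instance store
# volumes plus the root volume; on other machines it may be empty.
print(local_nvme_devices())
```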

Available Now
C5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent C5 instances.


PS – We will be adding local NVMe storage to other EC2 instance types in the months to come, so stay tuned!

Source: New feed


An easier way to control access to AWS resources by using the AWS organization of IAM principals

AWS Identity and Access Management (IAM) now makes it easier for you to control access to your AWS resources by using the AWS organization of IAM principals (users and roles). For some services, you grant permissions using resource-based policies to specify the accounts and principals that can access the resource and what actions they can perform on it. Now, you can use a new condition key, aws:PrincipalOrgID, in these policies to require all principals accessing the resource to be from an account in the organization. For example, let’s say you have an Amazon S3 bucket policy and you want to restrict access to only principals from AWS accounts inside of your organization. To accomplish this, you can define the aws:PrincipalOrgID condition and set the value to your organization ID in the bucket policy. Your organization ID is what sets the access control on the S3 bucket. Additionally, when you use this condition, policy permissions automatically apply to new accounts you add to the organization, without requiring an update to the policy.

In this post, I walk through the details of the new condition and show you how to restrict access to only principals in your organization using S3.

Condition concepts

Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.
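To make the three parts of a condition concrete, here is a hypothetical statement built as a Python dictionary (the account, action, and instance type are illustrative, not from any real policy):

```python
import json

# A hypothetical statement showing the three parts of a condition:
# key (ec2:InstanceType), operator (StringEquals), and value (t2.micro).
statement = {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": "*",
    "Condition": {
        "StringEquals": {"ec2:InstanceType": "t2.micro"}
    },
}

print(json.dumps(statement, indent=4))
```

Building policies as dictionaries like this also makes them easy to generate and validate in deployment tooling before attaching them to a resource.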

Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.

The aws:PrincipalOrgID condition key

You can use this condition key to apply a filter to the Principal element of a resource-based policy. You can use any string operator, such as StringLike, with this condition and specify your AWS organization ID as its value.

Condition key        Description                                                                                     Operator(s)             Value
aws:PrincipalOrgID   Validates if the principal accessing the resource belongs to an account in your organization.   All string operators    Any AWS organization ID

Example: Restrict access to only principals from my organization

Let’s consider an example where I want to give specific IAM principals in my organization direct access to my S3 bucket, 2018-Financial-Data, that contains sensitive financial information. I have multiple accounts in my AWS organization, and only some IAM users from these accounts need access to this financial report.

To grant this access, I author a resource-based policy for my S3 bucket as shown below. In this policy, I list the individuals who I want to grant access. For the sake of this example, let’s say that while doing so, I accidentally specify an incorrect account ID. This means a user named Steve, who is not in an account in my organization, can now access my financial report. To require the principal account to be in my organization, I add a condition to my policy using the global condition key aws:PrincipalOrgID. This condition requires that only principals from accounts in my organization can access the S3 bucket. This means that although Steve is one of the principals in the policy, he can’t access the financial report because the account that he is a member of doesn’t belong to my organization.

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "AllowPutObject",
            "Effect": "Allow",
            "Principal": ["arn:aws:iam::094697565664:user/Casey",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::2018-Financial-Data/*",
            "Condition": {"ForAnyValue:StringLike": 
                             {"aws:PrincipalOrgID": [ "o-xxxxxxxxxx" ]}

In the policy above, I specify the principals that I grant access to using the principal element of the statement. Next, I add s3:GetObject as the action and 2018-Financial-Data/* as the resource to grant read access to my S3 bucket. Finally, I add the new condition key aws:PrincipalOrgID and specify my organization ID in the condition element of the statement to make sure only the principals from the accounts in my organization can access this bucket.
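The effect of the condition can be sketched as a simple membership check (an illustration of the evaluation logic, not how IAM implements it internally; the organization IDs are placeholders):

```python
ORG_ID = "o-xxxxxxxxxx"  # placeholder organization ID from the bucket policy

def org_condition_allows(principal_org_id):
    """Mimic the gate that aws:PrincipalOrgID adds: the caller's
    organization ID must match the one named in the policy condition."""
    return principal_org_id == ORG_ID

# Casey's account belongs to the organization, so his requests pass the
# condition; Steve's account does not, so his requests are denied even
# though he is listed in the Principal element.
print(org_condition_allows("o-xxxxxxxxxx"))   # True
print(org_condition_allows("o-zzzzzzzzzz"))   # False
```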


You can now use the aws:PrincipalOrgID condition key in your resource-based policies to more easily restrict access to IAM principals from accounts in your AWS organization. For more information about this global condition key and policy examples using aws:PrincipalOrgID, read the IAM documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Source: Aws Security


AWS IoT 1-Click – Use Simple Devices to Trigger Lambda Functions

We announced a preview of AWS IoT 1-Click at AWS re:Invent 2017 and have been refining it ever since, focusing on simplicity and a clean out-of-box experience. Designed to make IoT available and accessible to a broad audience, AWS IoT 1-Click is now generally available, along with new IoT buttons from AWS and AT&T.

I sat down with the dev team a month or two ago to learn about the service so that I could start thinking about my blog post. During the meeting they gave me a pair of IoT buttons and I started to think about some creative ways to put them to use. Here are a few that I came up with:

Help Request – Earlier this month I spent a very pleasant weekend at the HackTillDawn hackathon in Los Angeles. As the participants were hacking away, they occasionally had questions about AWS, machine learning, Amazon SageMaker, and AWS DeepLens. While we had plenty of AWS Solution Architects on hand (decked out in fashionable & distinctive AWS shirts for easy identification), I imagined an IoT button for each team. Pressing the button would alert the SA crew via SMS and direct them to the proper table.

Camera Control – Tim Bray and I were in the AWS video studio, prepping for the first episode of Tim’s series on AWS Messaging. Minutes before we opened the Twitch stream I realized that we did not have a clean, unobtrusive way to ask the camera operator to switch to a closeup view. Again, I imagined that a couple of IoT buttons would allow us to make the request.

Remote Dog Treat Dispenser – My dog barks every time a stranger opens the gate in front of our house. While it is great to have confirmation that my Ring doorbell is working, I would like to be able to press a button and dispense a treat so that Luna stops barking!

Homes, offices, factories, schools, vehicles, and health care facilities can all benefit from IoT buttons and other simple IoT devices, all managed using AWS IoT 1-Click.

All About AWS IoT 1-Click
As I said earlier, we have been focusing on simplicity and a clean out-of-box experience. Here’s what that means:

Architects can dream up applications for inexpensive, low-powered devices.

Developers don’t need to write any device-level code. They can make use of pre-built actions, which send email or SMS messages, or write their own custom actions using AWS Lambda functions.

Installers don’t have to install certificates or configure cloud endpoints on newly acquired devices, and don’t have to worry about firmware updates.

Administrators can monitor the overall status and health of each device, and can arrange to receive alerts when a device nears the end of its useful life and needs to be replaced, using a single interface that spans device types and manufacturers.

I’ll show you how easy this is in just a moment. But first, let’s talk about the current set of devices that are supported by AWS IoT 1-Click.

Who’s Got the Button?
We’re launching with support for two types of buttons (both pictured above). Both types of buttons are pre-configured with X.509 certificates, communicate to the cloud over secure connections, and are ready to use.

The AWS IoT Enterprise Button communicates via Wi-Fi. It has a 2000-click lifetime, encrypts outbound data using TLS, and can be configured using BLE and our mobile app. It retails for $19.99 (shipping and handling not included) and can be used in the United States, Europe, and Japan.

The AT&T LTE-M Button communicates via the LTE-M cellular network. It has a 1500-click lifetime, and also encrypts outbound data using TLS. The device and the bundled data plan are available at an introductory price of $29.99 (shipping and handling not included), and can be used in the United States.

We are very interested in working with device manufacturers in order to make even more shapes, sizes, and types of devices (badge readers, asset trackers, motion detectors, and industrial sensors, to name a few) available to our customers. Our team will be happy to tell you about our provisioning tools and our facility for pushing OTA (over the air) updates to large fleets of devices; you can contact them at

AWS IoT 1-Click Concepts
I’m eager to show you how to use AWS IoT 1-Click and the buttons, but need to introduce a few concepts first.

Device – A button or other item that can send messages. Each device is uniquely identified by a serial number.

Placement Template – Describes a like-minded collection of devices to be deployed. Specifies the action to be performed and lists the names of custom attributes for each device.

Placement – A device that has been deployed. Referring to placements instead of devices gives you the freedom to replace and upgrade devices with minimal disruption. Each placement can include values for custom attributes such as a location (“Building 8, 3rd Floor, Room 1337”) or a purpose (“Coffee Request Button”).

Action – The AWS Lambda function to invoke when the button is pressed. You can write a function from scratch, or you can make use of a pair of predefined functions that send an email or an SMS message. The actions have access to the attributes; you can, for example, send an SMS message with the text “Urgent need for coffee in Building 8, 3rd Floor, Room 1337.”
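The concepts above come together in the action function. Here is a sketch of a minimal custom action; the event fields it reads (placementInfo.attributes and deviceEvent.buttonClicked.clickType) are assumptions about the 1-Click payload shape, so log the raw event the first time you wire up a real function:

```python
def lambda_handler(event, context):
    """Sketch of a custom AWS IoT 1-Click action that builds an alert
    message from the placement attributes of the pressed button."""
    attrs = event.get("placementInfo", {}).get("attributes", {})
    click = (event.get("deviceEvent", {})
                  .get("buttonClicked", {})
                  .get("clickType", "SINGLE"))
    message = "Urgent need for coffee in {}, {}, {}".format(
        attrs.get("Building", "?"),
        attrs.get("Floor", "?"),
        attrs.get("Room", "?"),
    )
    # A real action would hand the message to Amazon SNS or SES here;
    # returning it keeps the sketch self-contained.
    return {"clickType": click, "message": message}

# Exercise the handler with a sample event:
sample_event = {
    "placementInfo": {"attributes": {
        "Building": "Building 8", "Floor": "3rd Floor", "Room": "Room 1337",
    }},
    "deviceEvent": {"buttonClicked": {"clickType": "SINGLE"}},
}
print(lambda_handler(sample_event, None)["message"])
# Urgent need for coffee in Building 8, 3rd Floor, Room 1337
```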

Getting Started with AWS IoT 1-Click
Let’s set up an IoT button using the AWS IoT 1-Click Console:

If I didn’t have any buttons I could click Buy devices to get some. But, I do have some, so I click Claim devices to move ahead. I enter the device ID or claim code for my AT&T button and click Claim (I can enter multiple claim codes or device IDs if I want):

The AWS buttons can be claimed using the console or the mobile app; the first step is to use the mobile app to configure the button to use my Wi-Fi:

Then I scan the barcode on the box and click the button to complete the process of claiming the device. Both of my buttons are now visible in the console:

I am now ready to put them to use. I click on Projects, and then Create a project:

I name and describe my project, and click Next to proceed:

Now I define a device template, along with names and default values for the placement attributes. Here’s how I set up a device template (projects can contain several, but I just need one):

The action has two mandatory parameters (phone number and SMS message) built in; I add three more (Building, Room, and Floor) and click Create project:

I’m almost ready to ask for some coffee! The next step is to associate my buttons with this project by creating a placement for each one. I click Create placements to proceed. I name each placement, select the device to associate with it, and then enter values for the attributes that I established for the project. I can also add additional attributes that are peculiar to this placement:

I can inspect my project and see that everything looks good:

I click on the buttons and the SMS messages appear:

I can monitor device activity in the AWS IoT 1-Click Console:

And also in the Lambda Console:

The Lambda function itself is also accessible, and can be used as-is or customized:

As you can see, this is the code that lets me use {{*}} to include all of the placement attributes in the message and {{Building}} (for example) to include a specific placement attribute.
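The placeholder expansion described above can be sketched in a few lines (a re-implementation for illustration, not the predefined function's actual code):

```python
import re

def render(template, attributes):
    """Expand {{name}} placeholders from placement attributes, with
    {{*}} standing in for all attributes at once."""
    everything = ", ".join(f"{k}: {v}" for k, v in sorted(attributes.items()))
    expanded = template.replace("{{*}}", everything)
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: attributes.get(m.group(1), ""),
                  expanded)

attrs = {"Building": "Building 8", "Room": "Room 1337"}
print(render("Coffee needed in {{Building}}", attrs))
# Coffee needed in Building 8
print(render("Details: {{*}}", attrs))
# Details: Building: Building 8, Room: Room 1337
```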

Now Available
I’ve barely scratched the surface of this cool new service and I encourage you to give it a try (or a click) yourself. Buy a button or two, build something cool, and let me know all about it!

Pricing is based on the number of enabled devices in your account, measured monthly and pro-rated for partial months. Devices can be enabled or disabled at any time. See the AWS IoT 1-Click Pricing page for more info.

To learn more, visit the AWS IoT 1-Click home page or read the AWS IoT 1-Click documentation.



Source: New feed


Windows Containers in Docker Enterprise Edition 2.0 – Top 7 Questions from the Docker Virtual Event

The recent Docker Virtual Event, Unveiling Docker Enterprise Edition (EE) 2.0, gave us the opportunity to highlight some of the great reasons to adopt a containerization strategy across your entire application portfolio. In Part 1 of this blog, we covered some of the top questions we received about Swarm and Kubernetes orchestration in Docker Enterprise Edition – the world’s leading enterprise-ready container platform. Today, we will cover some of the questions about running Windows containers.

If you missed the live event, don’t worry! You can still catch the recording on-demand here.

Docker Enterprise Edition: Only Fully-Supported Solution for Running Containers on Windows Server 2016

Q: I thought containers were based on Linux processes. How do Windows-based Docker containers work?

A: Docker has been partnering with Microsoft since 2014 to deliver all the same benefits of Docker containers to Windows Server so that customers can easily run .NET and IIS applications in Docker containers. We worked closely together on changes to the Windows Server kernel to support containerization primitives, added Windows Server support to the Docker Engine and CLI, and added multi-architecture support for Windows images. The result is that Docker containers run natively on Windows Server 2016, leveraging the same CLI commands in PowerShell that you use in a Linux-based environment.

Q: Is Windows Server supported with Kubernetes?

A: Support for Windows Server containers in Kubernetes is currently in beta with full support anticipated later this year. To run Windows Server containers in production, customers of Docker Enterprise Edition 2.0 can deploy these workloads with Docker Swarm. This is a feature that has been generally available for over a year.

Q: Can you run full Windows Server clusters?

A: With Docker Enterprise Edition 2.0, you can run clusters of mixed Linux and Windows Server 2016 workers or all Windows Server workers. Note that even if you are running all Windows Server workers, the Docker Enterprise Edition 2.0 management stack is deployed onto Linux-based nodes. For many of our enterprise customers who are Windows shops, they deploy these management nodes as Hyper-V virtual machines with a Linux OS.

Q: Can Windows and Linux containers in the same environment talk to each other?

A: Yes! Windows and Linux containers in the same cluster can share a common overlay network.

Q: Do you support Windows Server 1709 or Windows Server 1803?

A: Support for Windows Server 1709 and Windows Server 1803 will be coming in a patch release in Q2 2018.

Q: How does licensing work for Docker EE 2.0 and Windows Server?

A: Docker EE 2.0 Basic (i.e. EE Engine) is included in Windows Server 2016, however EE Standard and Advanced tiers, which provide the unified management UI and private registry capabilities, require additional licensing. Please contact sales for more information around pricing and licensing.

Q: What solutions are available for my .NET development team building containerized applications?

A: In addition to Docker for Mac, Docker also delivers Docker for Windows – a free desktop solution for local container development. There are two releases of Docker for Windows; the edge release includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance allowing your .NET developers to begin working with both Swarm and Kubernetes locally.


To learn more about Windows containers, check out these resources:


Source: Docker


Amazon Sumerian – Now Generally Available

We announced Amazon Sumerian at AWS re:Invent 2017. As you can see from Tara‘s blog post (Presenting Amazon Sumerian: An Easy Way to Create VR, AR, and 3D Experiences), Sumerian does not require any specialized programming or 3D graphics expertise. You can build VR, AR, and 3D experiences for a wide variety of popular hardware platforms including mobile devices, head-mounted displays, digital signs, and web browsers.

I’m happy to announce that Sumerian is now generally available. You can create realistic virtual environments and scenes without having to acquire or master specialized tools for 3D modeling, animation, lighting, audio editing, or programming. Once built, you can deploy your finished creation across multiple platforms without having to write custom code or deal with specialized deployment systems and processes.

Sumerian gives you a web-based editor that you can use to quickly and easily create realistic, professional-quality scenes. There’s a visual scripting tool that lets you build logic to control how objects and characters (Sumerian Hosts) respond to user actions. Sumerian also lets you create rich, natural interactions powered by AWS services such as Amazon Lex, Polly, AWS Lambda, AWS IoT, and Amazon DynamoDB.

Sumerian was designed to work on multiple platforms. The VR and AR apps that you create in Sumerian will run in browsers that support WebGL or WebVR and on popular devices such as the Oculus Rift, HTC Vive, and those powered by iOS or Android.

During the preview period, we have been working with a broad spectrum of customers to put Sumerian to the test and to create proof of concept (PoC) projects designed to highlight an equally broad spectrum of use cases, including employee education, training simulations, field service productivity, virtual concierge, design and creative, and brand engagement. Fidelity Labs (the internal R&D unit of Fidelity Investments) was the first to use a Sumerian host to create an engaging VR experience. Cora (the host) lives within a virtual chart room. She can display stock quotes, pull up company charts, and answer questions about a company’s performance. This PoC uses Amazon Polly for text-to-speech and Amazon Lex for conversational chatbot functionality. Read their blog post and watch the video inside to see Cora in action:

Now that Sumerian is generally available, you have the power to create engaging AR, VR, and 3D experiences of your own. To learn more, visit the Amazon Sumerian home page and then spend some quality time with our extensive collection of Sumerian Tutorials.



Source: New feed


The AWS Shared Responsibility Model and GDPR

The EU’s General Data Protection Regulation (GDPR) describes data processor and data controller roles, and some customers and AWS Partner Network (APN) partners are asking how this affects the long-established AWS Shared Responsibility Model. I wanted to take some time to help folks understand shared responsibilities for us and for our customers in context of the GDPR.

How does the AWS Shared Responsibility Model change under GDPR? The short answer – it doesn’t. AWS is responsible for securing the underlying infrastructure that supports the cloud and the services provided, while customers and APN partners, acting either as data controllers or data processors, are responsible for any personal data they put in the cloud. The shared responsibility model illustrates the various responsibilities of AWS and our customers and APN partners, and the same separation of responsibility applies under the GDPR.

AWS responsibilities as a data processor

The GDPR does introduce specific regulation and responsibilities regarding data controllers and processors. When any AWS customer uses our services to process personal data, the controller is usually the AWS customer (and sometimes it is the AWS customer’s customer). However, in all of these cases, AWS is always the data processor in relation to this activity. This is because the customer is directing the processing of data through its interaction with the AWS service controls, and AWS is only executing customer directions. As a data processor, AWS is responsible for protecting the global infrastructure that runs all of our services. Controllers using AWS maintain control over data hosted on this infrastructure, including the security configuration controls for handling end-user content and personal data. Protecting this infrastructure is our number one priority, and we invest heavily in third-party auditors to test our security controls and make any issues they find available to our customer base through AWS Artifact. Our ISO 27018 report is a good example, as it tests security controls that focus on protection of personal data in particular.

AWS has an increased responsibility for our managed services. Examples of managed services include Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon Elastic MapReduce, and Amazon WorkSpaces. These services provide the scalability and flexibility of cloud-based resources with less operational overhead because we handle basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery. For most managed services, you only configure logical access controls and protect account credentials, while maintaining control and responsibility of any personal data.

Customer and APN partner responsibilities as data controllers — and how AWS Services can help

Our customers can act as data controllers or data processors within their AWS environment. As a data controller, the services you use may determine how you configure those services to help meet your GDPR compliance needs. For example, AWS services classified as Infrastructure as a Service (IaaS), such as Amazon EC2, Amazon VPC, and Amazon S3, are under your control and require you to perform all of the routine security configuration and management that would be necessary no matter where the servers were located. With Amazon EC2 instances, you are responsible for managing the guest OS (including updates and security patches), any application software or utilities installed on the instances, and the configuration of the AWS-provided firewall (called a security group).
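Because security-group configuration is on the customer side of the shared responsibility model, it is worth checking your ingress rules programmatically. The following is a minimal sketch (not an official AWS tool): it audits rules expressed in the `IpPermissions` structure used by the EC2 API (for example, by boto3's `authorize_security_group_ingress`). The rules and port choices below are hypothetical examples.

```python
# Sketch: flag security-group ingress rules that violate a simple
# least-privilege policy (sensitive ports open to the whole internet).
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP should not be world-open

def audit_ingress_rules(ip_permissions):
    """Return warnings for rules open to 0.0.0.0/0 on sensitive ports."""
    findings = []
    for perm in ip_permissions:
        from_port = perm.get("FromPort")
        to_port = perm.get("ToPort", from_port)
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") != "0.0.0.0/0":
                continue  # rule is scoped to a narrower CIDR
            for port in sorted(SENSITIVE_PORTS):
                if from_port is not None and from_port <= port <= to_port:
                    findings.append(f"port {port} open to the world")
    return findings

rules = [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # HTTPS to all: acceptable
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # SSH to all: flagged
]

print(audit_ingress_rules(rules))  # ['port 22 open to the world']
```

In practice you would feed this the live rules returned by `describe_security_groups`, but the check itself is plain data inspection, which makes it easy to run in CI.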

To help you realize data protection by design principles under the GDPR when using our infrastructure, we recommend that you protect your AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each user is given only the permissions necessary to fulfill their job duties. We also recommend using multi-factor authentication (MFA) with each account, requiring the use of SSL/TLS to communicate with AWS resources, setting up API and user activity logging with AWS CloudTrail, and using AWS encryption solutions, along with all default security controls within AWS services. You can also use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data stored in Amazon S3.
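To make the least-privilege recommendation concrete, here is a sketch of an IAM policy document granting read-only access to a single S3 bucket. The bucket name "example-personal-data" is hypothetical; the point is that the `Resource` ARNs are scoped to one bucket rather than `"*"`.

```python
import json

def read_only_s3_policy(bucket):
    """Build a least-privilege IAM policy document: read-only access
    to one S3 bucket (list the bucket, get its objects, nothing else)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",      # bucket itself (ListBucket)
                    f"arn:aws:s3:::{bucket}/*",    # objects within it (GetObject)
                ],
            }
        ],
    }

policy = read_only_s3_policy("example-personal-data")
print(json.dumps(policy, indent=2))
```

The resulting JSON can then be attached to a user or role, for example with boto3's `iam.put_user_policy(UserName=..., PolicyName=..., PolicyDocument=json.dumps(policy))`.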

For more information, you can download the AWS Security Best Practices whitepaper or visit the AWS Security Resources or GDPR Center webpages. In addition to our solutions and services, AWS APN partners can provide hundreds of tools and features to help you meet your security objectives, ranging from network security and configuration management to access control and data encryption.

Source: AWS Security


Spring 2018 AWS SOC Reports are Now Available with 11 Services Added in Scope

Since our last System and Organization Control (SOC) audit, our service and compliance teams have been working to increase the number of AWS services in scope, prioritized by customer requests. Today, we’re happy to report that 11 services are newly SOC compliant, a 21 percent increase in the last six months.

With the addition of the following 11 new services, you can now select from a total of 62 SOC-compliant services. To see the full list, go to our Services in Scope by Compliance Program page:

• Amazon Athena
• Amazon QuickSight
• Amazon WorkDocs
• AWS Batch
• AWS CodeBuild
• AWS Config
• AWS OpsWorks Stacks
• AWS Snowball
• AWS Snowball Edge
• AWS Snowmobile
• AWS X-Ray

Our latest SOC 1, 2, and 3 reports covering the period from October 1, 2017 to March 31, 2018 are now available. The SOC 1 and 2 reports are available on-demand through AWS Artifact by logging into the AWS Management Console. The SOC 3 report can be downloaded here.

Finally, prospective customers can read our SOC 1 and 2 reports by reaching out to AWS Compliance.

Want more AWS Security news? Follow us on Twitter.

Source: AWS Security


Welcome New Docker Captains

Today, we are excited to announce our new Docker Captains! Docker Captains are technology experts and leaders in their communities who are passionate about sharing their Docker knowledge with others. Individuals are awarded the distinction of Docker Captain because of their technical expertise, their content and technical contributions to the community, and their outstanding engagement with Docker’s users.

The New Captains Class


Follow the Captains

Follow all of the Captains on Twitter. Also check out the Captains GitHub repo to see what projects they have been working on.

Learn more about each Captain

Docker Captains are eager to bring their technical expertise to new audiences, both offline and online, around the world. Don’t hesitate to reach out to them via the social links on their Captain profile pages, where you can filter the Captains by location, expertise, and more.

Alex Iankoulski

Alex has 20+ years of experience in the software industry. He is currently a Principal Software Architect for Data Science and Analytics at Baker Hughes, a GE company, where he focuses on enabling deep learning scientists and analytics experts to bring algorithms and new modeling techniques from prototype to production using containers. He believes that good tools get out of the way, empower users to go fast, and enable them to stay focused on what they do best.

Arjuna Rivera

Arjuna Rivera is Lockheed Martin’s Infrastructure and International (I2) Labs Senior Manager and Principal Researcher, reporting into the I2 Technology Office. In this role, Arjuna is responsible for leading the labs to drive innovation, IT disruption, and modern cloud usage across the enterprise. As an idea incubator and technical facilitator for I2, I2 Labs doesn’t just discover products; it brings experiences to life.

Arjuna’s career spans more than 19 years of service to Lockheed Martin. His diverse background includes, but is not limited to, experience in IT infrastructure, applications, testing, security engineering, governance, risk, compliance, incident response, and general management.

Arjuna maintains several industry certifications; he has an MS in Management with a concentration in IT from Rensselaer Polytechnic Institute and a BS in Management Information Systems from the University of South Florida.

Brandon Mitchell

Brandon Mitchell is a Solutions Architect for BoxBoat. He started with Linux when Slackware was shipped on floppy disks and has been hooked ever since. In his day job, he helps clients deploy Docker CE, Docker EE, Swarm, and CI/CD pipelines. In his spare time, he answers questions as BMitch on Stack Overflow and Twitter. For a break from the keyboard, he enjoys biking and backpacking (not at the same time). Cats over dogs, vim over emacs, and spaces over tabs.

Danny Bryant

Oracle ACE Director Danny Bryant resides with his family in Atlanta, GA, where he received his BS in Computer Science from the Georgia Institute of Technology; he is currently an Infrastructure Principal at the Accenture Enkitec Group. He is a hands-on technical manager with a wide range of skills across the IT landscape, with experience in areas such as networking, server administration, and programming.

Danny is greatly involved in the Oracle community, serving on the Board of Directors for both ODTUG and his local user group, the Georgia Oracle User Group. In addition to being an ODTUG Board member, he served as the Kscope 2016 and 2017 Conference Chair. He has served on the Board of Directors for Atlanta FreeNet, a nonprofit organization working to provide free wireless Internet access to the City of Atlanta, and is a lifetime member of the National Society of Black Engineers (NSBE).

He was also an Adjunct Instructor with the Oracle Academy, teaching SQL and data modeling throughout the US and Europe. In his free time, he scuba dives and trains in the martial arts of Jeet Kune Do and Capoeira with his son.

Kevin Crawley

Kevin has been working with containers for over 3 years, having delivered and maintained multiple applications in production. He is an organizer and speaker for the Nashville area’s own Docker Meetup, FaaS Meetup, and Go Meetup (Gophers). He is passionate about technology and an advocate of the HumanOps movement.

When he’s not rolling his face around on the keyboard he enjoys cycling, camping, hockey, and traveling.

Kinnary Jangla

Kinnary Jangla is an engineering manager at Pinterest and an author; her newest book is “Docker on Microservices”. She has worked on ML infrastructure at Pinterest, Growth and Maps at Uber, and Bing at Microsoft. Kinnary is an advisory board member for the Rutgers Big Data certificate program and for Women Who Code.

Julie Lerman

Julie Lerman is a Microsoft Regional Director and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a mentor and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design, and other topics at user groups and conferences around the world. Julie blogs regularly, is the author of the highly acclaimed “Programming Entity Framework” books and the MSDN Magazine Data Points column, and has published popular videos. Follow Julie on Twitter @julielerman.

Łukasz Lach

Łukasz Lach is a software architect, a passionate and happy everyday Docker user, and a Docker Community Leader who organizes meetups in Warsaw, Poland. Once a week he lectures at the University of Warsaw on topics related to high-performance web solutions and teaches students how to base their work on Docker and Docker Swarm. Łukasz is a Docker Certified Associate and the creator of the Docker Quiz (to be released at DockerCon San Francisco) to help others prepare.

Nicolas Muller

Nicolas has been a developer for 15 years (Java and iOS). He is a JUG leader in Marseille and was a speaker at Devoxx France 2014 (InfluxDB) and Devoxx France 2018 (Docker/BtrFS). He previously worked in the financial industry in Luxembourg and Marseille.

Pav Jimanov

Pav is the CEO of SoftwareDefined, an innovative consulting practice specializing in DevOps Transformation and modern IT infrastructure.

Pav was first introduced to containers while working as an architect on Solaris 10 Software Update at Sun Microsystems back in 2005. He is passionate about containers, virtualization, analytics, and all things software-defined. Pav tweets at @PavJimanov.

Rachid Zarouli

Docker Community Leader and speaker Rachid Zarouli is a former virtualization consultant and instructor. After a successful experience building and training the ops team of the French registry (AFNIC), he is now CIO of a worldwide-recognized CRM and e-commerce agency. Dedicated to offering the most efficient production tools and infrastructure platforms, he works with his team on bringing the latest technology to a production level. Involved in OSS communities from monitoring to container solutions, he teaches cloud computing architecture at a software engineering school in his spare time.

Renno Reinurm

Always keep shipping. Renno is a software delivery automation engineer at the sales-software startup Pipedrive, and he is always thinking about how to make regular routines more fun, faster, and more secure. Luckily, Docker does most of the heavy lifting, whether through community or commercial products.

Sujay Pillai

Sujay is an ECM expert who currently holds a Senior DevOps position at Jabil. He plays a pioneering role in the adoption of container technology at his workplace and is a community leader for the Docker Penang meetup.

Sune Keller

Sune is a former developer who is now an architect at Alm. Brand, a financial corporation (insurance, banking, pension/life insurance, car leasing) located in Copenhagen, Denmark. He was introduced to Docker by a buddy from university who enjoyed playing with the platform. After demonstrating to a few colleagues how they could run a large and most bothersome legacy app in Docker, he remembers saying out loud, “But obviously, we’ll never be running Docker in production here.”

Today, Alm. Brand runs both greenfield and legacy apps and services in Docker in production, and the Docker-based platform the company has built is a strategic foundation for its in-house software development, as well as for running third-party software in its own data centers. He enjoys spending time with his daughter.




Source: Docker
