Attend This Free December 13 Tech Talk: “Cloud-Native DDoS Mitigation with AWS Shield”

AWS Online Tech Talks banner

As part of the AWS Online Tech Talks series, AWS will present “Cloud-Native DDoS Mitigation with AWS Shield” on Wednesday, December 13. The tech talk will run from 9:00 A.M. to 9:40 A.M. Pacific Time.

Distributed Denial of Service (DDoS) mitigation can help you maintain application availability, but traditional solutions are hard to scale and require expensive hardware. AWS Shield is a managed DDoS protection service that helps you safeguard web applications running in the AWS Cloud. In this tech talk, you will learn simple techniques for using AWS Shield to help you build scalable DDoS defenses into your applications without investing in costly infrastructure. You also will learn how AWS Shield helps you monitor your applications to detect DDoS attempts and how to respond to in-progress events.
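
As a small illustration of the kind of monitoring the talk will cover, here is a minimal sketch that uses the AWS SDK for Java to list DDoS events Shield has detected over the past week. Note that the ListAttacks API requires an AWS Shield Advanced subscription, and the example assumes the us-east-1 endpoint that Shield uses.

import java.util.Date;
import java.util.concurrent.TimeUnit;

import com.amazonaws.services.shield.AWSShield;
import com.amazonaws.services.shield.AWSShieldClientBuilder;
import com.amazonaws.services.shield.model.AttackSummary;
import com.amazonaws.services.shield.model.ListAttacksRequest;
import com.amazonaws.services.shield.model.ListAttacksResult;
import com.amazonaws.services.shield.model.TimeRange;

public class ListRecentDdosEvents {
    public static void main(String[] args) {
        // Shield is a global service; its API endpoint lives in us-east-1.
        AWSShield shield = AWSShieldClientBuilder.standard()
                .withRegion("us-east-1")
                .build();

        // Look back over the last seven days for detected DDoS events.
        Date weekAgo = new Date(System.currentTimeMillis() - TimeUnit.DAYS.toMillis(7));

        ListAttacksResult result = shield.listAttacks(new ListAttacksRequest()
                .withStartTime(new TimeRange().withFromInclusive(weekAgo)));

        for (AttackSummary attack : result.getAttackSummaries()) {
            System.out.printf("Attack %s on %s started %s%n",
                    attack.getAttackId(), attack.getResourceArn(), attack.getStartTime());
        }
    }
}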

This tech talk is free. Register today.

– Craig


Source: AWS Security


A tour of containerd 1.0

 containerd

We have given a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way. containerd is used by Docker, Kubernetes CRI, and a few other projects, but this post is for people who may not know what containerd actually does within these platforms. I would like to write more posts on the feature set and design of containerd in the future, but for now we will start with the basics.

I think the container ecosystem can be confusing at times, especially with the terminology that we use. What’s this? A runtime. And this? A runtime… containerd (pronounced “container-dee,” as the name implies, not “contain nerd” as some would like to troll me with) is a container daemon. It was originally built as an integration point for OCI runtimes like runc, but over the past six months it has added a lot of functionality to bring it up to par with the needs of modern container platforms like Docker and orchestration systems like Kubernetes.

So what do you actually get by using containerd? You get push and pull functionality as well as image management. You get container lifecycle APIs to create, execute, and manage containers and their tasks, an entire API dedicated to snapshot management, and an openly governed project to depend on: basically, everything you need to build a container platform without having to deal with the underlying OS details. I think the most important part of containerd is having a versioned and stable API that will have bug fixes and security patches backported.

containerd

There is no such thing as a Linux container in the kernel; containers are various kernel features tied together. When you are building a large platform or distributed system, you want an abstraction layer between your management code and the syscalls and duct tape of features required to run a container. That is where containerd lives. It provides a client layer of stable types that platforms can build on top of without ever having to drop down to the kernel level. It’s so much nicer to work with Container, Task, and Snapshot types than it is to manage calls to clone() or mount(). Balanced with the flexibility to interact directly with the runtime or host machine, these objects avoid the sacrifice of capabilities that typically comes with higher-level abstractions. The result is that easy tasks are simple to complete and hard tasks are possible.

containerd was designed to be used by Docker and Kubernetes, as well as any other container system that wants to abstract away syscalls or OS-specific functionality to run containers on Linux, Windows, Solaris, or other operating systems. With these users in mind, we wanted to make sure that containerd has only what they need and nothing that they don’t. Realistically this is impossible, but at least that is what we try for. While networking is out of scope for containerd, what it doesn’t do lets higher-level systems retain full control. The reason is that when you are building a distributed system, networking is a very central aspect. With SDN and service discovery today, networking is far more platform specific than abstracting away netlink calls on Linux. Most of the new overlay networks are route based and require routing tables to be updated each time a new container is created or deleted. Service discovery, DNS, and so on all have to be notified of these changes as well. Supporting all the different network interfaces, hooks, and integration points would have required a large chunk of code if we had added networking to containerd. Instead, we opted for a robust events system inside containerd so that multiple consumers can subscribe to the events they care about. We also expose a Task API that lets users create a running task, add interfaces to the network namespace of the container, and then start the container’s process, without the need for complex hooks at various points of a container’s lifecycle.

Another area that has been added to containerd over the past few months is a complete storage and distribution system that supports both OCI and Docker image formats. You get a complete content-addressed storage system through the containerd API that works not only for images but also for metadata, checkpoints, and arbitrary data attached to containers.

We also took the time to rethink how “graphdrivers” work. These are the overlay or block-level filesystems that allow images to have layers and allow you to perform efficient builds. Graphdrivers were initially written by Solomon and me when we added support for devicemapper. Docker only supported AUFS at the time, so we modeled the graphdrivers after the overlay filesystem. However, making a block-level filesystem such as devicemapper/LVM act like an overlay filesystem proved to be much harder to do in the long run. The interfaces had to expand over time to support different features than what we originally thought would be needed. With containerd, we took a different approach: make overlay filesystems act like snapshotters instead of vice versa. This was much easier to do, as overlay filesystems provide much more flexibility than snapshotting filesystems like BTRFS, ZFS, and devicemapper because they don’t have a strict parent/child relationship. This helped us build a smaller interface for the snapshotters while still fulfilling the requirements of consumers such as the builder, and it reduced the amount of code needed, making it much easier to maintain in the long run.

You can find more details about the architecture of containerd in Stephen Day’s Dec 7th 2017 KubeCon SIG Node presentation.

In addition to the technical and design changes in the 1.0 codebase, we also switched the containerd governance model from the long-standing BDFL model to a Technical Steering Committee, giving the community an independent third-party resource to rely on.

Learn more about containerd:






Source: Docker


Now Available: A New AWS Quick Start Reference Deployment for CJIS

CJIS logo

As part of the AWS Compliance Quick Start program, AWS has published a new Quick Start reference deployment for customers who need to align with Criminal Justice Information Services (CJIS) Security Policy 5.6 and process Criminal Justice Information (CJI) in accordance with this policy. The new Quick Start is AWS Enterprise Accelerator – Compliance: CJIS, and it makes it easier for you to address the list of supported controls you will find in the security controls matrix that accompanies the Quick Start.

As all AWS Quick Starts do, this Quick Start helps you automate the building of a recommended architecture that, when deployed as a package, provides a baseline AWS configuration. The Quick Start uses sets of nested AWS CloudFormation templates and user data scripts to create an example environment with a two-VPC, multi-tiered web service.
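
If you prefer to launch the Quick Start programmatically rather than from the console, the sketch below shows how a CloudFormation deployment of this kind can be started with the AWS SDK for Java. The stack name, template URL, and parameter shown here are placeholders rather than the Quick Start's actual values; take the real ones from the deployment guide.

import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.CreateStackRequest;
import com.amazonaws.services.cloudformation.model.CreateStackResult;
import com.amazonaws.services.cloudformation.model.Parameter;

public class LaunchCjisQuickStart {
    public static void main(String[] args) {
        AmazonCloudFormation cfn = AmazonCloudFormationClientBuilder.defaultClient();

        // Placeholder values: substitute the main template URL and parameters
        // listed in the Quick Start deployment guide.
        CreateStackRequest request = new CreateStackRequest()
                .withStackName("cjis-quick-start-example")
                .withTemplateURL("https://s3.amazonaws.com/example-bucket/templates/main.template")
                .withParameters(new Parameter()
                        .withParameterKey("KeyPairName")
                        .withParameterValue("my-key-pair"))
                // The nested templates create IAM resources, so this capability
                // must be acknowledged explicitly.
                .withCapabilities("CAPABILITY_IAM");

        CreateStackResult result = cfn.createStack(request);
        System.out.println("Started stack: " + result.getStackId());
    }
}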

The new Quick Start also includes:

The recommended architecture built by the Quick Start supports a wide variety of AWS best practices (all of which are detailed in the Quick Start), including the use of multiple Availability Zones, isolation using public and private subnets, load balancing, and Auto Scaling.

The Quick Start package also includes a deployment guide with detailed instructions and a security controls matrix that describes how the deployment addresses CJIS Security Policy 5.6 controls. You should have your IT security assessors and risk decision makers review the security controls matrix so that they can understand the extent of the implementation of the controls within the architecture. The matrix also identifies the specific resources in the CloudFormation templates that affect each control, and contains cross-references to the CJIS Security Policy 5.6 security controls.

If you have questions about this new Quick Start, contact the AWS Compliance Quick Start team. For more information about the AWS CJIS program, see CJIS Compliance.

– Emil


Source: AWS Security


Introducing AWS Single Sign-On

Today, AWS introduced AWS Single Sign-On (AWS SSO), a service that makes it easy for you to centrally manage SSO access to multiple AWS accounts and business applications. AWS SSO provides a user portal so that your users can find and access all of their assigned accounts and applications from one place, using their existing corporate credentials. AWS SSO is integrated with AWS Organizations to enable you to manage access to AWS accounts in your organization. In addition, AWS SSO supports Security Assertion Markup Language (SAML) 2.0, which means you can extend SSO access to your SAML-enabled applications by using the AWS SSO application configuration wizard. AWS SSO also includes built-in SSO integrations with many business applications, such as Salesforce, Box, and Office 365.

In this blog post, I help you get started with AWS SSO by answering three main questions:

  1. What benefits does AWS SSO provide?
  2. What are the key features of AWS SSO?
  3. How do I get started?

1. What benefits does AWS SSO provide?

You can connect your corporate Microsoft Active Directory to AWS SSO so that your users can sign in to the user portal with their user names and passwords to access the AWS accounts and applications to which you have granted them access. The following screenshot shows an example of the AWS SSO user portal.

Screenshot of an example user portal

You can use AWS SSO to centrally assign, manage, and audit your users’ access to multiple AWS accounts and SAML-enabled business applications. You can add new users to the appropriate Active Directory group, which automatically gives them access to the AWS accounts and applications assigned for members of that group. AWS SSO also provides better visibility into which users accessed which accounts and applications from the user portal by recording all user portal sign-in activities in AWS CloudTrail. AWS SSO records details such as the IP address, user name, date, and time of the sign-in request. Any changes made by administrators in the AWS SSO console also are recorded in CloudTrail, and you can use security information and event management (SIEM) solutions such as Splunk to analyze the associated CloudTrail logs.

2. What are the key features of AWS SSO?

AWS SSO includes the following key features.

AWS SSO user portal: In the user portal, your users can easily find and access all applications and AWS accounts to which you have granted them access. Users can access the user portal with their corporate Active Directory credentials and access these applications without needing to enter their user name and password again. 

Integration with AWS Organizations: AWS SSO is integrated with Organizations to enable you to manage access to all AWS accounts in your organization. When you enable AWS SSO in your organization’s master account, AWS SSO lists all the accounts managed in your organization for which you can enable SSO access to AWS consoles.

Integration with on-premises Active Directory: AWS SSO integrates with your on-premises Active Directory by using AWS Directory Service. Users can access AWS accounts and business applications by using their Active Directory credentials. You can manage which users or groups in your corporate directory can access which AWS accounts.
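
The Directory Service console wizard is the simplest way to set this up, but as a rough sketch, the following example shows how an AD Connector can be created with the AWS Directory Service API in the AWS SDK for Java. Every identifier here (domain name, VPC, subnets, DNS IPs, and service account) is a placeholder.

import com.amazonaws.services.directory.AWSDirectoryService;
import com.amazonaws.services.directory.AWSDirectoryServiceClientBuilder;
import com.amazonaws.services.directory.model.ConnectDirectoryRequest;
import com.amazonaws.services.directory.model.ConnectDirectoryResult;
import com.amazonaws.services.directory.model.DirectoryConnectSettings;

public class CreateAdConnector {
    public static void main(String[] args) {
        AWSDirectoryService ds = AWSDirectoryServiceClientBuilder.defaultClient();

        // Placeholder values: replace with your own network and directory details.
        // In practice, retrieve the service account password from a secure store
        // rather than hardcoding it.
        ConnectDirectoryRequest request = new ConnectDirectoryRequest()
                .withName("corp.example.com")
                .withShortName("CORP")
                .withPassword("service-account-password")
                .withSize("Small")
                .withConnectSettings(new DirectoryConnectSettings()
                        .withVpcId("vpc-0123456789abcdef0")
                        .withSubnetIds("subnet-aaaa1111", "subnet-bbbb2222")
                        .withCustomerDnsIps("10.0.0.10", "10.0.1.10")
                        .withCustomerUserName("ad-connector-svc"));

        ConnectDirectoryResult result = ds.connectDirectory(request);
        System.out.println("AD Connector directory ID: " + result.getDirectoryId());
    }
}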

Centralized permissions management: With AWS SSO, you can centrally manage the permissions granted to users when they access AWS accounts via the AWS Management Console. You define users’ permissions as permission sets, which are collections of permissions that are based on a combination of AWS managed policies or AWS managed policies for job functions. AWS managed policies are designed to provide permissions for many common use cases, and AWS managed policies for job functions are designed to closely align with common job functions in the IT industry.

With AWS SSO, you can configure all the necessary user permissions to your AWS resources in your AWS accounts by applying permission sets. For example, you can grant database administrators broad permissions to Amazon Relational Database Service in your development accounts, but limit their permissions in your production accounts. As you change these permission sets, AWS SSO helps you keep them updated in all relevant AWS accounts, allowing you to manage permissions centrally.

Application configuration wizard: You can configure SSO access to any SAML-enabled business application by using the AWS SSO application configuration wizard.

Built-in SSO integrations: AWS SSO provides built-in SSO integrations and step-by-step configuration instructions for many commonly used business applications such as Office 365, Salesforce, and Box.

Centralized auditing: AWS SSO logs all sign-in and administrative activities in CloudTrail. You can send these logs to SIEM solutions such as Splunk to analyze them.
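
As a small illustration of what that auditing can look like outside of a SIEM, the following sketch uses the AWS SDK for Java to look up recent CloudTrail events by event source. The event source value "sso.amazonaws.com" is my assumption for AWS SSO events; confirm the exact value in your own CloudTrail event history before relying on it.

import com.amazonaws.services.cloudtrail.AWSCloudTrail;
import com.amazonaws.services.cloudtrail.AWSCloudTrailClientBuilder;
import com.amazonaws.services.cloudtrail.model.Event;
import com.amazonaws.services.cloudtrail.model.LookupAttribute;
import com.amazonaws.services.cloudtrail.model.LookupEventsRequest;
import com.amazonaws.services.cloudtrail.model.LookupEventsResult;

public class LookupSsoActivity {
    public static void main(String[] args) {
        AWSCloudTrail cloudTrail = AWSCloudTrailClientBuilder.defaultClient();

        // Filter recent management events by event source.
        // "sso.amazonaws.com" is assumed here; verify it against your logs.
        LookupEventsResult result = cloudTrail.lookupEvents(new LookupEventsRequest()
                .withLookupAttributes(new LookupAttribute()
                        .withAttributeKey("EventSource")
                        .withAttributeValue("sso.amazonaws.com")));

        for (Event event : result.getEvents()) {
            System.out.printf("%s by %s at %s%n",
                    event.getEventName(), event.getUsername(), event.getEventTime());
        }
    }
}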

Highly available multi-tenant SSO infrastructure: AWS SSO is built on a highly available, AWS managed SSO infrastructure. The AWS SSO multi-tenant architecture enables you to start using the service quickly without needing to procure hardware or install software.

3. How do I get started?

To get started, connect your corporate Active Directory to AWS SSO by using AWS Directory Service. You have two choices to connect your corporate directory: use AD Connector, or configure an Active Directory trust with your on-premises Active Directory. After connecting your corporate directory, you can set up accounts and applications for SSO access. You also can use AWS Managed Microsoft AD in the cloud to manage your users and groups in the cloud, if you don’t have an on-premises Active Directory or don’t want to connect to on-premises Active Directory.

The preceding diagram shows how AWS SSO helps connect your users to the AWS accounts and business applications to which they need access. The numbers in the diagram correspond to the following use cases.

Use case 1: Manage SSO access to AWS accounts

With AWS SSO, you can grant your users access to AWS accounts in your organization. You can do this by adding your users to groups in your corporate Active Directory. In AWS SSO, specify which Active Directory groups can access which AWS accounts, and then pick a permission set to specify the level of SSO access you are granting these Active Directory groups. AWS SSO then sets up AWS account access for the users in the groups. Going forward, you can add new users to your Active Directory groups, and AWS SSO automatically provides the users access to the configured accounts. You also can grant Active Directory users direct access to AWS accounts (without needing to add users to Active Directory groups).

To configure AWS account access for your users:

  1. Navigate to the AWS SSO console, and choose AWS accounts from the navigation pane. Choose which accounts you want users to access from the list of accounts. For this example, I am choosing three accounts from my MarketingBU organizational unit. I then choose Assign users.
    Screenshot of choosing "Assign users"
  2. Choose Users, start typing to search for users, and then choose Search connected directory. This search will return a list of users from your connected directory. You can also search for groups.
    Screenshot of searching for users
  3. To select permission sets, you first have to create one. Choose Create new permission set.
    Screenshot of choosing "Create new permission set"
  4. You can use an existing job function policy to create a permission set. This type of policy allows you to apply predefined AWS managed policies, which are based on common job functions in the IT industry, to a permission set. Alternatively, you can create a custom permission set based on custom policies.
    Screenshot of choosing a permission set
  5. For this example, I choose the SecurityAudit job function policy and then choose Create. As a result, this permission set will be available for me to pick on the next screen.
    Screenshot of choosing the SecurityAudit job function policy
  6. Choose a permission set to indicate what level of access you want to grant your users. For this example, I assign the SecurityAudit permission set I created in the previous step to the users I chose. I then choose Finish.
    Screenshot showing the SecurityAudit permission set
  7. Your users can sign in to the user portal and access the accounts to which you gave them access. AWS SSO automatically sets up the necessary trust between accounts to enable SSO. AWS SSO also sets up the necessary permissions in each account. This helps you scale your administrative tasks across multiple AWS accounts.
    Screenshot showing that the AWS accounts have been configured successfully
  8. The users can choose an account and a permission set to sign in to that account without needing to provide a password again. For example, if you grant a user two permission sets—one that is more restrictive and one that is less restrictive—the user can choose which permission set to use for a specific session. In the following screenshot, John has signed in to the AWS SSO user portal. He can see all the accounts to which he has access. For example, he can sign in to the Production Account with SecurityAudit permissions.
    Screenshot showing that this user can sign in to the Production Account with SecurityAudit permissions

Use case 2: Manage SSO access to business applications

AWS SSO has built-in support for SSO access to commonly used business applications such as Salesforce, Office 365, and Box. You can find these applications in the AWS SSO console and easily configure SSO access by using the application configuration wizard. After you configure an application for SSO access, you can grant users access by searching for users and groups in your corporate directory. For a complete list of supported applications, navigate to the AWS SSO console.

To configure SSO access to business applications:

  1. Navigate to the AWS SSO console and choose Applications from the navigation pane.
    Screenshot of choosing Applications and then Add a new application
  2. Choose Add a new application and choose one or more of the applications in the list. For this example, I have chosen Dropbox.
  3. Depending on which application you choose, you will be asked to complete step-by-step instructions to configure the application for SSO access. The instructions guide you to use the details provided in the AWS SSO metadata section to configure your application, and then to provide your application details in the Application metadata section. Choose Save changes when you are done.
  4. Optionally, you can provide additional SAML attribute mappings by choosing the Attribute mappings tab. You need to do this only if you want to pass user attributes from your corporate directory to the application.
    Screenshot of the configuration for Dropbox having been saved
  5. To give your users access to this application, choose the Assigned users tab. Choose Assign users to search your connected directory, and choose a user or group that can access this application.
    Screenshot of Assigned users tab

Use case 3: Manage SSO access to custom SAML-enabled applications

You also can enable SSO access to your custom-built or partner-built SAML applications by using the AWS SSO application configuration wizard.

To configure SSO access to SAML-enabled applications:

  1. Navigate to the AWS SSO console and choose Applications from the navigation pane.
    Screenshot of choosing Applications
  2. Choose Add a new application, choose Custom SAML 2.0 application, and choose Add.
  3. On the Custom SAML 2.0 application page, copy or download the AWS SSO metadata from the AWS SSO metadata section to configure your custom SAML-enabled application to recognize AWS SSO as an identity provider.
    Screenshot of AWS SSO metadata
  4. On the same page, complete the application configuration details in the Application metadata section, and choose Save changes.
    Screenshot of the Application metadata section
  5. You can provide additional SAML attribute mappings to be passed to your application in the SAML assertion by choosing the Attribute mappings tab. See the documentation for a list of all available attributes.
    Screenshot of providing additional SAML attribute mappings
  6. To give your users access to this application, choose the Assigned users tab. Choose Assign users to search your connected directory, and choose a user or group that can access this application.
    Screenshot of the Assigned users tab, where you choose a user or group that can access this application

Summary

In this blog post, I introduced AWS SSO and explained its key features, benefits, and use cases. With AWS SSO, you can centrally manage and audit SSO access to all your AWS accounts, cloud applications, and custom applications. To start using AWS SSO, navigate to the AWS SSO console.

If you have feedback or questions about AWS SSO, start a new thread on the AWS SSO forum.

– Anand


Source: AWS Security


re:Invent Recap – Announcements to Boost Enterprise Innovation with Windows

My colleague Sandy Carter delivered the Enterprise Innovation State of the Union last week at AWS re:Invent. She wrote the guest post below to recap the announcements that she made from the stage.

Jeff;


“I want my company to innovate, but I am not convinced we can execute successfully.” Far too many times I have heard this fear expressed by senior executives I have met at different points in my career. In fact, a recent study published by PwC found that while 93% of executives depend on innovation to drive growth, more than half are challenged to take innovative ideas to market quickly in a scalable way.

Many customers are struggling with how to drive enterprise innovation, so I was thrilled to share the stage at AWS re:Invent this past week with several senior executives who have successfully broken this mold to drive amazing enterprise innovation. In particular, I want to thank Parag Karnik from Johnson & Johnson, Bill Rothe from Hess Corporation, Dave Williams from Just Eat, and Olga Lagunova from Pitney Bowes for sharing their stories of innovation, creativity, and solid execution.

Among the many new announcements from AWS this past week, I am particularly excited about the following newly-launched AWS products and programs that I announced at re:Invent to drive new innovations by our enterprise customers:

AI: New Deep Learning Amazon Machine Image (AMI) on EC2 Windows
As I shared at re:Invent, customers such as Infor are already successfully leveraging artificial intelligence tools on AWS to deliver tailored, industry-specific applications to their customers. We want to make it easier for more of our Windows developers to get started quickly with AI, leveraging machine learning tools built on popular deep learning frameworks such as Apache MXNet, TensorFlow, and Caffe2. To enable this, I announced at re:Invent that AWS now offers a new Deep Learning AMI for Microsoft Windows. The AMI is tailored to facilitate large-scale training of deep learning models, and it enables quick and easy setup of Windows Server-based compute resources for machine learning applications.

IoT: Visualize and Analyze SQL and IoT Data
Forecasts show as many as 31 billion IoT devices by 2020. AWS wants every Windows customer to take advantage of the data available from their devices. Pitney Bowes, for example, now has more than 130,000 IoT devices streaming data to AWS. Using machine learning, Pitney Bowes enriches and analyzes data to enhance their customer experience, improve efficiencies, and create new data products. AWS IoT Analytics can now be leveraged to run analytics on IoT data and get insights that help you make better and more accurate decisions for IoT applications and machine learning use cases. AWS IoT Analytics can automatically enrich IoT device data with contextual metadata such as your SQL Server transactional data.

New Capabilities for .NET Developers on AWS
In addition to all of the enhancements we’ve introduced to deliver a first-class experience to Windows developers on AWS, we announced that we are including .NET Core 2.0 support in AWS Lambda and AWS CodeBuild, which will be available for broader use early next year. .NET Core 2.0 packs a number of new features such as Razor Pages, better compatibility with the .NET Framework, more than double the number of APIs compared to previous versions, and much more. With this announcement, you will be able to take advantage of all the latest .NET Core features on Lambda and CodeBuild for building modern serverless and DevOps-centric solutions.

Simplified Backup for Windows Applications
We recently introduced application-consistent snapshots with Microsoft Volume Shadow Copy Service (VSS). This enables you to take VSS snapshots with Amazon Elastic Block Store (EBS) for your running Windows instances without the need to create custom scripts or to shut down the instances. This removes the overhead associated with backing up your Windows applications.

License optimization for BYOL
AWS provides you a wide variety of instance types and families that best meet your workload needs. If you are using software licensed by the number of vCPUs, you want the ability to further tweak vCPU count to optimize license spend. I announced the upcoming ability to optimize CPUs for EC2, giving you greater control over your EC2 instances on two fronts:

  1. You can specify a custom number of vCPUs when launching new instances to save on vCPU-based licensing costs (for example, SQL Server licensing).
  2. You can disable Hyper-Threading Technology for workloads that perform well with single-threaded CPUs, like some high-performance computing (HPC) applications.

Using these capabilities, customers who bring their own license (BYOL) will be able to optimize their license usage and save on the license costs.
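
To make the idea concrete, here is a minimal sketch using the CPU options parameters of the EC2 RunInstances API in the AWS SDK for Java. The AMI ID and instance type are placeholders, and availability of the feature follows the launch timeline described above.

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CpuOptionsRequest;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class LaunchWithCustomCpuCount {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Placeholder AMI and instance type. Requesting 4 cores with 1 thread
        // per core disables Hyper-Threading and lowers the vCPU count that
        // vCPU-based licenses (for example, SQL Server) are counted against.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-0123456789abcdef0")
                .withInstanceType("r4.4xlarge")
                .withMinCount(1)
                .withMaxCount(1)
                .withCpuOptions(new CpuOptionsRequest()
                        .withCoreCount(4)
                        .withThreadsPerCore(1));

        RunInstancesResult result = ec2.runInstances(request);
        System.out.println("Launched instance: "
                + result.getReservation().getInstances().get(0).getInstanceId());
    }
}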

Server Migration Service for Hyper-V Virtual Machines
As Bill Rothe from Hess Corporation shared at re:Invent, Hess has successfully migrated a wide range of workloads to the cloud, including SQL Server, SharePoint, SAP HANA, and many others. To further support enterprise migrations like these, AWS Server Migration Service (SMS) now supports Hyper-V virtual machine (VM) migration. AWS Server Migration Service will enable you to more easily coordinate large-scale server migrations from on-premises Hyper-V environments to AWS. AWS Server Migration Service allows you to automate, schedule, and track incremental replications of live server volumes. The replicated volumes are encrypted in transit and saved as a new Amazon Machine Image (AMI), which can be launched as an EC2 instance on AWS.

Microsoft Premier Support for AWS End-Customers
I was pleased to announce that Microsoft and AWS have developed new areas of support integration to help ensure a great customer experience. Microsoft Premier Support is on board to help AWS assist end customers. AWS Support engineers can escalate directly to Microsoft Support on behalf of AWS customers running Microsoft workloads.

Best Practice Tools: HIPAA Compliance and Digital Innovation Workshop
In November, we updated our HIPAA-focused white paper, outlining how you can use AWS to create HIPAA-compliant applications. In the first quarter of next year, we will publish a HIPAA Implementation Guide that expands on our HIPAA Quick Start to enable you to follow strict security, compliance, and risk management controls for common healthcare use cases. I was also pleased to award a Digital Innovation Workshop to one of our customers in my re:Invent session, and look forward to seeing more customers take advantage of this workshop.

AWS: The Continuous Innovation Cloud
A common thread we see across customers is that continuous innovation from AWS enables their ongoing reinvention. Continuous innovation means that you are always getting a newer, better offering every single day. Sometimes it is in the form of brand new services and capabilities, and sometimes it is happening invisibly, under the covers where your environment just keeps getting better. I invite you to learn more about how you can accelerate your innovation journey with recently launched AWS services and AWS best practices. If you are migrating Windows workloads, speak with your AWS sales representative or an AWS Microsoft Workloads Competency Partner to learn how you can leverage our re:Think for Windows program for credits to start your migration.

– Sandy Carter, Vice President, AWS


Source: New feed


How to Easily Apply Amazon Cloud Directory Schema Changes with In-Place Schema Upgrades

Now, Amazon Cloud Directory makes it easier for you to apply schema changes across your directories with in-place schema upgrades. Your directory remains available while Cloud Directory applies backward-compatible schema changes such as the addition of new fields. You can upgrade your schemas without migrating data between directories or applying code changes to your applications. You also can view the history of your schema changes in Cloud Directory by using version identifiers, which help you track and audit schema versions across directories. If you have multiple instances of a directory with the same schema, you can view the version history of schema changes to manage your directory fleet and ensure that all directories are running with the same schema version.

In this blog post, I demonstrate how to perform an in-place schema upgrade and use schema versions in Cloud Directory. I add additional attributes to an existing facet and add a new facet to a schema. I then publish the new schema and apply it to running directories, upgrading the schema in place. I also show how to view the version history of a directory schema, which helps me to ensure my directory fleet is running the same version of the schema and has the correct history of schema changes applied to it.

Note: I share Java code examples in this post. I assume that you are familiar with the AWS SDK and can use Java-based code to build a Cloud Directory code example. You can apply the concepts I cover in this post to other programming languages such as Python and Ruby.

Cloud Directory fundamentals

I will start by covering a few Cloud Directory fundamentals. If you are already familiar with the concepts behind Cloud Directory facets, schemas, and schema lifecycles, you can skip to the next section.

Facets: Groups of attributes. You use facets to define object types. For example, you can define a device schema by adding facets such as computers, phones, and tablets. A computer facet can track attributes such as serial number, make, and model. You can then use the facets to create computer objects, phone objects, and tablet objects in the directory to which the schema applies.

Schemas: Collections of facets. Schemas define which types of objects can be created in a directory (such as users, devices, and organizations) and enforce validation of data for each object class. All data within a directory must conform to the applied schema. As a result, the schema definition is essentially a blueprint to construct a directory with an applied schema.

Schema lifecycle: The four distinct states of a schema: Development, Published, Applied, and Deleted. Schemas in the Published and Applied states have version identifiers and cannot be changed. Schemas in the Applied state are used by directories for validation as applications insert or update data. You can change schemas in the Development state as many times as you need to. In-place schema upgrades allow you to apply schema changes to an existing Applied schema in a production directory without the need to export and import the data populated in the directory.

How to add attributes to a computer inventory application schema and perform an in-place schema upgrade

To demonstrate how to set up schema versioning and perform an in-place schema upgrade, I will use an example of a computer inventory application that uses Cloud Directory to store relationship data. Let’s say that at my company, AnyCompany, we use this computer inventory application to track all computers we give to our employees for work use. I previously created a ComputerSchema and assigned its version identifier as 1. This schema contains one facet called ComputerInfo that includes attributes for SerialNumber, Make, and Model, as shown in the following schema details.

Schema: ComputerSchema
Version: 1

Facet: ComputerInfo
Attribute: SerialNumber, type: Integer
Attribute: Make, type: String
Attribute: Model, type: String

AnyCompany has offices in Seattle, Portland, and San Francisco. I have deployed the computer inventory application for each of these three locations. As shown in the lower left part of the following diagram, ComputerSchema is in the Published state with a version of 1. The Published schema is applied to SeattleDirectory, PortlandDirectory, and SanFranciscoDirectory for AnyCompany’s three locations. Implementing separate directories for different geographic locations when you don’t have any queries that cross location boundaries is a good data partitioning strategy and gives your application better response times with lower latency.

Diagram of ComputerSchema in Published state and applied to three directories

Legend for the diagrams in this post

The following code example creates the schema in the Development state by using a JSON file, publishes the schema, and then creates directories for the Seattle, Portland, and San Francisco locations. For this example, I assume the schema has been defined in the JSON file. The createSchema API creates a schema Amazon Resource Name (ARN) with the name defined in the variable, SCHEMA_NAME. I can use the putSchemaFromJson API to add specific schema definitions from the JSON file.

// The utility method to get valid Cloud Directory schema JSON
String validJson = getJsonFile("ComputerSchema_version_1.json");

String SCHEMA_NAME = "ComputerSchema";

String developmentSchemaArn = client.createSchema(new CreateSchemaRequest()
        .withName(SCHEMA_NAME))
        .getSchemaArn();

// Put the schema document in the Development schema
PutSchemaFromJsonResult result = client.putSchemaFromJson(new PutSchemaFromJsonRequest()
        .withSchemaArn(developmentSchemaArn)
        .withDocument(validJson));

The following code example takes the schema that is currently in the Development state and publishes the schema, changing its state to Published.

String SCHEMA_VERSION = "1";
String publishedSchemaArn = client.publishSchema(
        new PublishSchemaRequest()
        .withDevelopmentSchemaArn(developmentSchemaArn)
        .withVersion(SCHEMA_VERSION))
        .getPublishedSchemaArn();

// Our Published schema ARN is as follows
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1

The following code example creates a directory named SeattleDirectory and applies the published schema. The createDirectory API call creates a directory by using the published schema provided in the API parameters. Note that Cloud Directory stores a version of the schema in the directory in the Applied state. I will use similar code to create directories for PortlandDirectory and SanFranciscoDirectory.

String DIRECTORY_NAME = "SeattleDirectory"; 

CreateDirectoryResult directory = client.createDirectory(
        new CreateDirectoryRequest()
        .withName(DIRECTORY_NAME)
        .withSchemaArn(publishedSchemaArn));

String directoryArn = directory.getDirectoryArn();
String appliedSchemaArn = directory.getAppliedSchemaArn();

// This code section can be reused to create directories for Portland and San Francisco locations with the appropriate directory names

// Our directory ARN is as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX

// Our applied schema ARN is as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1

Revising a schema

Now let’s say my company, AnyCompany, wants to add more information for computers and to track which employees have been assigned a computer for work use. I modify the schema to add two attributes to the ComputerInfo facet: Description and OSVersion (operating system version). I make Description optional because it is not important for me to track this attribute for the computer objects I create. I make OSVersion mandatory because it is critical for me to track it for all computer objects so that I can make changes such as applying security patches or making upgrades. Because I make OSVersion mandatory, I must provide a default value that Cloud Directory will apply to objects that were created before the schema revision, in order to handle backward compatibility. Note that you can replace the value in any object with a different value.

I also add a new facet to track computer assignment information, shown in the following updated schema as the ComputerAssignment facet. This facet tracks these additional attributes: Name (the name of the person to whom the computer is assigned), EMail (the email address of the assignee), Department, and department CostCenter. Note that Cloud Directory refers to the previously available version identifier as the Major Version. Because I can now add a minor version to a schema, I also denote the changed schema as Minor Version A.

Schema: ComputerSchema
Major Version: 1
Minor Version: A 

Facet: ComputerInfo
Attribute: SerialNumber, type: Integer 
Attribute: Make, type: String
Attribute: Model, type: String
Attribute: Description, type: String, required: NOT_REQUIRED
Attribute: OSVersion, type: String, required: REQUIRED_ALWAYS, default: "Windows 7"

Facet: ComputerAssignment
Attribute: Name, type: String
Attribute: EMail, type: String
Attribute: Department, type: String
Attribute: CostCenter, type: Integer

The following diagram shows the changes that were made when I added another facet to the schema and attributes to the existing facet. The highlighted area of the diagram (bottom left) shows that the schema changes were published.

Diagram showing that schema changes were published

The following code example revises the existing Development schema by adding the new attributes to the ComputerInfo facet and by adding the ComputerAssignment facet. I use a new JSON file for the schema revision, and for the purposes of this example, I am assuming the JSON file has the full schema including planned revisions.

// The utility method to get a valid CloudDirectory schema JSON
String schemaJson = getJsonFile("ComputerSchema_version_1_A.json");

// Put the schema document in the Development schema
PutSchemaFromJsonResult result = client.putSchemaFromJson(
        new PutSchemaFromJsonRequest()
        .withSchemaArn(developmentSchemaArn)
        .withDocument(schemaJson));

Upgrading the Published schema

The following code example performs an in-place schema upgrade of the Published schema with schema revisions (it adds new attributes to the existing facet and another facet to the schema). The upgradePublishedSchema API upgrades the Published schema with backward-compatible changes from the Development schema.

// From an earlier code example, I know the publishedSchemaArn has this value: "arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1"

// Upgrade publishedSchemaArn to minorVersion A. The Development schema must be backward compatible with 
// the existing publishedSchemaArn. 

String minorVersion = "A";

UpgradePublishedSchemaResult upgradePublishedSchemaResult = client.upgradePublishedSchema(new UpgradePublishedSchemaRequest()
        .withDevelopmentSchemaArn(developmentSchemaArn)
        .withPublishedSchemaArn(publishedSchemaArn)
        .withMinorVersion(minorVersion));

String upgradedPublishedSchemaArn = upgradePublishedSchemaResult.getUpgradedSchemaArn();

// The Published schema ARN after the upgrade shows a minor version as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1/A

Upgrading the Applied schema

The following diagram shows the in-place schema upgrade for the SeattleDirectory directory. I am performing the schema upgrade so that I can reflect the new schemas in all three directories. As a reminder, I added new attributes to the ComputerInfo facet and also added the ComputerAssignment facet. After the schema and directory upgrade, I can create objects for the ComputerInfo and ComputerAssignment facets in the SeattleDirectory. Any objects that were created with the old facet definition for ComputerInfo will now use the default values for any additional attributes defined in the new schema.

Diagram of the in-place schema upgrade for the SeattleDirectory directory

I use the following code example to perform an in-place upgrade of the SeattleDirectory to a Major Version of 1 and a Minor Version of A. Note that you should change a Major Version identifier in a schema to make backward-incompatible changes such as changing the data type of an existing attribute or dropping a mandatory attribute from your schema. Backward-incompatible changes require directory data migration from a previous version to the new version. You should change a Minor Version identifier in a schema to make backward-compatible upgrades such as adding additional attributes or adding facets, which in turn may contain one or more attributes. The upgradeAppliedSchema API lets me upgrade an existing directory with a different version of a schema.

// This upgrades ComputerSchema version 1 of the Applied schema in SeattleDirectory to Major Version 1 and Minor Version A
// The schema must be backward compatible or the API will fail with IncompatibleSchemaException

UpgradeAppliedSchemaResult upgradeAppliedSchemaResult = client.upgradeAppliedSchema(new UpgradeAppliedSchemaRequest()
        .withDirectoryArn(directoryArn)
        .withPublishedSchemaArn(upgradedPublishedSchemaArn));

String upgradedAppliedSchemaArn = upgradeAppliedSchemaResult.getUpgradedSchemaArn();

// The Applied schema ARN after the in-place schema upgrade will appear as follows
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1

// This code section can be reused to upgrade directories for the Portland and San Francisco locations with the appropriate directory ARN

Note: Cloud Directory has excluded returning the Minor Version identifier in the Applied schema ARN for backward compatibility and to enable the application to work across older and newer versions of the directory.

The following diagram shows the changes that are made when I perform an in-place schema upgrade in the two remaining directories, PortlandDirectory and SanFranciscoDirectory. I make these calls sequentially, upgrading PortlandDirectory first and then upgrading SanFranciscoDirectory. I use the same code example that I used earlier to upgrade SeattleDirectory. Now, all my directories are running the most current version of the schema. Also, I made these schema changes without having to migrate data and while maintaining my application’s high availability.

Diagram showing the changes that are made with an in-place schema upgrade in the two remaining directories
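
As mentioned earlier, after the upgrade I can create objects that use the new ComputerAssignment facet. The following sketch reuses the client and ARNs from the earlier code examples; the parent selector, link name, and attribute values are illustrative, and for brevity I set only two of the facet's attributes.

// Create an object in the upgraded directory using the new ComputerAssignment facet
CreateObjectResult createObjectResult = client.createObject(new CreateObjectRequest()
        .withDirectoryArn(directoryArn)
        .withSchemaFacets(new SchemaFacet()
                .withSchemaArn(upgradedAppliedSchemaArn)
                .withFacetName("ComputerAssignment"))
        .withObjectAttributeList(
                new AttributeKeyAndValue()
                        .withKey(new AttributeKey()
                                .withSchemaArn(upgradedAppliedSchemaArn)
                                .withFacetName("ComputerAssignment")
                                .withName("Name"))
                        .withValue(new TypedAttributeValue().withStringValue("Jane Doe")),
                new AttributeKeyAndValue()
                        .withKey(new AttributeKey()
                                .withSchemaArn(upgradedAppliedSchemaArn)
                                .withFacetName("ComputerAssignment")
                                .withName("CostCenter"))
                        .withValue(new TypedAttributeValue().withNumberValue("4202")))
        .withParentReference(new ObjectReference().withSelector("/"))
        .withLinkName("assignment-jane-doe"));

String objectIdentifier = createObjectResult.getObjectIdentifier();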

Schema revision history

I can now view the schema revision history for any of AnyCompany’s directories by using the listAppliedSchemaArns API. Cloud Directory maintains the five most recent versions of applied schema changes. Similarly, to inspect the current Minor Version that was applied to my schema, I use the getAppliedSchemaVersion API. The listAppliedSchemaArns API returns the schema ARNs based on my schema filter as defined in withSchemaArn.

I use the following code example to query an Applied schema for its version history.

// This returns the five most recent Minor Versions associated with a Major Version
ListAppliedSchemaArnsResult listAppliedSchemaArnsResult = client.listAppliedSchemaArns(new ListAppliedSchemaArnsRequest()
        .withDirectoryArn(directoryArn)
        .withSchemaArn(upgradedAppliedSchemaArn));

// Note: The listAppliedSchemaArns API without the SchemaArn filter returns all the Major Versions in a directory

The listAppliedSchemaArns API returns the two ARNs as shown in the following output.

arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1
arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1/A

The following code example queries an Applied schema for current Minor Version by using the getAppliedSchemaVersion API.

// This returns the current Applied schema's Minor Version ARN 

GetAppliedSchemaVersionResult getAppliedSchemaVersionResult = client.getAppliedSchemaVersion(new GetAppliedSchemaVersionRequest()
        .withSchemaArn(upgradedAppliedSchemaArn));

The getAppliedSchemaVersion API returns the current Applied schema ARN with a Minor Version, as shown in the following output.

arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1/A

If you have a lot of directories, schema revision API calls can help you audit your directory fleet and ensure that all directories are running the same version of a schema. Such auditing can help you ensure high integrity of directories across your fleet.
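
The following sketch shows one way to run such an audit with the AWS SDK for Java, reusing the client from the earlier examples. It lists the directories in the account and prints the applied schema ARNs for each one; as noted earlier, calling listAppliedSchemaArns without a SchemaArn filter returns the Major Version ARNs, and pagination is omitted for brevity.

// Audit applied schema versions across all directories in the account
ListDirectoriesResult directories = client.listDirectories(new ListDirectoriesRequest());

for (Directory dir : directories.getDirectories()) {
    ListAppliedSchemaArnsResult appliedSchemas = client.listAppliedSchemaArns(
            new ListAppliedSchemaArnsRequest()
            .withDirectoryArn(dir.getDirectoryArn()));

    System.out.println(dir.getName() + " -> " + appliedSchemas.getSchemaArns());
}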

Summary

You can use in-place schema upgrades to make changes to your directory schema as you evolve your data set to match the needs of your application. An in-place schema upgrade allows you to maintain high availability for your directory and applications while the upgrade takes place. For more information about in-place schema upgrades, see the in-place schema upgrade documentation.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing the solution in this post, start a new thread in the Directory Service forum or contact AWS Support.

– Mahendra

 


Source: AWS Security


Simplifying the Management of Kubernetes with Docker Enterprise Edition

Back in October at DockerCon Europe, we announced that Docker will be delivering a seamless and simplified integration of Kubernetes into the Docker platform. By integrating Kubernetes with Docker EE, we provide the choice to use Kubernetes and/or Docker Swarm for orchestration while maintaining the consistent developer-to-operator workflow users have come to expect from Docker. For users, this means they get an unmodified, conformant version of Kubernetes with the added value of the Docker platform, including security, management, a familiar developer workflow and tooling, broad ecosystem compatibility, and adherence to industry standards, including containerd and the OCI.

Kubernetes and Docker

One of the biggest questions we’ve been asked since we announced support for Kubernetes at DockerCon EU is: what does this mean for an operations team that is already using Kubernetes to orchestrate containers within their enterprise? The answer is fairly straightforward: Kubernetes teams using Docker EE will have the following:

  • Full access to the Kubernetes API and all Kubernetes constructs
  • Native use of kubectl
  • Seamless deployment if you are developing in Kubernetes YAML
  • The ability to develop in Docker with Compose and leverage your best practices around Kubernetes services

Docker Enterprise Edition with support for Kubernetes will be in beta this month and be generally available in February. Those who use Docker for Mac will get access to the first Docker CE edge release with Kubernetes support later this month. Docker for Windows will follow shortly thereafter.

To learn more about how Docker supports Kubernetes for orchestration, watch a quick overview that describes the security and portability benefits realized by using Kubernetes in Docker:

Through Integration, Kubernetes Inherits Advanced Capabilities of the Docker Platform

Integrating Kubernetes into Docker EE means that Kubernetes inherits the Docker EE features for integrated management at scale, security, and automation for enterprise operations.

Easy Production Install

By leveraging Swarm for cluster management, Docker EE makes creating a highly available and fault-tolerant Kubernetes environment much easier and more secure.

Admins can get a cluster up and running with Kubernetes simply by installing Docker EE with a one-line command. Once the cluster is running, Kubernetes is installed as part of Docker EE and runs independently, free of any dependency on Swarm. This includes mutually authenticated TLS for all nodes in the cluster, with certificate rotation. This built-in security functionality means that only nodes that are authorized to be in the cluster can join.

Once added to the cluster, nodes can be assigned dynamically to run Kubernetes services or Swarm services, allowing a clear separation of concerns for administrators. With this advanced node management, operators can easily troubleshoot and diagnose issues with each orchestrator. Additionally, administrators can change the task allocation setting dynamically on each individual node to align their capacity needs with their workloads.

Secure and Unified Supply Chain

Docker EE delivers end-to-end security across the supply chain from the developer’s laptop to production deployment. With this integration, teams leverage the same supply chain capabilities for both Swarm and Kubernetes deployments. That includes:

  • Secure Image Management: With image scanning and Docker Content Trust, Docker EE provides a way to validate and verify images before being deployed and manage them in a privately hosted image registry.
  • Secure Automation: With policy-based image promotion, organizations remove bottlenecks in the supply chain while enforcing policies such as scanning for vulnerabilities.

Secure Multi-Tenancy

With flexible and granular role-based access control (RBAC) down to the API level, admins can integrate AD/LDAP once and support different teams bringing different apps (Windows, Linux, or mainframe) of different app types (microservices, ISV, or traditional), leveraging different orchestrators (Swarm or Kubernetes), all into the same Docker EE environment with secure isolation between them. This allows development teams to bring their own tools and processes into the same environment.

For users deploying applications to the Docker EE cluster, they get the choice of API in Kubernetes or Swarm, across a common set of nodes and using a common set of container images secured by the Trusted Registry.

Deploy Apps to Kubernetes

A new capability with this upcoming release allows organizations to use existing Docker Compose files and deploy to a Docker EE with Kubernetes environment. The same Docker Compose file can be deployed with either orchestrator by mapping Docker service definitions to native Kubernetes resource types.

Centralized Management

With Docker EE, all of these different applications can be monitored and managed in a single environment that is integrated with centralized logging and monitoring capabilities. IT can manage and secure all applications in a unified operating model that aligns with their current responsibilities.

Infrastructure Independence

Docker EE delivers a consistent experience across certified infrastructure platforms, including Linux distributions (RHEL, SLES, CentOS, Ubuntu, Oracle Linux) and Windows, as well as cloud platforms including AWS, Azure, and IBM Cloud. This means organizations are not locked into an underlying platform and get greater cloud portability.

Watch the video below for a complete demo of Kubernetes in Docker EE.

Sign up for the Beta

If you’re interested in getting an easy-to-manage and secure deployment of Kubernetes with advanced lifecycle management capabilities, visit https://www.docker.com/kubernetes and sign up for the upcoming Beta.

To learn more about Kubernetes in Docker:






Source: Docker


Newly Updated Whitepaper: FERPA Compliance on AWS

One of the main tenets of the Family Educational Rights and Privacy Act (FERPA) is the protection of student education records, including personally identifiable information (PII) and directory information. We recently updated our FERPA Compliance on AWS whitepaper to include AWS service-specific guidance for 24 AWS services. The whitepaper describes how these services can be used to help secure protected data. In conjunction with more detailed service-specific documentation, this updated information helps make it easier for you to plan, deploy, and operate secure environments to meet your compliance requirements in the AWS Cloud.

The updated whitepaper is especially useful for educational institutions and their vendors who need to understand:

  • AWS’s Shared Responsibility Model.
  • How AWS services can be used to help deploy educational and PII workloads securely in the AWS Cloud.
  • Key security disciplines in a security program to help you run a FERPA-compliant program (such as auditing, data destruction, and backup and disaster recovery).

In a related effort to help you secure PII, we also added to the whitepaper a mapping of NIST SP 800-122, which provides guidance for protecting PII, as well as a link to our NIST SP 800-53 Quick Start, a CloudFormation template that automatically configures AWS resources and deploys a multi-tier, Linux-based web application. To learn how this Quick Start works, see the Automate NIST Compliance in AWS GovCloud (US) with AWS Quick Start Tools video. The template helps you streamline and automate secure baselines in AWS—from initial design to operational security readiness—by incorporating the expertise of AWS security and compliance subject matter experts.

For more information about AWS Compliance and FERPA or to request support for your organization, contact your AWS account manager.

– Chris Gile, Senior Manager, AWS Security Assurance


Source: AWS Security


Announcing the General Availability of containerd 1.0, the industry-standard runtime used by millions of users

Today, we’re pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. containerd has already been deployed in millions of systems in production today, making it the most widely adopted runtime and an essential upstream component of the Docker platform.

Built to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes, containerd ensures users have a consistent dev-to-ops experience. From Docker’s initial announcement last year that it was spinning out its core runtime to its donation to the CNCF in March 2017, the containerd project has experienced significant growth and progress over the past 12 months.

Within both the Docker and Kubernetes communities, there has been a significant uptick in contributions from independents and CNCF member companies alike, including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei, and ZJU. Similarly, the maintainers have been working to add key functionality to containerd. The initial containerd donation provided everything users need to ensure a seamless container experience, including methods for:

  • transferring container images,
  • container execution and supervision,
  • low-level local storage and network interfaces, and
  • the ability to work on Linux, Windows, and other platforms.

Additional work has been done to add even more powerful capabilities to containerd, including:

  • A complete storage and distribution system that supports both OCI and Docker image formats,
  • A robust events system, and
  • A more sophisticated snapshot model to manage container filesystems.

These changes helped the team build a smaller interface for the snapshotters while still fulfilling the requirements needed by components such as the builder. They also reduce the amount of code needed, making the project much easier to maintain in the long run.

The containerd 1.0 milestone comes after several months of testing both the alpha and beta versions, which enabled the team to implement many performance improvements. Some of these improvements include the creation of a stress-testing system, improvements in garbage collection, and reductions in shim memory usage.

“In 2017 key functionality has been added to containerd to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes,” said Michael Crosby, Maintainer for containerd and engineer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embeddable in higher level systems to provide core container capabilities. We will continue to work with the community to create a runtime that’s lightweight yet powerful, balancing new functionality with the desire for code that is easy to support and maintain.”

containerd is already being used by Kubernetes for its cri-containerd project, which enables users to run Kubernetes clusters using containerd as the underlying runtime. containerd is also an essential upstream component of the Docker platform and is currently used by millions of end users. There is also strong alignment with other CNCF projects: containerd exposes an API using gRPC and exposes metrics in the Prometheus format. containerd also fully leverages the Open Container Initiative (OCI) runtime, image format specifications and OCI reference implementation (runC), and will pursue OCI certification when it is available.

Key Milestones in the progress to 1.0 include:

containerd 1.0

Notable containerd facts and figures:

  • 1922 GitHub stars, 401 forks
  • 108 contributors
  • 8 maintainers from independents and member companies alike, including Docker, Google, IBM, ZTE, and ZJU.
  • 2949+ commits, 26 releases

Availability and Resources

To participate in containerd, visit https://github.com/docker/containerd/.




The post Announcing the General Availability of containerd 1.0, the industry-standard runtime used by millions of users appeared first on Docker Blog.


Source: Docker


AWS Contributes to Milestone 1.0 Release and Adds Model Serving Capability for Apache MXNet

Post by Dr. Matt Wood

Today AWS announced contributions to the milestone 1.0 release of the Apache MXNet deep learning engine, including the introduction of a new model-serving capability for MXNet. The new capabilities in MXNet provide the following benefits to users:

1) MXNet is easier to use: The model server for MXNet is a new capability introduced by AWS, and it packages, runs, and serves deep learning models in seconds with just a few lines of code, making them accessible over the internet via an API endpoint and thus easy to integrate into applications. The 1.0 release also includes an advanced indexing capability that enables users to perform matrix operations in a more intuitive manner.

  • Model Serving enables setup of an API endpoint for prediction: It saves developers time and effort by condensing the task of setting up an API endpoint for running and integrating prediction functionality into an application to just a few lines of code. It bridges the gap between Python-based deep learning frameworks and production systems through a Docker container-based deployment model.
  • Advanced indexing for array operations in MXNet: It is now more intuitive for developers to leverage the powerful array operations in MXNet. They can use the advanced indexing capability by leveraging existing knowledge of NumPy/SciPy arrays. For example, an MXNet NDArray or a NumPy ndarray can be used as an index, e.g. a[mx.nd.array([1, 2], dtype='int32')]; a short sketch follows this list.
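
As a rough illustration of the advanced indexing capability (a minimal sketch; the array shape and index values are placeholders chosen for demonstration):

import mxnet as mx

# A 4x3 matrix of sample values.
a = mx.nd.arange(12).reshape((4, 3))

# Select rows 1 and 2 with an NDArray of integer indices,
# just as you would with a NumPy integer-array index.
idx = mx.nd.array([1, 2], dtype='int32')
rows = a[idx]
print(rows.asnumpy())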

2) MXNet is faster: The 1.0 release includes implementation of cutting-edge features that optimize the performance of training and inference. Gradient compression enables users to train models up to five times faster by reducing communication bandwidth between compute nodes without loss in convergence rate or accuracy. For speech recognition acoustic modeling like the Alexa voice, this feature can reduce network bandwidth by up to three orders of magnitude during training. With support for the NVIDIA Collective Communication Library (NCCL), users can train a model about 20% faster on multi-GPU systems.

  • Optimize network bandwidth with gradient compression: In distributed training, each machine must communicate frequently with the others to update the weight vectors and thereby collectively build a single model, leading to high network traffic. The gradient compression algorithm enables users to train models up to five times faster by compressing the model changes communicated by each instance.
  • Optimize training performance by taking advantage of NCCL: NCCL implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs. NCCL provides communication routines that are optimized to achieve high bandwidth over the interconnect between multiple GPUs. MXNet supports NCCL to train models about 20% faster on multi-GPU systems. A short sketch of enabling both options follows this list.
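
As a minimal sketch of how these two options can be enabled (the parameter values are illustrative, and the compression example assumes the script is launched with MXNet's distributed training launcher):

import mxnet as mx

# Gradient compression is configured on the kvstore used for distributed
# training; 'threshold' controls the 2-bit quantization boundary.
kv = mx.kv.create('dist_sync')
kv.set_gradient_compression({'type': '2bit', 'threshold': 0.5})

# On a single multi-GPU host, an NCCL-backed kvstore (available when MXNet
# is built with NCCL support) speeds up gradient aggregation across GPUs:
# kv = mx.kv.create('nccl')

Pass the resulting kvstore to your training code (for example, the Module API's fit call) as usual.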

3) MXNet provides easy interoperability: MXNet now includes a tool for converting neural network code written with the Caffe framework to MXNet code, making it easier for users to take advantage of MXNet’s scalability and performance.

  • Migrate Caffe models to MXNet: It is now possible to migrate Caffe models to MXNet easily, using the new source code translation tool that converts Caffe code to MXNet code.

MXNet has helped developers and researchers make progress with everything from language translation to autonomous vehicles and behavioral biometric security. We are excited to see the broad base of users that are building production artificial intelligence applications powered by neural network models developed and trained with MXNet. For example, the autonomous driving company TuSimple recently piloted a self-driving truck on a 200-mile journey from Yuma, Arizona to San Diego, California using MXNet. This release also includes a full-featured and performance-optimized version of the Gluon programming interface. The ease of use associated with it, combined with the extensive set of tutorials, has led to significant adoption among developers new to deep learning. The flexibility of the interface has driven interest within the research community, especially in the natural language processing domain.

Getting started with MXNet
Getting started with MXNet is simple. To learn more about the Gluon interface and deep learning, you can reference this comprehensive set of tutorials, which covers everything from an introduction to deep learning to how to implement cutting-edge neural network models. If you’re a contributor to a machine learning framework, check out the interface specs on GitHub.
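
To give a flavor of the Gluon interface mentioned above, here is a minimal training sketch (the network shape, hyperparameters, and random placeholder data are arbitrary choices for illustration, not taken from any tutorial):

import numpy as np
import mxnet as mx
from mxnet import autograd, gluon, nd

# A small multilayer perceptron defined with Gluon's imperative API.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(128, activation='relu'))
    net.add(gluon.nn.Dense(10))
net.initialize(mx.init.Xavier())

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

# One training step on random placeholder data (batch of 32, 784 features).
x = nd.array(np.random.uniform(size=(32, 784)))
y = nd.array(np.random.randint(0, 10, size=(32,)))
with autograd.record():
    loss = loss_fn(net(x), y)
loss.backward()
trainer.step(batch_size=32)
print('mean loss:', loss.mean().asscalar())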

To get started with the Model Server for Apache MXNet, install the library with the following command:

$ pip install mxnet-model-server

The Model Server library has a Model Zoo with 10 pre-trained deep learning models, including the SqueezeNet 1.1 object classification model. You can start serving the SqueezeNet model with just the following command:

$ mxnet-model-server \
  --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model \
  --service dms/model_service/mxnet_vision_service.py
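
Once the server is running, you can send an image to the prediction endpoint. The sketch below assumes the default host and port as well as the endpoint path and form field name; check the project README (linked below) if your setup differs:

import requests

# POST an image to the SqueezeNet prediction endpoint as multipart form data.
# The URL and the 'input0' field name are assumptions for this sketch.
url = 'http://127.0.0.1:8080/squeezenet/predict'
with open('kitten.jpg', 'rb') as f:
    resp = requests.post(url, files={'input0': f})

print(resp.status_code)
print(resp.json())  # top predicted classes with probabilities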

Learn more about the Model Server and view the source code, reference examples, and tutorials here: https://github.com/awslabs/mxnet-model-server/

-Dr. Matt Wood


Source: New feed
