DockerCon 2018 San Francisco CFP is Open

DockerCon CFP

Deadline: January 18th at 11:59 PST

The DockerCon San Francisco 2018 Call for Proposals is open! From beginners to experts, the Docker and Moby communities come to DockerCon to learn, share and contribute. If you have a Docker story to share, submit your talk today. The deadline for submissions is January 18th, 2018 at 11:59 PST.


Submit a talk

IT Pros How-tos (new)

SysAdmins, what is your container story? How did you operationalize Docker in your organization and what changes did it bring about? Tell us about a day or week in your life, and be sure to share your learnings, insights, recommendations and future plans!

Containers in Production – Customer Stories

Are you a Docker EE customer with production implementation advice and learnings to share? Can you share your technology stack, architecture decisions and trade-offs, and your ROI? When attendees leave your session, they should understand how to apply your takeaways to their use case.

Great examples from previous events: Beyond Chicken Nuggets: 1 year and 1,000 Containers Later at ADP by James Ford and Taking Docker from Local to Production at Intuit by JanJaap Lahpor and Harish Jayakumar

Cool Apps

What are you building with the Docker Platform? We love hearing the interesting and unexpected ways that the community is using the technology to solve real problems.

Great examples from previous events: Take Control of your Maps with Docker by Petr Pridal and Play with Docker (PWD): Inside Out by Marcos Nils and Jonas Leibiusky

Edge Use Cases

If you have a use case that involves IoT, big data, serverless or any other emerging technology, then share your story!

Great examples from previous events: From Zero to Serverless in 60 Seconds, Anywhere by Alex Ellis and Repainting the Past with Distributed Machine Learning and Docker by Finnian Anderson and Oli Callaghan

Deeply Technical

One way to achieve deep understanding of a complex system is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. We are looking for deeply technical talks covering not only container technology, but also related projects.

Great examples from previous events: What Have Syscalls Done for You Lately? by Liz Rice and Deep Dive in Docker Overlay Networks by Laurent Bernaille

Transformational Stories

Technology is only one part of innovation – people, processes and how we interact on a day-to-day basis must also enable progress. We are looking for talks that focus on the impact of change – both for organizations and ourselves as individuals and communities. Share your inspirational and insightful story and leave the audience energized and equipped to drive innovation.

Great examples from previous events: Rock Stars, Builders and Janitors: You’re Doing it Wrong by Alice Goldfuss and Learn Fast, Fail Fast, Deliver Fast: The ModSquad Way at MetLife by Tim Tyler

Using Docker Technology

Using Docker sessions are practical sessions for Docker users, dev and ops alike. Filled with advice, learnings, and insight, these sessions will help you get started with Docker, understand it better, and bring Docker into your workflow.

Great examples from previous events: Taking Docker to Production, What you Need to Know and Decide by Bret Fisher and Creating Effective Docker Images by Abby Fuller


Do you have a cool Moby use case? Are you using LinuxKit, InfraKit, BuildKit, containerd or other plumbing in your projects? Share the ways you are using Moby’s open source components to build technology that solves real problems.

Great examples from previous events: Looking Under The Hood: containerD by Scott Coulton and BuildKit by Tonis Tiigi

Proposal Don’ts

No matter what type of proposal you submit, there are things that you should not do. Read on for the surefire ways of not getting past the initial review.

Sales pitches

No, just don’t. It’s acceptable to mention your company’s product during a presentation but it should never be the focus of your talk.

Bulk submissions

If your proposal reads as a generic talk that has been submitted to a number of conferences, it will not pass the initial review. A talk can certainly be a polished version of an earlier talk, but the proposal should be tailored for DockerCon 2018.


Jargon

If the proposal contains jargon, it's very likely that the presentation will also contain jargon. Although DockerCon 2018 is a technology conference, we value the ability to explain and make your points with clear, easy-to-follow language.

So, what happens next?

After a proposal is submitted, it will be reviewed initially for content. Once past the initial review, a committee of reviewers will read the proposals and select the best ones. Note that there are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.

The deadline for proposal submission is January 18, 2018 at 11:59 PST.

We’re looking forward to reading your proposals!

Submit a talk


Learn more about DockerCon:

The post DockerCon 2018 San Francisco CFP is Open appeared first on Docker Blog.

Source: Docker


A tour of containerd 1.0


We have done a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way. Containerd is used by Docker, Kubernetes CRI, and a few other projects but this is a post for people who may not know what containerd actually does within these platforms.  I would like to do more posts on the feature set and design of containerd in the future but for now, we will start with the basics.

I think the container ecosystem can be confusing at times, especially with the terminology that we use. What's this? A runtime. And this? A runtime… containerd (pronounced “container-dee” as the name implies, not “contain nerd” as some would like to troll me with) is a container daemon. It was originally built as an integration point for OCI runtimes like runc, but over the past six months it has added a lot of functionality to bring it up to par with the needs of modern container platforms like Docker and orchestration systems like Kubernetes.

So what do you actually get using containerd? You get push and pull functionality as well as image management. You get container lifecycle APIs to create, execute, and manage containers and their tasks. You get an entire API dedicated to snapshot management and an openly governed project to depend on. Basically, everything that you need to build a container platform without having to deal with the underlying OS details. I think the most important part of containerd is having a versioned and stable API that will have bug fixes and security patches backported.
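As a rough sketch of that surface from Go, using containerd's client library (this assumes a running containerd daemon on the default socket, so it is illustrative rather than runnable standalone; error handling omitted for brevity):

```go
// Sketch only: requires a live containerd daemon; imports come from
// the github.com/containerd/containerd client packages.
client, _ := containerd.New("/run/containerd/containerd.sock")
defer client.Close()

// All operations are scoped to a namespace.
ctx := namespaces.WithNamespace(context.Background(), "example")

// Push/pull and image management.
image, _ := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)

// Container lifecycle: a container backed by a new snapshot and an OCI spec.
container, _ := client.NewContainer(ctx, "redis",
	containerd.WithNewSnapshot("redis-rootfs", image),
	containerd.WithNewSpec(oci.WithImageConfig(image)),
)

// A task is the running instance of the container.
task, _ := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
_ = task.Start(ctx)
```

Note how the client never touches kernel primitives directly: images, containers, and tasks are the stable types the paragraph above describes.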


There is no such thing as a "Linux container" in the kernel; a container is various kernel features tied together. When you are building a large platform or distributed system, you want an abstraction layer between your management code and the syscalls and duct tape of features required to run a container. That is where containerd lives. It provides clients a layer of stable types that platforms can build on top of without ever having to drop down to the kernel level. It's much nicer to work with Container, Task, and Snapshot types than to manage calls to clone() or mount(). Balanced with the flexibility to interact directly with the runtime or host machine, these objects avoid the sacrifice of capabilities that typically comes with higher-level abstractions. The result is that easy tasks are simple to complete and hard tasks are possible.

containerd was designed to be used by Docker and Kubernetes as well as any other container system that wants to abstract away syscalls or OS-specific functionality to run containers on Linux, Windows, Solaris, or other operating systems. With these users in mind, we wanted to make sure that containerd has only what they need and nothing that they don't. Realistically this is impossible, but at least that is what we try for. While networking is out of scope for containerd, what it doesn't do lets higher-level systems have full control. The reason is that when you are building a distributed system, networking is a very central aspect. With SDN and service discovery today, networking is far more platform-specific than abstracting away netlink calls on Linux. Most of the new overlay networks are route-based and require routing tables to be updated each time a new container is created or deleted. Service discovery, DNS, and so on all have to be notified of these changes as well. Supporting all the different network interfaces, hooks, and integration points would have added a large chunk of code to containerd. What we did instead was opt for a robust events system inside containerd so that multiple consumers can subscribe to the events that they care about. We also expose a Task API that lets users create a running task, add interfaces to the network namespace of the container, and then start the container's process without the need for complex hooks at various points of a container's lifecycle.

Another area that has been added to containerd over the past few months is a complete storage and distribution system that supports both OCI and Docker image formats.  You have a complete content addressed storage system across the containerd API that works not only for images but also metadata, checkpoints, and arbitrary data attached to containers.

We also took the time to rethink how “graphdrivers” work. These are the overlay or block-level filesystems that allow images to have layers and let you perform efficient builds. Graphdrivers were initially written by Solomon and me when we added support for devicemapper. Docker only supported AUFS at the time, so we modeled the graphdrivers after the overlay filesystem. However, making a block-level filesystem such as devicemapper/LVM act like an overlay filesystem proved to be much harder to do in the long run. The interfaces had to expand over time to support different features than what we originally thought would be needed. With containerd, we took a different approach: make overlay filesystems act like a snapshotter instead of vice versa. This was much easier to do, as overlay filesystems provide much more flexibility than snapshotting filesystems like BTRFS, ZFS, and devicemapper, since they don't have a strict parent/child relationship. This helped us build a smaller interface for the snapshotters while still fulfilling the requirements of things like the builder, and it reduced the amount of code needed, making it much easier to maintain in the long run.

You can find more details about the architecture of containerd in Stephen Day’s Dec 7th 2017 KubeCon SIG Node presentation.

In addition to the technical and design changes in the 1.0 codebase, we also switched the containerd governance model from the long-standing BDFL to a Technical Steering Committee, giving the community an independent third-party resource to rely on.

Learn more about containerd:


The post A tour of containerd 1.0 appeared first on Docker Blog.



Simplifying the Management of Kubernetes with Docker Enterprise Edition

Back in October at DockerCon Europe, we announced that Docker will be delivering a seamless and simplified integration of Kubernetes into the Docker platform. By integrating Kubernetes with Docker EE, we provide the choice to use Kubernetes and/or Docker Swarm for orchestration while maintaining the consistent developer-to-operator workflow users have come to expect from Docker. For users, this means they get an unmodified, conformant version of Kubernetes with the added value of the Docker platform including security, management, a familiar developer workflow and tooling, broad ecosystem compatibility and an adherence to industry standards including containerd and the OCI.

Kubernetes and Docker

One of the biggest questions we've been asked since we announced support for Kubernetes at DockerCon EU is: what does this mean for an operations team that is already using Kubernetes to orchestrate containers within their enterprise? The answer is fairly straightforward: Kubernetes teams using Docker EE will have the following:

  • Full access to the Kubernetes API and all Kubernetes constructs
  • Native use of kubectl
  • Seamless deployment if you are developing in Kubernetes YAML
  • Ability to develop in Docker with Compose and leverage your best practices around Kubernetes services

Docker Enterprise Edition with support for Kubernetes will be in beta this month and be generally available in February. Those who use Docker for Mac will get access to the first Docker CE edge release with Kubernetes support later this month. Docker for Windows will follow shortly thereafter.

To learn more about how Docker supports Kubernetes for orchestration, watch a quick overview that describes the security and portability benefits realized by using Kubernetes in Docker:

Through Integration, Kubernetes inherits Advanced Capabilities of the Docker Platform

Integrating Kubernetes into Docker EE means that Kubernetes inherits the Docker EE features for integrated management at scale, security and automation for enterprise operations.

Easy Production Install

By leveraging Swarm for cluster management, creating a highly available and fault-tolerant Kubernetes environment is much easier and much more secure.

Admins can get a cluster up and running with Kubernetes by simply installing Docker EE with a one-line command. Once the cluster is running, Kubernetes is installed as part of Docker EE and runs independently, free of any dependency on Swarm. This includes mutually authenticated TLS for all nodes in the cluster with certificate rotation. This built-in security functionality means that only nodes that are authorized to be in the cluster can join.
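The exact commands depend on the release, but the flow at the time looked roughly like the following sketch (image tag, addresses, and tokens are illustrative placeholders, not exact values):

```shell
# Bootstrap the first manager; UCP installs Docker EE's management plane
# (and, with this release, Kubernetes) in one command.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.0.0 install --host-address <manager-ip> --interactive

# Additional nodes join over mutually authenticated TLS using a join token.
docker swarm join --token <worker-token> <manager-ip>:2377
```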

Nodes, once added to the cluster, can be dynamically assigned to run Kubernetes services or Swarm services, allowing a clear separation of concerns for administrators. With this advanced node management, operators can easily troubleshoot and diagnose issues with each orchestrator. Additionally, administrators can change the task allocation setting dynamically on each individual node to align their capacity needs with their workloads.
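In Docker EE's control plane this per-node setting is surfaced as an orchestrator label. A hedged example of flipping a node to Kubernetes workloads from the CLI (label key as documented for UCP 3.0; the node name is illustrative):

```shell
# Assign node-3 to run Kubernetes workloads instead of Swarm workloads.
docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true node-3
docker node update --label-add com.docker.ucp.orchestrator.swarm=false node-3
```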

Secure and Unified Supply Chain

Docker EE delivers end-to-end security across the supply chain from the developer’s laptop to production deployment. With this integration, teams leverage the same supply chain capabilities for both Swarm and Kubernetes deployments. That includes:

  • Secure Image Management: With image scanning and Docker Content Trust, Docker EE provides a way to validate and verify images before being deployed and manage them in a privately hosted image registry.
  • Secure Automation: With policy-based image promotion, organizations remove bottlenecks in the supply chain while enforcing policies such as scanning for vulnerabilities.

Secure Multi-Tenancy

With flexible and granular role-based access control (RBAC) down to the API level, admins can integrate AD/LDAP once and support different teams bringing different apps (Windows, Linux, or mainframe) of different app types (microservices, ISV, or traditional), leveraging different orchestrators (Swarm or Kubernetes), all into the same Docker EE environment with secure isolation between them. This allows development teams to bring their own tools and processes into the same environment.

For users deploying applications to the Docker EE cluster, they get the choice of API in Kubernetes or Swarm, across a common set of nodes and using a common set of container images secured by the Trusted Registry.

Deploy Apps to Kubernetes

A new capability with this upcoming release allows organizations to use existing Docker Compose files and deploy to a Docker EE with Kubernetes environment. The same Docker Compose file can be deployed with either orchestrator by mapping Docker service definitions to native Kubernetes resource types.
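As an illustration (service name and image are hypothetical), a Compose file like this can be deployed unchanged to either orchestrator; under Kubernetes each service maps to native resource types such as a Deployment:

```yaml
version: "3.3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # exposed via a Kubernetes Service when deployed there
    deploy:
      replicas: 3   # maps to a Deployment with 3 replicas
```

Deployment then happens through the familiar stack workflow, e.g. `docker stack deploy -c docker-compose.yml web` against the Docker EE endpoint.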

Centralized Management

With Docker EE, all of these different applications can be monitored and managed in a single environment that is integrated with centralized logging and monitoring capabilities. IT can manage and secure all applications in a unified operating model that aligns with their current responsibilities.

Infrastructure Independence

Docker EE delivers a consistent experience across certified infrastructure platforms including Linux distributions (RHEL, SLES, CentOS, Ubuntu, Oracle Linux) and Windows, as well as cloud platforms including AWS, Azure, and IBM Cloud. This means organizations are not locked into an underlying platform and get greater cloud portability.

Watch the video below for a complete demo of Kubernetes in Docker EE.

Sign up for the Beta

If you’re interested in getting an easy-to-manage and secure deployment of Kubernetes with advanced lifecycle management capabilities, visit and sign up for the upcoming Beta.

To learn more about Kubernetes in Docker:


The post Simplifying the Management of Kubernetes with Docker Enterprise Edition appeared first on Docker Blog.



Announcing the General Availability of containerd 1.0, the industry-standard runtime used by millions of users

Today, we’re pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. containerd has already been deployed in millions of systems in production today, making it the most widely adopted runtime and an essential upstream component of the Docker platform.

Built to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes, containerd ensures users have a consistent dev to ops experience. From Docker’s initial announcement last year that it was spinning out its core runtime to its donation to the CNCF in March 2017, the containerd project has experienced significant growth and progress over the past 12 months.

Within both the Docker and Kubernetes communities, there has been a significant uptick in contributions from independents and CNCF member companies alike, including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei and ZJU. Similarly, the maintainers have been working to add key functionality to containerd. The initial containerd donation provided everything users need to ensure a seamless container experience, including methods for:

  • transferring container images,
  • container execution and supervision,
  • low-level local storage and network interfaces, and
  • the ability to work on Linux, Windows and other platforms.

Additional work has been done to add even more powerful capabilities to containerd, including:

  • a complete storage and distribution system that supports both OCI and Docker image formats,
  • a robust events system, and
  • a more sophisticated snapshot model to manage container filesystems.

These changes helped the team build out a smaller interface for the snapshotters, while still fulfilling the requirements needed from things like a builder. It also reduces the amount of code needed, making it much easier to maintain in the long run.

The containerd 1.0 milestone comes after several months of testing both the alpha and beta versions, which enabled the team to implement many performance improvements. Some of these improvements include the creation of a stress-testing system and improvements in garbage collection and shim memory usage.

“In 2017, key functionality has been added to containerd to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes,” said Michael Crosby, Maintainer for containerd and engineer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embeddable in higher level systems to provide core container capabilities. We will continue to work with the community to create a runtime that’s lightweight yet powerful, balancing new functionality with the desire for code that is easy to support and maintain.”

containerd is already being used by Kubernetes for its cri-containerd project, which enables users to run Kubernetes clusters using containerd as the underlying runtime. containerd is also an essential upstream component of the Docker platform and is currently used by millions of end users. There is also strong alignment with other CNCF projects: containerd exposes an API using gRPC and exposes metrics in the Prometheus format. containerd also fully leverages the Open Container Initiative (OCI) runtime, image format specifications and OCI reference implementation (runC), and will pursue OCI certification when it is available.

Key Milestones in the progress to 1.0 include:


Notable containerd facts and figures:

  • 1922 GitHub stars, 401 forks
  • 108 contributors
  • 8 maintainers from independents and member companies alike, including Docker, Google, IBM, ZTE and ZJU
  • 2949+ commits, 26 releases

Availability and Resources

To participate in containerd:


The post Announcing the General Availability of containerd 1.0, the industry-standard runtime used by millions of users appeared first on Docker Blog.



The Journey to 150,000 Containers at PayPal

PayPal is committed to democratizing financial services and empowering people and businesses to join and thrive in the global economy. Their open digital payments platform gives 218 million active account holders the confidence to connect and transact in new and powerful ways. To achieve this, PayPal has built a global presence that must be highly available to all its users: if PayPal is down, the effects ripple down to many of their small business customers, who rely on PayPal as their sole payment processing solution.

PayPal turned to Docker Enterprise Edition to help them achieve new operational efficiencies, including a 50% increase in the speed of their build-test-deploy cycles. At the same time, they increased application availability through Docker’s dynamic placement capabilities and infrastructure independence; and they improved security by using Docker to automate and granularly control access to resources. On top of the operational benefits, PayPal’s use of Docker empowered developers to innovate and try new tools and frameworks that previously were difficult to introduce due to PayPal’s application and operational complexity.

Meghdoot Bhattacharya, Cloud Engineer at PayPal, shared the journey his team has helped PayPal undertake over the course of the past two years to introduce Docker in to the PayPal environment and grow its usage into the standard platform for application delivery. You can view the full DockerCon EU 2017 session here:

PayPal’s Past, Present, and Future with Docker

While PayPal may not be an “old” company, over the course of their 17+ years they have built a global payment processing platform consisting of hundreds, if not thousands, of individual applications and services. As is usual with this level of rapid development and growth, there are a plethora of different tools and processes for building, testing, deploying and running each of these applications. For both the developer teams and for operations, the lack of standardization created issues. Developers were locked into a limited set of tools and frameworks and it was difficult to add anything new to the mix because of the burden it placed on test and operations teams to figure out how to deploy and run each new stack. For operations, the strain of deploying apps to multiple architectures and multiple clouds, each with different sets of instrumentation and procedures, led to slow turnaround times for deployment and maintenance.

PayPal’s initial use of Docker was simply as a means to create a single, consistent application packaging format that could be used with as many of their existing applications as possible. They focused on the operational side of the house, creating a unified application delivery and operational model, to address their issues around application silos. PayPal undertook this phase prior to the launch of Docker’s official Modernize Traditional Applications program, yet they shared many of the same strategies and benefits: start with existing applications where you already have in-house expertise around the day-to-day operations, and focus on migrating the applications to containers without changing the code. At this stage, PayPal recognized several important benefits, even though the apps themselves remained the same:

  • Decoupled deployment from framework stacks: PayPal now had a platform that could deploy almost any application using the same procedures, regardless of programming languages, libraries, and stacks
  • Modernized OS and kernels: Since Docker packages all the application dependencies into the container, they were able to decouple them from the underlying OS and thus modernize the infrastructure without changing the app. Some applications got 10-20% boosts in efficiency for free, simply by running the containers on top of a modern OS and kernel.
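To see why, consider a hypothetical Dockerfile (not PayPal's actual build): every dependency the app needs is pinned inside the image, so the host OS and kernel can be modernized underneath without touching the app, and refreshing the image's own userland is a one-line change to the base image:

```dockerfile
# Hypothetical example; all runtime dependencies live inside the image.
# Upgrading the userland later means changing only the FROM line.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        openjdk-8-jre-headless \
    && rm -rf /var/lib/apt/lists/*
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```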


As the Docker container format and operations became well understood, PayPal opened the door to developers to learn and use containers. Because Docker provided a standard methodology for shipping and running the applications, developers were free to experiment with new programming methodologies and frameworks, while operations and support teams did not have to worry about those complexities. With developers joining the Docker Supply Chain, PayPal recognized a greater than 50% boost to the build-test-deploy cycles for their applications. With Docker, every developer, operator, tester and automation tool was able to run the same application, with the same configuration. For system administrators, there were additional benefits as well. OS upgrades, which used to take a full year to roll out, were now done in one shot by simply updating the Dockerfile for a container image. As word of these benefits spread within PayPal, other organizations not originally part of the pilot started to adopt Docker for their own use cases and the Docker pilot team at PayPal helped them get going as well.

Results and What’s Next

In just under two years since the inception of phase I, PayPal has moved over 700 apps to Docker, running in 150,000 containers. Along the way, PayPal has seen a 50% increase in productivity in building, testing, and deploying their Dockerized applications. Projects like OS upgrades that used to take a year can now be done across dev, test, and production with a single change. And developers are free to explore emerging technologies for their applications, while operators and administrators enjoy a single set of operating procedures no matter the application.

In their final phase, PayPal has begun to roll out a full Container-as-a-Service (CaaS) platform across all their clouds. In this phase, they are focusing on efficiency gains. So far, the applications moved to the new Docker CaaS platform have reduced resource consumption, increased security and availability, and enabled PayPal to provide a single platform for tooling, monitoring and operations across all environments.


PayPal’s journey to implement Docker is a great blueprint for other organizations to follow. Focusing on adoption in manageable steps, building a team of in-house experts that can consult and assist other teams, and setting clear goals and measuring results are key to implementing a transformation that can drastically improve operations and development.

Learn more about PayPal by viewing their case study.


To learn more about Docker Enterprise Edition:

The post The Journey to 150,000 Containers at PayPal appeared first on Docker Blog.



Introducing Docker Community Leaders – Formerly Known as Meetup Organizers

As we continue to grow, we’ve been thinking of ways to better serve the Docker community and give more visibility and recognition to the people who don’t just organize events, but who also teach, mentor and volunteer in their community.

What’s New?

  • New name! These folks don’t just organize meetups, they are leaders, mentors, teachers, speakers, and volunteers in their local community.
  • New mascot! About time, right? What better animal than the social dolphin to represent this amazing group! Our community leaders are pros at juggling several tasks and filling multiple roles at the same time.
  • New Docker Community Events site! We’ve brought community events together in one place, making it easier than ever for you to find a local event! Learn more here

They spoke and we listened

At each DockerCon, we host a Summit for our Meetup Organizers to make connections, review best practices, and give feedback on what we’re doing well and what we can do better. In Austin, the group reviewed their program as a whole, and came to the decision that the title Meetup Organizer didn’t fully encompass the complete role they played in their local community. After the conference, we continued the conversation, and they reached a decision. Moving forward, Docker Meetup Organizers will now be known as Docker Community Leaders.


What’s it like being a Community Leader?

Get Involved

Interested in becoming a Docker Community Leader? We’d love to hear from you! Check out this page for more information on getting involved and how to apply.

A Big Thank You!

With an ever-growing community and 550+ active community leaders in 81 countries, who collectively support a global community of over 200,000 members, there’s a lot that Docker has to be thankful for! We are continuously humbled by the dedication and time these individuals take to support their local communities.

Beyond being thankful for this amazing group, here are five more reasons why we’re thankful for our community.

Other ways to get involved:



The post Introducing Docker Community Leaders – Formerly Known as Meetup Organizers appeared first on Docker Blog.



How Docker Enterprise Edition Helps Open Doors at Assa Abloy

ASSA ABLOY is the world’s largest lock manufacturer with 47,000 employees worldwide and well-known brands like Yale, Sargent and Assa in their portfolio. The vision for ASSA ABLOY is to become the most innovative provider of door opening solutions through growth of electro-mechanical and digital entry solutions. With increasingly global operations to deal with as well, ASSA ABLOY recognized the opportunity to leverage public cloud, microservices and containers to fuel this digital transformation.

Jan Hedstrom, Cloud Infrastructure Architect in the Shared Technologies department at ASSA ABLOY, and Patrick Van Der Bleek, Solutions Engineer at Docker, presented at DockerCon Europe on how ASSA ABLOY leveraged Docker Enterprise Edition (Docker EE) as its central, secure container management platform for its global hardware and software workflow.

You can watch their entire talk here:


Journey from Docker CE to Docker EE

Some developers at ASSA ABLOY started using Docker for microservice development back in 2014, but the effort was uncoordinated: manual, scripted deployments of containers onto individual servers, inconsistent practices, no separation between teams, and no image standards. Additionally, ASSA ABLOY knew that going to a public cloud like AWS would give them a “datacenter with superpowers”, but they were concerned about cloud lock-in.




ASSA ABLOY looked into solutions to help them manage their containers in the cloud, evaluating Amazon ECR and ECS, Nexus with Kubernetes, and Docker EE (then Docker Datacenter). They chose Docker EE, which gave them a centralized container management platform with orchestration and a secure, private registry solution that would also be portable across clouds and infrastructure platforms.

Results and What’s Next


Today, ASSA ABLOY has Docker EE deployed in AWS with the plan to extend across multiple regions, supporting multiple development teams. They are fully leveraging Docker Trusted Registry capabilities and have noticed these key improvements:

  • Secure Software Supply Chain: Prior to Docker EE, different teams could view, edit, and even delete other teams’ images. Now, teams are isolated from one another with clear role-based permissions, and they are starting to use digital signing to validate ownership of images.
  • Seamless Dev to Ops Workflows: With Docker EE, ASSA ABLOY now enforces app progression from dev to test to pre-production to production, giving developers more flexibility in dev and test while limiting their access to pre-production and production environments. Along with separate repositories that limit code changes in pre-production and production, this separation of roles and responsibilities also lets the infrastructure team own concerns like network security and approved image layers.
  • Standardized Image Management: Adopting Docker EE drove a more disciplined approach to naming and labeling images and containers. These standardized practices are used across teams, making it much easier for administrators and operators to identify and track applications. Garbage collection also keeps the fast-growing image library manageable.
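A standardized naming scheme like the one described above might be sketched as follows. The registry host, team, and app names here are hypothetical illustrations, not ASSA ABLOY’s actual (unpublished) conventions:

```shell
# Hypothetical tag convention: <registry>/<team>/<app>:<version>
# (illustrative only; the actual ASSA ABLOY scheme is not published)
registry="dtr.example.com"
team="access-control"
app="door-api"
version="1.4.2"

image="${registry}/${team}/${app}:${version}"
echo "$image"   # dtr.example.com/access-control/door-api:1.4.2
```

A convention like this lets operators tell at a glance which team owns an image and which release it corresponds to.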

ASSA ABLOY is now getting ready to move their first projects into production. They are also looking to be able to ship their software to customers as containers to be deployed in their own data centers, introducing a new delivery model for their access control solutions.

Learn more about ASSA ABLOY by viewing their case study.

To learn more about Docker Enterprise Edition:


The post How Docker Enterprise Edition Helps Open Doors at Assa Abloy appeared first on Docker Blog.



Shifting Legacy WebLogic Apps to Docker Enterprise Edition for Improved Operations

Founded in 1792, Alm. Brand is a Danish insurance and banking company headquartered in Copenhagen, Denmark, and one of the oldest companies ever to have presented at a DockerCon. Sune Keller, IT Architect, and Loke Johannessen, Systems Specialist, rode their bikes to DockerCon Europe 2017 to demonstrate how they helped lift and shift their legacy WebLogic applications to Docker Enterprise Edition (Docker EE). You can watch their entire talk here:

Seeking Operational Stability for Legacy Apps

Alm. Brand started working with Docker EE after hearing about it at DockerCon 2015 (when it was known as Docker Datacenter). After successfully deploying the first set of new greenfield apps in their Docker EE environment, Alm. Brand wanted to tackle the existing WebLogic applications that were causing the operations team the biggest headaches. The team ran the WebLogic applications in a large cluster, all on the same JVM. When an app crashed, it would often crash the entire JVM or hang the entire cluster, making it hard to identify which application was the root cause. The setup was also brittle and slow, as they could only deploy one app at a time to the cluster.


With the skills and knowledge they picked up getting their first batch of stateless, greenfield applications running in Docker EE, Sune and Loke set some ambitious goals for migrating the legacy WebLogic applications. They set out to switch over from their existing cluster to Docker EE during work hours without changing code and with minimal impact to the users.

Results and Impact

Today, Alm. Brand has migrated 39 WebLogic applications to their Docker EE environment to go along with almost 50 new greenfield apps. They were able to execute the migrations with minimal disruption by load balancing applications between the legacy cluster and the new Docker cluster, gradually transitioning traffic from one cluster to the other without disrupting users. Sune and Loke demonstrated this process in their talk and it’s worth checking out!
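The gradual cut-over described above can be done with any load balancer that supports weighted backends. As a hypothetical sketch (the talk does not specify which load balancer Alm. Brand used, and the addresses below are invented), an HAProxy backend might shift traffic like this:

```
backend webapp
    balance roundrobin
    # Start with most traffic on the legacy WebLogic cluster...
    server weblogic1  10.0.0.10:7001 weight 90
    # ...then raise the Docker cluster's weight step by step until it
    # carries 100% of traffic, with no user-visible cut-over moment.
    server docker-svc 10.0.1.10:8080 weight 10
```

Because weights can be adjusted incrementally, each application can be drained from the old cluster during working hours, which matches the goal Sune and Loke set for the migration.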


The impact of this migration has been great for the operations team for a number of reasons:

  • Improved configuration management: One of the more challenging issues Alm. Brand dealt with was uncovering eight years’ worth of configuration drift across the different applications. By carefully testing and unboxing the dependencies, they were able to go from 26 different custom virtual machine templates down to 12.
  • Easier maintenance and troubleshooting: Alm. Brand is experiencing far fewer service interruptions because the applications are isolated from one another. Application issues are easier to identify and resolve as a result.
  • Unified operating model for greenfield and legacy apps: Having both types of applications running on the same Docker EE platform allows for centralized logging and metrics collection across the portfolio. Alm. Brand is able to use the same set of processes and a single dashboard for managing all applications.  

What’s Next

Sune and Loke have about 10 more WebLogic applications to finish migrating and then they are going to start looking at their Windows app portfolio. They’re already leveraging the basic access controls and LDAP integration capabilities of Docker EE and are now also exploring some of the other advanced capabilities like node-based isolation for separation between production and non-production environments.

To learn more about Docker Enterprise Edition:


The post Shifting Legacy WebLogic Apps to Docker Enterprise Edition for Improved Operations appeared first on Docker Blog.



5 Reasons Why We’re Thankful for the Docker Community

As the weather gets colder and the days get shorter, we wanted to take some time to give a warm “thank you” to the Docker Community. With an ever-growing community and active community leaders, there’s a lot that Docker has to be thankful for this year! Here’s our top five:


1. Docker Community Gives Back

Members of the Docker and Moby community work with unbelievable amounts of dedication and passion on some of the best open source projects around. Their hard work never goes unnoticed, and the Moby Project is better because of its community of awesome contributors.

Interested in getting started on the Moby Project? Get more info on how you can get involved here.


2. Our Community Knows how to Take Action

With over 500 Community Leaders in 81 countries, we’re continually impressed by the initiative our community takes in mentoring and teaching one another, even from different parts of the globe. We’ve seen you collaborate and create, and we couldn’t be more thankful for all of the impressive tips, tricks and knowledge that you share with us and the community.




3. All Roads Lead to Docker


With local chapters in major cities around the world, it’s not hard to find a Docker community wherever you are! With over 200 active city meetup communities and just over 200,000 active meetup members worldwide, our community continues to grow. We know how excited you are to get involved and share your knowledge with others, and with an ever-growing list of guest speakers and attendees at meetups, the Docker community is growing faster than ever.

Want to find a community near you? Find your local Docker Chapter here.

Want to help grow the community? You can find info on how to start your own meetup group here.


4. We Just Can’t Stop Getting Together


We had nearly 1,000 meetups this year in locations all over the world. We can’t believe how much encouragement you provide to one another! Meetups have become one of the best places to share and receive knowledge, and our community is always eager for more!


5. Ahoy There, Mate!


Our Docker Captains have given so much love to our Docker community. From tirelessly answering questions on Twitter to hosting office hours and creating new and exciting ways to learn about Docker, our Captains are true leaders in the Docker community.

Get to know our Docker Captains and what they’re up to here.


It’s no secret that we have a lot to be thankful for! Docker couldn’t be more proud of the work our community has done, and we can’t wait to see what you all decide to do next. From all of us here at Docker – Thank you for all you do!



The post 5 Reasons Why We’re Thankful for the Docker Community appeared first on Docker Blog.



Docker for Windows 17.11 with Windows 10 Fall Creators Update

Experimental support for Microsoft’s Linux containers on Windows (LCOW) feature has been available for Windows Server 1709 via the Docker EE “Preview” release. The same functionality is now available for Windows 10 Fall Creators Update with the release of Docker for Windows 17.11. In fact, it’s been available since Docker for Windows 17.10. 

How to get it

Docker for Windows 17.11 is an Edge channel release. If your copy of Docker for Windows is set to the Edge or Nightly channel, you will receive the update automatically. Users on the Stable channel will need to switch to Edge via the Settings dialog to receive the update.

Doesn’t Docker for Windows already run Linux containers?

That’s right. Docker for Windows can run Linux or Windows containers, with support for Linux containers via a Hyper-V Moby Linux VM (as of Docker for Windows 17.10 this VM is based on LinuxKit).

The setup for running Linux containers with LCOW is a lot simpler than the previous architecture where a Hyper-V Linux VM runs a Linux Docker daemon, along with all your containers. With LCOW, the Docker daemon runs as a Windows process (same as when running Docker Windows containers), and every time you start a Linux container Docker launches a minimal Hyper-V hypervisor running a VM with a Linux kernel, runc and the container processes running on top.

Because there’s only one Docker daemon, and because that daemon now runs on Windows, it will soon be possible to run Windows and Linux Docker containers side-by-side, in the same networking namespace. This will unlock a lot of exciting development and production scenarios for Docker users on Windows.

As part of Docker’s “batteries included” philosophy, Docker for Windows 17.11 includes a highly-optimized LinuxKit-based kernel and userspace to enable the utility VMs, and the configuration required to use LCOW out of the box. The container-optimized LinuxKit components provide high performance for starting and stopping containers compared to a traditional Linux OS.

As a Windows platform feature, LCOW represents a long-term solution for Linux container support on Windows. When the platform features meet or exceed the current functionality, the existing Docker for Windows Linux container support will be retired.

Fire it up

When Docker for Windows 17.11 installs, if your machine is running the Windows 10 Fall Creators Update, you will see a splash screen offering to enable the Linux containers on Windows feature:


Once configured to use Linux containers on Windows, try running a simple Linux container:

docker run -it busybox


You can also take advantage of the localhost support in Docker for Windows:

docker run -d -p 8080:80 httpd



After installation the LCOW feature can be enabled and disabled via the Settings dialog:


When disabled, the existing options for switching between traditional Docker for Windows Linux containers and Windows containers will be available in the whale systray icon.

Future versions of LCOW will support running Linux and Windows containers side-by-side.

The LCOW feature is under active development.

If you build something cool let us know on Twitter!


The post Docker for Windows 17.11 with Windows 10 Fall Creators Update appeared first on Docker Blog.

