A Secure Supply Chain for Kubernetes, Part 2

Two weeks ago we shared how the upcoming release of Docker Enterprise Edition (Docker EE) secures the software supply chain for Kubernetes, just as it does for Docker Swarm, through a combination of vulnerability scanning and image promotion policies. In this blog, we’ll take a closer look at another part of this solution – Docker Content Trust and image signing.

Combined with granular Role-Based Access Control (RBAC) and the secure clustering features of Docker EE, this gives organizations a secure container platform that is ready for the enterprise.

Restricting Unverified Kubernetes Content

As discussed in Part 1 of this blog post, organizations typically have a “supply chain” for how applications progress from a developer’s laptop to production, whether that is on-premises or in the cloud. For larger organizations, the team that handles QA and testing is not always the same team that develops the applications. There may also be a separate team that handles staging and pre-production before an application is pushed to production. Since an application can pass through several teams before it gets deployed, it’s important for organizations to be able to validate the source of the application.

Docker Content Trust lets individuals and teams sign images with their own private cryptographic keys, adding a digital signature that helps ensure proof of origin, authenticity and provenance. With Docker EE, you can choose to run only signed images, ensuring that the images you deploy are the ones you trust and that they haven’t been altered, either in the image registry or on their way from the registry to your environment.

In the context of Kubernetes, this means that Docker EE will prevent any workloads from being deployed on the cluster if the underlying images used have not been signed by members of specific teams.

This can be used to enforce image signing at specific stages of your supply chain: when the developer checks in the initial image, when the QA team has completed testing, when the security and networking team has reviewed the app, and so on. If an image is missing any of the required signatures, Docker EE will prevent it from being deployed, allowing operations teams to keep unauthorized content out of Kubernetes.
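
For teams working from the command line, the signing step itself is just a normal push with content trust enabled – the same mechanism a CI job can script using the keys from a client bundle. A minimal sketch, with a placeholder registry and repository name:

$ export DOCKER_CONTENT_TRUST=1        # sign images on push for this shell session
$ docker push dtr.example.com/dev/my-app:1.0

The first signed push to a repository prompts for new root and repository keys; subsequent pushes are signed automatically, and Docker EE can then verify those signatures before allowing a deployment.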

Integrating Docker Content Trust into Your Automated Workflow

Image signing does not have to come from an individual or team. It can also be extended to authorized third-party tools to indicate that an image build came from an approved workflow. Docker EE makes this simple by letting you create and manage client bundles within the Docker EE UI. Each bundle includes a keypair that can be used by Continuous Integration (CI) tools like Jenkins or GitLab to sign images as they are created and pushed to the repository. Learn more about using trusted images with Jenkins here.

Docker EE helps you deliver safer applications by securing your software supply chain. No matter what type of applications you are containerizing (legacy, cloud native, or microservices), the stack it is built for (Windows or Linux), or where it will be deployed (on-prem or the cloud), image vulnerability scanning, automated image promotions, and image signing all give you the ability to enforce a common workflow for the governance and automation of your application delivery process.


 Learn more about Docker Enterprise Edition with Kubernetes integration:


Video Series: Modernizing Java Apps for Developers Part 1

Modernizing Java Apps

Moving a monolithic application to a modern cloud architecture can be difficult and often results in a greenfield development effort. However, it is possible to move toward a cloud architecture using Docker Enterprise Edition with no code changes, gaining portability, security and efficiency in the process.

Containerizing a monolithic application is a great starting point for modernizing application architecture. In many cases this can be done with no code changes at all. From there, breaking the application down into smaller components makes it easier to deploy updates, introduce new components and manage scale.
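
As a rough illustration of that first step, a traditional WAR-based Java app can often be containerized with nothing more than a short Dockerfile. This is only a sketch – the build tool, application server and artifact path below are placeholders for whatever your application already uses:

# Build stage: compile and package the app with Maven (no code changes required)
FROM maven:3-jdk-8 AS build
WORKDIR /usr/src/app
COPY . .
RUN mvn -q package

# Runtime stage: drop the WAR into a stock Tomcat image
FROM tomcat:8-jre8
COPY --from=build /usr/src/app/target/app.war /usr/local/tomcat/webapps/ROOT.war

The resulting image runs the same way on a developer laptop as it does on a Docker EE cluster.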

This new video series covers modernization for Java applications. It walks through the process of evolving an N-tier Java application into a distributed application running across multiple containers, with Docker providing the platform that plugs all the components together into a coherent architecture.

This series does not cover a full re-architecture to microservices. Shifting to a full microservices approach isn’t right for every application, and a complete rewrite of a monolithic application can be a massive endeavor that takes years to pay off – especially if what you have already works. Instead, this series uses a feature-driven approach: I select key features to update in order to fix performance issues, add new functionality and support faster application updates. Gradual evolution toward a fully modern application delivers rapid benefits and an easier transition.

Part 1 introduces the series and covers the modernization process using Docker as it relates to the development workflow. It discusses Docker and how applications are moved into containers, and includes a synopsis of the remaining videos and code in the series.


To learn more about Docker solutions for IT:


Enhanced Layer 7 Routing for Swarm in Docker Enterprise Edition Beta

The beta release of Docker Enterprise Edition has seen incredible activity. The highlight of the upcoming Docker Enterprise Edition (Docker EE) release is the integration of Kubernetes, bringing all of the advanced security, RBAC and management capabilities of Docker EE to Kubernetes. At the same time, we have been working to improve Swarm, delivering the only container platform that allows you to run both orchestrators in the same cluster. In this blog post, we’ll highlight some of the key new capabilities around application-layer (Layer 7) routing and load balancing for Swarm-deployed applications. These enhancements come from the new Interlock 2.0 architecture, which provides a highly scalable and highly available routing solution for Swarm. The new architecture also brings additional features to the platform, including path-based routing and SSL termination.

Path-Based Routing

Layer 7 load balancing allows traffic going to host domains like acme.com to be distributed across specific containers in your environment. With path-based routing, traffic headed to different paths within acme.com (e.g. acme.com/app1 or acme.com/app2) can be routed separately to different sets of containers. This can be especially useful for optimizing application performance by driving different requests to different groups of containers.

Docker Load Balancing

Read more about setting up path-based routing for the beta release of Docker EE here: https://beta.docs.docker.com/ee/ucp/interlock/usage/context/ 
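
Interlock picks up its routing rules from labels on your Swarm services. The sketch below shows the general shape of a path-based routing configuration; the label names follow the beta documentation linked above, and the service name, network, port and image are placeholders:

$ docker service create \
    --name app1 \
    --network interlock-demo \
    --label com.docker.lb.hosts=acme.com \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.context_root=/app1 \
    nginx:alpine          # stand-in for your application image

A second service labeled with a context root of /app2 would then receive the acme.com/app2 traffic, while both share the same published entry point.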

SSL Termination

HTTPS ensures secure, encrypted communication from the client to your application, but there are good reasons to decrypt the traffic once it reaches the load balancer. Layer 7 routing often requires decrypting the incoming request in order to make an intelligent routing decision within the trusted network, and managing multiple certificates at the container level does not scale well. SSL termination at a proxy service lets secure external communication end once traffic is inside your trusted network, allowing you to manage certificates in one location and making the whole setup much more scalable.

Docker SSL Termination

Read more about leveraging Swarm Secrets for managing certificates for SSL termination here:  https://beta.docs.docker.com/ee/ucp/interlock/usage/ssl/ 
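
Certificates are handled as Swarm secrets and referenced from service labels, so the proxy can terminate TLS without a certificate ever being baked into an application image. The following is only a sketch – the exact label names and secret wiring are described in the beta documentation linked above, and the names used here are illustrative:

$ docker secret create app.acme.com.cert cert.pem
$ docker secret create app.acme.com.key key.pem

$ docker service create \
    --name app2 \
    --network interlock-demo \
    --label com.docker.lb.hosts=app.acme.com \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.ssl_cert=app.acme.com.cert \
    --label com.docker.lb.ssl_key=app.acme.com.key \
    nginx:alpine          # stand-in for your application image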

Proxy Extensions

The new Interlock architecture in Docker EE includes a pluggable extension service that can connect to different load-balancing proxies. As part of Docker’s “batteries included” strategy, the service comes with a supported NGINX proxy today and other proxy solutions will be pluggable into the architecture in the future. The pluggable framework allows you to use industry standard solutions while still having the simplicity of configuring them using standard Docker rolling updates.

Service Clusters

The new version of Docker EE also introduces the concept of Service Clusters, which provides a separate instance of the reverse proxy for each application so that application traffic is isolated per service cluster. This allows you to segregate applications at Layer 7 and confines service disruptions, reconfiguration downtime, misconfigurations and update errors to a single application rather than the entire cluster.

Fully-Integrated Design

Layer 7 routing with Swarm is fully Docker native. It runs on Docker Swarm and routes traffic using cluster networking and Docker services, leverages Docker APIs, and is configurable via CLI and UI. It is also designed to be both scalable and highly available, meeting the needs of production applications.

Docker routing Mesh

If you are interested in exploring these routing enhancements, be sure to sign up for the beta that is available today.


To learn more:


Announcing DockerCon Europe 2018

The Docker Team is excited to announce that the next DockerCon Europe 2018 will take place at the CCIB in Barcelona, Spain from December 3-5, 2018. With 3000 expected attendees, 7 tracks, 80+ speakers and sponsors, this upcoming edition should be the largest enterprise container conference for the IT industry in Europe.

From Docker basics and orchestration best practices to insights into how containers can enable edge computing, serverless and machine learning, DockerCon will include content for everyone. No matter your level of expertise with Docker or your job title, you will have ample opportunities to learn and collaborate with peers at other companies using the Docker platform as the cornerstone of their container strategy.

The CFP and official registration will open in the coming months, but you can already pre-register to get an additional 50 EUR off the early bird price.

We can’t wait to welcome back many returning DockerCon alumni as well as open the DockerCon doors to so many new attendees and companies as we return to Barcelona.


Learn More about DockerCon EU 2018


5 years later, where are you on your Docker journey?

Docker is turning five the week of March 19-25, 2018, and in celebration we would like to turn the spotlight on our users – the customers, partners, individuals and organizations that are using the Docker platform every day. From scientific use cases in medical and space research to more traditional use cases in large banks and insurance companies, Docker’s adoption is only accelerating, from individual developers to global corporations. Here are some of the key figures showing the widespread adoption of Docker across the community and within enterprises.


Docker has evolved from a container runtime to Docker Enterprise Edition (EE), a secure enterprise-ready container platform that brings unmatched freedom of choice to enterprises, while providing a foundation for their digital and multi-cloud initiatives. Millions of users rely on Docker, downloading 100M container images a day, and over 450 companies have turned to Docker Enterprise Edition – including hundreds of the largest enterprises in the world. With such vast adoption, the range of stories to tell and the diverse set of use cases continues to grow. So where are some of these users and enterprise customers on their Docker journey?

Docker users have deployed containers for a wide variety of use cases. One research center is using Docker to create a scalable, parallelizable and rapidly deployable test infrastructure for DART, NASA’s mission to hit an asteroid at 6 km/s. Another comes from Pinterest, which deployed Docker to improve the production machine learning systems powering its home feed. And some of our commercial customers are using Docker EE for a variety of mission-critical applications:

GE Digital – The company leveraged the Docker platform and Windows containers to repackage one of its main Asset Performance Management (APM) on-premises solutions and embrace a new fog computing approach, with greater intelligence and more computing power at the edge.

GlaxoSmithKline – The global pharmaceutical company is using Docker EE to help power its new research environment for identifying new medical discoveries more rapidly.

MetLife – By using Docker EE for legacy application modernization, MetLife projects savings of 66% across nearly 600 of its applications, enabling the 150-year-old company to self-fund innovation projects.

We are thrilled that MetLife will be hosting an onsite birthday event to celebrate their containerization journey, welcome new Docker users and share their first-hand experiences with how the Docker platform has impacted their business.

Docker Birthday events worldwide – RSVP Today

Come learn, mentor, celebrate, eat cake, and take an epic #dockerselfie! No matter where you are in your Docker journey, the birthday events are meant to help demystify Docker concepts and guide users along specific learning paths depending on their use case and expertise level. Advanced Docker users and partners will be available on-site as mentors to help everyone complete a series of Docker hands-on labs.

Docker Birthday #5 Hands-on Labs will include the following:

  • Docker 101 for Linux users
  • Docker 101 for Windows users
  • Kubernetes on Docker for Mac and Docker for Windows
  • Modernizing Traditional Java Apps
  • Modernizing Traditional .NET / Windows Apps
  • Deploying Multi-OS applications with Docker EE with Kubernetes

The main event will take place at Docker HQ in San Francisco on March 22nd, but thanks to the hard work of awesome Community Leaders, we now have 65 events (and counting) scheduled worldwide! Check out the map below to confirm your RSVP, or reach out to your community leaders to organize an event in your city.



A Secure Supply Chain for Kubernetes

The recent beta release of the Docker Enterprise Edition (Docker EE) container platform integrates Kubernetes orchestration, running alongside Swarm, to provide a single container platform that supports both legacy and new applications running on-premises or in the cloud. For organizations that are exploring Kubernetes or deploying it in production, Docker EE offers integrated security for the entire lifecycle of a containerized application, providing an additional layer of security before the workload is deployed by Kubernetes and continuing to secure the application while it is running.

Mike Coleman previously discussed access controls for Kubernetes. This week we’ll begin discussing how Docker EE secures the Kubernetes supply chain.

What is a Software Supply Chain?

When you purchase something from a retail store, there is an entire supply chain that gets the product from raw materials to the manufacturer to you. Similarly, there is a software supply chain that takes an application from code on a developer’s laptop to production.

Every company’s software supply chain may be slightly different; some outsource software development, some have adopted Continuous Integration and Continuous Delivery processes, and some deploy production applications across multiple clouds, some on-premises. Regardless of what the software supply chain consists of, Docker EE provides a set of solutions that integrates with your workflows while ensuring that applications remain secure, trusted and safe through all of these steps using both Kubernetes and Swarm.

In this week’s blog, we’ll take a closer look at one part of this solution – image scanning and policy-based image promotions.

Secure Automation of Workflows for Kubernetes

Before an application is deployed in production, organizations typically want to know that it does not have any known vulnerabilities that often come from older releases or unpatched versions of software. It’s also difficult for large organizations to keep a full inventory of every application they have running that may be affected by a new vulnerability.

Docker EE provides image security scanning to help organizations both identify vulnerabilities before the applications are deployed in production and to alert you when new vulnerabilities affect existing applications. This is done by executing a binary-level scan of your images against the NIST list of known vulnerabilities. As shown below, each layer of an image can be thoroughly scanned to provide insight into the workload.

Docker EE also has the ability to define policies to automate the movement of images between repositories. These image promotion policies can be combined with the results of security scanning to create a secure, automated workflow for images moving to production.

For example, a developer is working on a new Kubernetes project with access to the ‘dev’ repository from which they can push and pull images. The repository is set up with image scanning to automatically scan all images when they are pushed to the repository. When the developer is ready to move this into production, they add a specific tag like “latest” to the image. Their repository is set up with an image promotion policy that states that if an image has the “latest” tag and has no critical vulnerabilities, it gets automatically copied or promoted to the ‘QA’ repository.

In this example, only the QA team has access to the QA folder, limiting access to only those who require it. This policy also ensures that developers are responsible for fixing any vulnerabilities before they are passed on to the QA team.
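
From the developer’s point of view, nothing about this workflow changes the usual commands – the scan and the promotion policy do the rest. A minimal sketch, with placeholder registry and repository names:

$ docker build -t dtr.example.com/dev/my-app:latest .
$ docker push dtr.example.com/dev/my-app:latest
# The push triggers a scan of the image layers. If the promotion policy on the
# 'dev' repository matches (tag "latest", no critical vulnerabilities), the image
# is copied to dtr.example.com/qa/my-app automatically; no extra commands are needed.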

By combining these Docker EE capabilities, organizations can:

  • Automate the movement of images between repositories at scale
  • Enforce security scanning practices at certain stages of development
  • Prevent applications with known vulnerabilities from being deployed in production
  • Limit the access to sensitive repositories (like ‘production’) to only those who require it, while still removing bottlenecks in the process by defining appropriate policies

These are all critical workflows that happen prior to the app being deployed in production with Kubernetes. With Docker EE you get the only container platform with integrated security across the entire supply chain. For more information about Docker’s Secure Supply Chain for Kubernetes watch the following on-demand video: 



 Learn more about Docker Enterprise Edition with Kubernetes integration:


First Look at the DockerCon San Francisco 2018 Agenda

From June 12th – 15th, San Francisco will welcome 6,000+ developers, sysadmins, architects, VPs of Apps and other IT leaders to get hands-on with the latest innovations in the container ecosystem at DockerCon 2018. Today, we are excited to share a first look at the DockerCon agenda. We’ve brought back some of your favorites from past DockerCons and are also thrilled to welcome many first-time DockerCon speakers to the stage. Here is a first look at some of our favorite sessions:



Customers in Production  

Use case sessions highlight how companies are using Docker to modernize their infrastructure and build, manage and secure distributed applications. These sessions are heavy on business value, ROI, production implementation advice and lessons learned.

  • Building your NoSQL ship: How an Enterprise transitioned from a RDBMS to NoSQL DB using Agile and Docker by Jonell Taylor, Metlife
  • Black Friday and 100K Deployments Per Year by Srikanth Bulusu & Sanjoy Mukherjee, JCPenney
  • Packaging Software for Distribution on the Edge with Docker and Windows Server by Peter Ngai, GE Digital

Using Docker

Using Docker sessions are introductory sessions for Docker users, dev and ops alike. Filled with practical advice, learnings, and insight, these sessions will help you get started with Docker or better implement Docker into your workflow.

  • Taking Kubernetes to Production with Docker by Scott Coulton, Puppet
  • Don’t have a Meltdown! Practical Steps for Defending your Apps by Liz Rice, Aqua and Justin Cormack, Docker
  • Demystifying Container Connectivity with Kubernetes in the Docker Platform by Nicola Kabar, Docker and Karthik Prabhakar, Tigera
  • Proactive Ops for Container Orchestration Environments by John Harris, Docker

Black Belt

One way to achieve a deep understanding of a complex system is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. This is what we do in the Black Belt track! It features deeply technical talks covering not only container technology but also related projects.

  • Kubernetes in Docker by Alex Mavrogiannis + Guillamine Rose, Docker
  • Windows Container Security by David Lawrence, Docker and Saruhan Karademir, Microsoft
  • Istio: Weaving, Securing and Observing the Microservices by Lin Sun, IBM and Zack Butcher, Google
  • Kubernetes Extensibility by Tim Hockin + Eric Tune, Google

Docker Docker Docker

Docker Docker Docker sessions provide a deeper dive into Docker tooling, implementation, and real world production use recommendations. If you are ready to get to the next level with your Docker usage, join this track for product updates and best practices from the Docker team.

  • Product Overview: Docker Desktop for Developers by Gareth Rushgrove
  • Product Overview: Docker Enterprise Edition by Vivek Saraswat
  • Continuous Compliance with Docker Container Platform and OSCAL by Andrew Weiss, Docker and Anil Karmel, C2 Labs


Innovation

The Innovation Track shows how containers are redefining our technology toolbox, from solving old problems in a new way to pushing the boundaries of what we can accomplish with software. Sessions in this track provide a glimpse into the new container frontier as it relates to IoT, Machine Learning and distributed systems.

  • Hardware Testing with Docker for Space by Christopher Heistand, Johns Hopkins Applied Physics Lab
  • Democratizing Machine Learning on Kubernetes by Joy Qiao + Lachlan Evenson, Microsoft
  • A Vision of Persistence by Justin Cormack, Docker and Luke Marsden, DotMesh


Transform

The Transform Track focuses on the impact of change – both for organizations and for ourselves as individuals and communities. Filled with inspiration, insights and new perspectives, these stories will leave you energized and equipped to drive innovation.

  • Depend on Docker (DoD) – Imagine a world where the only dependency you need is Docker! by Alex Iankoulski and Arun Subramaniyan, Baker Hughes
  • Shaving my Head Made me a Better Programmer by Alex Qin, GAKKO
  • #bigwhale: An Unexpected Journey into Containerization @ Lockheed Martin – Part 1 by Arjuna Rivera, Lockheed Martin
  • A Strong Belief, Loosely Held: Bringing Empathy to IT by Nirmal Mehta, Booz Allen Hamilton

In addition to the tracks above, DockerCon also includes an Ecosystem Track which showcases the variety of container products and solutions developed by sponsoring partners from the container ecosystem. Please reach out to sponsors@docker.com if you’re interested in speaking opportunities.

We were humbled by the quantity and quality of submissions we received as part of the CFP and wanted to extend a sincere thank you to the Docker community for sharing all of the amazing things they are doing with Docker. Feel free to reach out to meetups@docker.com if you’d like to give a talk at your local Docker user group.

Useful DockerCon Links:




Video Series: Modernizing .NET Apps for Developers

MTA for Devs

Docker Enterprise Edition (EE) is the container platform for modernizing your existing applications and running them in the cloud or on-premises. You can take monoliths and run them in containers with no code changes, which gets you portability, security and efficiency.

Running in Docker is also a great starting point for modernizing the application architecture. You can break the monolith down into smaller, independent components, which makes it easier to deploy updates, manage scale and introduce new technologies.

This new video series covers app modernization, for .NET developers and architects. It walks through the evolution of a monolithic ASP.NET 3.5 app to a distributed application running across multiple containers, using the Docker platform to plug everything together and adding features with great open-source software from the Docker ecosystem.

This is not a full re-architecture to microservices – for large .NET apps that would be a 12-month project. This series uses a feature-driven approach, taking key features out of the monolith to fix performance issues, add new functionality and support fast application updates.

Part 1 introduces the series, talks about what “modernization” means and then gets started – this is a very demo-heavy video series, where you’ll see lots of code. You’ll learn in Part 1 how to run .NET Framework apps in containers on Windows – using Docker to compile and package the app, and how to share your apps on a registry like Docker Hub.
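
As a sketch of what that packaging step can look like, here is the general shape of a multi-stage Dockerfile for a classic ASP.NET app. The project name, paths and image tags below are placeholders – check the videos and the accompanying GitHub repo for the exact images the series uses:

# escape=`

# Build stage: compile the ASP.NET project with MSBuild inside a container
FROM microsoft/dotnet-framework:4.7.2-sdk AS builder
WORKDIR C:\src
COPY . .
RUN msbuild WebApp.csproj /p:Configuration=Release /p:OutDir=C:\out\

# Runtime stage: copy the published site into an IIS + ASP.NET base image
FROM microsoft/aspnet:4.7.2
COPY --from=builder C:\out\_PublishedWebsites\WebApp C:\inetpub\wwwroot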

The demo application is an ASP.NET WebForms app which uses SQL Server for storage. Part 2 shows you how to run SQL Server databases in Docker containers on Windows, packaging the schema so containers start with the database already deployed. You’ll also see how to optimize Dockerfiles for deploying .NET apps, and how to integrate logging and configuration with the Docker platform – still without changing code.

Part 3 starts modernizing the application architecture, extracting the web application’s save feature into its own container. The web app publishes an event to a message queue when users save data, and a message handler listens for those events and saves the data to SQL Server. The monolith now has an event-driven architecture, with all the components running in Docker containers, and that supports independent scaling and updates for different features.
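
At this point the stack has roughly the shape of the Compose sketch below. The image names are placeholders, and the queue is shown as NATS purely as an example of a lightweight open-source message queue – see the videos for the components the series actually uses:

version: "3.3"

services:
  signup-web:           # the original WebForms app, now publishing a save event
    image: registry.example.com/mta/signup-web
  message-queue:        # lightweight message queue, e.g. NATS
    image: nats
  save-handler:         # listens for save events and writes to SQL Server
    image: registry.example.com/mta/save-handler
  signup-db:            # SQL Server running in a container
    image: registry.example.com/mta/signup-db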

Part 4 takes advantage of the new event publishing mechanism to add self-service analytics for users, powered by the open-source Elasticsearch stack. The new reporting database and analytics UI are run in Docker containers, and there’s a new message handler which stores data in Elasticsearch. You’ll learn how Docker EE  lets you add features like this with a zero-downtime release to production.

The series ends with part 5, which looks at breaking up the web UI. The homepage for the app is taken out and run in a separate container, which means changes to the UI and UX of the homepage can be released without regression testing the monolith. You’ll learn how to break up UI components and see how Docker enables fast, safe application releases.

Modernizing .NET Apps for Developers shows you how to take an existing .NET application, run it in a container, and use Docker EE  to modernize the architecture. The source code for the app is on GitHub and all you need to follow along is Docker for Windows on Windows 10, or Docker EE on Windows Server 2016.

Every part of the series shows you how to run the app locally on your dev environment, and also shows you the path to production – deploying to a highly-available Docker Enterprise Edition cluster in the cloud. By the end of the series you’ll be ready to start using Docker to modernize your own .NET applications.

Learn More about Modernizing Traditional Apps with Docker EE:



Docker Compose and Kubernetes with Docker for Desktop

If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been using Docker on their MacBook or Windows laptop, because they now have a fully compliant Kubernetes cluster at their fingertips without installing any other tools.

Developers using Docker to build containerized applications often build Docker Compose files to deploy them. With the integration of Kubernetes into the Docker product line, some developers may want to leverage their existing Compose files but deploy these applications in Kubernetes. There is, of course, Kompose, but that’s a translation layer which causes you to have two separate artifacts to manage. Is there a way to keep a native Docker-based workflow?

With Docker on the desktop (as well as Docker Enterprise Edition) you can use Docker Compose to deploy an application directly onto a Kubernetes cluster.

Here’s how it works:

Let’s assume I have a simple Docker Compose file like the one below that describes a three-tier app: a web front end, a worker process (words) and a database.

Notice that our web front end is set to route traffic from port 80 on the host to port 80 on the service (and subsequently the underlying containers). Also, our words service is going to launch with 5 replicas.


version: "3.3"

services:
  web:
    build: web
    image: dockerdemos/lab-web
    volumes:
     - "./web/static:/static"
    ports:
     - "80:80"

  words:
    build: words
    image: dockerdemos/lab-words
    deploy:
      replicas: 5
      endpoint_mode: dnsrr
      resources:
        limits:
          memory: 16M
        reservations:
          memory: 16M

  db:
    build: db
    image: dockerdemos/lab-db

I’m using Docker for Mac, and Kubernetes is set as my default orchestrator. To deploy this application I simply use docker stack deploy providing the name of our compose file (words.yaml) and the name of the stack (words). What’s really cool is that this would be the exact same command you would use with Docker Swarm:

$ docker stack deploy --compose-file words.yaml words
Stack words was created
Waiting for the stack to be stable and running...
 - Service db has one container running
 - Service words has one container running
 - Service web has one container running
Stack words is stable and running


Under the covers the compose file has created a set of deployments, pods, and services which can be viewed using kubectl.

$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
db        1         1         1            1           2m
web       1         1         1            1           2m
words     5         5         5            5           2m

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
db-5489494997-2krr2      1/1       Running   0          2m
web-dd5755876-dhnkh      1/1       Running   0          2m
words-86645d96b7-8whpw   1/1       Running   0          2m
words-86645d96b7-dqwxp   1/1       Running   0          2m
words-86645d96b7-nxgbb   1/1       Running   0          2m
words-86645d96b7-p5qxh   1/1       Running   0          2m
words-86645d96b7-vs8x5   1/1       Running   0          2m

$ kubectl get services
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
db              ClusterIP      None         <none>        55555/TCP      2m
web             ClusterIP      None         <none>        55555/TCP      2m
web-published   LoadBalancer   10.x.x.x     localhost     80:32315/TCP   2m
words           ClusterIP      None         <none>        55555/TCP      2m

If you look at the list of services you might notice something that seems a bit odd at first glance. There are services for both web and web-published. The web service allows for intra-application communication, whereas the web-published service (which is a load balancer backed by vpnkit in Docker for Mac) exposes our web front end out to the rest of the world.

So if we visit http://localhost:80 we can see the application running. You can actually see the whole process in this video that Elton recorded.

Now, if you wanted to remove the application, you might think you would remove the deployments using kubectl (I know I did). But what you actually do is use docker stack rm, which removes all the components created when we brought the stack up.

$ docker stack rm words
Removing stack: words

$ kubectl get deployment
No resources found

And, to me, the cool thing is that this same process can be used with Docker EE – I simply take my Compose file and deploy it directly in the UI of Docker Enterprise Edition (EE) – but that’s another post.

Want to try it for yourself? Grab Docker for Mac or Docker for Windows, but be sure to check out the documentation (Mac and Windows) for more info.

Learn more:



Top 3 reasons to attend DockerCon 2018

In case you missed it, DockerCon 2018 will take place at Moscone Center in San Francisco, CA on June 13-15, 2018. DockerCon is where the Docker community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate and advanced users who are all looking to level up their skills and go home inspired. With 2 full days of training, more than 100 sessions, free workshops and hands-on labs, and the wealth of experience brought by each attendee, DockerCon is the place to be if you’re looking to learn Docker in 2018.

Want to go but need information to convince your manager? Here is a document to help you build a case, including content, budget and reasons why you should attend.


Register for DockerCon 2018


Reason #1: Inspiring and informative breakout sessions

From beginners to experts, DockerCon brings together the brightest minds to talk about all things containers, including the Docker Platform, Kubernetes, Digital Transformation in the Enterprise, Moby and CNCF projects, Container Security, Service Mesh and more. Although the full schedule won’t be announced until the end of the month, below is a sneak peek at some of the sessions we have lined up:

  • Kubernetes extensibility by Tim Hockin and Eric Tune (Google)
  • Accelerating Development Velocity of Production ML Systems with Docker by Kinnary Jangla (Pinterest)
  • Digital Transformation with Docker, Cloud and DevOps: How JCPenney Handles Black Friday and 100K Deployments Per Year by Sanjoy Mukherjee, (JCPenney)
  • Don’t have a Meltdown! Practical Steps for Defending your Apps by Liz Rice (Aqua) and Justin Cormack (Docker)
  • Creating Effective Docker Images by Abby Fuller (AWS)
  • App Transformation with Docker: 5 Patterns for Success by Elton Stoneman (Docker)

DockerCon 2018


Reason #2: Concrete learnings through Hands-on Labs and Workshops

Hands-on Labs will provide conference attendees with the opportunity to gain experience with various Docker features and use cases. Each lab runs around 45 minutes, covering both introductory and more advanced topics. Labs are self paced, so attendees can learn at their own speed when they have time during the day.

Conference attendees will also have the opportunity to attend free two-hour, hands-on, instructor-led workshops. Workshops run as a dedicated track and are available on a first-come, first-served basis.


Reason #3: Effective networking with the Docker Hallway Track

The Docker Hallway Track is an innovative platform that helps you find like-minded people to meet one-on-one and share knowledge in a structured way, so you get tangible results from networking.

Register for DockerCon 2018


Useful DockerCon Links:
