Now Open: DockerCon US Diversity Scholarship!

DockerCon Scholarship

DockerCon is a hub for the IT industry, bringing together members from all parts of our growing ecosystem and global community. By actively promoting inclusivity, our goal is to make DockerCon a safe place for everyone to learn, belong and collaborate. With the support of Docker and our DockerCon scholarship sponsor, the Open Container Initiative (OCI), we are excited to announce the launch of this year’s DockerCon Diversity Scholarship Program, which provides traditionally underrepresented members of the Docker community with a financial scholarship to attend DockerCon US 2018. This year, we are increasing the number of scholarships we grant to ensure attending DockerCon is an option for all.

Apply Now!

Deadline to Apply:

Wednesday, April 25, 2018 at 5:00PM PST

Selection Process

A committee of Docker community members will review applications and select the scholarship recipients. Recipients will be notified by the week of May 7, 2018.

What’s included:

Full Access DockerCon Conference Pass

Requirements

Must be able to attend DockerCon US 2018

Must be 18 years old or older to apply

Learn more about the DockerCon Diversity Scholarship here. 

Have questions or concerns? Reach us at dockercon@docker.com.




More free Docker resources:


Video Series: Modernizing Java Apps for Developers Part 4


Moving a monolithic application to a modern cloud architecture can be difficult and often results in a greenfield development effort. However, it is possible to move towards a cloud architecture using Docker Enterprise Edition (EE) with no code changes and gain portability, security and efficiency in the process.


Part 4 takes advantage of the messaging service I added in Part 3. In this installment, I’ll add self-service analytics powered by the open source Elasticsearch/Kibana stack. The reporting database and the analytics UI run in containers, and the worker is updated to also store data in Elasticsearch. The Docker platform supports adding new components to a running deployment without shutting down the application containers that are already running, so you’ll learn how Docker lets you add new capabilities to the application with zero downtime in production.
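As a rough sketch of the pattern (the stack name, file name and image versions here are illustrative, not the exact ones from the video), the analytics components are just extra services appended to the stack’s Compose file, and re-running the deploy only creates what changed:

# docker-stack.yml – analytics services appended alongside the existing app services
  elasticsearch:
    image: elasticsearch:5.6
  kibana:
    image: kibana:5.6
    ports:
      - "5601:5601"

# re-deploy the stack; unchanged services keep their running containers
docker stack deploy -c docker-stack.yml signup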




To learn more about Docker for Java Developers:


Webinar Recap: The Docker Platform on Windows – from 101 to Modernizing .NET Apps


Our live Docker webinars are always hugely popular. Last month we hosted a session on Docker and Windows containers, covering everything from the basics to how you can use the Docker platform (Docker Enterprise Edition and Docker Desktop) to modernize existing .NET apps and move them to the cloud.

The recording is available now – it clocks in at 60 minutes and has lots of demos showing you how to build and run Windows applications in containers with Docker for Windows.

The sample code, Dockerfiles and Docker Compose files for all the demos are here on GitHub.

In fact, there were so many demos that I ran out of time for the Q&A part – so here are the unanswered questions we had from viewers:

Q. Are there any best practices or tutorials for architecting SQL databases in containers? What about data persistence, database references, etc.? How can we use SSDT with containers?

SQL Server running in a container is just like any remote SQL Server – you can connect with SSDT or Visual Studio or VS Code or any SQL client. There’s a good SQL Server in Docker tutorial on GitHub which walks through the build and deployment process with containers, and we have a Docker Platform Reference Architecture covering storage solutions.
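As a minimal sketch (the image is Microsoft’s developer-edition image for Windows containers; the password and host path are illustrative), you run SQL Server in a container, publish the port and mount a host folder you can target for database files so they outlive the container:

docker container run -d -p 1433:1433 `
  -e ACCEPT_EULA=Y `
  -e sa_password=Str0ng!Passw0rd `
  -v C:\mssql-data:C:\data `
  microsoft/mssql-server-windows-developer

Databases you create on the mounted path survive the container being replaced, and SSDT then connects to the host’s address on port 1433 like any other remote instance.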

Q. Will there be Docker PowerShell cmdlets available on windows?

The Docker platform runs as a background Windows Service, and it exposes a REST API for managing containers. The Docker command line talks to the API, and Microsoft have provided a Docker PowerShell module which talks to the same API. We advise using the Docker command line though because it’s always up-to-date with the Docker server, and the PowerShell module hasn’t been updated for a while.

Q. Can you please demo running unit tests (MSTest) on a Windows container?

I do this in my video series on modernizing .NET apps with Docker. In the series I demonstrate CI/CD with Docker Enterprise Edition for .NET apps, running end-to-end tests in a container. This example uses NUnit – but the principle is the same for MSTest – and I also use SpecFlow and Selenium to show tests which flex the whole application, running across multiple containers.

Q. Can I run Windows containers on Linux?

No. Windows containers use the underlying operating system kernel of the host, so you need to be running Windows Server 2016 (or Windows 10) to run Windows containers. You can run Linux containers on Windows, using the Hyper-V container feature.
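A related point: Windows can also run containers with Hyper-V isolation, where each container gets its own minimal kernel inside a lightweight VM – that’s the mode Windows 10 uses by default. On Windows Server you can request it explicitly (the image here is just an example):

docker container run --isolation=hyperv microsoft/nanoserver cmd /c echo hello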

Q. Is there any specific version of Docker that needs to be installed on Windows 10 and Windows 2016? Like Docker for Windows for Windows 10 and Docker Enterprise Edition (EE) for Windows 2016?

That’s exactly right – Docker for Windows is the desktop version you use on Windows 10, and it has a bunch of extra features for developers. On Windows Server 2016 you run Docker Enterprise Edition, which comes with support for your containers from Microsoft and Docker (the Windows Server licence includes a licence for Docker EE Basic).

Q. I have a .NET Core Web API and multiple CMS APIs. How can I create containers for this mixture of .NET Framework and .NET Core?

You can run multiple apps inside one container, but you shouldn’t. You should build each part of your app (e.g. the API, the CMS and the Web UI) into a separate Docker image and run them in separate containers. That means you can upgrade, scale and deploy each part separately. The Windows Server Core base image from Microsoft includes the full .NET Framework, and there is a .NET Core image built on Nano Server.
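A hedged sketch of that layout in a Docker Compose file – the image names are hypothetical, but the pattern is one image per component, with the .NET Framework part based on Windows Server Core and the .NET Core part on Nano Server:

version: '3.3'
services:
  cms-api:
    image: myorg/cms-api:1.0        # .NET Framework app, built FROM microsoft/aspnet
    ports:
      - "8080:80"
  core-api:
    image: myorg/core-webapi:1.0    # .NET Core app, built FROM a Nano Server-based microsoft/dotnet image
    ports:
      - "8081:80"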

Q. Will all 3rd-party installers work in a Docker container?

Not necessarily, it depends on the installer. You can run any command inside a Dockerfile as long as it doesn’t have a dependency on the Windows UI. If you have an MSI which can run unattended (i.e. with no windows popping up), then it will run fine in the Dockerfile using the normal msiexec command.
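A minimal Dockerfile sketch of that pattern – the MSI name is hypothetical, and the escape directive stops Windows paths being treated as line continuations:

# escape=`
FROM microsoft/windowsservercore
COPY MyApp.msi C:\install\
RUN msiexec /i C:\install\MyApp.msi /qn /norestart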

Q. How to handle containerized executables that depend on updating the container registry?

This is not a problem. Every container has its own copy of the Windows Registry. If you need to make Registry changes during deployment in the Dockerfile, or during runtime in the container, it will only update that container’s Registry, and it won’t interfere with other containers.
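For example, a Registry change at build time is just another Dockerfile step, and it only affects the image and the containers created from it – the key and value here are hypothetical:

RUN reg add "HKLM\SOFTWARE\MyApp" /v LogLevel /t REG_DWORD /d 4 /f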

Q. Does the container have direct access to frame buffers and hardware (i.e. GPU)? (on Windows)

Not at the moment. Access to host hardware is one area where Windows containers do not have full parity with Linux containers. But Microsoft are investing heavily in container features for Windows Server – there are some big improvements in the Semi-Annual release 1709, and we can expect more when Windows Server 2019 is released.




Learn more about Docker on Windows


Improved Docker Container Integration with Java 10


Many applications that run in a Java Virtual Machine (JVM), including data services such as Apache Spark and Kafka as well as traditional enterprise applications, are run in containers. Until recently, running the JVM in a container presented problems with memory and CPU sizing and usage that led to performance loss, because Java didn’t recognize that it was running in a container. With the release of Java 10, the JVM now recognizes constraints set by container control groups (cgroups). Both memory and CPU constraints can be used to manage Java applications directly in containers. These include:

  • adhering to memory limits set in the container
  • setting available cpus in the container
  • setting cpu constraints in the container

Java 10 improvements are realized in both Docker for Mac or Windows and Docker Enterprise Edition environments.

Container Memory Limits

Up to and including Java 9, the JVM did not recognize memory or CPU limits set on the container without the use of experimental flags. In Java 10, memory limits are automatically recognized and enforced.
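For reference, Java 8u131+ and Java 9 could be made container-aware for memory with those experimental flags – the behavior Java 10 now provides by default:

docker container run -m512M openjdk:8 java \
  -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
  -XX:+PrintFlagsFinal -version | grep MaxHeapSize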

Java defines a server-class machine as having at least 2 CPUs and 2GB of memory, and on such machines the default maximum heap size is ¼ of the physical memory. In this example, the Docker Enterprise Edition installation has 2GB of memory and 4 CPUs. Compare the difference between containers running Java 8 and Java 10. First, Java 8:

docker container run -it -m512M --entrypoint bash openjdk:latest

$ java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
    uintx MaxHeapSize                              := 524288000                          {product}
openjdk version "1.8.0_162"

The max heap size is 500MB, roughly ¼ of the 2GB of memory on the host, rather than ¼ of the 512MB limit set on the container. In comparison, running the same commands on Java 10 shows that the JVM sizes the heap from the container’s memory limit, giving the expected 128MB (¼ of 512MB):

docker container run -it -m512M --entrypoint bash openjdk:10-jdk

$ java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
   size_t MaxHeapSize                              = 134217728                                {product} {ergonomic}
openjdk version "10" 2018-03-20

Setting Available CPUs

By default, each container’s access to the host machine’s CPU cycles is unlimited. Various constraints can be set to limit a given container’s access to the host machine’s CPU cycles. Java 10 recognizes these limits:

docker container run -it --cpus 2 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

By default, all containers get the same proportion of CPU cycles. The proportion can be modified by changing the container’s CPU share weighting relative to the weighting of all other running containers. The proportion only applies when CPU-intensive processes are running; when tasks in one container are idle, other containers can use the leftover CPU time. The actual amount of CPU time will vary depending on the number of containers running on the system. CPU shares are also recognized by Java 10:

docker container run -it --cpu-shares 2048 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

The cpuset constraint restricts which CPUs the container can execute on, and Java 10 recognizes that limit too:

docker run -it --cpuset-cpus="1,2,3" openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 3

Allocating Memory and CPU

With Java 10, container settings can be used to estimate the allocation of memory and CPUs needed to deploy an application. Let’s assume that the memory heap and CPU requirements for each process running in a container have already been determined and JAVA_OPTS is set. For example, suppose you have an application distributed across 10 nodes: five nodes require 512MB of memory with 1,024 CPU shares each, and the other five nodes require 256MB with 512 CPU shares each. Note that one CPU is represented by 1,024 CPU shares.

For memory, the application would need about 4GB allocated at minimum:

512MB x 5 = 2.56GB

256MB x 5 = 1.28GB

2.56GB + 1.28GB = 3.84GB, or roughly 4GB

For CPU, the application would need 8 CPUs to run efficiently:

1024 x 5 = 5,120 shares = 5 CPUs

512 x 5 = 2,560 shares = 2.5 CPUs, rounded up to 3 CPUs

Best practice suggests profiling the application to determine the memory and CPU allocations for each process running in the JVM. However, Java 10 removes the guesswork when sizing containers, preventing out-of-memory errors in Java applications as well as allocating sufficient CPU to process workloads.
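Putting it together, each of the larger nodes from the example above could be started with its measured limits – the image name and jar path are hypothetical, and -XX:MaxRAMPercentage is the Java 10 flag for sizing the heap as a fraction of the container’s memory limit:

docker container run -d -m 512M --cpu-shares 1024 \
  myorg/java-service:latest \
  java -XX:MaxRAMPercentage=75 -jar /app/service.jar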




To learn more about Docker solutions for Java Developers:

 


Au revoir

Today I’m announcing my departure from Docker, the company I helped create ten years ago and have been building ever since. A founder’s departure is usually seen as a dramatic event. Sadly, I must report that reality is far less exciting in this case. I’ve had many roles at Docker over the years, and today I have a new, final one – as an active board member, a major shareholder and, I expect, a high maintenance Docker user. But I will no longer be part of day-to-day operations. Instead, after obsessing for so many years over my own ideas, I am rediscovering the joys of putting myself at the service of others – my friends, my family, and the brilliant entrepreneurs I’ve been lucky enough to advise and invest in over the years. Over the coming months I plan to use my experience to help them in any way I can.

This transition is simply another chapter in a long story of change, growth, hard work… and a lot of luck.

Ten years ago, I quit my job, returned to live with my mother in Paris and, together with my friends Kamel Founadi and Sebastien Pahl, started a company called Dotcloud. Our goal was to harness an obscure technology called containers, and use it to create what we called “tools of mass innovation”: programming tools which anyone could use. I was 24 and had no idea what I was doing. We needed a CEO, so that became my new role.

Five years ago, Dotcloud reinvented itself as Docker, around a battle-hardened core of five people: Eric Bardin, Sam Alba, Jerome Petazzoni, Julien Barbier and myself. Soon growth was off the charts, and we hired an experienced CEO to help us sustain it. I was 29 and eager to do my part. Docker needed a CTO, so that became my new role.

Today, as I turn 34, Docker has quietly transformed into an enterprise business with explosive revenue growth and a developer community in the millions, under the leadership of our CEO, the legendary Steve Singh. Our strategy is simple: every large enterprise in the world is preparing to migrate their applications and infrastructure to the cloud, en masse. They need a solution to do so reliably and securely, without expensive code or process changes, and without locking themselves to a single operating system or cloud. Today the only solution meeting these requirements is Docker Enterprise Edition. This puts Docker at the center of a massive growth opportunity. To take advantage of this opportunity, we need a CTO by Steve’s side with decades of experience shipping and supporting software for the largest corporations in the world. So I now have a new role: to help find that ideal CTO, provide the occasional bit of advice, and get out of the team’s way as they continue to build a juggernaut of a business. As a shareholder, I couldn’t be happier to accept this role.

As a founder, of course, I have mixed emotions. When you create a company, your job is to make sure it can one day succeed without you. Then eventually that one day comes and the celebration can be bittersweet.

It’s never easy for a founder to part ways with their life’s work. But I realize how incredibly lucky I am to have this problem. Most ideas never materialize. Most software goes unused. Most businesses fail in their first year. Yet here we are, one of the largest open-source communities ever assembled, collectively building software that will run on millions of computers around the world. To know that your work was meaningful, and that a vibrant community of people will continue building upon it… can any founder ask for anything more?

I want to thank from the bottom of my heart every member of the Docker team and community, past and present, for making Docker what it is today. Thanks to you, this founder’s bittersweet moment is mostly sweet. We have built something great together. I look forward to seeing where you will take it next.

Happy hacking,

Solomon

 


Video Series: Modernizing Java Apps for Developers Part 3


Moving a monolithic application to a modern cloud architecture can be difficult and often results in a greenfield development effort. However, it is possible to move towards a cloud architecture using Docker Enterprise Edition with no code changes and gain choice, security and operational agility in the process.


Part 3 of the series begins the modernization process. I’ll take one aspect of the current application and break it out into a microservice. As written, the application writes to the database directly, and a large number of requests can easily overwhelm the database through those direct writes.

One solution is to implement a message queue. As in Part 2, I’ll follow several guidelines:

  1. Leave existing code in place.
  2. Design the new code as a separate and reusable application.
  3. Deploy the microservice locally using Docker Enterprise Edition.
  4. Test the code.

In this part of the modernization process, I add a message queue comprised of a REST interface that writes to a Redis database. The user data is held in Redis until it’s requested by a worker service that performs the write to the SQL database. The message queue uses Spring Boot to implement both the REST interface and the Redis functions. I chose Redis because it has a method to pull data without continuous polling, making for an efficient message queue.
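That no-polling method is Redis’s blocking list pop (BRPOP). A minimal sketch of the worker side using Spring Data Redis – the class, queue and method names are illustrative, not the actual code from the video:

import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class SignUpWorker {

    private final StringRedisTemplate redis;

    public SignUpWorker(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void processNext() {
        // BRPOP: block for up to 30 seconds waiting for the next message,
        // instead of polling the queue in a tight loop
        String message = redis.opsForList().rightPop("signups", 30, TimeUnit.SECONDS);
        if (message != null) {
            saveToDatabase(message); // hypothetical write-through to the SQL database
        }
    }

    private void saveToDatabase(String message) {
        // elided – parse the user data and insert it via JDBC/JPA
    }
}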




To learn more about Docker solutions for Developers:


5 years later, Docker has come a long way

The evolution of Docker: From introducing a container runtime to building an enterprise-ready container platform

Back in March 2013, Docker was introduced publicly for the first time during Docker founder Solomon Hykes’ lightning talk at PyCon. Since that moment in 2013, Docker has evolved in conjunction with the needs of users and customers to drive innovation around security, orchestration, networking and more. From building out advanced security features across the software supply chain and offering the choice of both Swarm and Kubernetes, to developing Docker for Mac/Windows and the Docker Enterprise Edition (EE) container platform, Docker has come a long way. Now at the age of five, Docker has millions of users and over 450 commercial customers – including hundreds of the world’s largest companies – that rely on Docker EE to power their digital and multi-cloud initiatives.


The history of Docker has shaped where we are today, and as we celebrate our 5th birthday this week, we take a look back at the journey that led us here.

Following the enthusiastic reception at PyCon 2013, Docker’s image format and container runtime quickly emerged as the de facto standard and building block for the community, customers and the broader industry. The power of Docker is that it created independence between applications and infrastructure through its portable format and easy-to-use tooling. The outcome was that Docker democratized container technology and brought it into the mainstream by solving the “matrix from hell” problem for millions of developers.

This sparked the formation of a vibrant community around Docker – one where we have been committed to driving collaboration and innovation from its inception. Docker led the formation of the Open Container Initiative (OCI) and drove the OCI v1.0 runtime and image specifications last year. Our own Michael Crosby was also just elected chairman of the OCI Technical Oversight Board to help guide the direction of the project. Last year, we also announced the donation of containerd, the industry-standard container runtime used by millions of users, and Notary, a project designed to provide high levels of trust over digital content using strong cryptographic signatures, to the Cloud Native Computing Foundation (CNCF).

The road to an enterprise container platform

As we continued to work closely with the community, partners and customers, we recognized the need for an end-to-end container platform. Orchestration was a piece of the puzzle, but security proved to be a requirement for most customers, in addition to automation, networking, storage and more. Most importantly, our customers want an enterprise-ready container platform that provides freedom of choice, enabling them to build and manage their entire application portfolio at their own pace without the risk of being locked in.


To meet the evolving needs of our customers, in the past year alone we:

  • Announced the integration of an unmodified Kubernetes – alongside Swarm – to give users the ability to choose between orchestrators based on their respective needs.
  • Continued to drive innovation on the security front with rich features including role-based access control (RBAC), secrets management and image scanning to bring end-to-end security to the software supply chain.
  • Rolled out the Modernize Traditional Applications (MTA) program to give companies a clear path to the cloud – and ultimately microservices – by containerizing their legacy applications to make them more secure, cost-efficient and portable, without touching the source code.

Thank you to our community, partners and customers for being a guiding light on this journey. This celebration of Docker’s 5th birthday is for you.

Docker Birthday events worldwide – RSVP Today

From customers to partners and local Docker community chapters, we’re excited to celebrate Docker’s 5th birthday with all of you. Come learn, mentor, celebrate, eat cake, and take an epic #dockerselfie! No matter where you are in your Docker journey, the birthday events are meant to help demystify some Docker concepts and guide users along a specific learning path depending on their use case and expertise level. Advanced Docker users and partners will be available on-site to act as mentors and help everyone complete a series of Docker hands-on labs.






Learn More about Docker:


Video Series: Modernizing Java Apps for Developers Part 2


Moving a monolithic application to a modern cloud architecture can be difficult and often results in a greenfield development effort. However, it is possible to move towards a cloud architecture using Docker Enterprise Edition with no code changes and gain portability, security and efficiency in the process.


In the first post in this series, we discussed how you don’t need to do a full re-architecture of your application to microservices when using Docker Enterprise Edition.

In the second installment of the series, I go into the details of containerizing the application. This process builds containers using the application code as-is. I’ll follow three simple rules:

  1. Keep the existing architecture.
  2. Keep the same version of the OS, components and application.
  3. Keep deployment simple, i.e. static not elastic.

I’ll also demonstrate how to use a multi-stage build file to compile the code and deploy it to an application server container such as Tomcat, and how to deploy the application and database using a Docker Compose file.
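A minimal multi-stage Dockerfile sketch of the pattern – the image tags and paths are illustrative, not the exact ones from the video: the first stage compiles the WAR with Maven, and the final image contains only Tomcat plus the build output:

FROM maven:3.5-jdk-8 AS build
COPY . /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package

FROM tomcat:8.5-jre8
COPY --from=build /usr/src/app/target/app.war /usr/local/tomcat/webapps/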




To learn more about Docker solutions for Developers:


A Secure Supply Chain for Kubernetes, Part 2

Two weeks ago we shared how the upcoming release of Docker Enterprise Edition (Docker EE) is able to secure the software supply chain for Kubernetes, just as it does for Docker Swarm, through a combination of scanning for vulnerabilities and implementing image promotion policies. In this blog, we’ll take a closer look at another part of this solution – Docker Content Trust and image signing.

When combined with granular role-based access control (RBAC) and the secure clustering features of Docker EE, organizations get a secure container platform solution that is ready for the enterprise.

Restricting Unverified Kubernetes Content

As discussed in Part 1 of this blog post, organizations typically have a “supply chain” for how applications progress from a developer’s laptop to production, whether that is on-premises or in the cloud. For larger organizations, the team that handles QA and testing is not always the same team that develops the applications. There may also be a separate team that handles staging and pre-production before an application is pushed to production. Since an application can pass through several teams before it gets deployed, it’s important for organizations to be able to validate the source of the application.

Docker Content Trust is a way for individuals and teams to sign images with their private cryptographic keys, adding a digital signature that helps to ensure proof-of-origin, authenticity and provenance for images. With Docker EE, you can choose to only run signed images, ensuring that the images being deployed are the ones you trust and haven’t been altered either in the image registry or on their way from the image registry to your environment.

In the context of Kubernetes, this means that Docker EE will prevent any workloads from being deployed on the cluster if the underlying images used have not been signed by members of specific teams.

This can be used to enforce image signing at certain stages of your supply chain: when the developer checks in the initial image, when the QA team has completed testing, when the security and networking team has reviewed the app, etc. If an image has missed any of the required signatures, Docker EE will prevent it from being deployed. This allows operations teams to prevent unauthorized content from being deployed into Kubernetes.
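On the client side, signing itself is a one-line switch – with content trust enabled, a push signs the image with your private key as it uploads (the registry and repository names here are illustrative):

export DOCKER_CONTENT_TRUST=1
docker push dtr.example.com/dev/signup-app:1.0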

Integration of Docker Content Trust into Your Automated Workflow

Image signing does not have to come from an individual or a team – it can also be extended to authorized 3rd-party tools to indicate that an image build came from an approved workflow. Docker EE makes this simple by giving you the ability to create and manage client bundles within the Docker EE UI. Docker EE creates a keypair that can be used by Continuous Integration (CI) tools like Jenkins or GitLab to sign images as they are created and added to the repository. Learn more about using trusted images with Jenkins here.
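In a CI job the same mechanism applies – a hedged sketch of the shell steps, with the registry and repository names illustrative:

# in the CI job, after loading the signing key from the Docker EE client bundle
export DOCKER_CONTENT_TRUST=1
docker build -t dtr.example.com/ci/signup-app:$BUILD_NUMBER .
docker push dtr.example.com/ci/signup-app:$BUILD_NUMBER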

Docker EE helps you deliver safer applications by securing your software supply chain. No matter what type of application you are containerizing (legacy, cloud native, or microservices), the stack it is built on (Windows or Linux), or where it will be deployed (on-prem or in the cloud), image vulnerability scanning, automated image promotions, and image signing all give you the ability to enforce a common workflow for the governance and automation of your application delivery process.




Learn more about Docker Enterprise Edition with Kubernetes integration:


Video Series: Modernizing Java Apps for Developers Part 1


Moving a monolithic application to a modern cloud architecture can be difficult and often results in a greenfield development effort. However, it is possible to move towards a cloud architecture using Docker Enterprise Edition with no code changes and gain portability, security and efficiency in the process.

Containerizing a monolithic application is a great starting point for modernizing application architecture. In many cases this can be done with no code changes at all. From there, breaking down the application into smaller components makes it easier to deploy updates, introduce new components and manage scale.

This new video series covers modernization for Java applications. It walks through the process of evolving an N-tier Java application into a distributed application running across multiple containers. Docker provides the platform that plugs all of the components together and manages them as a coherent architecture.

This series does not cover a full re-architecture to microservices. Shifting to a full microservices approach isn’t right for all applications, and a full rewrite of a monolithic application can be a daunting endeavor that takes years to pay off – especially if what you have works. Instead, this series uses a feature-driven approach: I select key features to update in order to fix performance issues, add new functionality and support application updates. A gradual evolution toward a fully modern application can deliver rapid benefits and an easier transition.

In Part 1, I introduce the series and cover the modernization process using Docker as it relates to the development workflow. This part discusses Docker and how applications are moved into containers, and includes a synopsis of the videos and code in the rest of the series.




To learn more about Docker solutions for IT:
