Amazon Aurora Backtrack – Turn Back Time

We’ve all been there! You need to make a quick, seemingly simple fix to an important production database. You compose the query, give it a once-over, and let it run. Seconds later you realize that you forgot the WHERE clause, dropped the wrong table, or made another serious mistake. You interrupt the query, but the damage has been done. You take a deep breath, whistle through your teeth, and wish that reality came with an Undo option. Now what?

New Amazon Aurora Backtrack
Today I would like to tell you about the new backtrack feature for Amazon Aurora. This is as close as we can come, given present-day technology, to an Undo option for reality.

This feature can be enabled at launch time for all newly-launched Aurora database clusters. To enable it, you simply specify, on the Configure advanced settings page, how far back in time you might want to rewind, and then use the database as usual.

Aurora uses a distributed, log-structured storage system (read Design Considerations for High Throughput Cloud-Native Relational Databases to learn a lot more); each change to your database generates a new log record, identified by a Log Sequence Number (LSN). Enabling the backtrack feature provisions a FIFO buffer in the cluster for storage of LSNs. This allows for quick access and recovery times measured in seconds.

After that regrettable moment when all seems lost, you simply pause your application, open up the Aurora Console, select the cluster, and click Backtrack DB cluster.

Then you select Backtrack, choose the point in time just before your epic fail, and click Backtrack DB cluster.

Then you wait for the rewind to take place, unpause your application, and proceed as if nothing had happened. When you initiate a backtrack, Aurora will pause the database, close any open connections, drop uncommitted writes, and wait for the backtrack to complete. Then it will resume normal operation and begin to accept requests. The instance state will be backtracking while the rewind is underway.

The console will let you know when the backtrack is complete.

If it turns out that you went back a bit too far, you can backtrack to a later time. Other Aurora features such as cloning, backups, and restores continue to work on an instance that has been configured for backtrack.

I’m sure you can think of some creative and non-obvious use cases for this cool new feature. For example, you could use it to restore a test database after running a test that makes changes to the database. You can initiate the restoration from the API or the CLI, making it easy to integrate into your existing test framework.
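
To make this concrete, here is a minimal boto3 sketch of both steps: opting in with a backtrack window when a MySQL-compatible cluster is created, and rewinding the cluster after a bad query. The cluster identifier, credentials, and window length below are placeholders for illustration.

```python
# Minimal sketch (boto3): enable Backtrack at creation time, then rewind later.
from datetime import datetime, timedelta

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Opt in when the cluster is created by specifying a backtrack window in seconds
# (here, 24 hours). Identifier and credentials are placeholders.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    Engine="aurora",                     # MySQL-compatible Aurora
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    BacktrackWindow=24 * 60 * 60,
)

# After the regrettable query, rewind the cluster to a point just before it ran.
rds.backtrack_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    BacktrackTo=datetime.utcnow() - timedelta(minutes=5),
)
```

The same two calls are what a test framework would script: create (or restore) the cluster with a window once, then backtrack it after each destructive test run.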

Things to Know
This option applies to newly created MySQL-compatible Aurora database clusters and to MySQL-compatible clusters that have been restored from a backup. You must opt in when you create or restore a cluster; you cannot enable it for a running cluster.

This feature is available now in all AWS Regions where Amazon Aurora runs, and you can start using it today.

Jeff;


Source: New feed


Introducing the AWS Machine Learning Competency for Consulting Partners

Today I’m excited to announce a new Machine Learning Competency for Consulting Partners in the Amazon Partner Network (APN). This AWS Competency program allows APN Consulting Partners to demonstrate deep expertise in machine learning on AWS by providing solutions that enable machine learning and data science workflows for their customers. This new AWS Competency is in addition to the Machine Learning Competency for our APN Technology Partners, which we launched at the re:Invent 2017 partner summit.

These APN Consulting Partners help organizations solve their machine learning and data challenges through:

  • Data services that help data scientists and machine learning practitioners prepare their enterprise data for training.
  • Platform solutions that provide data scientists and machine learning practitioners with tools to take their data, train models, and make predictions on new data.
  • SaaS and API solutions that enable predictive capabilities within customer applications.

Why work with an AWS Machine Learning Competency Partner?

The AWS Competency Program helps customers find the most qualified partners with deep expertise. AWS Machine Learning Competency Partners undergo a strict validation of their capabilities to demonstrate technical proficiency and proven customer success with AWS machine learning tools.

If you’re an AWS customer interested in machine learning workloads on AWS, check out our AWS Machine Learning launch partners below:

Interested in becoming an AWS Machine Learning Competency Partner?

APN Partners with experience in Machine Learning can learn more about becoming an AWS Machine Learning Competency Partner here. To learn more about the benefits of joining the AWS Partner Network, see our APN Partner website.

Thanks to the AWS Partner Team for their help with this post!
Randall


Source: New feed


AWS Online Tech Talks – May and Early June 2018

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Learn the benefits and best practices, and get guidance on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – A technical deep dive with tips and tricks for integrating WordPress, Drupal, and Magento with Amazon EFS.


Source: New feed


Freedom from Lock-in With Docker Enterprise Edition

Major business initiatives such as digitization and cloud migration have threatened to disrupt IT organizations that are already spread thin simply supporting the core business. Containerization is viewed as a way to help with these initiatives because it speeds the delivery of software and typically cuts operational costs by more than 50% in the first year alone. To support a containerization strategy, many enterprises are turning to container platforms to manage and secure the delivery of their mission-critical software from development through production.

For customers, choosing the right container platform is more than a technical decision – it is a business decision. As with any decision of this nature, it is critical that the container solution has the flexibility to evolve as business requirements change. Unfortunately, all container platforms are not the same – some lack security while others force organizations into a prescriptive operating model. And even worse, most container platforms will “lock in” an organization to a single OS, single cloud, single type of application, and single development approach – leading CIOs down a single dead-end path they will have to abandon in less than two years.

So how can organizations continue to move forward with modern technologies, cope with a heterogeneous mess of systems and applications, and remain independent rather than locked in?

Docker Enterprise Edition is the only enterprise-grade platform that enables IT leaders to cost-effectively build, manage, and secure their application portfolio at their own pace, without fear of lock-in:

  • Multi-OS Support: Most organizations have multiple operating systems so getting locked into a single Linux distribution could be both costly and limiting. Docker’s Enterprise Edition offers availability across certified infrastructure platforms, including multiple Linux distributions (SLES, CentOS, RHEL, Ubuntu, Oracle Linux) and Windows Server.
  • Multi-Cloud: A RightScale survey estimates that more than 85% of organizations have plans for a multi-cloud strategy, and IDC estimates that more than 60% will have committed to multi-cloud architectures by the end of this year. Organizations require a multi-cloud option because various workloads and business needs are better suited to different clouds: do you need more stringent security? More functionality for machine learning? Or a disaster recovery plan to mitigate downtime? Docker’s container platform is the only solution that doesn’t lock organizations into an underlying infrastructure and provides the flexibility to move to, from, and between all major clouds, including AWS and Azure.
  • Orchestration choice: Docker Enterprise Edition is the only platform that runs both Swarm and Kubernetes simultaneously on the same cluster – so developers do not need to make an orchestration choice, and operations teams have the flexibility to choose orchestrators interchangeably. It is also the only container platform that manages and secures applications on Kubernetes in multi-Linux and multi-cloud customer environments.
  • Any application, new or existing: With Docker, users can use the same platform to work with any application or development language, whether it is an existing legacy application or a new cloud-native application.

With Docker Enterprise Edition, organizations have the flexibility to make changes to these portfolios on the fly, course correct, or pursue new business goals. As one of Docker’s CIO customers put it: “I want the freedom to make choices and the flexibility to move from one choice to another without mortgaging my future.” Docker EE puts customers in control, protecting investments while providing a path to the future.

To learn more about Docker Enterprise Edition:






Source: Docker


How AWS Meets a Physical Separation Requirement with a Logical Separation Approach

We have a new resource available to help you meet a requirement for physically separated infrastructure using logical separation in the AWS cloud. Our latest guide, Logical Separation: An Evaluation of the U.S. Department of Defense Cloud Security Requirements for Sensitive Workloads, outlines how AWS meets the U.S. Department of Defense’s (DoD) stringent physical separation requirement by pioneering a three-pronged logical separation approach that leverages virtualization, encryption, and deployment of compute to dedicated hardware.

This guide helps you understand logical separation in the cloud and demonstrates its advantages over a traditional physical separation model. Embracing this approach can help organizations confidently meet or exceed security requirements found in traditional on-premises environments, while also gaining increased security control and flexibility.

Logical Separation is the second guide in the AWS Government Handbook Series, which examines cybersecurity policy initiatives and identifies best practices.

If you have questions or want to learn more, contact your account executive or AWS Support.


Source: Aws Security


Creating a 1.3 Million vCPU Grid on AWS using EC2 Spot Instances and TIBCO GridServer

Many of my colleagues are fortunate to be able to spend a good part of their day sitting down with and listening to our customers, doing their best to understand ways that we can better meet their business and technology needs. This information is treated with extreme care and is used to drive the roadmap for new services and new features.

AWS customers in the financial services industry (often abbreviated as FSI) are looking ahead to the Fundamental Review of the Trading Book (FRTB) regulations that will come into effect between 2019 and 2021. Among other things, these regulations mandate a new approach to the “value at risk” calculations that each financial institution must perform in the four-hour window after trading ends in New York and begins in Tokyo. Today, our customers report that this mission-critical calculation consumes on the order of 200,000 vCPUs, growing to between 400K and 800K vCPUs in order to meet the FRTB regulations. While there’s still some debate about the magnitude and frequency with which they’ll need to run this expanded calculation, the overall direction is clear.

Building a Big Grid
In order to make sure that we are ready to help our FSI customers meet these new regulations, we worked with TIBCO to set up and run a proof-of-concept grid in the AWS Cloud. The periodic nature of the calculation, along with the amount of processing power and storage needed to run it to completion within four hours, makes it a great fit for an environment where a vast amount of cost-effective compute power is available on an on-demand basis.

Our customers are already using the TIBCO GridServer on-premises and want to use it in the cloud. This product is designed to run grids at enterprise scale. It runs apps in a virtualized fashion, and accepts requests for resources, dynamically provisioning them on an as-needed basis. The cloud version supports Amazon Linux as well as the PostgreSQL-compatible edition of Amazon Aurora.

Working together with TIBCO, we set out to create a grid that was substantially larger than the current high-end prediction of 800K vCPUs, adding a 50% safety factor and then rounding up to reach 1.3 million vCPUs (5x the size of the largest on-premises grid). With that target in mind, the account limits were raised as follows:

  • Spot Instance Limit – 120,000
  • EBS Volume Limit – 120,000
  • EBS Capacity Limit – 2 PB

If you plan to create a grid of this size, you should also bring your friendly local AWS Solutions Architect into the loop as early as possible. They will review your plans, provide you with architecture guidance, and help you to schedule your run.

Running the Grid
We hit the Go button and launched the grid, watching as it bid for and obtained Spot Instances, each of which booted, initialized, and joined the grid within two minutes. The test workload used the Strata open source analytics & market risk library from OpenGamma and was set up with their assistance.
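
In our run, GridServer itself provisioned and managed the engines, but for a feel of what a diversified Spot request looks like at the API level, here is a minimal boto3 sketch; the AMI, subnet, fleet role, and instance types below are placeholders.

```python
# Minimal sketch (boto3) of a diversified Spot Fleet request. All identifiers are
# placeholders; in the actual run, TIBCO GridServer handled engine provisioning.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",
        "AllocationStrategy": "diversified",  # spread across instance types and pools
        "TargetCapacity": 1000,               # scale toward your vCPU goal
        "Type": "maintain",                   # replace reclaimed instances automatically
        "LaunchSpecifications": [
            {
                "ImageId": "ami-0123456789abcdef0",      # grid engine AMI (placeholder)
                "InstanceType": instance_type,
                "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
            }
            for instance_type in ["c5.18xlarge", "m5.24xlarge", "r4.16xlarge"]
        ],
    }
)
print("Spot Fleet request:", response["SpotFleetRequestId"])
```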

The grid grew to 61,299 Spot Instances (1.3 million vCPUs drawn from 34 instance types spanning 3 generations of EC2 hardware) as planned, with just 1,937 instances reclaimed and automatically replaced during the run. It cost $30,000 per hour to run, at an average hourly cost of $0.078 per vCPU. If the same instances had been used in On-Demand form, the hourly cost to run the grid would have been approximately $93,000.

Despite the scale of the grid, prices for the EC2 instances did not move during the bidding process. This is due to the overall size of the AWS Cloud and the smooth price change model that we launched late last year.

To give you a sense of the compute power, we computed that this grid would have taken the #1 position on the TOP 500 supercomputer list in November 2007 by a considerable margin, and the #2 position in June 2008. Today, it would occupy position #360 on the list.

I hope that you enjoyed this AWS success story, and that it gives you an idea of the scale that you can achieve in the cloud!

Jeff;


Source: New feed


New – AWS Service Catalog Connector for ServiceNow

Enterprises want to provide their employees with apps and tools that will allow them to do a better and more efficient job, while still providing oversight and governance. AWS Service Catalog helps enterprise IT to meet all of these needs, with a focus on cloud-based solutions. Administrators assemble portfolios of products, add rules to control and manage user access, and make the resulting portfolios available to their organization. Employees browse the catalog to find and launch the desired product. ServiceNow is an IT service management (ITSM) platform built around activities, tasks, processes, and workflows. The ServiceNow Service Catalog allows users to locate and order IT services, powered by a workflow that includes approval and fulfillment steps.

We recently launched the AWS Service Catalog Connector for ServiceNow and I would like to tell you about it today. The connector is available in the ServiceNow Store. It synchronizes AWS Service Catalog portfolios and products with the ServiceNow Service Catalog so that ServiceNow users can request approved AWS products without having to log in to an AWS account. The ServiceNow Service Catalog administrator has full control of the AWS-powered IT services (visible as products in the AWS Service Catalog) that they make available to their user base. This includes service configuration, AWS tagging, and access control at the individual, group, and role level. Provisioning requests can be connected to workflows and can also make use of a default workflow. ServiceNow users can browse the catalog and request provisioning of products that are managed within AWS Service Catalog, including AWS Marketplace products that have been copied to AWS Service Catalog.
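
Behind the scenes, the connector drives the same AWS Service Catalog APIs that you can call directly. As a rough illustration, here is a minimal boto3 sketch of browsing portfolios and provisioning a product; every identifier and parameter below is a placeholder.

```python
# Minimal sketch (boto3) of the Service Catalog calls the connector makes on behalf
# of ServiceNow users. All identifiers below are placeholders.
import boto3

sc = boto3.client("servicecatalog", region_name="us-east-1")

# Browse the portfolios available to this account or role.
for portfolio in sc.list_portfolios()["PortfolioDetails"]:
    print(portfolio["Id"], portfolio["DisplayName"])

# Provision a product version (provisioning artifact) that an administrator shared.
sc.provision_product(
    ProductId="prod-0123456789abc",
    ProvisioningArtifactId="pa-0123456789abc",
    ProvisionedProductName="servicenow-requested-instance",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "t2.micro"},  # parameters defined by the product
    ],
)
```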

To learn more and to get started, read How to Install and Configure the AWS Service Catalog Connector for ServiceNow.

Jeff;

 


Source: New feed


Integrating Kubernetes with Docker Enterprise Edition 2.0 – Top 10 Questions from the Docker Virtual Event

At our recent virtual event, we shared our excitement around Docker Enterprise Edition (EE) 2.0 – the most complete enterprise-ready container platform in the market. We shared how this release enables organizations like Liberty Mutual and Franklin American Mortgage Company, both presenters at DockerCon 2018, to efficiently scale their container environment across many teams while delivering choice and flexibility. We demonstrated some of the new advanced capabilities around access controls with secure application zones and building a consistent software supply chain across regions, and highlighted how easily Swarm and Kubernetes orchestration can be used interchangeably in the same environment.

If you missed the live event, don’t worry! You can still catch the recording on-demand here.

We got great questions throughout the event and will address the most common ones in our blog over the next few days.

Choice of Orchestration – Swarm and Kubernetes

One of the highlights of this release is the integration of Kubernetes, making Docker EE the only platform that runs both Swarm and Kubernetes simultaneously on the same cluster – so developers do not need to make an orchestration choice. Operations teams have the flexibility to choose orchestrators interchangeably.

Docker EE with Kubernetes

Q: Is Kubernetes going to displace Swarm?

A: No – Swarm and Kubernetes are both available and can be used interchangeably in the same cluster. We believe in providing choice and flexibility and that there are good use cases for both Swarm and Kubernetes. For some more info, check out this earlier blog.

Q: Which orchestrator would you recommend going forward?

A: It will depend on many things, including what features and use cases are most appropriate to your environment. For example, Swarm makes it easy to get apps up and running quickly and has full support for Windows containers, whereas Kubernetes has more configuration options and resource types for things like stateful workloads (e.g. StatefulSets). You can use both within the same cluster.

Q: How does mixing Swarm and Kubernetes work in the same cluster?

A: Docker EE 2.0 installs Kubernetes and Swarm components across every node in a cluster. You can then set whether each individual worker node is Kubernetes or Swarm or in “Mixed” mode, and Docker EE will ensure workloads are scheduled appropriately. “Mixed” mode is a great option for development and test, but this is not recommended in production due to potential resource contention issues.

Q: Does Docker Enterprise Edition (EE) just add more layers of management on top of Kubernetes?

A: With Docker EE, you get a fully-conformant Kubernetes distribution with access to all the native APIs, CLIs and interfaces. To help organizations operationalize this at scale, Docker EE adds a number of enhancements intended for enterprise production deployments including things like a comprehensive private registry solution, a unified control plane for defining access controls that integrate with your corporate LDAP/Active Directory, and a set of security solutions that deliver safer applications.

Q: Do you only provide the abstraction so that the Kubernetes interface tools work, like kubectl or kubeadm? Or is it really all of the Kubernetes modules?

A: It’s not just an API abstraction or similar – we’re running a full Kubernetes installation out of the box when you install Docker EE 2.0 – one that has been tested and certified for conformance.

Q: Which version(s) of Kubernetes does Docker EE 2.0 support, and how will it align with future upgrades of the orchestrators?

A: Docker EE 2.0 ships with Kubernetes 1.8 and patches to Docker EE 2.0 may include future patches to Kubernetes 1.8. Docker is committed to staying conformant with Kubernetes and future releases of Docker EE will include newer versions of Kubernetes.

Q: Can Kubernetes be upgraded across clusters without downtime? If so, is that handled via Docker EE?

A: In Docker EE 2.0, you cannot separately upgrade the Kubernetes version. When you upgrade the cluster management component (Universal Control Plane), you will get the appropriate included updates for Kubernetes as well.

Q: Can you use Kubernetes YAML with Swarm? What about Compose with Kubernetes?

A: With Docker EE 2.0, you can leverage existing Compose files and deploy with either Swarm or Kubernetes through the UI. You can also leverage Kubernetes YAML to deploy applications with Kubernetes. Deploying Kubernetes YAML with Swarm is not supported.

Q: Does Kubernetes support .NET applications?

A: Right now, Kubernetes has beta support for .NET/Windows containers. It is not yet generally available. You can deploy .NET applications with Swarm as that has been fully supported with Docker EE for over a year.

Q: I’m new to Docker. Does this mean I need to get familiar with Kubernetes first to actually start learning Docker?

A: No – you don’t need to know Kubernetes to use Docker EE. Kubernetes is there for users that need its specific features, but it’s optional. For many use cases and workloads Swarm may be easier to learn. If you do need to move to Kubernetes later, you can leverage the same Docker Compose file to run your applications on both Swarm and Kubernetes so it’s easy to change your mind down the road.
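
For a concrete sense of the Swarm path, here is a minimal sketch using the Docker SDK for Python to deploy a small replicated service. It assumes the engine you connect to is a Swarm manager (for example, after running docker swarm init); the image, service name, and ports are placeholders.

```python
# Minimal sketch: a replicated Swarm service via the Docker SDK for Python.
# Assumes the local engine is a Swarm manager (e.g. after `docker swarm init`).
import docker

client = docker.from_env()

service = client.services.create(
    image="nginx:alpine",                                       # placeholder app image
    name="hello-swarm",
    mode=docker.types.ServiceMode("replicated", replicas=3),
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),  # publish 8080 -> 80
)
print("Created service:", service.name)
```

If you later decide to try Kubernetes, the same application (for example, described in a Compose file) can be redeployed to the Kubernetes side of the cluster through the Docker EE UI, as described above.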

To learn more about Kubernetes integration into Docker EE 2.0, check out a few resources:






Source: Docker


EC2 Price Reduction – H1 Instances

EC2’s H1 instances offer 2 to 16 terabytes of fast, dense storage for big data applications, optimized to deliver high throughput for sequential I/O. Enhanced Networking, 32 to 256 gigabytes of RAM, and Intel Xeon E5-2686 v4 processors running at a base frequency of 2.3 GHz round out the feature set.

I am happy to announce that we are reducing the On-Demand and Reserved Instance prices for H1 instances in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions by 15%, effective immediately.

Jeff;


Source: New feed


What are Docker Captains looking forward to at DockerCon

With less than 6 weeks until DockerCon 2018, we can barely contain our excitement! From their favorite tips and tricks for using Docker in production to leveraging Docker for machine learning, Docker Captains come together at DockerCon to share their knowledge and collaborate with the broader community. We’ve asked Docker Captains to share what they are most looking forward to at DockerCon. Here are some of their responses.

“I’m looking forward to meeting the many other Docker enthusiasts and champions and listening to other cool things that Docker makes possible” – Kinnary Jangla, Pinterest

“In 2015, I attended DockerCon for the first time. I was sitting in a chair and listening to the amazing stories and ideas presented by speakers at the conference, which set off a chain of events that led to today. I feel privileged, and am really looking forward to being on stage and sharing our transformational journey to inspire the people who would sit in that chair. I am also looking forward to hearing the keynotes and the exciting new announcements that I am sure are being lined up for the big event.” – Alexandre Iankoulski, Baker Hughes

“Learning about the production use cases for Docker and the new improvements to the Docker ecosystem” – Arun Subramanyian, Baker Hughes

“Meeting people from the community, both new faces and old! Sign up for some of the hallway tracks and make sure to talk to as many people as you can.” – Adrian Mouat, Container Solutions

“Conferences like DockerCon are great for networking with peers, DockerCon team members, and attendees – to discuss how we build microservices solutions today. It’s great to hear about the different approaches, perspectives, and challenges.” – Michele Leroux Bustamante, Solliance

Meet these Docker Captains by attending their breakout sessions at DockerCon:




Useful DockerCon Links:



Source: Docker
