Modernizing Java Apps with Docker

Modernizing Traditional Applications, or MTA, was one of the themes at DockerCon EU 2017. Traditional applications were typically built years ago yet remain critical to business operations. The developer and operational skill sets needed to maintain such an application can be hard to find, and the code base can be difficult to maintain, if it is available at all; the team that wrote the original app may not even be around anymore. These applications go into maintenance mode, which may mean they are patched regularly for vulnerabilities, but any revision to the code can take a significant number of hours to test and deploy, so updates are infrequent. Traditional applications can also hold back infrastructure improvements, as dependency management becomes a huge pain point.


Any modern application needs fast delivery and the ability to adapt to changing market conditions. The pipeline from pushing code to a source code repository through to application delivery should be efficient, automated, secure, and fast. The application should scale with demand, typically through horizontal scaling across multiple instances, which makes portability across different infrastructure key. In case of failure, MTTR (mean time to recovery) should be short, with automated processes rolling back to a previous stable version of the application. These objectives can be challenging to achieve for a traditional application.

That is exactly the gap filled by Docker Enterprise Edition (EE)!

With Docker EE, updating an application means pushing a new Docker image with the latest code to a private registry on Docker Hub, removing the existing container, and starting a new container from that image, all within a secure and fully automated software supply chain. During this pipeline, the image is scanned for vulnerabilities and monitored against leading CVE databases. The application can scale across multiple nodes, and Docker EE also gives you portability across multiple clouds.
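
In swarm mode, for example, that replace-and-restart step collapses into a single rolling-update command. A minimal sketch, assuming a service named myapp and a placeholder registry path:

  # Roll the running service over to the new image, one task at a time
  $ docker service update --image registry.example.com/myapp:2.0 myapp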

In this session, we discussed how to modernize a Java EE 7 application running on an old application server. Using Docker, we were able to easily build a new version and deploy it to a newer, more lightweight Java EE application server. This was an actual app Arun wrote four years ago.

The entire build and app server deployment is done through a single Dockerfile instead of a manual run book process, which is possible because of multi-stage builds. In addition to deploying the application in a container, we show how to incrementally update the application by replacing the old JavaServer Faces client with a JavaScript React client that takes advantage of the application's REST interface.
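
As a sketch of the multi-stage idea (the image tags and artifact name below are illustrative placeholders, not the ones from the actual sample), one stage builds the WAR with Maven and a second stage copies only that artifact into a lightweight application server:

  # Build stage: compile and package the application (tags are illustrative)
  FROM maven:3.5-jdk-8 AS build
  WORKDIR /app
  COPY pom.xml .
  COPY src ./src
  RUN mvn -q package

  # Runtime stage: only the app server and the WAR end up in the final image
  FROM jboss/wildfly:11.0.0.Final
  COPY --from=build /app/target/app.war /opt/jboss/wildfly/standalone/deployments/

The build tools and intermediate artifacts stay in the first stage, so the image you ship contains only the application server plus your application.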

If you want to try out the process, the application is available in the dockersamples organization on GitHub. You can also watch the DockerCon EU 2017 session.



Source: Docker


Amazon ElastiCache for Redis Is Now a HIPAA Eligible Service and You Can Use It to Power Real-Time Healthcare Applications


Amazon ElastiCache for Redis is now a HIPAA Eligible Service and has been added to the AWS Business Associate Addendum (BAA). This means you can use ElastiCache for Redis to help you power healthcare applications as well as process, maintain, and store protected health information (PHI). ElastiCache for Redis is a Redis-compatible, fully-managed, in-memory data store and cache in the cloud that provides sub-millisecond latency to power applications. Now you can use the speed, simplicity, and flexibility of ElastiCache for Redis to build secure, fast, and internet-scale healthcare applications.

ElastiCache for Redis with HIPAA eligibility is available for all current-generation instance node types and requires Redis engine version 3.2.6. You must ensure that nodes are configured to encrypt the data in transit and at rest, and to authenticate Redis commands before the engine executes them. See Architecting for HIPAA Security and Compliance on Amazon Web Services for information about how to configure Amazon HIPAA Eligible Services to store, process, and transmit PHI.
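
As a hedged sketch of what satisfying those requirements looks like from the AWS CLI (the identifiers, node type, and AUTH token below are placeholders):

  $ aws elasticache create-replication-group \
      --replication-group-id my-redis \
      --replication-group-description "Encrypted Redis for PHI" \
      --engine redis \
      --engine-version 3.2.6 \
      --cache-node-type cache.m4.large \
      --num-cache-clusters 2 \
      --transit-encryption-enabled \
      --at-rest-encryption-enabled \
      --auth-token my-strong-auth-token-123

Note that these encryption and authentication settings are specified when the replication group is created.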

ElastiCache for Redis uses Advanced Encryption Standard (AES)-256 symmetric keys to encrypt data on disk. The Redis backups stored in Amazon S3 are encrypted with server-side encryption (SSE) using AES-256 symmetric keys. ElastiCache for Redis uses Transport Layer Security (TLS) to encrypt data in transit, and it uses the Redis AUTH token that you provide at the time of Redis cluster creation to authenticate the Redis commands coming from clients. The AUTH token itself is encrypted using AWS Key Management Service.

There is no additional charge for using ElastiCache for Redis clusters with HIPAA eligibility. To get started, see HIPAA Compliance for Amazon ElastiCache for Redis.

– Manan

Source: AWS Security


Intesa Sanpaolo Builds a Resilient Foundation for Banking With Docker Enterprise Edition

Intesa Sanpaolo is the largest bank in Italy and maintains a network of over 5,000 banking branches across Europe and North Africa. With nearly 19 million customers and €739 billion in assets, Intesa Sanpaolo is an integral part of the financial fabric and as such, Italian regulations require that they keep their business and applications online to serve their customers.

Intesa can trace its roots back to the early 1800s, and the majority of its edge applications are still monolithic and hard to move between data centers, never mind migrate to the cloud. Diego Braga, Intesa IT Infrastructure Architect, looked to Docker Enterprise Edition (EE), at the recommendation of Lorenzo Fontana of business partner Kiratech, to improve application availability and portability and to add cloud-friendly application delivery. With Docker EE, Intesa was able to consolidate infrastructure by nearly 60%, saving significant money over the previous design, while also enabling higher application availability across regional data centers and preparing for the cloud.


Prior to the Docker EE implementation, not only were the applications monoliths, but Intesa maintained two separate data centers as mirrors of each other to achieve high availability. This design required excess, cold-standby hardware capacity in each data center to handle any outages and fail over successfully; hardware they hoped never to switch on, and which therefore sat idle, depreciating. Intesa wanted a solution that would let them utilize the available capacity of both data centers.


The new design with Docker EE enables Intesa to utilize the resources in both data centers, eliminating waste and delivering cost savings. Through testing and analysis, Diego calculated that Docker Enterprise Edition allowed him to consolidate and run over 2.5 times more applications per physical server than in the previous virtualized environment. This consolidation delivered the cost savings required to move the project forward.


To show the benefits of Docker EE as an application delivery platform, Diego and Lorenzo identified the first batch of candidate applications to containerize, focusing on Microsoft Windows Server and Linux Java-based stacks due to their complete support within Docker. The hybrid management capability of Docker EE was important to Intesa as they run a hybrid mix of Windows and Linux services. “The team could write the Docker compose file and just that,” Lorenzo noted. “[Docker] deployed the Windows containers on Windows and the Linux containers on Linux.”
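
As a sketch of how such a mixed stack can be described (the service names and images here are invented for illustration), a single compose file deployed with docker stack deploy can pin each service to the right operating system with a placement constraint, and Docker EE schedules it onto a matching node:

  version: "3.2"
  services:
    frontend:
      # Hypothetical Windows-based service
      image: example/windows-frontend:1.0
      deploy:
        placement:
          constraints:
            - node.platform.os == windows
    backend:
      # Hypothetical Linux Java service
      image: example/java-backend:1.0
      deploy:
        placement:
          constraints:
            - node.platform.os == linux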


As with most customers, Intesa also needed the solution to work for both Docker-based and non-Docker applications. The platform is backed by private, high-speed networking, with complementary storage and networking solutions from Elastifile and AVI Networks delivering continuously replicated storage and software-defined networking for both Dockerized and non-Docker applications across the entire configuration.

Now that Intesa is using Docker EE in production with their first applications, they are looking to expand their use cases. They are planning to further reduce their on-premises infrastructure and use the cloud for peak traffic bursts. Their Dockerized applications already have the flexibility and portability to run on the major public clouds, and Intesa is evaluating cloud broker models to help them analyze which clouds to use for their applications.



Source: Docker


Now You Can Monitor DDoS Attack Trends with AWS Shield Advanced

AWS Shield Advanced has always notified you about DDoS attacks on your applications via the AWS Management Console and API as well as Amazon CloudWatch metrics. Today, we added the global threat environment dashboard to AWS Shield Advanced to allow you to view trends and metrics about DDoS attacks across Amazon CloudFront, Elastic Load Balancing, and Amazon Route 53. This information can help you understand the DDoS target profile of the AWS services you use and, in turn, can help you create a more resilient and distributed architecture for your application.

The global threat environment dashboard shows comprehensive and easy-to-understand data about DDoS attacks. The dashboard displays a summary of the global threat environment, including the largest attacks, top vectors, and the relative number of significant attacks. You also can view the dashboard for different time durations to give you a history of DDoS attacks.

To get started with the global threat environment dashboard:

  1. Sign in to the AWS Management Console and navigate to the AWS WAF and AWS Shield console.
  2. To activate AWS Shield Advanced, choose Protected resources in the navigation pane, choose Activate AWS Shield Advanced, and then accept the terms by typing I accept.
  3. Navigate to the global threat environment dashboard through the navigation pane.
  4. Choose your desired time period from the time period drop-down menu in the top right part of the page.

You can use the information on the global threat environment dashboard to understand the threat landscape as well as to inform decisions you make that will help to better protect your AWS resources.
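
The dashboard itself is a console feature, but data about DDoS attacks on your own protected resources is also available programmatically through the AWS Shield Advanced API. A sketch with the AWS CLI (the epoch timestamps are placeholders):

  # List attacks detected on your protected resources in a time window
  $ aws shield list-attacks \
      --start-time FromInclusive=1509494400 \
      --end-time ToExclusive=1510099200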

For more information, see Global Threat Environment Dashboard: View DDoS Attack Trends Across AWS.

– Ritwik

Source: AWS Security


New – AWS Direct Connect Gateway – Inter-Region VPC Access

As I was preparing to write this post, I took a nostalgic look at the blog post I wrote when we launched AWS Direct Connect back in 2012. We created Direct Connect after our enterprise customers asked us to allow them to establish dedicated connections to an AWS Region in pursuit of enhanced privacy, additional data transfer bandwidth, and more predictable data transfer performance. Starting from one AWS Region and a single colo, Direct Connect is now available in every public AWS Region and accessible from dozens of colos scattered across the world (over 60 locations at last count). Our customers have taken to Direct Connect wholeheartedly and we have added features such as Link Aggregation, Amazon EFS support, CloudWatch monitoring, and HIPAA eligibility. In the past five weeks alone we have added Direct Connect locations in Houston (Texas), Vancouver (Canada), Manchester (UK), Canberra (Australia), and Perth (Australia).

Today we are making Direct Connect simpler and more powerful with the addition of the Direct Connect Gateway. We are also giving Direct Connect customers in any Region the ability to create public virtual interfaces that receive our global IP routes and enable access to the public endpoints for our services, and we are updating the Direct Connect pricing model.

Let’s take a look at each one!

New Direct Connect Gateway
You can use the new Direct Connect Gateway to establish connectivity that spans Virtual Private Clouds (VPCs) spread across multiple AWS Regions. You no longer need to establish multiple BGP sessions for each VPC; this reduces your administrative workload as well as the load on your network devices.

This feature also allows you to connect to any of the participating VPCs from any Direct Connect location, further reducing your costs for using AWS services on a cross-region basis.

To picture the simplification that a Direct Connect Gateway brings (in the original diagrams, each “lock” icon represents a Virtual Private Gateway): instead of a separate BGP session running to each VPC’s Virtual Private Gateway, a single session terminates at the Gateway, which fans out to all of the associated Virtual Private Gateways.

The VPCs that reference a particular Direct Connect Gateway must have IP address ranges that do not overlap. Today, the VPCs must all be in the same AWS account; we plan to make this more flexible in the future.

Each Gateway is a global object that exists across all of the public AWS Regions. All communication between the Regions via the Gateways takes place across the AWS network backbone.

Creating a Direct Connect Gateway
You can create a Direct Connect Gateway from the Direct Connect Console or by calling the CreateDirectConnectGateway function from your code. I’ll use the Console!

I open the Direct Connect Console and click on Direct Connect Gateways to get started:

The list is empty since I don’t have any Gateways yet. Click on Create Direct Connect Gateway to change that:

I give my Gateway a name, enter a private ASN for my network, and then click on Create. The ASN (Autonomous System Number) must be in one of the ranges defined as private in RFC 6996: 64512–65534 for 16-bit ASNs, or 4200000000–4294967294 for 32-bit ones.
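
If you prefer scripting to the console, the same Gateway can be created through the CreateDirectConnectGateway call mentioned above; a sketch with the AWS CLI, using a placeholder name and ASN:

  $ aws directconnect create-direct-connect-gateway \
      --direct-connect-gateway-name MyGateway \
      --amazon-side-asn 64512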

My new Gateway will appear in the other AWS Regions within a moment or two:

I have a Direct Connect Connection in Ohio that I will use to create my VIF:

Now I create a private VIF that references the Gateway and the Connection:

It is ready to use within seconds:

I already have a pair of VPCs with non-overlapping CIDRs, and a Virtual Private Gateway attached to each one. Here are the VPCs (since this is a demo I’ll show both in the same Region for convenience):

And the Virtual Private Gateways:

I return to the Direct Connect Console and navigate to the Direct Connect Gateways. I select my Gateway and choose Associate Virtual Private Gateway from the Actions menu:

Then I select both of my Virtual Private Gateways and click on Associate:

If, as would usually be the case, my VPCs are in distinct AWS Regions, the same procedure would apply. For this blog post it was easier to show you the operations once rather than twice.

The Virtual Gateway association is complete within a minute or so (the state starts out as associating):

When the state transitions to associated, traffic can flow between your on-premises network and your VPCs, over your AWS Direct Connect connection, regardless of the AWS Regions where your VPCs reside.
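
The association step can be scripted as well; a sketch with placeholder IDs:

  # Associate one Virtual Private Gateway with the Direct Connect Gateway
  $ aws directconnect create-direct-connect-gateway-association \
      --direct-connect-gateway-id <gateway-id> \
      --virtual-gateway-id <vgw-id>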

Public Virtual Interfaces for Service Endpoints
You can now create Public Virtual Interfaces that will allow you to access AWS public service endpoints for AWS services running in any AWS Region (except AWS China Region) over Direct Connect. These interfaces receive (via BGP) Amazon’s global IP routes. You can create these interfaces in the Direct Connect Console; start by selecting the Public option:

Note that, unlike a private virtual interface, a public virtual interface is not associated with a VPC; once it is up, it gives you direct access to the public service endpoints.

Updated Pricing Model
In light of the ever-expanding number of AWS Regions and AWS Direct Connect locations, data transfer pricing is now based on the Direct Connect location and the source AWS Region. The new pricing is simpler than the older model, which was based on AWS Direct Connect locations alone.

Now Available
This new feature is available today and you can start to use it right now. You can create and use Direct Connect Gateways at no charge; you pay the usual Direct Connect prices for port hours and data transfer.



Source: New feed


98, 99, 100 CloudFront Points of Presence!

Nine years ago I showed you how you could Distribute Your Content with Amazon CloudFront. We launched CloudFront in 2008 with 14 Points of Presence and have been expanding rapidly ever since. Today I am pleased to announce the opening of our 100th Point of Presence, the fifth one in Tokyo and the sixth in Japan. With 89 Edge Locations and 11 Regional Edge Caches, CloudFront now supports traffic generated by millions of viewers around the world.

23 Countries, 50 Cities, and Growing
Those 100 Points of Presence span the globe, with sites in 50 cities and 23 countries. In the past 12 months we have expanded the size of our network by about 58%, adding 37 Points of Presence, including nine in the following new cities:

  • Berlin, Germany
  • Minneapolis, Minnesota, USA
  • Prague, Czech Republic
  • Boston, Massachusetts, USA
  • Munich, Germany
  • Vienna, Austria
  • Kuala Lumpur, Malaysia
  • Philadelphia, Pennsylvania, USA
  • Zurich, Switzerland

We have even more in the works, including an Edge Location in the United Arab Emirates, currently planned for the first quarter of 2018.

Innovating for Our Customers
As I mentioned earlier, our network consists of a mix of Edge Locations and Regional Edge Caches. First announced at re:Invent 2016, the Regional Edge Caches sit between our Edge Locations and your origin servers, have even more memory than the Edge Locations, and allow us to store content close to the viewers for rapid delivery, all while reducing the load on the origin servers.

While locations are important, they are just a starting point. We continue to focus on security with the recent launch of our Security Policies feature and our announcement that CloudFront is a HIPAA-eligible service. We gave you more content-serving and content-generation options with the launch of Lambda@Edge, letting you run AWS Lambda functions close to your users.

We have also been working to accelerate the processing of cache invalidations and configuration changes. We now accept invalidations within milliseconds of the request and confirm that the request has been processed world-wide, typically within 60 seconds. This helps to ensure that your customers have access to fresh, timely content!
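
For example, an invalidation issued from the CLI is acknowledged almost immediately (the distribution ID below is a placeholder):

  $ aws cloudfront create-invalidation \
      --distribution-id EDFDVBD6EXAMPLE \
      --paths "/index.html" "/images/*"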

Visit our Getting Started with Amazon CloudFront page for sign-up information, tutorials, webinars, on-demand videos, office hours, and more.



Source: New feed


Join Us for AWS Security Week November 6–9 in New York City

Want to learn how to securely deploy applications and services in the AWS Cloud? Join us in New York City at the AWS Pop-up Loft for AWS Security Week, November 6–9. At this free technical event, you will learn security concepts and strategies from AWS security professionals in sessions, demos, and labs.

Here is a sampling of the security offerings during the week:

  • Become a Cloud Security Ninja
  • Data Protection in Transit and at Rest
  • Soup to Nuts: Identity Federation for AWS
  • Brewing an Effective Cloud Security Strategy

Learn more about the available sessions and register!

– Craig

Source: AWS Security


Tips and Tricks of the Docker Captains

My talk at DockerCon EU was designed to provide the audience with a bunch of tips for making the most of Docker. The tips were inspired by suggestions, blog posts, and presentations by other Docker Captains as well as by members of the larger Docker community.

The motivation for the talk was to enable users to quickly gain a higher level of proficiency and understanding in Docker. The metaphor I use is with traditional carpentry tools; whilst a novice can pick up a saw and cut a piece of wood, an expert will be able to do the same job more quickly, more accurately, and with less frustration. The reason is partly experience, but also that the expert has a more thorough understanding of, and affinity with, her tools. The tips in my talk are designed both to reduce frustration and to increase efficiency when working with Docker.

To give an example, one of the tips I present is on configuring the `docker ps` output format. By default `docker ps` prints out a really long line that looks messy except on the widest of terminals. You can fix this by using the `--format` argument to pick the fields you're interested in, e.g.:

 $ docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"
 NAMES              IMAGE    STATUS
 blissful_turing    nginx    Up 50 seconds

You can then make this the default by adding a “psFormat” key to your `.docker/config.json` file. Brian Goff, a Docker employee, wrote a blog post that goes into more detail. If you found this interesting and want to learn more tips, please go and check out the video and slides!
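
For reference, the relevant entry in `.docker/config.json` looks roughly like this (merge the key into your existing file rather than replacing it):

  {
    "psFormat": "table {{.Names}}\t{{.Image}}\t{{.Status}}"
  }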



Source: Docker


Hot Startups on AWS – October 2017

In 2015, the Centers for Medicare and Medicaid Services (CMS) reported that healthcare spending made up 17.8% of the U.S. GDP – that’s almost $3.2 trillion or $9,990 per person. By 2025, the CMS estimates this number will increase to nearly 20%. As cloud technology evolves in the healthcare and life science industries, we are seeing how companies of all sizes are using AWS to provide powerful and innovative solutions to customers across the globe. This month we are excited to feature the following startups:

  • ClearCare – helping home care agencies operate efficiently and grow their business.
  • DNAnexus – providing a cloud-based global network for sharing and managing genomic data.

ClearCare (San Francisco, CA)

ClearCare envisions a future where home care is the only choice for aging in place. Home care agencies play a critical role in the economy and their communities by significantly lowering the overall cost of care, reducing the number of hospital admissions, and bending the cost curve of aging. Patients receiving home care typically have multiple chronic conditions and functional limitations, driving over $190 billion in healthcare spending in the U.S. each year. To offset these costs, health insurance payers are developing in-home care management programs for patients. ClearCare’s goal is to help home care agencies leverage technology to improve costs, outcomes, and quality of life for the aging population. The company’s powerful software platform is specifically designed for use by non-medical, in-home care agencies to manage their businesses.

Founder and CEO Geoff Nudd created ClearCare because of his own grandmother’s need for care. Keeping family members and caregivers up to date on a loved one’s well-being can be difficult, so Geoff created what is now ClearCare’s Family Room, which enables caregivers and agency staff to check schedules and receive real-time updates about what’s happening in the home. Since then, agencies have provided feedback on other areas of their businesses that could be streamlined, and ClearCare has now built over 20 modules to help home care agencies optimize operations, with services including telephony, billing, and payroll. ClearCare now serves over 4,000 home care agencies, representing 500,000 caregivers and 400,000 seniors.

Using AWS, ClearCare is able to spin up reliable infrastructure for proofs of concept and iterate on those systems to quickly get value to market. The company runs many AWS services including Amazon Elasticsearch Service, Amazon RDS, and Amazon CloudFront. Amazon EMR and Amazon Athena have enabled ClearCare to build a Hadoop-based ETL and data warehousing system that processes terabytes of data each day. By utilizing these managed services, ClearCare has been able to go from concept to customer delivery in less than three months.

To learn more about ClearCare, check out their website.

DNAnexus (Mountain View, CA)

DNAnexus is accelerating the application of genomic data in precision medicine by providing a cloud-based platform for sharing and managing genomic and biomedical data and analysis tools. The company was founded in 2009 by Stanford graduate student Andreas Sundquist and two Stanford professors, Arend Sidow and Serafim Batzoglou, to address the need for scaling secondary analysis of next-generation sequencing (NGS) data in the cloud. The founders quickly learned that users needed a flexible solution to build complex analysis workflows and tools that enable them to share and manage large volumes of data. DNAnexus is optimized to address the challenges of security, scalability, and collaboration for organizations that are pursuing genomic-based approaches to health, both in clinics and research labs. DNAnexus has a global customer base – spanning North America, Europe, Asia-Pacific, South America, and Africa – that runs a million jobs each month and is doubling their storage year-over-year. The company currently stores more than 10 petabytes of biomedical and genomic data. That is equivalent to approximately 100,000 genomes, or in simpler terms, over 50 billion Facebook photos!

DNAnexus is working with its customers to help expand their translational informatics research, which includes expanding into clinical trial genomic services. This will help companies developing different medicines to better stratify clinical trial populations and develop companion tests that enable the right patient to get the right medicine. In collaboration with the Janssen Human Microbiome Institute, DNAnexus is also launching Mosaic – a community platform for microbiome research.

AWS provides DNAnexus and its customers the flexibility to grow and scale research programs. Building the technology infrastructure required to manage these projects in-house is expensive and time-consuming. DNAnexus removes that barrier for labs of any size by using scalable AWS cloud resources. The company deploys its customers’ genomic pipelines on Amazon EC2, using Amazon S3 for high-performance, high-durability storage, and Amazon Glacier for low-cost data archiving.

Learn more about DNAnexus here.


Source: New feed


AWS HIPAA Eligibility Update (October 2017) – Sixteen Additional Services

Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications that run on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:

Amazon Aurora with PostgreSQL Compatibility – This brand-new addition to Amazon Aurora allows you to encrypt your relational databases using keys that you create and manage through AWS Key Management Service (KMS). When you enable encryption for an Amazon Aurora database, the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Read New – Encryption at Rest for Amazon Aurora to learn more.
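
A sketch of creating such an encrypted cluster from the CLI (the identifiers, credentials, and KMS key alias are placeholders):

  $ aws rds create-db-cluster \
      --db-cluster-identifier my-aurora-pg \
      --engine aurora-postgresql \
      --master-username dbadmin \
      --master-user-password MySecretPassw0rd \
      --storage-encrypted \
      --kms-key-id alias/my-aurora-key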

Amazon CloudWatch Logs – You can use the logs to monitor and troubleshoot your systems and applications. You can monitor your existing system, application, and custom log files in near real-time, watching for specific phrases, values, or patterns. Log data can be stored durably and at low cost, for as long as needed. To learn more, read Store and Monitor OS & Application Log Files with Amazon CloudWatch and Improvements to CloudWatch Logs and Dashboards.
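
For example, you can watch a log group for a phrase and turn matches into a CloudWatch metric (the names below are placeholders):

  $ aws logs put-metric-filter \
      --log-group-name my-app-logs \
      --filter-name ErrorCount \
      --filter-pattern "ERROR" \
      --metric-transformations metricName=ErrorCount,metricNamespace=MyApp,metricValue=1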

Amazon Connect – This self-service, cloud-based contact center makes it easy for you to deliver better customer service at a lower cost. You can use the visual designer to set up your contact flows, manage agents, and track performance, all without specialized skills. Read Amazon Connect – Customer Contact Center in the Cloud and New – Amazon Connect and Amazon Lex Integration to learn more.

Amazon ElastiCache for Redis – This service lets you deploy, operate, and scale an in-memory data store or cache that you can use to improve the performance of your applications. Each ElastiCache for Redis cluster publishes key performance metrics to Amazon CloudWatch. To learn more, read Caching in the Cloud with Amazon ElastiCache and Amazon ElastiCache – Now With a Dash of Redis.

Amazon Kinesis Streams – This service allows you to build applications that process or analyze streaming data such as website clickstreams, financial transactions, social media feeds, and location-tracking events. To learn more, read Amazon Kinesis – Real-Time Processing of Streaming Big Data and New: Server-Side Encryption for Amazon Kinesis Streams.
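
Server-side encryption can be enabled on an existing stream with a single call (the stream name is a placeholder; alias/aws/kinesis is the AWS-managed key):

  $ aws kinesis start-stream-encryption \
      --stream-name my-stream \
      --encryption-type KMS \
      --key-id alias/aws/kinesis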

Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.

Amazon RDS SQL Server – This service lets you set up scalable, managed Microsoft SQL Server instances in minutes, and also offers high performance, high availability, and a simplified security model. To learn more, read Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk and Amazon RDS for Microsoft SQL Server – Transparent Data Encryption (TDE).

Amazon Route 53 – This is a highly available Domain Name System (DNS) service. It translates domain names into IP addresses. To learn more, read Moving Ahead with Amazon Route 53.

AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.

AWS CloudHSM – A cloud-based Hardware Security Module (HSM) for key storage and management at cloud scale. Designed for sensitive workloads, CloudHSM lets you manage your own keys using FIPS 140-2 Level 3 validated HSMs. To learn more, read AWS CloudHSM – Secure Key Storage and Cryptographic Operations and AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads.

AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.

AWS Lambda – This service lets you run event-driven application or backend code without thinking about or managing servers. To learn more, read AWS Lambda – Run Code in the Cloud, AWS Lambda – A Look Back at 2016, and AWS Lambda – In Full Production with New Features for Mobile Devs.

Lambda@Edge – You can use this new feature of AWS Lambda to run Node.js functions across the global network of AWS locations without having to provision or manage servers, in order to deliver rich, personalized content to your users with low latency. Read Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge to learn more.

AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.

AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).

AWS Storage Gateway – This hybrid storage service lets your on-premises applications use AWS cloud storage (Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Elastic File System) in a simple and seamless way, with storage for volumes, files, and virtual tapes. To learn more, read The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage and File Interface to AWS Storage Gateway.

And there you go! Check out my earlier post for a list of resources that will help you to build applications that comply with HIPAA and HITECH.



Source: New feed
