Our Newest AWS Community Heroes (Spring 2018 Edition)

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have four Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk about, and write about serverless architectures, cloud computing, and AWS. He wrote the book Serverless Architectures on AWS and is currently collaborating on another book, Serverless Design Patterns, with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back-end architecture, microservices, and systems orchestration. Peter holds a PhD in Computer Science from Monash University in Australia, and can be followed on Twitter, LinkedIn, Medium, and GitHub.




Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS-related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot, a Slack bot that helps DevOps teams detect and resolve incidents on AWS.

In close collaboration with his brother Andreas Wittig, Michael actively creates AWS-related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io, where they share their knowledge about AWS with the community. The Wittig brothers have also published several video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers co-organize the AWS user group in Stuttgart.





Fernando Hönig

Fernando is an experienced Infrastructure Solutions Leader, holding 5 AWS Certifications, with extensive IT Architecture and Management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando has built an online community for Spanish speakers worldwide.

Fernando founded a LinkedIn group, a Slack community, and a YouTube channel, all named “AWS en Español”, and runs a monthly webinar, streamed on YouTube, where different leaders discuss aspects and challenges of the AWS Cloud.

Over the last 18 months he has helped run and coach AWS User Group leaders across LATAM and Spain; 10 new User Groups were founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.




Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010, and was essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack, engaging with developers on technology and architecture, and with top management, where he advises on cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.










AWS Documentation is Now Open Source and on GitHub

Earlier this year we made the AWS SDK developer guides available as GitHub repos (all found within the awsdocs organization) and invited interested parties to contribute changes and improvements in the form of pull requests.

Today we are adding over 138 additional developer and user guides to the organization, and we are looking forward to receiving your requests. You can fix bugs, improve code samples (or submit new ones), add detail, and rewrite sentences and paragraphs in the interest of accuracy or clarity. You can also look at the commit history in order to learn more about new feature and service launches and to track improvements to the documents.

Making a Contribution
Before you get started, read the Amazon Open Source Code of Conduct and take a look at the Contributing Guidelines document (generally named CONTRIBUTING.md) for the AWS service of interest. Then create a GitHub account if you don’t already have one.

Once you find something to change or improve, visit the HTML version of the document and click on the Edit on GitHub button at the top of the page:

This will allow you to edit the document in source form (typically Markdown or reStructuredText). The source code is used to produce the HTML, PDF, and Kindle versions of the documentation.

Once you are in GitHub, click on the pencil icon:

This creates a “fork” — a separate copy of the file that you can edit in isolation.

Next, make an edit. In general, as a new contributor to an open source project, you should gain experience and build your reputation by making small, high-quality edits. I’ll change “dozens of services” to “over one hundred services” in this document:

Then I summarize my change and click Propose file change:

I examine the differences to verify my changes and then click Create pull request:

Then I review the details and click Create pull request again:

The pull request (also known as a PR) makes its way to the Elastic Beanstalk documentation team, and they get to decide if they want to accept it, reject it, or engage in a conversation with me to learn more. The teams endeavor to respond to PRs within 48 hours, and I’ll be notified via GitHub whenever the status of the PR changes.

As is the case with most open source projects, a steady stream of focused, modest-sized pull requests is preferable to the occasional king-sized request with dozens of edits inside.

If I am interested in tracking changes to a repo over time, I can Watch and/or Star it:

If I Watch a repo, I’ll receive an email whenever there’s a new release, issue, or pull request for that service guide.

Go Fork It
This launch gives you another way to help us to improve AWS. Let me know what you think!



New – Usage-Based Pricing for Amazon Chime

I am a regular and frequent user of Amazon Chime, as are most of my colleagues. In addition to dozens of ongoing point-to-point sessions with individual colleagues, I participate in one chat room for the AWS Blog team and another for the AWS Evangelists. I attend several meetings per day, and use Amazon Chime’s conferencing feature to set up my own meetings once or twice a month. While I don’t actually see the monthly bill, I am on Amazon Chime’s Pro plan, at a per-user rate of $15 per month. This is, based on feedback from our customers, a good value for users who regularly host meetings, but a bit too high for those who use the other features and/or spend more time attending meetings than hosting them.

Usage-Based Pricing
To meet the needs of these users and to make Amazon Chime even more economical, we are introducing a new, usage-based pricing model that goes into effect on April 1, 2018.

During the initial, 30-day free trial, users have access to all Amazon Chime features at no charge. After the trial period ends, the users can chat with each other and attend meetings, both at no charge. On days that they host meetings, a $3 per day charge will be made, up to a maximum of $15 per month. Based on historical usage patterns, this will result in an overall price reduction for virtually all Amazon Chime customers.
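The new model boils down to a simple formula. Here’s a quick sketch in Python, using the $3 per hosting day and $15 monthly cap from the announcement (the function name is mine, not an AWS API):

```python
def monthly_chime_charge(days_hosting_meetings):
    """Usage-based charge for one user in one month: $3 for each day the
    user hosts at least one meeting, capped at $15 for the month.
    Chatting and attending meetings are free, so they don't appear here."""
    if days_hosting_meetings < 0:
        raise ValueError("days cannot be negative")
    return min(3 * days_hosting_meetings, 15)

# A user who hosts meetings on 2 days pays $6; someone who hosts every
# day pays at most $15, the same as the old per-user Pro plan price.
```

This is why the change is an overall price reduction for virtually everyone: only users who host meetings on five or more days in a month hit the old $15 price.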

In order to use the scheduling and hosting features after the end of the trial period, users must be connected to an AWS account, and the administrator must enable Amazon Chime Pro features for the account. Thanks to the usage-based model, administrators will no longer have to purchase licenses for individual users in their organization. Instead, they can use Active Directory settings and policy management within the Amazon Chime Console to set permissions as desired.

As part of this change we are also eliminating the Plus plan and adding the screen sharing and corporate directory features to the Basic plan.

This change goes into effect on April 1, 2018! To learn more, take a look at the Amazon Chime Pricing page.



AWS Summit Season is Almost Here – Get Ready to Register!

I’m writing this post from my hotel room in Tokyo while doing my best to fight jet lag! I’m here to speak at JAWS Days and Startup Day, and to meet with some local customers.

I do want to remind you that the AWS Global Summit series is just about to start! With events planned for North America, Latin America, Japan and the rest of Asia, Europe, the Middle East, Africa, and Greater China, odds are that there’s one not too far from you. You can register for the San Francisco Summit today and you can ask to be notified as soon as registration for the other 30+ cities opens up.

The Summits are offered at no charge and are an excellent way for you to learn more about AWS. You’ll get to hear from our leaders and tech teams, our partners, and from other customers. You can also participate in hands-on workshops, labs, and team challenges.

Because the events are multi-track, you may want to bring a colleague or two in order to make sure that you don’t miss something of interest to your organization.


PS – I keep meaning to share this cool video that my friend Mike Selinker took at AWS re:Invent. Check it out!


Join the AWS Quest – Help me to Rebuild Ozz!

If you have been watching my weekly videos, you may have noticed an orange robot in the background from time to time. That’s Ozz, my robot friend and helper. Built from the ground up in my home laboratory, Ozz is an invaluable part of the AWS blogging process!

Sadly, when we announced that we were adding the AWS Podcast to the blog, Ozz literally went to pieces, and all I have left is a large pile of bricks and some great memories of our time together. From what I can tell, Ozz went haywire over this new development due to excessive enthusiasm!

Ozz, perhaps anticipating that this could happen at some point, buried a set of clues (each pointing to carefully protected plans) in this blog, in the AWS Podcast, and in other parts of the AWS site. If we can find and decode these plans, we can rebuild Ozz, better, stronger, and faster. Unfortunately, due to concerns about the ultra-competitive robot friend market, Ozz concealed each of the plans inside a set of devious, brain-twisting puzzles. You are going to need to look high, low, inside, outside, around, and through the clues in order to figure this one out. You may even need to phone a friend or two.

Your mission, should you choose to accept it, is to find these clues, decode the plans, and help me to rebuild Ozz. The information that I have is a bit fuzzy, but I think there are 20 or so puzzles, each one describing one part of Ozz. If we can solve them all, we’ll get together on Twitch later this month and put Ozz back together.

Are you with me on this? Let’s do it!



AWS Online Tech Talks – March 2018

Can you believe it’s already the month of March? With some great new Tech Talks available this month, there’s no better time to grow your knowledge about AWS services and solutions.

AWS Online Tech Talks

March 2018 – Schedule

Below is the full schedule for the live, online technical sessions being held during the month of March. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Analytics & Big Data

March 21, 2018 | 09:00 AM – 09:45 AM PT – Amazon Elasticsearch Service Deep Dive (300) – Learn how to run Elasticsearch using Amazon Elasticsearch Service.

March 28, 2018 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Athena (300) – Dive deep into the most common Amazon Athena use cases, including working with other AWS services.


March 26, 2018 | 01:00 PM – 01:45 PM PT – High Performance Computing in the Cloud (200) – Learn how AWS is enabling faster time to results and higher ROI when it comes to solving the big problems in science, engineering and business with high performance computing in the cloud.

March 27, 2018 | 01:00 PM – 01:45 PM PT – Introduction to Hybrid Cloud on AWS (200) – Learn how AWS is building the industry’s broadest capabilities for Hybrid Cloud deployments.


March 19, 2018 | 01:00 PM – 01:45 PM PT – Running Kubernetes with Amazon EKS (200) – Learn about how Amazon EKS makes it easy to run and scale Kubernetes in the cloud.


March 21, 2018 | 11:00 AM – 11:45 AM PT – Best Practices for Migrating Oracle Databases to the Cloud (200) – Learn best practices for migrating your Oracle databases to the cloud.


March 21, 2018 | 01:00 PM – 01:45 PM PT – Life of a Code Change to a Tier 1 Service (200) – Come follow the journey of an AWS code change on its way to production.


March 26, 2018 | 11:00 AM – 11:45 AM PT – Improve Efficiency by Migrating Messaging to Amazon MQ (200) – Learn how Amazon MQ makes it easy for enterprises, like GE, to save time and money by migrating to a managed message broker.


March 22, 2018 | 11:00 AM – 12:00 PM PT – Learn Step by Step How iDevices Uses AWS IoT Analytics (300) – Learn how iDevices, with AWS IoT Analytics, built an anomaly detection system and gained deep insights into user behavior.

Machine Learning

March 20, 2018 | 01:00 PM – 01:45 PM PT – Working with Scalable Machine Learning Algorithms in Amazon SageMaker (400) – Get an introduction to the collection of distributed streaming machine learning algorithms that come with Amazon SageMaker.

March 22, 2018 | 01:00 PM – 01:45 PM PT – Enabling Deep Learning in IoT Applications with Apache MXNet (200) – Learn how to use deep learning in IoT apps using Apache MXNet.

March 28, 2018 | 01:00 PM – 02:00 PM PT – Media Processing Workflows at High Velocity and Scale using AI and ML (200) – Hear how AWS customers have improved media supply chains using AI in areas such as metadata tagging (Rekognition and Comprehend), translations, transcriptions, and cloud services (Elemental).

Management Tools

March 19, 2018 | 11:00 AM – 12:00 PM PT – Achieving Global Consistency Using AWS CloudFormation StackSets (300) – Learn how to centrally manage your accounts and regions using AWS CloudFormation StackSets.


March 22, 2018 | 09:00 AM – 09:45 AM PT – New Mobile CLI and Console Experience (200) – Learn how AWS Mobile Services has introduced a new CLI and streamlined console experience in order to simplify and speed up the development of mobile applications with innovative AWS features and back-end functionality.


March 28, 2018 | 09:00 AM – 09:45 AM PT – Deep Dive on New AWS Networking Features (300) – Learn how AWS PrivateLink, Direct Connect gateway, and new features with Elastic Load Balancers (ELB) come together to meet the needs of a modern enterprise.

Security, Identity & Compliance

March 27, 2018 | 09:00 AM – 09:45 AM PT – The Evolution of Identity and Access Management on AWS (300) – Learn how identity federation can address many security and management scenarios as you expand your use of AWS.

March 29, 2018 | 09:00 AM – 09:45 AM PT – Navigating GDPR Compliance on AWS (300) – Get a walkthrough of potential General Data Protection Regulation (GDPR) obligations and see how the AWS cloud offers services and features that are consistent with GDPR considerations in the ramp-up to the May 25th, 2018 enforcement date.


March 20, 2018 | 11:00 AM – 11:45 AM PT – Operating Your Serverless API in Production at Scale (400) – Learn how to deploy, monitor, and manage serverless APIs in production.


March 27, 2018 | 11:00 AM – 11:45 AM PT – Enterprise Applications with Amazon EFS (300) – Join us for a technical deep dive on Amazon EFS, where you’ll learn tips and tricks for integrating your enterprise applications with Amazon EFS.

March 29, 2018 | 11:00 AM – 11:45 AM PT – Transforming Data Lakes with Amazon S3 Select & Amazon Glacier Select (300) – Join us for a webinar where we’ll demonstrate how Amazon S3 Select can increase analytics query performance up to 400%, and Amazon Glacier Select makes it practical to extend queries to archive storage, significantly reducing data lake storage costs.


Auto Scaling is now available for Amazon SageMaker

Kumar Venkateswar, Product Manager on the AWS ML Platforms Team, shares details on the announcement of Auto Scaling with Amazon SageMaker.

With Amazon SageMaker, thousands of customers have been able to easily build, train and deploy their machine learning (ML) models. Today, we’re making it even easier to manage production ML models, with Auto Scaling for Amazon SageMaker. Instead of having to manually manage the number of instances to match the scale that you need for your inferences, you can now have SageMaker automatically scale the number of instances based on an AWS Auto Scaling Policy.

SageMaker has made managing the ML process easier for many customers. We’ve seen customers take advantage of managed Jupyter notebooks and managed distributed training. We’ve seen customers deploying their models to SageMaker hosting for inferences, as they integrate machine learning with their applications. SageMaker makes this easy – you don’t have to think about patching the operating system (OS) or frameworks on your inference hosts, and you don’t have to configure inference hosts across Availability Zones. You just deploy your models to SageMaker, and it handles the rest.

Until now, you have needed to specify the number and type of instances per endpoint (or production variant) to provide the scale that you need for your inferences. If your inference volume changes, you can change the number and/or type of instances that back each endpoint to accommodate that change, without incurring any downtime. In addition to making it easy to change provisioning, customers have asked us how we can make managing capacity for SageMaker even easier.

With Auto Scaling for Amazon SageMaker, available in the SageMaker console, the AWS Auto Scaling API, and the AWS SDK, this becomes much easier. Now, instead of having to closely monitor inference volume and change the endpoint configuration in response, customers can configure a scaling policy to be used by AWS Auto Scaling. Auto Scaling adjusts the number of instances up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values defined in the policy. In this way, customers can automatically adjust their inference capacity to maintain predictable performance at a low cost. You simply specify the target inference throughput per instance and provide upper and lower bounds for the number of instances for each production variant. SageMaker will then monitor throughput per instance using Amazon CloudWatch alarms, and then it will adjust provisioned capacity up or down as needed.
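Under the hood this rides on the Application Auto Scaling service. As a sketch (the endpoint and variant names below are hypothetical), the configuration boils down to two requests, which you could pass to `boto3.client("application-autoscaling")` via `register_scalable_target(**target)` and `put_scaling_policy(**policy)`:

```python
# Sketch of the two Application Auto Scaling requests behind SageMaker
# auto scaling. Field names follow the application-autoscaling API; the
# endpoint and variant names are placeholders, not real resources.

def autoscaling_requests(endpoint, variant, min_instances, max_instances,
                         target_invocations_per_instance):
    resource_id = f"endpoint/{endpoint}/variant/{variant}"
    # 1. Register the production variant as a scalable target with
    #    upper and lower bounds on the instance count.
    target = {
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_instances,
        "MaxCapacity": max_instances,
    }
    # 2. Attach a target-tracking policy keyed to the per-instance
    #    invocation throughput, backed by CloudWatch metrics.
    policy = {
        "PolicyName": f"{endpoint}-target-tracking",
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_invocations_per_instance,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
        },
    }
    return target, policy
```

The console flow described below fills in exactly these values for you: the min/max instance counts and the target throughput per instance.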

After you configure the endpoint with Auto Scaling, SageMaker will continue to monitor your deployed models to automatically adjust the instance count. SageMaker will keep throughput within desired levels, in response to changes in application traffic. This makes it easier to manage models in production, and it can help reduce the cost of deployed models, as you no longer have to provision sufficient capacity in order to manage your peak load. Instead, you configure the limits to accommodate your minimum expected traffic and the maximum peak, and Amazon SageMaker will work within those limits to minimize cost.

How do you get started? Open the SageMaker console. For existing endpoints, you first access the endpoint to modify the settings.

Then, scroll to the Endpoint runtime settings section, select the variant, and choose Configure auto scaling.

First, configure the minimum and maximum number of instances.

Next, choose the throughput per instance at which you want to add an additional instance, given previous load testing.

You can optionally set cooldown periods for scaling in or out, to avoid oscillation during periods of wide fluctuation in workload. If you don’t, SageMaker will assume default values.

And that’s it! You now have an endpoint that will automatically scale with inference traffic.

You pay for the capacity used at regular SageMaker pay-as-you-go pricing, so you no longer have to pay for unused capacity during relative idle periods!

Auto Scaling in Amazon SageMaker is available today in the US East (N. Virginia), US East (Ohio), EU (Ireland), and US West (Oregon) AWS Regions. To learn more, see the Amazon SageMaker Auto Scaling documentation.

Kumar Venkateswar is a Product Manager in the AWS ML Platforms team, which includes Amazon SageMaker, Amazon Machine Learning, and the AWS Deep Learning AMIs. When not working, Kumar plays the violin and Magic: The Gathering.








Attending Mobile World Congress? Check Out Our Connected Car Demo!

Are you planning to attend Mobile World Congress 2018 in Barcelona (one of my favorite cities)? If so, please be sure to check out the connected car demo in Hall 5 Booth 5E41.

The AWS Greengrass team has been working on a proof of concept with our friends at Vodafone and Saguna to show you how connected cars can change the automotive industry. The demo is built around the emerging concept of multi-access edge computing, or MEC.

Car manufacturers want to provide advanced digital technology in their vehicles but don’t want to make significant upgrades to the on-board computing resources due to cost, power, and time-to-market considerations, not to mention the issues that arise when attempting to retrofit cars that are already on the road. MEC offloads processing resources to the edge of the mobile network, for instance a hub site in the access network. This model helps car manufacturers to take advantage of low-latency compute resources while building features that can evolve and improve over the lifetime of the vehicle, often 20 years or more. It also reduces the complexity and the cost of the on-board components.

The MWC demo streams a live video feed over Vodafone’s 4G LTE network through Saguna’s AI-powered MEC solution, which leverages AWS Greengrass. The demo focuses on driver safety, with the goal of helping to detect drivers who are distracted by talking to someone or something in the car. With an on-board camera aimed at the driver, backed up by AI-powered movement tracking and pattern detection running at the edge of the mobile network, distractions can be identified and the driver can be alerted. This architecture also allows manufacturers to enhance existing cars, since most of the computing is handled at the edge of the mobile network.

If you couldn’t make it to Mobile World Congress, you can check out the video for this solution here.



Amazon GameLift FleetIQ and Spot Instances – Save up to 90% On Game Server Hosting

Amazon GameLift is a scalable, cloud-based runtime environment for session-based multiplayer games. You simply upload a build of your game, tell Amazon GameLift which type of EC2 instances you’d like to host it on, and sit back while Amazon GameLift takes care of setting up sessions and maintaining a suitably-sized fleet of EC2 instances. This automatic scaling allows you to accommodate demand that varies over time without having to keep compute resources in reserve during quiet periods.

Use Spot Instances
Last week we added a new feature to further decrease your per-player, per-hour costs when you host your game on Amazon GameLift. Before that launch, Amazon GameLift instances were always launched in On-Demand form. Instances of this type are always billed at fixed prices, as detailed on the Amazon GameLift Pricing page.

You can now make use of Amazon GameLift Spot Instances in your GameLift fleets. These instances represent unused capacity and have prices that rise and fall over time. While your results will vary, you may see savings of up to 90% when compared to On-Demand Instances.

While you can use Spot Instances as a simple money-saving tool, there are other interesting use cases as well. Every game has a life cycle, along with a cadre of loyal players who want to keep on playing until you finally unplug and decommission the servers. You could create an Amazon GameLift fleet made up of low-cost Spot Instances and keep that beloved game up and running as long as possible without breaking the bank. Behind the scenes, an Amazon GameLift Queue will make use of both Spot and On-Demand Instances, balancing price and availability in an attempt to give you the best possible service at the lowest price.

As I mentioned earlier, Spot Instances represent capacity that is not in use by On-Demand Instances. When this capacity decreases, existing Spot Instances could be interrupted with two minutes of notification and then terminated. Fortunately, there’s a lot of capacity and terminations are, statistically speaking, quite rare. To reduce the frequency even further, Amazon GameLift Queues now include a new feature that we call FleetIQ.

FleetIQ is powered by historical pricing and termination data for Spot Instances. This data, in combination with a very conservative strategy for choosing instance types, further reduces the odds that any particular game will be notified and then interrupted. The onProcessTerminate callback in your game’s server process will be activated if the underlying Spot Instance is about to be interrupted. At that point you have two minutes to close out the game, save any logs, free up any resources, and otherwise wrap things up. While you are doing this, you can call GetTerminationTime to see how much time remains.
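To make the two-minute window concrete, here’s an illustrative Python sketch of the wrap-up logic you might run from that callback. Note that the actual GameLift Server SDK (where onProcessTerminate and GetTerminationTime live) is C++/C#; the function below is a hypothetical stand-in that takes the deadline as a plain argument so the budgeting logic can be shown on its own:

```python
def wrap_up_game(termination_time, now, tasks):
    """Run as many wrap-up tasks as fit before the termination deadline.

    termination_time, now: timestamps in seconds (e.g. epoch seconds).
    tasks: list of (name, estimated_seconds) pairs, most important first,
           e.g. closing out the game before saving logs.
    Returns the names of the tasks there was time to run.
    """
    completed = []
    remaining = termination_time - now
    for name, cost in tasks:
        if cost <= remaining:
            completed.append(name)   # there's still time for this task
            remaining -= cost
        # otherwise skip it, but keep checking cheaper tasks below
    return completed
```

With a 120-second budget, a task list like close out the game (30s), save logs (60s), upload a replay (60s) would complete the first two and skip the third.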

Creating a Fleet
To take advantage of Spot Instances and FleetIQ, you can use the Amazon GameLift console or API to set up Queues with multiple fleets of Spot and On-Demand Instances. By adding more fleets into each Queue, you give FleetIQ more options to improve latency, interruption rate, and cost. To start a new game session on an instance, FleetIQ first selects the region with the lowest latency for each player, then chooses the fleet with the lowest interruption rate and cost.
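If you prefer the API route, the queue setup can be sketched as a CreateGameSessionQueue request. The fleet ARNs below are hypothetical placeholders; the dict could be passed to `boto3.client("gamelift")` via `create_game_session_queue(**queue)`:

```python
# Sketch of a game session queue mixing a Spot fleet and an On-Demand
# fleet. Field names follow the GameLift CreateGameSessionQueue API;
# the ARNs and values are illustrative only.

def game_session_queue_request(name, spot_fleet_arn, on_demand_fleet_arn,
                               max_player_latency_ms=150):
    return {
        "Name": name,
        "TimeoutInSeconds": 600,
        # FleetIQ chooses among these destinations, balancing latency,
        # interruption rate, and cost.
        "Destinations": [
            {"DestinationArn": spot_fleet_arn},
            {"DestinationArn": on_demand_fleet_arn},
        ],
        # Player latency policy: don't place a session anywhere that any
        # individual player would see more than the stated latency.
        "PlayerLatencyPolicies": [
            {
                "MaximumIndividualPlayerLatencyMilliseconds": max_player_latency_ms,
                "PolicyDurationSeconds": 60,
            },
        ],
    }
```

Adding more fleets to the Destinations list is what gives FleetIQ more options to work with.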

Let’s walk through the process. I’ll create a fleet of On-Demand Instances and a fleet of Spot Instances, in that order:


I take a quick break while the fleets are validated and activated:

Then I create a queue for my game. I select the fleets as the destinations for the queue:

If I am building a game that will have a global user base, I can create fleets in additional AWS Regions and use a player latency policy so that game sessions will be created in a suitable region:

To learn more about how to use this feature, take a look at the Spot Fleet Integration Guide.

Now Available
You can use Amazon GameLift Spot Instance fleets to host your session-based games now! Take a look, give it a try, and let me know what you think.

If you are planning to attend GDC this year, be sure to swing by booth 1001. Check out our GDC 2018 site for more information on our dev day talks, classroom sessions, and in-booth demos.




Now Available – AWS Serverless Application Repository

Last year I suggested that you Get Ready for the AWS Serverless Application Repository and gave you a sneak peek. The Repository is designed to make it as easy as possible for you to discover, configure, and deploy serverless applications and components on AWS. It is also an ideal venue for AWS partners, enterprise customers, and independent developers to share their serverless creations.

Now Available
After a well-received public preview, the AWS Serverless Application Repository is now generally available and you can start using it today!

As a consumer, you will be able to tap into a thriving ecosystem of serverless applications and components that will be a perfect complement to your machine learning, image processing, IoT, and general-purpose work. You can configure and consume them as-is, or you can take them apart, add features, and submit pull requests to the author.

As a publisher, you can publish your contribution in the Serverless Application Repository with ease. You simply enter a name and a description, choose some labels to increase discoverability, select an appropriate open source license from a menu, and supply a README to help users get started. Then you enter a link to your existing source code repo, choose a SAM template, and designate a semantic version.
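Those publishing fields map onto a CreateApplication request in the serverlessrepo API. Here’s a sketch with entirely hypothetical values (the app name, URLs, and author are placeholders); the dict could be passed to `boto3.client("serverlessrepo")` via `create_application(**app)`:

```python
# Sketch of the publishing flow as a serverlessrepo CreateApplication
# request. Every value here is illustrative, not a real application.

def create_application_request():
    return {
        "Name": "my-todo-app",
        "Description": "A sample serverless to-do application",
        "Author": "jane-doe",
        "Labels": ["todo", "sample"],       # labels increase discoverability
        "SpdxLicenseId": "MIT",             # open source license from the menu
        "ReadmeUrl": "s3://my-bucket/README.md",  # helps users get started
        "SourceCodeUrl": "https://github.com/jane-doe/my-todo-app",
        "SemanticVersion": "1.0.0",         # the initial version
        "TemplateUrl": "s3://my-bucket/template.yaml",  # the SAM template
    }
```

Each key corresponds to one of the console steps above: name, description, labels, license, README, source repo link, SAM template, and semantic version.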

Let’s take a look at both operations…

Consuming a Serverless Application
The Serverless Application Repository is accessible from the Lambda Console. I can page through the existing applications or I can initiate a search:

A search for “todo” returns some interesting results:

I simply click on an application to learn more:

I can configure the application and deploy it right away if I am already familiar with the application:

I can expand each of the sections to learn more. The Permissions section tells me which IAM policies will be used:

And the Template section displays the SAM template that will be used to deploy the application:

I can inspect the template to learn more about the AWS resources that will be created when the template is deployed. I can also use the templates as a learning resource in preparation for creating and publishing my own application.

The License section displays the application’s license:

To deploy todo, I name the application and click Deploy:

Deployment starts immediately and is done within a minute (application deployment time will vary, depending on the number and type of resources to be created):

I can see all of my deployed applications in the Lambda Console:

There’s currently no way for a SAM template to indicate that an API Gateway function returns binary media types, so I set this up by hand and then re-deploy the API:

Following the directions in the Readme, I open the API Gateway Console and find the URL for the app in the API Gateway Dashboard:

I visit the URL and enter some items into my list:

Publishing a Serverless Application
Publishing applications is a breeze! I visit the Serverless App Repository page and click on Publish application to get started:

Then I assign a name to my application, enter my own name, and so forth:

I can choose from a long list of open-source friendly SPDX licenses:

I can create an initial version of my application at this point, or I can do it later. Either way, I simply provide a version number, a URL to a public repository containing my code, and a SAM template:

Available Now
The AWS Serverless Application Repository is available now and you can start using it today, paying only for the AWS resources consumed by the serverless applications that you deploy.

You can deploy applications in the US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo) Regions. You can publish from the US East (N. Virginia) or US East (Ohio) Regions for global availability.


