How to Purge the Amazon CloudFront Cache Through the AWS CLI

The AWS CLI supports purging the CloudFront cache. When you configure the AWS CLI on your system, you need the aws_access_key_id, aws_secret_access_key, and region for the user, entered in the "config" file in the ".aws" directory.

STEP 1: Add the credentials to the "config" file:

[default]
output = text
aws_access_key_id = **********************
aws_secret_access_key = **********************************
region = us-west-2

[preview]
cloudfront=true

(The [preview] section is required because the CloudFront commands are a preview feature of the AWS CLI.)

STEP 2: Go to the AWS console, open the IAM tab, and attach the "CloudFrontFullAccess" policy to the user.

STEP 3: Create an "invalidate.json" file for the invalidation batch:

{
  "Paths": {
    "Quantity": 1,
    "Items": ["/*"]
  },
  "CallerReference": "my-invalidation-2015-09-01"
}

STEP 4: Find the distribution ID of the CloudFront distribution you want to purge, then run the final command for the cache purge:
aws cloudfront create-invalidation --invalidation-batch file://invalidate.json --distribution-id ************
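
If you would rather script the purge than call the CLI, here is a minimal sketch using boto3, the AWS SDK for Python. It picks up the same credentials configured in STEP 1; the distribution ID is a placeholder:

import time
import boto3

cloudfront = boto3.client('cloudfront')

DISTRIBUTION_ID = 'EXXXXXXXXXXXXX'  # placeholder: your distribution ID

response = cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        'Paths': {'Quantity': 1, 'Items': ['/*']},
        # CallerReference must be unique for each new invalidation request.
        'CallerReference': 'my-invalidation-{}'.format(int(time.time())),
    },
)
print(response['Invalidation']['Id'], response['Invalidation']['Status'])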


New Whitepaper: Aligning to the NIST Cybersecurity Framework in the AWS Cloud


Today, we released the Aligning to the NIST Cybersecurity Framework in the AWS Cloud whitepaper. Both public and commercial sector organizations can use this whitepaper to assess the AWS environment against the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) and improve the security measures they implement and operate (also known as security in the cloud). The whitepaper also provides a third-party auditor letter attesting to the AWS Cloud offering’s conformance to NIST CSF risk management practices (also known as security of the cloud), allowing organizations to properly protect their data across AWS.

In February 2014, NIST published the Framework for Improving Critical Infrastructure Cybersecurity in response to Presidential Executive Order 13636, “Improving Critical Infrastructure Cybersecurity,” which called for the development of a voluntary framework to help organizations improve the cybersecurity, risk management, and resilience of their systems. The Cybersecurity Enhancement Act of 2014 reinforced the legitimacy and authority of the NIST CSF by codifying it and its voluntary adoption into law, and federal agency Federal Information Security Modernization Act (FISMA) reporting metrics now align to the NIST CSF. Though it is intended for adoption by the critical infrastructure sector, the foundational set of security disciplines in the NIST CSF has been endorsed by government and industry as a recommended baseline for use by any organization, regardless of its sector or size.

We recognize the additional level of effort an organization has to expend for each new security assurance framework it implements. To reduce that burden, we provide a detailed breakout of AWS Cloud offerings and associated customer and AWS responsibilities to facilitate alignment with the NIST CSF. Organizations ranging from federal and state agencies to regulated entities to large enterprises can use this whitepaper as a guide for implementing AWS solutions to achieve the risk management outcomes in the NIST CSF.

Security, compliance, and customer data protection are our top priorities, and we will continue to provide the resources and services for you to meet your desired outcomes while integrating security best practices in the AWS environment. When you use AWS solutions, you can be confident that we protect your data with a level of assurance that meets, if not exceeds, your requirements and needs, and gives you the resources to secure your AWS environment. To request support for implementing the NIST CSF in your organization by using AWS services, contact your AWS account manager.

– Chris Gile, Senior Manager, Security Assurance


Source: AWS Security


Important security updates for Jenkins core

The Jenkins project just released security updates to Jenkins, versions 2.57 and 2.46.2, that fix several security vulnerabilities, including a critical one.

That critical vulnerability is an unauthenticated remote code execution via the remoting-based CLI. When the fix for the previous vulnerability of this kind was announced, the project also announced plans to revisit the design of the CLI that enabled this class of vulnerabilities.

Since Jenkins 2.54, there is a new CLI implementation that is not based on remoting, and the remoting mode has been deprecated. Despite this being a major feature, it was backported to 2.46.2 so that LTS users can also disable the unsafe remoting mode while retaining almost all of the CLI's existing functionality.

For an overview of what was fixed, see the security advisory. For an overview of the possible impact of these changes on upgrading Jenkins LTS, see the LTS upgrade guide. Read these documents, especially if you use the CLI with Jenkins LTS, because these fixes have possible side effects.

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.


The AWS EU (London) Region Achieves Public Services Network (PSN) Assurance


AWS is excited to announce that the AWS EU (London) Region has achieved Public Services Network (PSN) assurance. This means that the EU (London) Region can now be connected to the PSN (or PSN customers) by PSN-certified AWS Direct Connect partners. PSN assurance demonstrates to our UK Public Sector customers that the EU (London) Region has met the stringent requirements of PSN and provides an assured platform on which to build UK Public Sector services. Customers are required to ensure that applications and configurations applied to their AWS instances meet the PSN standards, and they must undertake PSN certification for the content, platform, applications, systems, and networks they run on AWS (but no longer need to include AWS infrastructure and products in their certification).

In conjunction with our Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner. AWS has also created a UK-OFFICIAL on AWS Quick Start, which provisions an environment suitable for UK-OFFICIAL classified data. This Quick Start includes guidance and controls that help public sector organizations manage risks and ensure security when handling UK-OFFICIAL information assets.

You can download the EU (London) Region PSN Code of Connection and Service Compliance certificates through AWS Artifact. For further information about using AWS in the context of the National Cyber Security Centre (NCSC) UK’s Cloud Security Principles, see Using AWS in the Context of NCSC UK’s Cloud Security Principles.

– Oliver


Source: AWS Security


How to Visualize and Refine Your Network’s Security by Adding Security Group IDs to Your VPC Flow Logs

Many organizations begin their cloud journey to AWS by moving a few applications to demonstrate the power and flexibility of AWS. This initial application architecture includes building security groups that control the network ports, protocols, and IP addresses that govern access and traffic to their AWS Virtual Private Cloud (VPC). When the architecture process is complete and an application is fully functional, some organizations forget to revisit their security groups to optimize rules and help ensure the appropriate level of governance and compliance. Not optimizing security groups can create less-than-optimal security, with ports open that may not be needed or source IP ranges set that are broader than required.

Last year, I published an AWS Security Blog post that showed how to optimize and visualize your security groups. Today’s post continues in the vein of that post by using Amazon Kinesis Firehose and AWS Lambda to enrich the VPC Flow Logs dataset and enhance your ability to optimize security groups. The capabilities in this post’s solution are based on the Lambda functions available in this VPC Flow Log Appender GitHub repository.

Solution overview

Removing unused rules or limiting source IP addresses requires either an in-depth knowledge of an application’s active ports on Amazon EC2 instances or analysis of active network traffic. In this blog post, I discuss a method to:

  • Use VPC Flow Logs to capture information about the IP traffic in an Amazon VPC.
  • Enrich the VPC Flow Logs dataset with security group IDs by using Firehose and Lambda.
  • Demonstrate how to visualize and analyze network traffic from VPC Flow Logs by using Amazon Elasticsearch Service (Amazon ES).

Using this approach can help you remediate security group rules to necessary source IPs, ports, and nested security groups, helping to improve the security of your AWS resources while minimizing the potential risk to production environments.

Solution diagram

As illustrated in the preceding diagram, this is how the data flows in this model:

  1. The VPC posts its flow log data to Amazon CloudWatch Logs.
  2. The Lambda ingestor function passes the data to Firehose.
  3. Firehose then passes the data to the Lambda decorator function.
  4. The Lambda decorator function performs a number of lookups for each record and returns the data to Firehose with additional fields.
  5. Firehose then posts the enhanced dataset to the Amazon ES endpoint and any errors to Amazon S3.

The solution

Step 1: Set up your Amazon ES cluster and VPC Flow Logs

Create an Amazon ES cluster

The first step in this solution is to create an Amazon ES cluster. Do this first because it takes some time for the cluster to become available. If you are new to Amazon ES, you can learn more about it in the Amazon ES documentation.

To create an Amazon ES cluster:

  1. In the AWS Management Console, choose Elasticsearch Service under Analytics.
  2. Choose Create a new domain or Get started.
  3. Type es-flowlogs for the Elasticsearch domain name.
  4. Set Version to 5.1 in the drop-down list. Choose Next.
  5. Set Instance count to 2 and select the Enable zone awareness check box. (This ensures cluster stability in the event of an Availability Zone outage.) Accept the defaults for the rest of the page.
    • [Optional] If you use this domain for production purposes, I recommend using dedicated master nodes. Select the Enable dedicated master check box and select medium.elasticsearch from the Instance type drop-down list. Leave the Instance count at 3, which is the default.
  6. Choose Next.
  7. From the Set the domain access policy to drop-down list on the next page, select Allow access to the domain from specific IP(s). In the dialog box, type or paste the comma-separated list of valid IPv4 addresses or Classless Inter-Domain Routing (CIDR) blocks you would like to be able to access the Amazon ES domain.
  8. Choose Next.
  9. On the next page, choose Confirm and create.

It will take a few minutes for the cluster to be available. In the meantime, you can begin enabling VPC Flow Logs.
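
If you prefer to script the domain creation, a minimal boto3 sketch follows. The Region, account ID, CIDR block, and instance type are placeholder assumptions; substitute values that match the console choices above:

import json
import boto3

es = boto3.client('es')

# Placeholder access policy: allow es:* only from an IP range you control.
access_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'AWS': '*'},
        'Action': 'es:*',
        'Resource': 'arn:aws:es:us-west-2:111122223333:domain/es-flowlogs/*',
        'Condition': {'IpAddress': {'aws:SourceIp': ['203.0.113.0/24']}},
    }],
}

es.create_elasticsearch_domain(
    DomainName='es-flowlogs',
    ElasticsearchVersion='5.1',
    ElasticsearchClusterConfig={
        'InstanceType': 'm4.large.elasticsearch',  # assumed size; pick one that fits your load
        'InstanceCount': 2,
        'ZoneAwarenessEnabled': True,  # spreads the two nodes across Availability Zones
    },
    # m4 instances are EBS-only, so EBS storage must be enabled.
    EBSOptions={'EBSEnabled': True, 'VolumeType': 'gp2', 'VolumeSize': 10},
    AccessPolicies=json.dumps(access_policy),
)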

Enable VPC Flow Logs

VPC Flow Logs is a feature that lets you capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. For more information about VPC Flow Logs, see VPC Flow Logs and CloudWatch Logs.

To enable VPC Flow Logs:

  1. In the AWS Management Console, choose CloudWatch under Management Tools.
  2. Click Logs in the navigation pane.
  3. From the Actions drop-down list, choose Create log group.
  4. Type Flowlogs as the Log Group Name.
  5. In the AWS Management Console, choose VPC under Networking & Content Delivery.
  6. Choose Your VPCs in the navigation pane, and select the VPC you would like to analyze. (You can also enable VPC Flow Logs on only a subnet if you do not want to enable it on the entire VPC.)
  7. Choose the Flow Logs tab in the bottom pane, and then choose Create Flow Log.
  8. In the text beneath the Role box, choose Set Up Permissions (this will open an IAM management page).
  9. Choose Allow on the IAM management page. Return to the VPC Flow Logs setup page.
  10. Choose All from the Filter drop-down list.
  11. Choose flowlogsRole from the Role drop-down list (you created this role in steps 8 and 9 of this procedure).
  12. Choose Flowlogs from the Destination Log Group drop-down list.
  13. Choose Create Flow Log.
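
For reference, a scripted equivalent of this procedure might look like the following boto3 sketch; the VPC ID and role ARN are placeholders for the resources in your account:

import boto3

logs = boto3.client('logs')
ec2 = boto3.client('ec2')

VPC_ID = 'vpc-0123456789abcdef0'                          # placeholder VPC ID
ROLE_ARN = 'arn:aws:iam::111122223333:role/flowlogsRole'  # placeholder role ARN

# Steps 3-4 equivalent: create the destination log group.
logs.create_log_group(logGroupName='Flowlogs')

# Steps 5-13 equivalent: enable flow logs on the VPC.
ec2.create_flow_logs(
    ResourceIds=[VPC_ID],
    ResourceType='VPC',
    TrafficType='ALL',                  # "All" in the Filter drop-down list
    LogGroupName='Flowlogs',
    DeliverLogsPermissionArn=ROLE_ARN,  # allows delivery to CloudWatch Logs
)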

Step 2: Set up AWS Lambda to enrich the VPC Flow Logs dataset with security group IDs

If you completed Step 1, VPC Flow Logs data is now streaming to CloudWatch Logs. Next, you will deploy two Lambda functions. The first, the ingestor function, moves the data into Firehose, and the second, the decorator function, adds three new fields to the VPC Flow Logs dataset and returns records to Firehose for delivery to Amazon ES.

The new fields added by the decorator function are:

  1. Direction – By comparing the primary IP address of the elastic network interface (ENI) with the destination IP address, you can determine the direction of the IP connection.
  2. Security group IDs – Each ENI can be associated with as many as five security groups. The security group IDs are added as an array in the record.
  3. Source – A number of fields that result from looking up srcaddr in a free geographical lookup service. The Source includes:
    • source-country-code
    • source-country-name
    • source-region-code
    • source-region-name
    • source-city
    • source-location (latitude and longitude)

Follow the instructions in this GitHub repository to deploy the two Lambda functions and the associated permissions that are required.
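
To make the decorator's role concrete, here is a minimal sketch of the record-in/record-out contract that any Firehose data-transformation function must follow. It is not the code from the repository; the enrichment step is a placeholder, and 'security-group-ids' is a hypothetical field name:

import base64
import json

def handler(event, context):
    output = []
    for record in event['records']:
        payload = json.loads(base64.b64decode(record['data']))

        # Placeholder enrichment: the real decorator looks up the ENI's
        # security groups, derives the direction, and geolocates srcaddr.
        payload['security-group-ids'] = ['sg-00000000']

        output.append({
            'recordId': record['recordId'],  # must echo the incoming record ID
            'result': 'Ok',                  # or 'Dropped' / 'ProcessingFailed'
            'data': base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {'records': output}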

Step 3: Set up Firehose

Firehose is a fully managed service that allows you to transform flow log data and stream it into Amazon ES. The service scales automatically with load, and you only pay for the data transmitted through the service.

To create a Firehose delivery stream:

  1. In the AWS Management Console, choose Kinesis under Analytics.
  2. Choose Go to Firehose and then choose Create Delivery Stream.

Step 3.1: Define the destination

  1. Choose Amazon Elasticsearch Service from the Destination drop-down list.
  2. For Delivery stream name, type VPCFlowLogsToElasticSearch (the name must match the default environment variable in the ingestion Lambda function).
  3. Choose es-flowlogs from the Elasticsearch domain drop-down list. (The Amazon ES cluster configuration state needs to be Active for es-flowlogs to be available in the drop-down list.)
  4. For Index, type cwl.
  5. Choose OneDay from the Index rotation drop-down list.
  6. For Type, type log.
  7. For Backup mode, select Failed Documents Only.
  8. For S3 bucket, select New S3 bucket in the drop-down list and type a bucket name of your choice. Choose Create bucket.
  9. Choose Next.

Step 3.2: Configure Lambda

  1. Choose Enable for Data transformation.
  2. Choose vpc-flow-log-appender-dev-FlowLogDecoratorFunction-xxxxx from the Lambda function drop-down list (make sure you select the Decorator function).
  3. From the IAM role drop-down list, choose Create/Update existing IAM role, Firehose delivery IAM role.
  4. Choose Allow. This takes you back to the Firehose Configuration.
  5. Choose Next and then choose Create Delivery Stream.
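
Steps 3.1 and 3.2 can also be scripted. The following boto3 sketch mirrors the console settings above; every ARN is a placeholder for the role, domain, bucket, and decorator function in your account:

import boto3

firehose = boto3.client('firehose')

ROLE_ARN = 'arn:aws:iam::111122223333:role/firehose_delivery_role'  # placeholder

firehose.create_delivery_stream(
    DeliveryStreamName='VPCFlowLogsToElasticSearch',
    ElasticsearchDestinationConfiguration={
        'RoleARN': ROLE_ARN,
        'DomainARN': 'arn:aws:es:us-west-2:111122223333:domain/es-flowlogs',
        'IndexName': 'cwl',
        'TypeName': 'log',
        'IndexRotationPeriod': 'OneDay',
        'S3BackupMode': 'FailedDocumentsOnly',
        'S3Configuration': {
            'RoleARN': ROLE_ARN,
            'BucketARN': 'arn:aws:s3:::my-flowlog-errors',  # placeholder bucket
        },
        # Step 3.2 equivalent: route records through the decorator function.
        'ProcessingConfiguration': {
            'Enabled': True,
            'Processors': [{
                'Type': 'Lambda',
                'Parameters': [{
                    'ParameterName': 'LambdaArn',
                    'ParameterValue': 'arn:aws:lambda:us-west-2:111122223333:'
                                      'function:vpc-flow-log-appender-dev-FlowLogDecoratorFunction-xxxxx',
                }],
            }],
        },
    },
)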

Step 4: Stream data to Firehose

The next step is to enable the data to stream from CloudWatch Logs to Firehose. You will use the Lambda ingestion function you deployed earlier: vpc-flow-log-appender-dev-FlowLogIngestionFunction-xxxxxxx.

  1. In the AWS Management Console, choose CloudWatch under Management Tools.
  2. Choose Logs in the navigation pane, and select the check box next to Flowlogs under Log Groups.
  3. From the Actions menu, choose Stream to AWS Lambda. Choose vpc-flow-log-appender-dev-FlowLogIngestionFunction-xxxxxxx (select the Ingestion function). Choose Next.
  4. Choose Amazon VPC Flow Logs from the Log Format drop-down list. Choose Next.
    Screenshot of Log Format drop-down list
  5. Choose Start Streaming.
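
Behind the scenes, the Stream to AWS Lambda wizard creates a CloudWatch Logs subscription filter and grants CloudWatch Logs permission to invoke the function. A boto3 sketch of the subscription filter (with a placeholder function ARN) looks like this:

import boto3

logs = boto3.client('logs')

# Placeholder ARN: use the ingestion function deployed in your account.
INGESTION_ARN = ('arn:aws:lambda:us-west-2:111122223333:function:'
                 'vpc-flow-log-appender-dev-FlowLogIngestionFunction-xxxxxxx')

logs.put_subscription_filter(
    logGroupName='Flowlogs',
    filterName='FlowlogsToLambda',
    filterPattern='',            # an empty pattern forwards every log event
    destinationArn=INGESTION_ARN,
)

If you script it this way, you also need a lambda add-permission call that allows the CloudWatch Logs service principal to invoke the ingestion function; the console wizard adds that permission for you.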

VPC Flow Logs will now be forwarded to Firehose, capturing information about the IP traffic going to and from network interfaces in your VPC. Firehose appends additional data fields and forwards the enriched data to your Amazon ES cluster.

Data is now flowing to your Amazon ES cluster, but be patient: it can take up to 30 minutes for the data to begin appearing there.

Step 5: Verify that the flow log data is streaming through Firehose to the Amazon ES cluster

You should see VPC Flow Logs with ENI IDs under Log Streams (see the following screenshot) and Stored Bytes greater than zero in the CloudWatch log group.

Do you have logs from the Lambda ingestion function in the CloudWatch log group? As shown in the following screenshot, you should see START, END and REPORT records. These show that the ingestion function is running and streaming data to Firehose.

Screenshot showing logs from the Lambda ingestion function

Do you have logs from the Lambda decorator function in the CloudWatch log group? You should see START, END, and REPORT records as well as entries similar to: “Processing completed. Successful records XXX, Failed records 0.”

Screenshot showing logs from the Lambda decorator function

Do you have cwl-* indexes in the Amazon ES dashboard, as shown in the following screenshot? If you do, you are successfully streaming through Firehose and populating the Amazon ES cluster, and you are ready to proceed to Step 6. Remember, it can take up to 30 minutes for the flow logs from your workloads to begin flowing to the Amazon ES cluster.

Screenshot showing cwl-* indexes in the Amazon ES dashboard
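
You can also check for the indexes from the command line. The following sketch uses the Python requests library and assumes your IP address is allowed by the domain's access policy, so the request does not need to be signed; the endpoint is a placeholder, so copy yours from the Amazon ES console:

import requests

ES_ENDPOINT = 'https://search-es-flowlogs-xxxxxxxx.us-west-2.es.amazonaws.com'

r = requests.get(ES_ENDPOINT + '/_cat/indices/cwl-*?v')
print(r.text)  # one line per rotated cwl-* index once data is flowing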

Step 6: Using the SGDashboard to analyze VPC network traffic

You now need to set up a Kibana dashboard to monitor the traffic in your VPC.

To find the Kibana URL:

  1. In the AWS Management Console, click Elasticsearch Service under Analytics.
  2. Choose es-flowlogs under Elasticsearch domain name.
  3. Click the link next to Kibana, as shown in the following screenshot.
    Screenshot showing the Kibana link

The first time you access Kibana, you will be asked to set the default index. To set the default index in the Amazon ES cluster:

  1. Set the Index name or pattern to cwl-*.
    Screenshot of configuring an index pattern
  2. For Time-field name, type @timestamp.
  3. Choose Create.

Load the SGDashboard:

  1. Download this JSON file and save it to your computer. The file includes a dashboard and visualizations I created for this blog post’s purposes.
  2. In Kibana, choose Management in the navigation pane, choose Saved Objects, and then import the file you just downloaded.
  3. Choose Dashboard and Open to load the SGDashboard you just imported. (You might have to press Enter in the top search box to have the dashboard load the first time.)

The following screenshot shows the SGDashboard after it has loaded.

Screenshot showing the dashboard after it has loaded

The SGDashboard is composed of a set of visualizations. Each visualization contains a view or summary of the underlying data contained in the Amazon ES cluster, as shown in the preceding screenshot. You can control the timeframe for the dashboard in the upper right corner. By clicking the timeframe, the dashboard exposes alternative timeframes that you can select.

The SGDashboard includes a list of security groups, destination ports, source IP addresses, actions, protocols, and connection directions as well as raw VPC Flow Log records. This information is useful because you can compare this to your security group configurations. Ports might be open in the security group but have no network traffic flowing to the instances on those ports, which means the corresponding rules can probably be removed. Also, by evaluating IP ranges in use, you can narrow the ranges to only those IP addresses required for the application. The following screenshot on the left shows a view of the SGDashboard for a specific security group. By comparing its accepted inbound IP addresses with the security group rules in the following screenshot on the right, you can ensure the source IP ranges are sufficiently restrictive.

Screenshot showing a view of the SGDashboard for a specific security group   Screenshot showing security group rules

Analyze VPC Flow Logs data

Amazon ES allows you to quickly view and filter VPC Flow Logs data to determine what network traffic is flowing in your VPC. This analysis requires an understanding of security groups and elastic network interfaces (ENIs). Flow log records are captured at the ENI level, so if two security groups are associated with the same ENI, traffic allowed by the first security group is recorded for both groups. You will still see traffic to the ENI listed under the second security group because it also allows traffic to the ENI. Therefore, when you click a security group that you want to filter, additional groups might still be on the list because they are included in the VPC Flow Logs records.

The following screenshot on the left is a view of the SGDashboard with a security group selected (sg-978414e8). Even though that security group has a filter, two additional security groups remain in the dashboard. The following screenshot on the right shows the raw log data where each record contains all three security groups and demonstrates that all three security groups share a common set of flow log records.

Screenshot showing the SGDashboard with a security group selected   Screenshot showing raw log data

Also, note that security groups are stateful, so if the instance itself is initiating traffic to a different location, the return traffic will be displayed in the Kibana dashboard. The best example of this is port 123 Network Time Protocol (NTP). This type of traffic can be easily removed from the display by choosing the port on the right side of the dashboard, and then reversing the filter, as shown in the following screenshot. By reversing the filter, you can exclude data from the view.

Screenshot of reversing the filter on a port

Example: Unused security groups

Let’s say that some security groups are no longer in use. First, I change the time range by clicking the current time range in the top right corner of the dashboard, as shown in the following screenshot. I select Week to date.

Screenshot of changing the time range

As the following screenshot shows, the dashboard has identified five security groups that have had traffic during the week to date.

Screenshot showing five security groups that have had traffic during the week to date

As you can see in the following screenshot, I have many security groups in my test account that are not in use. Any security groups not in the SGDashboard are candidates for removal.

Example: Unused inbound rules

Let’s take a look at security group sg-63ed8c1c from the preceding screenshot. When I click sg-63ed8c1c (the security group ID) in the dashboard, a filter is applied that reduces the security groups displayed to only the records with that security group included. We can compare the traffic associated with this security group in the SGDashboard (shown in the following screenshot) to the security group rules in the EC2 console.

Screenshot showing the traffic of the sg-63ed8c1c security group

As the following screenshot of the EC2 console shows, this security group has only two inbound rules: one for HTTP on port 80 and one for RDP. The SGDashboard shows that no traffic is flowing on port 80, so I can safely remove that rule from the security group.

Screenshot showing this security group has only 2 inbound rules

Summary

It can be challenging to help ensure that your AWS Cloud environment allows only intended traffic and is as secure and manageable as possible. In this post, I have shown how to enable VPC Flow Logs. I then showed how to use Firehose and Lambda to add security group IDs, directions, and locations to the VPC Flow Logs dataset. The SGDashboard then enables you to analyze the flow log data and compare it with your security group configurations to improve your cloud security.

If you have comments about this blog post, submit them in the “Comments” section below. If you have implementation or troubleshooting questions about the solution in this post, please start a new thread on the AWS WAF forum.

– Guy


Source: AWS Security


Amazon Simple Queue Service Introduces Server-Side Encryption for Queues


You can now use Amazon Simple Queue Service (SQS) to exchange sensitive data between applications using server-side encryption (SSE). SQS is a fully managed message queuing service for reliably communicating between distributed software components and microservices at any scale. You can use SQS to take advantage of the scale, cost, and operational benefits of a managed messaging service. The addition of SSE allows you to transmit sensitive data with the increased security of using encrypted queues.

SQS SSE uses the 256-bit Advanced Encryption Standard (AES-256 GCM algorithm) to encrypt each message body by using a unique key. AWS Key Management Service (KMS) provides encryption key management. In addition, KMS works with AWS CloudTrail to provide logs of all encryption key usage to help meet your regulatory and compliance needs.
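
Creating an encrypted queue is a small change at queue creation time. Here is a minimal boto3 sketch; the queue name is a placeholder, and the alias below selects the AWS managed key for SQS rather than a custom KMS key:

import boto3

sqs = boto3.client('sqs', region_name='us-west-2')  # a Region where SSE is available

sqs.create_queue(
    QueueName='my-encrypted-queue',
    Attributes={
        'KmsMasterKeyId': 'alias/aws/sqs',
        'KmsDataKeyReusePeriodSeconds': '300',  # how long SQS reuses a data key
    },
)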

SQS SSE is now available in the US West (Oregon) and US East (Ohio) Regions, with more AWS Regions to follow. There is no additional charge for using encrypted queues, but there is a charge to use KMS. For more information about KMS and CloudTrail pricing, see AWS Key Management Service Pricing.

To learn more, see the full post on the AWS Blog.

– Craig


Source: AWS Security


Four HIPAA Eligible Services Recently Added to the AWS Business Associate Agreement


We are pleased to announce that the following four AWS services have been added in recent weeks to the AWS Business Associate Agreement (BAA):

As with all HIPAA Eligible Services covered under the BAA, Protected Health Information (PHI) must be encrypted while at rest or in transit. See Architecting for HIPAA Security and Compliance on Amazon Web Services, which explains how you can configure each AWS HIPAA Eligible Service to store, process, and transmit PHI.

For more details, see the full AWS Blog post.

– Chad


Source: AWS Security


How to Enable the Use of Remote Desktops by Deploying Microsoft Remote Desktop Licensing Manager on AWS Microsoft AD

AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, now supports Microsoft Remote Desktop Licensing Manager (RD Licensing). By using AWS Microsoft AD as the directory for your Remote Desktop Services solution, you reduce the time it takes to deploy remote desktop solutions on Amazon EC2 for Windows Server instances, and you enable your users to use remote desktops with the credentials they already know. In this blog post, I explain how to deploy RD Licensing Manager on AWS Microsoft AD to enable your users to sign in to remote desktops by using credentials stored in an AWS Microsoft AD or an on-premises Active Directory (AD) domain.

Enable your AWS Microsoft AD users to open remote desktop sessions

To use RD Licensing, you must authorize RD Licensing servers in the same Active Directory domain as the Windows Remote Desktop Session Hosts (RD Session Hosts) by adding them to the Terminal Service Licensing Servers security group in AD. This new release grants your AWS Microsoft AD administrative account permissions to do this. As a result, you can now deploy RD Session Hosts in the AWS Cloud without the extra time and effort to set up and configure your own AD domain on Amazon EC2 for Windows Server.

The following diagram illustrates the steps to set up remote desktops with RD Licensing with users in AWS Microsoft AD and shows what happens when users connect to remote desktops.

Diagram illustrating the steps to set up remote desktops with RD Licensing with users in AWS Microsoft AD

In detail, here is how the process works, as it is illustrated in the preceding diagram:

  1. Create an AWS Microsoft AD directory and create users in the directory. You can add user accounts (in this case jsmith) using Active Directory Users and Computers on an EC2 for Windows Server instance that you joined to the domain.
  2. Create EC2 for Windows Server instances to use as your RD Licensing servers (RDLS1 in the preceding diagram). Add the instances to the same domain to which you will join your Windows Remote Desktop Session Hosts (RD Session Hosts).
  3. Configure your EC2 for Windows Server instances as RD Licensing servers and add them to the Terminal Service Licensing Servers security group in AWS Microsoft AD. You can connect to the instances from the AWS Management Console to configure RD Licensing. You also can use Active Directory Users and Computers to add the RD Licensing servers to the security group, thereby authorizing the instances for RD Licensing.
  4. Install your Remote Desktop Services client access licenses (RDS CALs) on the RD Licensing server. You can connect to the instances from the AWS Management Console to install the RDS CALs.
  5. Create other hosts for use as RD Session Hosts (RDSH1 in the diagram). Add the hosts to the same domain as your RD Licensing servers.
  6. A user (in this case jsmith) attempts to open an RDS session.
  7. The RD Session Host requests an RDS CAL from the RD Licensing Server.
  8. The RD Licensing Server returns an RDS CAL to the RD Session Host.

Because the user exists in AWS Microsoft AD, authentication happens against AWS Microsoft AD. The order of authentication relative to session creation depends on whether you configure your RD Session Host for Network Level Authentication.

Enable your users to open remote desktop sessions with their on-premises credentials

If you have an on-premises AD domain with users, your users can open remote desktop sessions with their on-premises credentials if you create a forest trust from AWS Microsoft AD to your Active Directory. The trust enables using on-premises credentials without the need for complex directory synchronization or replication. The following diagram illustrates how to configure a system using the same steps as in the previous section, except that you must create a one-way trust to your on-premises domain in Step 1a. With the trust in place, AWS Microsoft AD refers the RD Session Host to the on-premises domain for authentication.

Diagram illustrating how to configure a system using the same steps as in the previous section, except that you must create a one-way trust to your on-premises domain in Step 1a

Summary

In this post, I have explained how to authorize RD Licensing in AWS Microsoft AD to support EC2-based remote desktop sessions for AWS managed users and on-premises AD managed users. To learn more about how to use AWS Microsoft AD, see the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page.

If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, please start a new thread on the Directory Service forum.

– Ron


Source: AWS Security


Amazon QuickSight Now Supports Audit Logging with AWS CloudTrail


Amazon QuickSight democratizes business intelligence, making it easier and cheaper for you to provide advanced business analytics capabilities to everyone in your organization. Amazon QuickSight also enables you to understand your business better and helps you make data-driven decisions more quickly. However, determining who has access to which data in your organization can still be an administrative challenge.

Today, we are happy to announce that Amazon QuickSight now supports AWS CloudTrail, which enables you to log Amazon QuickSight events across your account. Amazon QuickSight administrators can now quickly and accurately answer questions such as who changed an analysis last or who connected to a sensitive database. CloudTrail support in Amazon QuickSight gives your administrators better governance, auditing, and risk management of your company’s Amazon QuickSight usage.
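
Once the events are flowing, you can also query for Amazon QuickSight activity programmatically. A minimal boto3 sketch that filters the CloudTrail event history by event source might look like this:

import boto3

cloudtrail = boto3.client('cloudtrail')

# Look up recent Amazon QuickSight events recorded by CloudTrail.
events = cloudtrail.lookup_events(
    LookupAttributes=[{
        'AttributeKey': 'EventSource',
        'AttributeValue': 'quicksight.amazonaws.com',
    }],
    MaxResults=20,
)
for e in events['Events']:
    print(e['EventTime'], e['EventName'], e.get('Username', ''))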

To learn more, see the full AWS Big Data Blog post.

– Craig


Source: AWS Security
