SCS-C01 : AWS Certified Security – Specialty : Part 07
-
A company has a few dozen application servers in private subnets behind an Elastic Load Balancer (ELB) in an AWS Auto Scaling group. The application is accessed from the web over HTTPS. The data must always be encrypted in transit. The Security Engineer is worried about potential key exposure due to vulnerabilities in the application software.
Which approach will meet these requirements while protecting the external certificate during a breach?
- Use a Network Load Balancer (NLB) to pass through traffic on port 443 from the internet to port 443 on the instances.
- Purchase an external certificate, and upload it to the AWS Certificate Manager (for use with the ELB) and to the instances. Have the ELB decrypt traffic, and route and re-encrypt with the same certificate.
- Generate an internal self-signed certificate and apply it to the instances. Use AWS Certificate Manager to generate a new external certificate for the ELB. Have the ELB decrypt traffic, and route and re-encrypt with the internal certificate.
- Upload a new external certificate to the load balancer. Have the ELB decrypt the traffic and forward it on port 80 to the instances.
-
Which of the following are valid event sources that are associated with web access control lists that trigger AWS WAF rules? (Choose two.)
- Amazon S3 static web hosting
- Amazon CloudFront distribution
- Application Load Balancer
- Amazon Route 53
- VPC Flow Logs
Explanation:
A web access control list (web ACL) gives you fine-grained control over the web requests that your Amazon API Gateway API, Amazon CloudFront distribution, or Application Load Balancer responds to.
-
A company uses identity federation to authenticate users into an identity account (987654321987) where the users assume an IAM role named IdentityRole. The users then assume an IAM role named JobFunctionRole in the target AWS account (123456789123) to perform their job functions.
A user is unable to assume the IAM role in the target account. The policy attached to the role in the identity account is:
What should be done to enable the user to assume the appropriate role in the target account?
- Update the IAM policy attached to the role in the identity account to be:
- Update the trust policy on the role in the target account to be:
- Update the trust policy on the role in the identity account to be:
- Update the IAM policy attached to the role in the target account to be:
- Update the IAM policy attached to the role in the identity account to be:
-
A Security Engineer is working with the development team to design a supply chain application that stores sensitive inventory data in an Amazon S3 bucket. The application will use an AWS KMS customer master key (CMK) to encrypt the data on Amazon S3. The inventory data on Amazon S3 will be shared with vendors. All vendors will use AWS principals from their own AWS accounts to access the data on Amazon S3. The vendor list may change weekly, and the solution must support cross-account access.
What is the MOST efficient way to manage access control for the KMS CMK?
- Use KMS grants to manage key access. Programmatically create and revoke grants to manage vendor access.
- Use an IAM role to manage key access. Programmatically update the IAM role policies to manage vendor access.
- Use KMS key policies to manage key access. Programmatically update the KMS key policies to manage vendor access.
- Use delegated access across AWS accounts by using IAM roles to manage key access. Programmatically update the IAM trust policy to manage cross-account vendor access.
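As a rough sketch of the grants approach, the following builds the parameters a kms.create_grant call would take for one vendor. The key ARN and vendor role ARN are hypothetical placeholders, not values from the question.

```python
# Sketch (assumptions noted above): onboarding a vendor with a KMS grant
# instead of editing the key policy each week.

def build_grant_request(key_arn: str, vendor_principal_arn: str) -> dict:
    """Build the parameters for kms.create_grant for one vendor."""
    return {
        "KeyId": key_arn,
        "GranteePrincipal": vendor_principal_arn,
        # Scope the vendor to the minimum operations needed.
        "Operations": ["Decrypt", "GenerateDataKey"],
    }

req = build_grant_request(
    "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",   # hypothetical
    "arn:aws:iam::444455556666:role/VendorAccessRole",          # hypothetical
)
```

Onboarding would call kms.create_grant(**req), which returns a GrantId; offboarding retires or revokes that grant by ID. Neither direction requires touching the key policy, which is why grants scale better for a weekly-changing vendor list.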
-
A Security Engineer is setting up an AWS CloudTrail trail for all regions in an AWS account. For added security, the logs are stored using server-side encryption with AWS KMS-managed keys (SSE-KMS) and have log integrity validation enabled.
While testing the solution, the Security Engineer discovers that the digest files are readable, but the log files are not. What is the MOST likely cause?
- The log files fail integrity validation and automatically are marked as unavailable.
- The KMS key policy does not grant the Security Engineer’s IAM user or role permissions to decrypt with it.
- The bucket is set up to use server-side encryption with Amazon S3-managed keys (SSE-S3) as the default and does not allow SSE-KMS-encrypted files.
- An IAM policy applicable to the Security Engineer’s IAM user or role denies access to the “CloudTrail/” prefix in the Amazon S3 bucket.
-
A corporate cloud security policy states that communications between the company’s VPC and KMS must travel entirely within the AWS network and not use public service endpoints.
Which combination of the following actions MOST satisfies this requirement? (Choose two.)
- Add the aws:sourceVpce condition to the AWS KMS key policy referencing the company’s VPC endpoint ID.
- Remove the VPC internet gateway from the VPC and add a virtual private gateway to the VPC to prevent direct, public internet connectivity.
- Create a VPC endpoint for AWS KMS with private DNS enabled.
- Use the KMS Import Key feature to securely transfer the AWS KMS key over a VPN.
- Add the following condition to the AWS KMS key policy: "aws:SourceIp": "10.0.0.0/16".
Explanation:
An IAM policy can deny access to KMS except through your VPC endpoint with the following condition statement:
"Condition": {
    "StringNotEquals": {
        "aws:sourceVpce": "vpce-0295a3caf8414c94a"
    }
}
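For illustration, the condition above can be embedded in a full key-policy Deny statement. The sketch below builds one in Python; the endpoint ID is the example value from the explanation, and a production policy would normally also carve out administrator principals so they are not locked out.

```python
# Sketch: a Deny statement built around the aws:sourceVpce condition.
import json

vpce_id = "vpce-0295a3caf8414c94a"  # example endpoint ID from the text

deny_outside_vpce = {
    "Sid": "DenyKMSExceptThroughVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "kms:*",
    "Resource": "*",
    # Requests that do not arrive through the named VPC endpoint are denied.
    "Condition": {"StringNotEquals": {"aws:sourceVpce": vpce_id}},
}

print(json.dumps(deny_outside_vpce, indent=2))
```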
If you select the Enable Private DNS Name option, the standard AWS KMS DNS hostname (https://kms.<region>.amazonaws.com) resolves to your VPC endpoint.
-
A company had one of its Amazon EC2 key pairs compromised. A Security Engineer must identify which current Linux EC2 instances were deployed and used the compromised key pair.
How can this task be accomplished?
- Obtain the list of instances by directly querying Amazon EC2 using: aws ec2 describe-instances --filters "Name=key-name,Values=KEYNAMEHERE".
- Obtain the fingerprint for the key pair from the AWS Management Console, then search for the fingerprint in the Amazon Inspector logs.
- Obtain the output from the EC2 instance metadata using: curl http://169.254.169.254/latest/meta-data/public-keys/0/.
- Obtain the fingerprint for the key pair from the AWS Management Console, then search for the fingerprint in Amazon CloudWatch Logs using: aws logs filter-log-events.
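To make the filtering in the first option concrete, the sketch below replicates the key-name filter over a fabricated describe-instances response. The instance IDs and key names are made up for illustration.

```python
# Sketch: what "Name=key-name,Values=..." filters on, shown over sample data.
sample_reservations = [
    {"Instances": [{"InstanceId": "i-0aaa", "KeyName": "compromised-key"}]},
    {"Instances": [{"InstanceId": "i-0bbb", "KeyName": "safe-key"}]},
    {"Instances": [{"InstanceId": "i-0ccc", "KeyName": "compromised-key"}]},
]

def instances_using_key(reservations, key_name):
    """Return IDs of instances launched with the given key pair."""
    return [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
        if inst.get("KeyName") == key_name
    ]

matches = instances_using_key(sample_reservations, "compromised-key")
print(matches)  # ['i-0aaa', 'i-0ccc']
```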
-
A Security Engineer for a large company is managing a data processing application used by 1,500 subsidiary companies. The parent and subsidiary companies all use AWS. The application uses TCP port 443 and runs on Amazon EC2 behind a Network Load Balancer (NLB). For compliance reasons, the application should only be accessible to the subsidiaries and should not be available on the public internet. To meet the compliance requirements for restricted access, the Engineer has received the public and private CIDR block ranges for each subsidiary.
What solution should the Engineer use to implement the appropriate access restrictions for the application?
- Create a NACL to allow access on TCP port 443 from the 1,500 subsidiary CIDR block ranges. Associate the NACL to both the NLB and EC2 instances
- Create an AWS security group to allow access on TCP port 443 from the 1,500 subsidiary CIDR block ranges. Associate the security group to the NLB. Create a second security group for EC2 instances with access on TCP port 443 from the NLB security group.
- Create an AWS PrivateLink endpoint service in the parent company account attached to the NLB. Create an AWS security group for the instances to allow access on TCP port 443 from the AWS PrivateLink endpoint. Use AWS PrivateLink interface endpoints in the 1,500 subsidiary AWS accounts to connect to the data processing application.
- Create an AWS security group to allow access on TCP port 443 from the 1,500 subsidiary CIDR block ranges. Associate the security group with EC2 instances.
-
To meet regulatory requirements, a Security Engineer needs to implement an IAM policy that restricts the use of AWS services to the us-east-1 Region.
What policy should the Engineer implement?
-
A company uses user data scripts that contain sensitive information to bootstrap Amazon EC2 instances. A Security Engineer discovers that this sensitive information is viewable by people who should not have access to it.
What is the MOST secure way to protect the sensitive information used to bootstrap the instances?
- Store the scripts in the AMI and encrypt the sensitive data using AWS KMS. Use the instance role profile to control access to the KMS keys needed to decrypt the data.
- Store the sensitive data in AWS Systems Manager Parameter Store using the encrypted string parameter and assign the GetParameters permission to the EC2 instance role.
- Externalize the bootstrap scripts in Amazon S3 and encrypt them using AWS KMS. Remove the scripts from the instance and clear the logs after the instance is configured.
- Block user access of the EC2 instance’s metadata service using IAM policies. Remove all scripts and clear the logs after execution.
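The Parameter Store option can be sketched as two pieces: the request a bootstrap script would make, and the IAM statement the instance role needs. The parameter name and ARN below are hypothetical.

```python
# Sketch of the Parameter Store approach (names/ARNs are placeholders).

parameter_name = "/app/bootstrap/db-password"  # hypothetical parameter

# Parameters for the ssm get_parameter call the bootstrap code would make;
# WithDecryption asks Parameter Store to decrypt the SecureString via KMS.
get_parameter_request = {"Name": parameter_name, "WithDecryption": True}

# Minimal IAM statement for the EC2 instance role:
instance_role_statement = {
    "Effect": "Allow",
    "Action": "ssm:GetParameters",
    "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/app/bootstrap/*",
}
```

Because the secret is fetched at boot time rather than embedded in user data, it never appears in the instance's user-data attribute.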
-
A company is building a data lake on Amazon S3. The data consists of millions of small files containing sensitive information. The Security team has the following requirements for the architecture:
• Data must be encrypted in transit.
• Data must be encrypted at rest.
• The bucket must be private, but if the bucket is accidentally made public, the data must remain confidential.
Which combination of steps would meet the requirements? (Choose two.)
- Enable AES-256 encryption using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) on the S3 bucket.
- Enable default encryption with server-side encryption with AWS KMS-managed keys (SSE-KMS) on the S3 bucket.
- Add a bucket policy that includes a deny if a PutObject request does not include aws:SecureTransport.
- Add a bucket policy with aws:SourceIp to Allow uploads and downloads from the corporate intranet only.
- Add a bucket policy that includes a deny if a PutObject request does not include s3:x-amz-server-side-encryption: “aws:kms”.
- Enable Amazon Macie to monitor and act on changes to the data lake’s S3 bucket.
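The deny-condition options above can be sketched as bucket-policy statements. The bucket name below is hypothetical; the condition keys are the ones named in the options.

```python
# Sketch: bucket-policy Deny statements for insecure transport and for
# uploads that skip SSE-KMS. Bucket name is a placeholder.
import json

bucket_arn = "arn:aws:s3:::example-datalake-bucket"

deny_insecure_transport = {
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [bucket_arn, bucket_arn + "/*"],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}

deny_unencrypted_put = {
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": bucket_arn + "/*",
    "Condition": {
        "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
    },
}

policy = {
    "Version": "2012-10-17",
    "Statement": [deny_insecure_transport, deny_unencrypted_put],
}
print(json.dumps(policy, indent=2))
```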
Explanation:
Default bucket encryption with SSE-KMS protects the data at rest, and it also keeps the data confidential if the bucket is accidentally made public: reading an object additionally requires kms:Decrypt permission on the KMS key, which principals outside the account do not have.
-
A Security Engineer discovered a vulnerability in an application running on Amazon ECS. The vulnerability allowed attackers to install malicious code. Analysis of the code shows it exfiltrates data on port 5353 in batches at random time intervals.
While the code of the containers is being patched, how can Engineers quickly identify all compromised hosts and stop the egress of data on port 5353?
- Enable AWS Shield Advanced and AWS WAF. Configure an AWS WAF custom filter for egress traffic on port 5353
- Enable Amazon Inspector on Amazon ECS and configure a custom assessment to evaluate containers that have port 5353 open. Update the NACLs to block port 5353 outbound.
- Create an Amazon CloudWatch custom metric on the VPC Flow Logs identifying egress traffic on port 5353. Update the NACLs to block port 5353 outbound.
- Use Amazon Athena to query AWS CloudTrail logs in Amazon S3 and look for any traffic on port 5353. Update the security groups to block port 5353 outbound.
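The flow-log analysis behind the CloudWatch-metric option can be sketched as follows; the records below are fabricated examples reduced to the relevant default flow-log fields.

```python
# Sketch: identify hosts with accepted egress traffic to port 5353.
sample_flow_logs = [
    # (srcaddr, dstaddr, dstport, action) -- fabricated sample records
    ("10.0.1.10", "203.0.113.50", 5353, "ACCEPT"),
    ("10.0.1.11", "203.0.113.51", 443, "ACCEPT"),
    ("10.0.1.12", "198.51.100.9", 5353, "ACCEPT"),
]

def compromised_hosts(records):
    """Return source addresses with accepted egress on port 5353."""
    return sorted({src for src, _dst, port, action in records
                   if port == 5353 and action == "ACCEPT"})

print(compromised_hosts(sample_flow_logs))  # ['10.0.1.10', '10.0.1.12']
```

In practice this logic would live in a CloudWatch Logs metric filter over the VPC Flow Logs group, while a NACL rule blocks port 5353 outbound at the subnet boundary.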
-
An Amazon EC2 instance is denied access to a newly created AWS KMS CMK used for decrypt actions. The environment has the following configuration:
• The instance is allowed the kms:Decrypt action in its IAM role for all resources
• The AWS KMS CMK status is set to enabled
• The instance can communicate with the KMS API using a configured VPC endpoint
What is causing the issue?
- The kms:GenerateDataKey permission is missing from the EC2 instance’s IAM role
- The ARN tag on the CMK contains the EC2 instance’s ID instead of the instance’s ARN
- The kms:Encrypt permission is missing from the EC2 IAM role
- The KMS CMK key policy that enables IAM user permissions is missing
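The key-policy statement referred to in the last option is the default statement that delegates permission management to IAM; without it, IAM allows (including the instance role's kms:Decrypt) have no effect on the CMK. A minimal sketch, with a placeholder account ID:

```python
# Sketch of the default "Enable IAM User Permissions" key-policy statement
# (account ID is a placeholder).
enable_iam_permissions = {
    "Sid": "Enable IAM User Permissions",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "kms:*",
    "Resource": "*",
}
```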
Explanation:
In a key policy, you use "*" for the resource, which means "this CMK." A key policy applies only to the CMK it is attached to.
-
A company has enabled Amazon GuardDuty in all Regions as part of its security monitoring strategy. In one of the VPCs, the company hosts an Amazon EC2 instance working as an FTP server that is contacted by a high number of clients from multiple locations. This is identified by GuardDuty as a brute force attack due to the high number of connections that happen every hour.
The finding has been flagged as a false positive. However, GuardDuty keeps raising the issue. A Security Engineer has been asked to improve the signal-to-noise ratio. The Engineer needs to ensure that changes do not compromise the visibility of potential anomalous behavior.
How can the Security Engineer address the issue?
- Disable the FTP rule in GuardDuty in the Region where the FTP server is deployed.
- Add the FTP server to a trusted IP list and deploy it to GuardDuty to stop receiving the notifications
- Use GuardDuty filters with auto archiving enabled to close the findings
- Create an AWS Lambda function that closes the finding whenever a new occurrence is reported
Explanation:
Trusted IP lists consist of IP addresses that you have whitelisted for secure communication with your AWS infrastructure and applications. GuardDuty does not generate findings for IP addresses on trusted IP lists. At any given time, you can have only one uploaded trusted IP list per AWS account per region.
-
What are the MOST secure ways to protect the AWS account root user of a recently opened AWS account? (Choose two.)
- Use the AWS account root user access keys instead of the AWS Management Console
- Enable multi-factor authentication for the AWS IAM users with the AdministratorAccess managed policy attached to them
- Enable multi-factor authentication for the AWS account root user
- Use AWS KMS to encrypt all AWS account root user and AWS IAM access keys and set automatic rotation to 30 days
- Do not create access keys for the AWS account root user; instead, create AWS IAM users
-
A company has decided to migrate sensitive documents from on-premises data centers to Amazon S3. Currently, the hard drives are encrypted to meet a compliance requirement regarding data encryption. The CISO wants to improve security by encrypting each file using a different key instead of a single key. Using a different key would limit the security impact of a single exposed key.
Which of the following requires the LEAST amount of configuration when implementing this approach?
- Place each file into a different S3 bucket. Set the default encryption of each bucket to use a different AWS KMS customer managed key.
- Put all the files in the same S3 bucket. Using S3 events as a trigger, write an AWS Lambda function to encrypt each file as it is added using different AWS KMS data keys.
- Use the S3 encryption client to encrypt each file individually using S3-generated data keys.
- Place all the files in the same S3 bucket. Use server-side encryption with AWS KMS-managed keys (SSE-KMS) to encrypt the data.
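Note that SSE-KMS already performs envelope encryption with a unique data key per object, which is what satisfies the one-key-per-file goal with no extra configuration. The sketch below illustrates the concept only; XOR stands in for a real cipher and is NOT a secure algorithm.

```python
# Conceptual sketch of envelope encryption: each object gets its own data
# key, and only the wrapped (encrypted) copy of that key is stored with
# the object. XOR is a pedagogical stand-in for a real cipher.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_object(plaintext: bytes, master_key: bytes):
    data_key = secrets.token_bytes(32)             # unique per object
    ciphertext = xor_bytes(plaintext, data_key)    # object under data key
    wrapped_key = xor_bytes(data_key, master_key)  # data key under the CMK
    return ciphertext, wrapped_key

master = secrets.token_bytes(32)
ct1, wk1 = encrypt_object(b"file one", master)
ct2, wk2 = encrypt_object(b"file two", master)
assert wk1 != wk2  # a different data key protects each file
```

Exposing one data key therefore compromises only one object, while the master key never leaves the (simulated) key service.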
-
A company has an encrypted Amazon S3 bucket. An Application Developer has an IAM policy that allows access to the S3 bucket, but the Application Developer is unable to access objects within the bucket.
What is a possible cause of the issue?
- The S3 ACL for the S3 bucket fails to explicitly grant access to the Application Developer
- The AWS KMS key for the S3 bucket fails to list the Application Developer as an administrator
- The S3 bucket policy fails to explicitly grant access to the Application Developer
- The S3 bucket policy explicitly denies access to the Application Developer
-
A Web Administrator for the website example.com has created an Amazon CloudFront distribution for dev.example.com, with a requirement to configure HTTPS using a custom TLS certificate imported to AWS Certificate Manager.
Which combination of steps is required to ensure availability of the certificate in the CloudFront console? (Choose two.)
- Call UploadServerCertificate with /cloudfront/dev/ in the path parameter.
- Import the certificate with a 4,096-bit RSA public key.
- Ensure that the certificate, private key, and certificate chain are PKCS #12-encoded.
- Import the certificate in the us-east-1 (N. Virginia) Region.
- Ensure that the certificate, private key, and certificate chain are PEM-encoded.
-
A Security Engineer has discovered that, although encryption was enabled on the Amazon S3 bucket examplebucket, anyone who has access to the bucket has the ability to retrieve the files. The Engineer wants to limit access so that each IAM user can access only an assigned folder.
What should the Security Engineer do to achieve this?
- Use envelope encryption with the AWS-managed CMK aws/s3.
- Create a customer-managed CMK with a key policy granting “kms:Decrypt” based on the “${aws:username}” variable.
- Create a customer-managed CMK for each user. Add each user as a key user in their corresponding key policy.
- Change the applicable IAM policy to grant S3 access to “Resource”: “arn:aws:s3:::examplebucket/${aws:username}/*”
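The policy-variable approach in the last option can be sketched as an IAM statement; the bucket name is from the question, and the action list is illustrative.

```python
# Sketch: scope each IAM user to their own prefix via ${aws:username}.
# IAM substitutes the variable at evaluation time, so one statement
# serves every user.
per_user_statement = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::examplebucket/${aws:username}/*",
}
```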
-
A Security Engineer manages AWS Organizations for a company. The Engineer would like to restrict AWS usage to allow Amazon S3 only in one of the organizational units (OUs). The Engineer adds the following SCP to the OU:
The next day, API calls to AWS IAM appear in AWS CloudTrail logs in an account under that OU.
How should the Security Engineer resolve this issue?
- Move the account to a new OU and deny IAM:* permissions.
- Add a Deny policy for all non-S3 services at the account level.
- Change the policy to:
- Detach the default FullAWSAccess SCP.