DOP-C01 : AWS DevOps Engineer Professional : Part 13
-
A company has an application deployed using Amazon ECS with data stored in an Amazon DynamoDB table. The company wants the application to fail over to another Region in a disaster recovery scenario. The application must also efficiently recover from any accidental data loss events. The RPO for the application is 1 hour and the RTO is 2 hours.
Which highly available solution should a DevOps engineer recommend?
- Change the configuration of the existing DynamoDB table. Enable this as a global table and specify the second Region that will be used. Enable DynamoDB point-in-time recovery.
- Enable DynamoDB Streams for the table and create an AWS Lambda function to write the stream data to an S3 bucket in the second Region. Schedule a job for every 2 hours to use AWS Data Pipeline to restore the database to the failover Region.
- Export the DynamoDB table every 2 hours using AWS Data Pipeline to an Amazon S3 bucket in the second Region. Use Data Pipeline in the second Region to restore the export from S3 into the second DynamoDB table.
- Use AWS DMS to replicate the data every hour. Set the original DynamoDB table as the source and the new DynamoDB table as the target.
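For the global-table option above, the following is a minimal boto3 sketch of enabling point-in-time recovery and adding a replica Region to an existing table (global tables version 2019.11.21). The table name and Regions are placeholders.

```python
import boto3

# Sketch only: enable point-in-time recovery and add a replica Region for an
# existing DynamoDB table. Table name and Regions are placeholders.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```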
-
A company is implementing a well-architected design for its globally accessible API stack. The design needs to ensure both high reliability and fast response times for users located in North America and Europe.
The API stack contains the following three tiers:
– Amazon API Gateway
– AWS Lambda
– Amazon DynamoDB
Which solution will meet the requirements?
- Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB table in the same Region as the Lambda function.
- Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using latency-based routing and health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB global table.
- Configure Amazon Route 53 to point to API Gateway in North America, create a disaster recovery API in Europe, and configure both APIs to forward requests to the Lambda functions in that Region. Retrieve the data from a DynamoDB global table. Deploy a Lambda function to check the North America API health every 5 minutes. In the event of a failure, update Route 53 to point to the disaster recovery API.
- Configure Amazon Route 53 to point to API Gateway API in North America using latency-based routing. Configure the API to forward requests to the Lambda function in the Region nearest to the user. Configure the Lambda function to retrieve and update the data in a DynamoDB table.
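To illustrate the latency-based routing option, the sketch below upserts a single latency record with a health check for the North America regional API endpoint; a matching record would be created for Europe. The hosted zone ID, domain names, and health check ID are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Sketch only: one latency-based record pointing at the North America regional
# API Gateway endpoint; a second record with Region "eu-west-1" and its own
# health check would cover Europe. All IDs and names are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "north-america",
                    "Region": "us-east-1",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "abc123.execute-api.us-east-1.amazonaws.com"}
                    ],
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            }
        ]
    },
)
```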
-
A company wants to migrate its content sharing web application hosted on Amazon EC2 to a serverless architecture. The company currently deploys changes to its application by creating a new Auto Scaling group of EC2 instances and a new Elastic Load Balancer, and then shifting the traffic away using an Amazon Route 53 weighted routing policy.
For its new serverless application, the company is planning to use Amazon API Gateway and AWS Lambda. The company will need to update its deployment processes to work with the new application. It will also need to retain the ability to test new features on a small number of users before rolling the features out to the entire user base.
Which deployment strategy will meet these requirements?
- Use AWS CDK to deploy API Gateway and Lambda functions. When code needs to be changed, update the AWS CloudFormation stack and deploy the new version of the APIs and Lambda functions. Use a Route 53 failover routing policy for the canary release strategy.
- Use AWS CloudFormation to deploy API Gateway and Lambda functions using Lambda function versions. When code needs to be changed, update the CloudFormation stack with the new Lambda code and update the API versions using a canary release strategy. Promote the new version when testing is complete.
- Use AWS Elastic Beanstalk to deploy API Gateway and Lambda functions. When code needs to be changed, deploy a new version of the API and Lambda functions. Shift traffic gradually using an Elastic Beanstalk blue/green deployment.
- Use AWS OpsWorks to deploy API Gateway in the service layer and Lambda functions in a custom layer. When code needs to be changed, use OpsWorks to perform a blue/green deployment and shift traffic gradually.
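For the canary release strategy mentioned above, API Gateway supports stage-level canary settings that shift a small share of traffic to a new deployment. A minimal boto3 sketch, assuming a REST API with a placeholder ID and a prod stage:

```python
import boto3

apigateway = boto3.client("apigateway")

# Sketch only: deploy the new API version as a canary that receives 10 percent
# of prod traffic. The REST API ID and stage name are placeholders.
apigateway.create_deployment(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    canarySettings={"percentTraffic": 10.0},
)

# After the small-user-base test succeeds, the canary is promoted (for example,
# with update_stage patch operations) so that all traffic uses the new version.
```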
-
After a recent audit, a company decided to implement a new disaster recovery strategy for its Amazon S3 data and its MySQL database running on Amazon EC2. Management wants the ability to recover to a secondary AWS Region with an RPO under 5 seconds and an RTO under 1 minute.
Which actions will meet the requirements while MINIMIZING operational overhead? (Choose two.)
- Modify the application to write to both Regions at the same time when uploading objects to Amazon S3.
- Migrate the database to an Amazon Aurora multi-master in the primary and secondary Regions.
- Migrate the database to Amazon RDS with a read replica in the secondary Region.
- Migrate to Amazon Aurora Global Database.
- Set up S3 cross-Region replication with a replication SLA for the S3 buckets where objects are being put.
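The last option refers to S3 Replication Time Control (RTC), which adds a replication SLA to cross-Region replication. A hedged boto3 sketch of such a rule, with placeholder bucket names and role ARN:

```python
import boto3

s3 = boto3.client("s3")

# Sketch only: cross-Region replication rule with Replication Time Control
# enabled. Bucket names and the replication role ARN are placeholders.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "dr-replication",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::destination-bucket",
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            }
        ],
    },
)
```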
-
A DevOps engineer is scheduling legacy AWS KMS keys for deletion and has created a remediation AWS Lambda function that will re-enable a key if necessary. The engineer wants to automate this process with available AWS CloudTrail data so, if a key scheduled for deletion is in use, it will be re-enabled.
Which solution enables this automation?
- Create an Amazon CloudWatch Logs metric filter and alarm for KMS events with an error message. Set the remediation Lambda function as the target of the alarm.
- Create an Amazon CloudWatch Logs metric filter and alarm for KMS events with an error message. Create an Amazon SNS topic as the target of the alarm. Subscribe the remediation Lambda function to the SNS topic.
- Create an Amazon CloudWatch Events rule pattern looking for KMS service events with an error message. Create an Amazon SNS topic as the target of the rule. Subscribe the remediation Lambda function to the SNS topic.
- Use Amazon CloudTrail to alert for KMS service events with an error message. Set the remediation Lambda function as the target of the rule.
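One possible wiring of the CloudWatch Events rule option is sketched below: the rule matches KMS API calls recorded by CloudTrail that failed, targets an SNS topic, and the remediation Lambda function subscribes to that topic. The error code, topic ARN, and function ARN are assumptions for illustration.

```python
import json
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# Sketch only: match KMS API calls recorded by CloudTrail that failed because
# the key is unusable. The errorCode value shown is an assumption and should be
# confirmed against real CloudTrail events for keys pending deletion.
pattern = {
    "source": ["aws.kms"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"errorCode": ["KMSInvalidStateException"]},
}

events.put_rule(Name="kms-pending-deletion-usage", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="kms-pending-deletion-usage",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:111122223333:kms-remediation"}],
)

# The remediation Lambda function is subscribed to the topic (it also needs a
# resource-based permission allowing SNS to invoke it).
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:kms-remediation",
    Protocol="lambda",
    Endpoint="arn:aws:lambda:us-east-1:111122223333:function:reenable-kms-key",
)
```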
-
A DevOps engineer is building a centralized CI/CD pipeline using AWS CodeBuild, AWS CodeDeploy, and Amazon S3. The engineer is required to have least privilege access and individual encryption at rest for all artifacts in Amazon S3. The engineer must be able to prune old artifacts without the ability to download or read them.
The engineer has already completed the following steps:
1. Created a unique AWS Key Management Service (AWS KMS) CMK and S3 bucket for each project’s builds.
2. Updated the S3 bucket policy to only allow uploads that use the associated KMS encryption.
Which final step should be taken to meet these requirements?
- Update the attached IAM policies to allow access to the appropriate KMS key from the CodeDeploy role where the application will be deployed.
- Update the attached IAM policies to allow access to the appropriate KMS key from the EC2 instance roles where the application will be deployed.
- Update the CMK’s key policy to allow access to the appropriate KMS key from the CodeDeploy role where the application will be deployed.
- Update the CMK’s key policy to allow access to the appropriate KMS key from the EC2 instance roles where the application will be deployed.
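If the key policy route is taken, the added statement would grant the consuming role just enough access to decrypt the downloaded artifacts. A sketch with placeholder account ID, role name, and key ID; the statement is merged into the existing key policy rather than replacing it.

```python
import json
import boto3

# Sketch only: append a decrypt-only statement for the instance role to the
# project's CMK policy. Account ID, role name, and key ID are placeholders.
statement = {
    "Sid": "AllowArtifactDecryptFromInstanceRole",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-instance-role"},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",
}

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append(statement)
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```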
-
A DevOps engineer is designing a multi-Region disaster recovery strategy for an application requiring an RPO of 1 hour and RTO of 4 hours. The application is deployed with an AWS CloudFormation template that creates an Application Load Balancer, Amazon EC2 instances in an Auto Scaling group, and an Amazon RDS Multi-AZ DB instance with 20 GB of allocated storage. The AMI of the application instance does not contain data and has been copied to the destination Region.
Which combination of actions will satisfy the recovery objectives at the LOWEST cost? (Choose two.)
- Launch an RDS DB instance in the failover Region and use AWS DMS to configure ongoing replication from the source database.
- Schedule an AWS Lambda function to take a snapshot of the database every hour and copy the snapshot to the failover Region.
- Upon failover, update the CloudFormation stack in the failover Region to update the Auto Scaling group from one running instance to the desired number of instances. When the stack update is complete, change the DNS records to point to the failover Region’s Elastic Load Balancer.
- Upon failover, launch the CloudFormation template in the failover Region with the snapshot ID as an input parameter. When the stack creation is complete, change the DNS records to point to the failover Region’s Elastic Load Balancer.
- Use the built-in RDS automated backups and set up an event with Amazon CloudWatch Events that triggers an AWS Lambda function to copy the snapshot to the failover Region.
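For the snapshot-copy options, a scheduled Lambda function along these lines could copy the most recent automated snapshot to the failover Region. The instance identifier and Regions are placeholders, and encrypted snapshots would also need a destination KmsKeyId.

```python
import boto3

# Sketch only: scheduled Lambda handler that copies the newest automated RDS
# snapshot to the failover Region. Identifiers and Regions are placeholders.
def handler(event, context):
    source_region = "us-east-1"
    source = boto3.client("rds", region_name=source_region)
    target = boto3.client("rds", region_name="us-west-2")

    snapshots = source.describe_db_snapshots(
        DBInstanceIdentifier="app-db", SnapshotType="automated"
    )["DBSnapshots"]
    # Ignore snapshots still being created (they have no creation time yet).
    latest = max(
        (s for s in snapshots if "SnapshotCreateTime" in s),
        key=lambda s: s["SnapshotCreateTime"],
    )

    target.copy_db_snapshot(
        SourceDBSnapshotIdentifier=latest["DBSnapshotArn"],
        TargetDBSnapshotIdentifier="dr-" + latest["DBSnapshotIdentifier"].replace(":", "-"),
        SourceRegion=source_region,
    )
```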
-
A company is adopting serverless computing and is migrating some of its existing applications to AWS Lambda. A DevOps engineer must come up with an automated deployment strategy using AWS CodePipeline that should include proper version controls, branching strategies, and rollback methods.
Which combination of steps should the DevOps engineer follow when setting up the pipeline? (Choose three.)
- Use Amazon S3 as the source code repository.
- Use AWS CodeCommit as the source code repository.
- Use AWS CloudFormation to create an AWS Serverless Application Model (AWS SAM) template for deployment.
- Use AWS CodeBuild to create an AWS Serverless Application Model (AWS SAM) template for deployment.
- Use AWS CloudFormation to deploy the application.
- Use AWS CodeDeploy to deploy the application.
-
After a data leakage incident that led to thousands of stolen user profiles, a compliance officer is demanding automatic, auditable security policy checks for all of the company’s data stores, starting with public access of Amazon S3 buckets.
Which solution will accomplish this with the LEAST amount of effort?
- Create a custom rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Use a remediation action to trigger an AWS Lambda function that automatically disables public access.
- Create a custom rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Trigger an AWS Lambda function that automatically disables public access.
- Use a managed rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Configure a remediation action that automatically disables public access.
- Use a managed rule in AWS Config triggered by an S3 bucket configuration change that detects when the bucket policy or bucket ACL allows public read access. Configure an AWS Lambda function that automatically disables public access.
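The managed-rule option corresponds to the s3-bucket-public-read-prohibited managed rule paired with an automatic remediation action. The sketch below assumes the AWS-DisableS3BucketPublicReadWrite SSM document and a placeholder automation role; rule and document names should be confirmed before relying on them.

```python
import boto3

config = boto3.client("config")

# Sketch only: managed rule plus SSM-document-based automatic remediation.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "s3-bucket-public-read-prohibited",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-DisableS3BucketPublicReadWrite",
            "Automatic": True,
            "Parameters": {
                "AutomationAssumeRole": {
                    "StaticValue": {"Values": ["arn:aws:iam::111122223333:role/config-remediation"]}
                },
                "S3BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            },
            "MaximumAutomaticAttempts": 5,
            "RetryAttemptSeconds": 60,
        }
    ]
)
```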
-
A company uses AWS KMS with CMKs and manual key rotation to meet regulatory compliance requirements. The security team wants to be notified when any keys have not been rotated after 90 days.
Which solution will accomplish this?
- Configure AWS KMS to publish to an Amazon SNS topic when keys are more than 90 days old.
- Configure an Amazon CloudWatch Events event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon SNS topic.
- Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old.
- Configure AWS Security Hub to publish to an Amazon SNS topic when keys are more than 90 days old.
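A custom AWS Config rule for the 90-day check could be backed by a Lambda function like the sketch below. Because rotation is manual, key age is used here as a stand-in for time since last rotation; how rotation is actually tracked (for example, an alias re-pointing date or a tag) is an assumption that would need to be adapted.

```python
import json
from datetime import datetime, timezone

import boto3

config = boto3.client("config")

# Sketch only: custom Config rule handler that flags CMKs older than 90 days.
def handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    created = datetime.fromisoformat(item["resourceCreationTime"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days

    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": item["resourceType"],
                "ComplianceResourceId": item["resourceId"],
                "ComplianceType": "NON_COMPLIANT" if age_days > 90 else "COMPLIANT",
                "OrderingTimestamp": datetime.fromisoformat(
                    item["configurationItemCaptureTime"].replace("Z", "+00:00")
                ),
            }
        ],
        ResultToken=event["resultToken"],
    )
```

A NON_COMPLIANT evaluation can then drive the SNS notification, for example through a rule on Config compliance change events.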
-
A company develops and maintains a web application using Amazon EC2 instances and an Amazon RDS for SQL Server DB instance in a single Availability Zone. The resources need to run only when new deployments are being tested using AWS CodePipeline. Testing occurs one or more times a week and each test takes 2-3 hours to run. A DevOps engineer wants a solution that does not change the architecture components.
Which solution will meet these requirements in the MOST cost-effective manner?
- Convert the RDS database to an Amazon Aurora Serverless database. Use an AWS Lambda function to start and stop the EC2 instances before and after tests.
- Put the EC2 instances into an Auto Scaling group. Schedule scaling to run at the start of the deployment tests.
- Replace the EC2 instances with EC2 Spot Instances and the RDS database with an RDS Reserved Instance.
- Subscribe Amazon CloudWatch Events to CodePipeline to trigger AWS Systems Manager Automation documents that start and stop all EC2 and RDS instances before and after deployment tests.
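The Systems Manager option relies on AWS-owned Automation documents to start and stop the resources around each test run. The document and parameter names below are the commonly used ones and should be verified; instance identifiers are placeholders.

```python
import boto3

ssm = boto3.client("ssm")

# Sketch only: start the test EC2 instance and RDS DB instance before a
# deployment test; the matching AWS-StopEC2Instance and AWS-StopRdsInstance
# documents would run afterwards. Identifiers are placeholders.
ssm.start_automation_execution(
    DocumentName="AWS-StartEC2Instance",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)
ssm.start_automation_execution(
    DocumentName="AWS-StartRdsInstance",
    Parameters={"InstanceId": ["test-sql-server-db"]},
)
```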
-
A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline to deploy an application to Amazon ECS using the blue/green deployment model. The company wants to implement scripts to test the green version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If errors are discovered during these tests, the application must be rolled back.
Which strategy will meet these requirements?
- Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create an execution environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
- Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to execute an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
- Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to trigger rollback.
- Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.
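For the AfterAllowTestTraffic hook approach, the Lambda function receives the deployment ID and hook execution ID and reports a status back to CodeDeploy; reporting Failed triggers the rollback. In the sketch below, run_tests() is a placeholder for the team's actual test scripts.

```python
import boto3

codedeploy = boto3.client("codedeploy")

def run_tests():
    # Placeholder: call the green task set through the test listener and
    # validate responses within the 5-minute budget.
    return True

# Sketch only: Lambda handler referenced by the AfterAllowTestTraffic hook in
# the AppSpec file. Reporting "Failed" causes CodeDeploy to roll back.
def handler(event, context):
    status = "Succeeded" if run_tests() else "Failed"
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```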
-
A company has 100 GB of log data in an Amazon S3 bucket stored in .csv format. SQL developers want to query this data and generate graphs to visualize it. They also need an efficient, automated way to store metadata from the .csv file.
Which combination of steps should be taken to meet these requirements with the LEAST amount of effort? (Choose three.)
- Filter the data through AWS X-Ray to visualize the data.
- Filter the data through Amazon QuickSight to visualize the data.
- Query the data with Amazon Athena.
- Query the data with Amazon Redshift.
- Use AWS Glue as the persistent metadata store.
- Use Amazon S3 as the persistent metadata store.
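Putting the Glue and Athena pieces together: a crawler can populate the Data Catalog with the .csv schema, Athena can query the data in place, and Amazon QuickSight can use Athena as its data source for the graphs. In the sketch below, the bucket paths, database, table, and role names are placeholders.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Sketch only: crawl the .csv data into the Glue Data Catalog, then query it
# with Athena. Names, paths, and the crawler role are placeholders.
glue.create_crawler(
    Name="log-data-crawler",
    Role="arn:aws:iam::111122223333:role/glue-crawler-role",
    DatabaseName="logs",
    Targets={"S3Targets": [{"Path": "s3://example-log-bucket/csv/"}]},
)
glue.start_crawler(Name="log-data-crawler")

athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS requests FROM logs.access_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```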
-
A company is using AWS CodePipeline to deploy an application. A recent policy change requires that a member of the company’s security team sign off on any application changes before they are deployed into production. The approval should be recorded and retained.
Which combination of actions will meet these new requirements? (Choose two.)
- Configure CodePipeline with Amazon CloudWatch Logs to retain data.
- Configure CodePipeline to deliver action logs to Amazon S3.
- Create an AWS CloudTrail trail to deliver logs to Amazon S3.
- Create a custom CodePipeline action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage custom CodePipeline actions.
- Create a manual approval CodePipeline action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
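The manual approval route typically pairs the approval action with an IAM policy that lets the security team record the decision. A sketch of such a statement, with placeholder pipeline, stage, and action names:

```python
# Sketch only: IAM policy letting the security team approve or reject the
# manual approval action. Pipeline, stage, and action names are placeholders.
approval_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "codepipeline:PutApprovalResult",
            "Resource": "arn:aws:codepipeline:us-east-1:111122223333:release-pipeline/Approve/SecuritySignOff",
        }
    ],
}
```

With a CloudTrail trail delivering logs to Amazon S3, each PutApprovalResult call is also retained for audit.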
-
An application’s users are encountering bugs immediately after Amazon API Gateway deployments. The development team deploys once or twice a day and uses a blue/green deployment strategy with custom health checks and automated rollbacks. The team wants to limit the number of users affected by deployment bugs and receive notifications when rollbacks are needed.
Which combination of steps should a DevOps engineer use to meet these requests? (Choose two.)
- Implement a blue/green strategy using path mappings.
- Implement a canary deployment strategy.
- Implement a rolling deployment strategy using multiple stages.
- Use Amazon CloudWatch alarms to notify the development team.
- Use Amazon CloudWatch Events to notify the development team.
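If the canary-plus-alarms combination is used, a CloudWatch alarm on the API's error metrics can provide the notification. The sketch below alarms on 5XX errors for a placeholder API and stage and publishes to a placeholder SNS topic; the threshold is an arbitrary example.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Sketch only: alarm on the stage's 5XX errors and notify the team via SNS.
# API name, stage, threshold, and topic ARN are placeholder examples.
cloudwatch.put_metric_alarm(
    AlarmName="api-canary-5xx",
    Namespace="AWS/ApiGateway",
    MetricName="5XXError",
    Dimensions=[
        {"Name": "ApiName", "Value": "orders-api"},
        {"Name": "Stage", "Value": "prod"},
    ],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:deploy-alerts"],
)
```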
-
A DevOps engineer has been tasked with ensuring that all Amazon S3 buckets, except for those with the word “public” in the name, allow access only to authorized users by using S3 bucket policies. The security team wants to be notified when a bucket is created without the proper policy, and wants the policy to be updated automatically.
Which solution will meet these requirements?
- Create a custom AWS Config rule that will trigger an AWS Lambda function when an S3 bucket is created or updated. Use the Lambda function to look for S3 buckets that should be private, but that do not have a bucket policy that enforces privacy. When such a bucket is found, invoke a remediation action and use Amazon SNS to notify the security team.
- Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers when an S3 bucket is created. Use an AWS Lambda function to determine whether the bucket should be private. If the bucket should be private, update the PublicAccessBlock configuration. Configure a second EventBridge (CloudWatch Events) rule to notify the security team using Amazon SNS when PutBucketPolicy is called.
- Create an Amazon S3 event notification that triggers when an S3 bucket is created that does not have the word “public” in the name. Define an AWS Lambda function as a target for this notification and use the function to apply a new default policy to the S3 bucket. Create an additional notification with the same filter and use Amazon SNS to send an email to the security team.
- Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers when a new object is created in a bucket that does not have the word “public” in the name. Target and use an AWS Lambda function to update the PublicAccessBlock configuration. Create an additional notification with the same filter and use Amazon SNS to send an email to the security team.
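One way the EventBridge-plus-Lambda pattern could look is sketched below: the function is the target of a rule matching CloudTrail CreateBucket events, blocks public access on buckets that should be private, and notifies the security team. The topic ARN is a placeholder, and notifying directly from the function is just one possible wiring.

```python
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Sketch only: Lambda target for an EventBridge rule matching CloudTrail
# CreateBucket events. The SNS topic ARN is a placeholder.
def handler(event, context):
    bucket = event["detail"]["requestParameters"]["bucketName"]
    if "public" in bucket:
        return  # buckets with "public" in the name are allowed to stay public

    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:s3-policy-alerts",
        Message=f"Blocked public access on newly created bucket {bucket}",
    )
```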
-
A company is using AWS to deploy an application. The development team must automate the deployments. The team has created an AWS CodePipeline pipeline to deploy the application to Amazon EC2 instances using AWS CodeDeploy after it has been built using AWS CodeBuild.
The team wants to add automated testing to the pipeline to confirm that the application is healthy before deploying the code to the EC2 instances. The team also requires a manual approval action before the application is deployed, even if the tests are successful. The testing and approval must be accomplished at the lowest costs, using the simplest management solution.
Which solution will meet these requirements?
- Create a manual approval action after the build action of the pipeline. Use Amazon SNS to inform the team of the stage being triggered. Next, add a test action using CodeBuild to perform the required tests. At the end of the pipeline, add a deploy action to deploy the application to the next stage.
- Create a test action after the CodeBuild build of the pipeline. Configure the action to use CodeBuild to perform the required tests. If these tests are successful, mark the action as successful. Add a manual approval action that uses Amazon SNS to notify the team, and add a deploy action to deploy the application to the next stage.
- Create a new pipeline that uses a source action that gets the code from the same repository as the first pipeline. Add a deploy action to deploy the code to a test environment. Use a test action using AWS Lambda to test the deployment. Add a manual approval action by using Amazon SNS to notify the team, and add a deploy action to deploy the application to the next stage.
- Create a test action after the build action. Use a Jenkins server on Amazon EC2 to perform the required tests and mark the action as successful if the tests pass. Create a manual approval action that uses Amazon SQS to notify the team and add a deploy action to deploy the application to the next stage.
-
A company is developing a web application’s infrastructure using AWS CloudFormation. The database engineering team maintains the database resources in a CloudFormation template, and the software development team maintains the web application resources in a separate CloudFormation template. As the scope of the application grows, the software development team needs to use resources maintained by the database engineering team. However, both teams have their own review and lifecycle management processes that they want to keep. Both teams also require resource-level change-set reviews. The software development team would like to deploy changes to this template using their CI/CD pipeline.
Which solution will meet these requirements?
- Create a stack export from the database CloudFormation template and import those references into the web application CloudFormation template.
- Create a CloudFormation nested stack to make cross-stack resource references and parameters available in both stacks.
- Create a CloudFormation stack set to make cross-stack resource references and parameters available in both stacks.
- Create input parameters in the web application CloudFormation template and pass resource names and IDs from the database stack.
-
An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders. The order processing system consists of an AWS Lambda function using reserved concurrency. The Lambda function processes order messages from an Amazon SQS queue and inserts processed orders into an Amazon DynamoDB table. The DynamoDB table has Auto Scaling enabled for read and write capacity.
Which actions will diagnose and resolve the delay? (Choose two.)
- Check the ApproximateAgeOfOldestMessage metric for the SQS queue and increase the Lambda function concurrency limit.
- Check the ApproximateAgeOfOldestMessage metric for the SQS queue and configure a redrive policy on the SQS queue.
- Check the NumberOfMessagesSent metric for the SQS queue and increase the SQS queue visibility timeout.
- Check the ThrottledWriteRequests metric for the DynamoDB table and increase the maximum write capacity units for the table’s Auto Scaling policy.
- Check the Throttles metric for the Lambda function and increase the Lambda function timeout.
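To make the diagnosis concrete, the sketch below reads the ApproximateAgeOfOldestMessage metric for a placeholder queue and, if messages are aging, raises the processing function's reserved concurrency. The queue name, function name, 5-minute threshold, and new limit are placeholder examples.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
lambda_client = boto3.client("lambda")

# Sketch only: check whether order messages are aging in the queue, then raise
# the order-processing function's reserved concurrency.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "orders"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Maximum"],
)

if any(point["Maximum"] > 300 for point in stats["Datapoints"]):
    lambda_client.put_function_concurrency(
        FunctionName="process-orders",
        ReservedConcurrentExecutions=100,
    )
```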
-
A DevOps engineer is implementing governance controls for a company that requires its infrastructure to be housed within the United States. The engineer must restrict which Regions can be used, and ensure an alert is sent as soon as possible if any activity outside the governance policy takes place. The controls should be automatically enabled on any new Region outside the United States.
Which combination of actions will meet these requirements? (Choose two.)
- Create an AWS Organizations SCP that denies access to all non-global services in non-US Regions. Attach the policy to the root of the organization.
- Configure AWS CloudTrail to send logs to Amazon CloudWatch Logs and enable it for all Regions. Use a CloudWatch Logs metric filter to send an alert on any service activity in non-US Regions.
- Use an AWS Lambda function that checks for AWS service activity and deploy it to all Regions. Write an Amazon CloudWatch Events rule that runs the Lambda function every hour, sending an alert if activity is found in a non-US Region.
- Use an AWS Lambda function to query Amazon Inspector to look for service activity in non-US Regions and send alerts if any activity is found.
- Write an SCP using the aws:RequestedRegion condition key limiting access to US Regions. Apply the policy to all users, groups, and roles.
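An SCP of the kind described in the last option generally denies requests outside the approved Regions while exempting global services that are evaluated in us-east-1. The Region list and service exceptions below are illustrative examples only.

```python
# Sketch only: SCP denying actions outside US Regions, with a carve-out for
# global services. Region list and service exceptions are examples.
region_restriction_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideUSRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "us-east-2", "us-west-1", "us-west-2"]
                }
            },
        }
    ],
}
```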