DOP-C01 : AWS DevOps Engineer Professional : Part 08
-
A DevOps Engineer is reviewing a system that uses Amazon EC2 instances in an Auto Scaling group. This system uses a configuration management tool that runs locally on each EC2 instance. Because of the volatility of the application load, new instances must be fully functional within 3 minutes of entering a running state. Current setup tasks include:
– Installing the configuration management agent – 2 minutes
– Installing the application framework – 15 minutes
– Copying configuration data from Amazon S3 – 2 minutes
– Running the configuration management agent to configure instances – 1 minute
– Deploying the application code from Amazon S3 – 2 minutes
How should the Engineer set up the system so it meets the launch time requirement?
- Trigger an AWS Lambda function from an Amazon CloudWatch Events rule when a new EC2 instance launches. Have the function install the configuration management agent and the application framework, pull configuration data from Amazon S3, run the agent to configure the instance, and deploy the application from S3.
- Write a bootstrap script to install the configuration management agent, install the application framework, pull configuration data from Amazon S3, run the agent to configure the instance, and deploy the application from S3.
- Build a custom AMI that includes the configuration management agent and application framework. Write a bootstrap script to pull configuration data from Amazon S3, run the agent to configure the instance, and deploy the application from S3.
- Build a custom AMI that includes the configuration management agent, application framework, and configuration data. Write a bootstrap script to run the agent to configure the instance and deploy the application from Amazon S3.
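For reference, a minimal sketch of the bootstrap step in the last option: with the agent, framework, and configuration data baked into the AMI, boot time only covers the 1-minute agent run and the 2-minute code deployment. The agent path, S3 bucket, and object key below are hypothetical placeholders.

```python
#!/usr/bin/env python3
"""User-data bootstrap sketch for the custom-AMI approach."""
import subprocess

import boto3

# Run the pre-installed configuration management agent (~1 minute).
# The agent binary and config path are hypothetical.
subprocess.run(["/opt/cm-agent/bin/agent", "--apply", "/etc/cm/config"],
               check=True)

# Deploy the application code from S3 (~2 minutes); names are placeholders.
boto3.client("s3").download_file("example-app-bucket",
                                 "releases/app-latest.tar.gz",
                                 "/tmp/app.tar.gz")
subprocess.run(["tar", "-xzf", "/tmp/app.tar.gz", "-C", "/opt/app"],
               check=True)
```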
-
The resources for a business-critical, three-tier web application are expressed in a series of AWS CloudFormation templates. The application is using Amazon RDS for data and Amazon ElastiCache for session state. Users have reported degraded performance in the application. A DevOps Engineer notices that the T2 instance type is being used for the application tier and CPU usage is at 100% in Amazon CloudWatch.
What process should the Engineer follow to restore operations with the LEAST amount of disruption to the end users?
- Write a new CloudFormation template to include Amazon CloudFront in the environment, launch the stack, and update the Amazon Route 53 A record
- Launch a new CloudFormation stack for the application tier using the M4 instance type, run acceptance tests against the new stack, and update the Amazon Route 53 A record
- Update the CloudFormation stack for the application tier using the T2 Unlimited option, run acceptance tests against the new stack, and update the Amazon Route 53 A record
- Launch a new CloudFormation stack for all tiers of the application in a different region, run acceptance tests against the new stack, and update the Amazon Route 53 A record
-
A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.
A DevOps Engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps Engineer believes the failures are due to database changes not having fully propagated before the Lambda function begins executing.
How should the DevOps Engineer overcome this?
- Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function
- Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond
- Add a BeforeInstall hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function
- Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services, such as the database, are not yet ready
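To illustrate the BeforeAllowTraffic option: for Lambda deployments, CodeDeploy lifecycle hooks are themselves Lambda functions that must report a result back to CodeDeploy. A minimal sketch follows; check_database_ready() is a hypothetical application-specific probe.

```python
import boto3

codedeploy = boto3.client("codedeploy")

def check_database_ready():
    # Hypothetical probe, e.g. verify a schema-version row has been written.
    return True

def handler(event, context):
    """BeforeAllowTraffic hook: traffic only shifts to the new Lambda
    version after this reports 'Succeeded' to CodeDeploy."""
    status = "Succeeded" if check_database_ready() else "Failed"
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```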
-
A mobile application running on eight Amazon EC2 instances is relying on a third-party API endpoint. The third-party service has a high failure rate because of limited capacity, which is expected to be resolved in a few weeks.
In the meantime, the mobile application developers have added a retry mechanism and are logging failed API requests. A DevOps Engineer must automate the monitoring of application logs and count the specific error messages; if there are more than 10 errors within a 1-minute window, the system must issue an alert.
How can the requirements be met with MINIMAL management overhead?
- Install the Amazon CloudWatch Logs agent on all instances to push the application logs to CloudWatch Logs. Use metric filters to count the error messages every minute, and trigger a CloudWatch alarm if the count exceeds 10 errors.
- Install the Amazon CloudWatch Logs agent on all instances to push the access logs to CloudWatch Logs. Create a CloudWatch Events rule to count the error messages every minute, and trigger a CloudWatch alarm if the count exceeds 10 errors.
- Install the Amazon CloudWatch Logs agent on all instances to push the application logs to CloudWatch Logs. Use a metric filter to generate a custom CloudWatch metric that records the number of failures and triggers a CloudWatch alarm if the custom metric reaches 10 errors in a 1-minute period.
- Deploy a custom script on all instances to check application logs regularly in a cron job. Count the number of error messages every minute, and push a data point to a custom CloudWatch metric. Trigger a CloudWatch alarm if the custom metric reaches 10 errors in a 1-minute period.
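A sketch of the metric-filter approach (third option): one API call creates the filter, one creates the alarm, and nothing else needs to be managed. Log group name, filter pattern, namespace, and SNS topic ARN are placeholders.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Metric filter: emit a count of 1 for each matching error line.
logs.put_metric_filter(
    logGroupName="/app/mobile-backend",          # hypothetical log group
    filterName="third-party-api-errors",
    filterPattern='"ThirdPartyAPIError"',        # hypothetical error string
    metricTransformations=[{
        "metricName": "ThirdPartyApiErrors",
        "metricNamespace": "MobileApp",
        "metricValue": "1",
    }],
)

# Alarm when the 1-minute Sum exceeds 10 errors.
cloudwatch.put_metric_alarm(
    AlarmName="third-party-api-error-rate",
    Namespace="MobileApp",
    MetricName="ThirdPartyApiErrors",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:ops-alerts"],  # placeholder
)
```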
-
A DevOps Engineer has several legacy applications that all generate different log formats. The Engineer must standardize the formats before writing them to Amazon S3 for querying and analysis.
How can this requirement be met at the LOWEST cost?
- Have the application send its logs to an Amazon EMR cluster and normalize the logs before sending them to Amazon S3
- Have the application send its logs to Amazon QuickSight, then use the Amazon QuickSight SPICE engine to normalize the logs. Do the analysis directly from Amazon QuickSight
- Keep the logs in Amazon S3 and use Amazon Redshift Spectrum to normalize the logs in place
- Use Amazon Kinesis Agent on each server to upload the logs and have Amazon Kinesis Data Firehose use an AWS Lambda function to normalize the logs before writing them to Amazon S3
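For the Firehose option, the normalization happens in a transformation Lambda that receives base64-encoded records and must return them with a recordId, result, and re-encoded data. A minimal sketch, where normalize() stands in for the per-format parsing logic:

```python
import base64
import json

def normalize(line):
    # Hypothetical: map each legacy format to a common shape.
    return {"message": line.strip()}

def handler(event, context):
    """Kinesis Data Firehose transformation: decode, normalize, re-encode."""
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"]).decode("utf-8")
        normalized = json.dumps(normalize(raw)) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # "ProcessingFailed" would send it to the error prefix
            "data": base64.b64encode(normalized.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```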
-
A company uses Amazon S3 to store proprietary information. The Development team creates buckets for new projects on a daily basis. The Security team wants to ensure that all existing and future buckets have encryption, logging, and versioning enabled. Additionally, no buckets should ever be publicly read or write accessible.
What should a DevOps Engineer do to meet these requirements?
- Enable AWS CloudTrail and configure automatic remediation using AWS Lambda.
- Enable AWS Config rules and configure automatic remediation using AWS Systems Manager documents.
- Enable AWS Trusted Advisor and configure automatic remediation using Amazon CloudWatch Events.
- Enable AWS Systems Manager and configure automatic remediation using Systems Manager documents.
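To make the Config-plus-remediation option concrete, here is a sketch for the encryption requirement; equivalent managed rules exist for logging, versioning, and public access. The rule name, role ARN, and remediation parameter shapes are assumptions to verify against the current AWS documentation.

```python
import boto3

config = boto3.client("config")

# Managed rule that flags buckets without server-side encryption.
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "s3-bucket-encryption-enabled",
    "Source": {"Owner": "AWS",
               "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"},
})

# Auto-remediate noncompliant buckets with an AWS-managed SSM document.
config.put_remediation_configurations(RemediationConfigurations=[{
    "ConfigRuleName": "s3-bucket-encryption-enabled",
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWS-EnableS3BucketEncryption",
    "Automatic": True,
    "MaximumAutomaticAttempts": 3,
    "RetryAttemptSeconds": 60,
    "Parameters": {
        "AutomationAssumeRole": {"StaticValue": {"Values": [
            "arn:aws:iam::111111111111:role/config-remediation"]}},  # placeholder
        "BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
        "SSEAlgorithm": {"StaticValue": {"Values": ["AES256"]}},
    },
}])
```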
-
A DevOps Engineer is researching the least-expensive way to implement an image batch processing cluster in AWS. The application cannot run in Docker containers and must run on Amazon EC2. The batch job stores checkpoint data on a Network File System (NFS) and can tolerate interruptions. Configuring the cluster software from a generic EC2 Linux image takes 30 minutes.
Which is the MOST cost-effective solution?
- Use Amazon EFS for checkpoint data. To complete the job, use an EC2 Auto Scaling group and an On-Demand pricing model to provision EC2 instances temporarily.
- Use GlusterFS on EC2 instances for checkpoint data. To run the batch job, configure EC2 instances manually. When the job completes, shut down the instances manually.
- Use Amazon EFS for checkpoint data. Use EC2 Fleet to launch EC2 Spot Instances, and use user data to configure the EC2 Linux instance on startup.
- Use Amazon EFS for checkpoint data. Use EC2 Fleet to launch EC2 Spot Instances. Create a standard cluster AMI and use the latest AMI when creating instances.
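A sketch of requesting an all-Spot EC2 Fleet from a pre-baked cluster AMI, which avoids both On-Demand pricing and the 30-minute configuration time. The launch template name is a hypothetical placeholder that would reference the cluster AMI.

```python
import boto3

ec2 = boto3.client("ec2")

# One-time Spot request for the batch cluster; interruptions are tolerable
# because checkpoints live on EFS.
ec2.create_fleet(
    Type="request",
    SpotOptions={"AllocationStrategy": "lowest-price"},
    LaunchTemplateConfigs=[{
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "batch-cluster-lt",  # hypothetical
            "Version": "$Latest",                      # picks up the newest AMI
        }
    }],
    TargetCapacitySpecification={
        "TotalTargetCapacity": 8,
        "DefaultTargetCapacityType": "spot",
    },
)
```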
-
A company is using AWS CodeDeploy to manage its application deployments. Recently, the Development team decided to use GitHub for version control, and the team is looking for ways to integrate the GitHub repository with CodeDeploy. The team also needs to develop a way to automate deployment whenever there is a new commit on that repository.
How can the integration be achieved in the MOST efficient way?
- Create a GitHub webhook to replicate the repository to AWS CodeCommit. Create an AWS CodePipeline pipeline that uses CodeCommit as a source provider and AWS CodeDeploy as a deployment provider. Once configured, commit a change to the GitHub repository to start the first deployment.
- Create an AWS CodePipeline pipeline that uses GitHub as a source provider and AWS CodeDeploy as a deployment provider. Connect this new pipeline with the GitHub account and instruct CodePipeline to use webhooks in GitHub to automatically start the pipeline when a change occurs.
- Create an AWS Lambda function to check periodically if there has been a new commit within the GitHub repository. If a new commit is found, trigger a CreateDeployment API call to AWS CodeDeploy to start a new deployment based on the last commit ID within the deployment group.
- Create an AWS CodeDeploy custom deployment configuration to associate the GitHub repository with the deployment group. During the association process, authenticate the deployment group with GitHub to obtain the GitHub security authentication token. Configure the deployment group options to automatically deploy if a new commit is found. Perform a new commit to the GitHub repository to trigger the first deployment.
-
A DevOps Engineer must implement monitoring for a workload running on Amazon EC2 and Amazon RDS MySQL. The monitoring must include:
– Application logs and operating system metrics for the Amazon EC2 instances
– Database logs and operating system metrics for the Amazon RDS database
Which steps should the Engineer take?
- Install an Amazon CloudWatch agent on the EC2 and RDS instances. Configure the agent to send the operating system metrics and application and database logs to CloudWatch.
- Install an Amazon CloudWatch agent on the EC2 instance, and configure the agent to send the application logs and operating system metrics to CloudWatch. Enable RDS Enhanced Monitoring, and modify the RDS instance to publish database logs to CloudWatch Logs.
- Install an Amazon CloudWatch Logs agent on the EC2 instance and configure it to send application logs to CloudWatch.
- Set up scheduled tasks on the EC2 and RDS instances to put operating system metrics and application and database logs into an Amazon S3 bucket. Set up an event on the bucket to invoke an AWS Lambda function to monitor for errors each time an object is put into the bucket.
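The RDS half of the second option can be done with a single modify call; the instance identifier and monitoring role ARN below are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Enable Enhanced Monitoring (OS-level metrics) and publish the MySQL logs
# to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql",
    MonitoringInterval=60,  # seconds between OS metric samples; 0 disables
    MonitoringRoleArn="arn:aws:iam::111111111111:role/rds-monitoring-role",
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"],
    },
    ApplyImmediately=True,
)
```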
-
A company requires that all logs are captured for everything that runs in the company’s AWS account. The account has multiple VPCs with Amazon EC2 instances, Application Load Balancers, Amazon RDS MySQL databases, and AWS WAF rules that are configured. The logs must be protected from deletion. The company also needs a daily visual analysis of log anomalies from the previous day.
Which combination of actions should a DevOps engineer take to meet these requirements? (Choose three.)
- Configure an AWS Lambda function to send all Amazon CloudWatch logs to an Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.
- Configure AWS CloudTrail to send all logs to Amazon Inspector. Create a dashboard report in Amazon QuickSight.
- Configure Amazon S3 MFA Delete on the logging S3 bucket.
- Configure an Amazon S3 Object Lock legal hold on the logging S3 bucket.
- Configure AWS Artifact to send all logs to the logging Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.
- Deploy the Amazon CloudWatch agent to all EC2 instances.
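To illustrate the Object Lock option: a legal hold is applied per object and prevents deletion until it is removed. Bucket and key names are placeholders, and Object Lock must have been enabled when the bucket was created.

```python
import boto3

s3 = boto3.client("s3")

# Protect an archived log object from deletion.
s3.put_object_legal_hold(
    Bucket="central-logging-bucket",        # hypothetical bucket
    Key="cloudwatch/2024/01/01/log.gz",     # hypothetical key
    LegalHold={"Status": "ON"},
)
```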
-
A DevOps Engineer wants to prevent Developers from pushing updates directly to the company’s master branch in AWS CodeCommit. These updates should be approved before they are merged.
Which solution will meet these requirements?
- Configure an IAM role for the Developers with access to CodeCommit and an explicit deny for write actions when the reference is the master. Allow Developers to use feature branches and create a pull request when a feature is complete. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
- Configure an IAM role for the Developers to use feature branches and create a pull request when a feature is complete. Allow CodeCommit to test all code in the feature branches, and dynamically modify the IAM role to allow merging the feature branches into the master. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
- Configure an IAM role for the Developers to use feature branches and create a pull request when a feature is complete. Allow CodeCommit to test all code in the feature branches, and issue a new AWS Security Token Service (STS) token allowing a one-time API call to merge the feature branches into the master. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
- Configure an IAM role for the Developers with access to CodeCommit and attach an access policy to the CodeCommit repository that denies the Developers role access when the reference is master. Allow Developers to use feature branches and create a pull request when a feature is complete. Allow an approver to use CodeCommit to view the changes and approve the pull requests.
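The explicit-deny pattern in the first option uses the codecommit:References condition key. A sketch follows; the role name, repository ARN, and region are placeholders.

```python
import json

import boto3

iam = boto3.client("iam")

# Deny pushes and merges that target refs/heads/master; feature branches
# and pull requests remain allowed.
deny_master_writes = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "codecommit:GitPush",
            "codecommit:PutFile",
            "codecommit:MergeBranchesByFastForward",
            "codecommit:MergeBranchesBySquash",
            "codecommit:MergeBranchesByThreeWay",
        ],
        "Resource": "arn:aws:codecommit:us-east-1:111111111111:app-repo",
        "Condition": {
            "StringEqualsIfExists": {"codecommit:References": ["refs/heads/master"]},
            # Also deny pushes that omit reference details (e.g. git push --mirror).
            "Null": {"codecommit:References": "false"},
        },
    }],
}

iam.put_role_policy(RoleName="DevelopersRole",
                    PolicyName="deny-direct-master-writes",
                    PolicyDocument=json.dumps(deny_master_writes))
```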
-
A company is using AWS Organizations to create separate AWS accounts for each of its departments. It needs to automate the following tasks:
– Updating the Linux AMIs with new patches periodically and generating a golden image
– Installing a new version of Chef agents in the golden image, if available
– Enforcing the use of the newly generated golden AMIs in the department’s account
Which option requires the LEAST management overhead?
- Write a script to launch an Amazon EC2 instance from the previous golden AMI, apply the patch updates, install the new version of the Chef agent, generate a new golden AMI, and then modify the AMI permissions to share only the new image with the departments’ accounts.
- Use an AWS Systems Manager Run Command to update the Chef agent first, use Amazon EC2 Systems Manager Automation to generate an updated AMI, and then assume an IAM role to copy the new golden AMI into the departments’ accounts.
- Use AWS Systems Manager Automation to update the Linux AMI using the previous image, provide the URL for the script that will update the Chef agent, and then use AWS Organizations to replace the previous golden AMI into the departments’ accounts.
- Use AWS Systems Manager Automation to update the Linux AMI from the previous golden image, provide the URL for the script that will update the Chef agent, and then share only the newly generated AMI with the departments’ accounts.
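A sketch of the last option using the AWS-managed automation document. The parameter names follow the AWS-UpdateLinuxAmi documentation but should be verified; the role, instance profile, script URL, and AMI IDs are placeholders.

```python
import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

# Patch the previous golden AMI and run the Chef agent upgrade script.
ssm.start_automation_execution(
    DocumentName="AWS-UpdateLinuxAmi",
    Parameters={
        "SourceAmiId": ["ami-0123456789abcdef0"],          # previous golden AMI
        "IamInstanceProfileName": ["ManagedInstanceProfile"],
        "AutomationAssumeRole": [
            "arn:aws:iam::111111111111:role/AutomationServiceRole"],
        "PostUpdateScript": [
            "curl -fsSL https://example.com/update-chef-agent.sh | bash"],
    },
)

# Once the execution completes, share only the resulting AMI with each
# department account (new AMI ID comes from the execution output).
ec2.modify_image_attribute(
    ImageId="ami-0fedcba9876543210",  # placeholder for the new golden AMI
    LaunchPermission={"Add": [{"UserId": "222222222222"}]},
)
```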
-
A company wants to automatically re-create its infrastructure using AWS CloudFormation as part of the company’s quality assurance (QA) pipeline. For each QA run, a new VPC must be created in a single account, resources must be deployed into the VPC, and tests must be run against this new infrastructure. The company policy states that all VPCs must be peered with a central management VPC to allow centralized logging. The company has existing CloudFormation templates to deploy its VPC and associated resources.
Which combination of steps will achieve the goal in a way that is automated and repeatable? (Choose two.)
- Create an AWS Lambda function that is invoked by an Amazon CloudWatch Events rule when a CreateVpcPeeringConnection API call is made. The Lambda function should check the source of the peering request, accept the request, and update the route tables for the management VPC to allow traffic to go over the peering connection.
- In the CloudFormation template:
– Invoke a custom resource to generate unique VPC CIDR ranges for the VPC and subnets.
– Create a peering connection to the management VPC.
– Update route tables to allow traffic to the management VPC.
- In the CloudFormation template:
– Use the Fn::Cidr function to allocate an unused CIDR range for the VPC and subnets.
– Create a peering connection to the management VPC.
– Update route tables to allow traffic to the management VPC.
- Modify the CloudFormation template to include a mappings object that includes a list of /16 CIDR ranges for each account where the stack will be deployed.
- Use CloudFormation StackSets to deploy the VPC and associated resources to multiple AWS accounts using a custom resource to allocate unique CIDR ranges. Create peering connections from each VPC to the central management VPC and accept those connections in the management VPC.
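For the custom-resource option, the backing Lambda must report its result to the pre-signed URL CloudFormation provides in the event. A minimal skeleton follows; allocate_cidr() is hypothetical (e.g., backed by a DynamoDB table of reserved ranges).

```python
import json
import urllib.request

def allocate_cidr():
    # Hypothetical: reserve and return an unused /16 from a shared registry.
    return "10.42.0.0/16"

def handler(event, context):
    """CloudFormation custom resource: allocate a CIDR and report back."""
    status, data = "SUCCESS", {}
    try:
        if event["RequestType"] in ("Create", "Update"):
            data["VpcCidr"] = allocate_cidr()
    except Exception:
        status = "FAILED"
    body = json.dumps({
        "Status": status,
        "Reason": "See CloudWatch Logs",
        "PhysicalResourceId": event.get("PhysicalResourceId",
                                        context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,  # referenced in the template via Fn::GetAtt
    }).encode("utf-8")
    # Custom resources signal completion by PUTting to the pre-signed S3 URL.
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT",
                                 headers={"Content-Type": ""})
    urllib.request.urlopen(req)
```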
-
A company has multiple development groups working in a single shared AWS account. The Senior Manager of the groups wants to be alerted via a third-party API call when the creation of resources approaches the service limits for the account.
Which solution will accomplish this with the LEAST amount of development effort?
- Create an Amazon CloudWatch Event rule that runs periodically and targets an AWS Lambda function. Within the Lambda function, evaluate the current state of the AWS environment and compare deployed resource values to resource limits on the account. Notify the Senior Manager if the account is approaching a service limit.
- Deploy an AWS Lambda function that refreshes AWS Trusted Advisor checks, and configure an Amazon CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event pattern matching Trusted Advisor events and a target Lambda function. In the target Lambda function, notify the Senior Manager.
- Deploy an AWS Lambda function that refreshes AWS Personal Health Dashboard checks, and configure an Amazon CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event pattern matching Personal Health Dashboard events and a target Lambda function. In the target Lambda function, notify the Senior Manager.
- Add an AWS Config custom rule that runs periodically, checks the AWS service limit status, and streams notifications to an Amazon SNS topic. Deploy an AWS Lambda function that notifies the Senior Manager, and subscribe the Lambda function to the SNS topic.
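For the Trusted Advisor option, the first Lambda function only needs to refresh the service-limit checks; Trusted Advisor then emits the events the second rule matches. A sketch, noting that the Support API requires a Business or Enterprise support plan:

```python
import boto3

# The Trusted Advisor API is part of the Support service and is only
# available in us-east-1.
support = boto3.client("support", region_name="us-east-1")

def handler(event, context):
    """Scheduled Lambda: refresh all service-limit checks."""
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    for check in checks:
        if check["category"] == "service_limits":
            support.refresh_trusted_advisor_check(checkId=check["id"])
```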
-
A highly regulated company has a policy that DevOps Engineers should not log in to their Amazon EC2 instances except in emergencies. If a DevOps Engineer does log in, the Security team must be notified within 15 minutes of the occurrence.
Which solution will meet these requirements?
- Install the Amazon Inspector agent on each EC2 instance. Subscribe to Amazon CloudWatch Events notifications. Trigger an AWS Lambda function to check if a message is about user logins. If it is, send a notification to the Security team using Amazon SNS.
- Install the Amazon CloudWatch agent on each EC2 instance. Configure the agent to push all logs to Amazon CloudWatch Logs and set up a CloudWatch metric filter that searches for user logins. If a login is found, send a notification to the Security team using Amazon SNS.
- Set up AWS CloudTrail with Amazon CloudWatch Logs. Subscribe CloudWatch Logs to Amazon Kinesis. Attach AWS Lambda to Kinesis to parse and determine if a log contains a user login. If it does, send a notification to the Security team using Amazon SNS.
- Set up a script on each Amazon EC2 instance to push all logs to Amazon S3. Set up an S3 event to trigger an AWS Lambda function, which triggers an Amazon Athena query to run. The Athena query checks for logins and sends the output to the Security team using Amazon SNS.
-
A DevOps Engineer manages a web application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an EC2 Auto Scaling group across multiple Availability Zones. The engineer needs to implement a deployment strategy that:
– Launches a second fleet of instances with the same capacity as the original fleet.
– Maintains the original fleet unchanged while the second fleet is launched.
– Transitions traffic to the second fleet when the second fleet is fully deployed.
– Terminates the original fleet automatically 1 hour after transition.
Which solution will satisfy these requirements?
- Use an AWS CloudFormation template with a retention policy for the ALB set to 1 hour. Update the Amazon Route 53 record to reflect the new ALB.
- Use two AWS Elastic Beanstalk environments to perform a blue/green deployment from the original environment to the new one. Create an application version lifecycle policy to terminate the original environment in 1 hour.
- Use AWS CodeDeploy with a deployment group configured with a blue/green deployment configuration. Select the option Terminate the original instances in the deployment group with a waiting period of 1 hour.
- Use AWS Elastic Beanstalk with the configuration set to Immutable. Create an .ebextension using the Resources key that sets the deletion policy of the ALB to 1 hour, and deploy the application.
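The blue/green settings in the CodeDeploy option map directly onto the deployment group configuration; application and group names below are placeholders.

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Blue/green deployment group: copy the Auto Scaling group for the green
# fleet, shift traffic when ready, and keep the blue fleet for 1 hour.
codedeploy.update_deployment_group(
    applicationName="web-app",
    currentDeploymentGroupName="web-app-dg",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    blueGreenDeploymentConfiguration={
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,  # the 1-hour waiting period
        },
    },
)
```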
-
A company is using Docker containers for an application deployment and wants to move its application to AWS. The company currently manages its own clusters on premises to manage the deployment of these containers. It wants to deploy its application to a managed service in AWS and wants the entire flow of the deployment process to be automated. In addition, the company has the following requirements:
– Focus first on the development workload.
– The environment must be easy to manage.
– Deployment should be repeatable and reusable for new environments.
– Store the code in a GitHub repository.
Which solution will meet these requirements?
- Set up an Amazon ECS environment. Use AWS CodePipeline to create a pipeline that is triggered on a commit to the GitHub repository. Use AWS CodeBuild to create the container images and AWS CodeDeploy to publish the container image to the ECS environment.
- Use AWS CodePipeline that triggers on a commit from the GitHub repository, build the container images with AWS CodeBuild, and publish the container images to Amazon ECR. In the final stage, use AWS CloudFormation to create an Amazon ECS environment that gets the container images from the ECR repository.
- Create a Kubernetes Cluster on Amazon EC2. Use AWS CodePipeline to create a pipeline that is triggered when the code is committed to the repository. Create the container images with a Jenkins server on EC2 and store them in the Docker Hub. Use AWS Lambda from the pipeline to trigger the deployment to the Kubernetes Cluster.
- Set up an Amazon ECS environment. Use AWS CodePipeline to create a pipeline that is triggered on a commit to the GitHub repository. Use AWS CodeBuild to create the container and store it in the Docker Hub. Use an AWS Lambda function to trigger a deployment and pull the new container image from the Docker Hub.
-
A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications. The notifications sent to each email address are for specific activities related to EKS components. The solution will include Amazon SNS topics and an AWS Lambda function to evaluate incoming log events and publish messages to the correct SNS topic.
Which logging solution will support these requirements?
- Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
- Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon CloudWatch Events events that trigger Lambda.
- Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
- Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.
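To sketch the subscription-filter option: EKS control plane logs land in the cluster's CloudWatch Logs log group, and a subscription filter streams them to the routing Lambda, which evaluates each event and publishes to the appropriate SNS topic. Cluster name and function ARN are placeholders, and the Lambda also needs a resource permission for logs.amazonaws.com (not shown).

```python
import boto3

logs = boto3.client("logs")

# Forward all EKS control plane log events to the routing Lambda.
logs.put_subscription_filter(
    logGroupName="/aws/eks/prod-cluster/cluster",  # hypothetical cluster name
    filterName="eks-to-router",
    filterPattern="",  # empty pattern forwards everything; the Lambda routes
    destinationArn="arn:aws:lambda:us-east-1:111111111111:function:log-router",
)
```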
-
An n-tier application requires a table in an Amazon RDS MySQL DB instance to be dropped and repopulated at each deployment. This process can take several minutes and the web tier cannot come online until the process is complete. Currently, the web tier is configured in an Amazon EC2 Auto Scaling group, with instances being terminated and replaced at each deployment. The MySQL table is populated by running a SQL query through an AWS CodeBuild job.
What should be done to ensure that the web tier does not come online before the database is completely configured?
- Use Amazon Aurora as a drop-in replacement for RDS MySQL. Use snapshots to populate the table with the correct data.
- Modify the launch configuration of the Auto Scaling group to pause user data execution for 600 seconds, allowing the table to be populated.
- Use AWS Step Functions to monitor and maintain the state of data population. Mark the database in service before continuing with the deployment.
- Use an EC2 Auto Scaling lifecycle hook to pause the configuration of the web tier until the table is populated.
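For the lifecycle-hook option, new instances pause in the Pending:Wait state until the hook is completed. A sketch with placeholder names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hold new web-tier instances until the table is repopulated.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-tier-asg",
    LifecycleHookName="wait-for-db-population",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=1800,    # give the CodeBuild SQL job time to finish
    DefaultResult="ABANDON",  # replace the instance if the DB never comes up
)

# When the CodeBuild job completes, release each held instance:
autoscaling.complete_lifecycle_action(
    AutoScalingGroupName="web-tier-asg",
    LifecycleHookName="wait-for-db-population",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",  # placeholder
)
```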
-
A web application with multiple services runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS Multi-AZ DB instance. The instance health check used by the load balancer returns PASS if at least one service is running on the instance.
The company uses AWS CodePipeline with AWS CodeBuild and AWS CodeDeploy steps to deploy code to test and production environments. Recently, a new version was unable to connect to the database server in the test environment. One process was running, so the health checks reported healthy and the application was promoted to production, causing a production outage. The company wants to ensure that test builds are fully functional before a promotion to production.
Which changes should a DevOps Engineer make to the test and deployment process? (Choose two.)
- Add an automated functional test to the pipeline that ensures solid test cases are performed.
- Add a manual approval action to the CodeDeploy deployment pipeline that requires a Testing Engineer to validate the testing environment.
- Refactor the health check endpoint the Elastic Load Balancer is checking to better validate actual application functionality.
- Refactor the health check endpoint the Elastic Load Balancer is checking to return a text-based status result and configure the load balancer to check for a valid response.
- Add a dependency checking step to the existing testing framework to ensure compatibility.
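To illustrate the health-check refactoring idea: instead of returning PASS when any process is up, the endpoint should exercise a real dependency such as the database. A minimal sketch assuming Flask and PyMySQL; host and credentials are placeholders.

```python
import pymysql
from flask import Flask

app = Flask(__name__)

@app.route("/health")
def health():
    """Deep health check: healthy only if the database is reachable."""
    try:
        conn = pymysql.connect(host="db.internal", user="app",
                               password="***", database="orders",
                               connect_timeout=2)
        conn.ping()
        conn.close()
        return "OK", 200  # ALB marks the target healthy
    except Exception:
        return "DB unreachable", 503  # fail fast so broken builds never promote
```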