DOP-C01 : AWS DevOps Engineer Professional : Part 03
-
According to Information Security policy, changes to the contents of objects inside production Amazon S3 buckets that contain encrypted secrets should only be made by a trusted group of administrators.
How should a DevOps Engineer create real-time, automated checks to meet this requirement?
- Create an AWS Lambda function that is triggered by Amazon S3 data events for object changes and that also checks the IAM user’s membership in an administrator’s IAM role.
- Create a periodic AWS Config rule to query Amazon S3 Logs for changes and to check the IAM user’s membership in an administrator’s IAM role.
- Create a metrics filter for Amazon CloudWatch logs to check for Amazon S3 bucket-level permission changes and to check the IAM user’s membership in an administrator’s IAM role.
- Create a periodic AWS Config rule to query AWS CloudTrail logs for changes to the Amazon S3 bucket-level permissions and to check the IAM user’s membership in an administrator’s IAM role.
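For study purposes, here is a minimal sketch of the event-driven approach in the first option: a Lambda function invoked by CloudTrail S3 data events (delivered through a CloudWatch Events rule) that checks whether the caller belongs to a trusted administrators group. The group name, event fields, and the use of an IAM group rather than a role are assumptions for illustration only.

```python
import boto3

iam = boto3.client("iam")
ADMIN_GROUP = "S3SecretsAdmins"  # hypothetical administrators group


def handler(event, context):
    """Triggered by a CloudWatch Events rule matching CloudTrail S3 data
    events (e.g., PutObject/DeleteObject) on the secrets bucket."""
    detail = event.get("detail", {})
    identity = detail.get("userIdentity", {})

    # Only IAM users can be checked for group membership this way.
    if identity.get("type") != "IAMUser":
        notify_violation(detail)
        return

    user = identity.get("userName")
    groups = iam.list_groups_for_user(UserName=user)["Groups"]
    if not any(g["GroupName"] == ADMIN_GROUP for g in groups):
        notify_violation(detail)


def notify_violation(detail):
    # Placeholder: publish to SNS, open a ticket, or revert the change.
    print("Unauthorized S3 object change detected:", detail.get("eventName"))
```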
-
A business has an application that consists of five independent AWS Lambda functions.
The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests, packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon CloudWatch Events rule to ensure the pipeline execution starts as quickly as possible after a change is made to the application source code.
After working with the pipeline for a few months, the DevOps Engineer has noticed the pipeline takes too long to complete.
What should the DevOps Engineer implement to BEST improve the speed of the pipeline?
- Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
- Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the builds in parallel.
- Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same runOrder.
- Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.
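As a sketch of the runOrder idea from the third option: within a CodePipeline stage, actions that share the same runOrder value execute in parallel. The fragment below shows a hypothetical stage for two of the five functions, expressed as the Python dictionary that would form part of the pipeline structure passed to codepipeline.update_pipeline; project and artifact names are invented.

```python
# Fragment of a pipeline definition: both CodeBuild actions use
# runOrder=1, so CodePipeline runs them in parallel within the stage.
deploy_stage = {
    "name": "BuildAndDeployFunctions",
    "actions": [
        {
            "name": "BuildFunctionA",
            "actionTypeId": {
                "category": "Build",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "runOrder": 1,  # same runOrder => parallel execution
            "configuration": {"ProjectName": "build-function-a"},
            "inputArtifacts": [{"name": "SourceOutput"}],
        },
        {
            "name": "BuildFunctionB",
            "actionTypeId": {
                "category": "Build",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "runOrder": 1,  # runs alongside BuildFunctionA
            "configuration": {"ProjectName": "build-function-b"},
            "inputArtifacts": [{"name": "SourceOutput"}],
        },
    ],
}
```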
-
A company uses a complex system that consists of networking, IAM policies, and multiple three-tier applications. Requirements are still being defined for a new system, so the number of AWS components present in the final design is not known. The DevOps Engineer needs to begin defining AWS resources using AWS CloudFormation to automate and version-control the new infrastructure.
What is the best practice for using CloudFormation to create new environments?
- Manually construct the networking layer using Amazon VPC and then define all other resources using CloudFormation.
- Create a single template to encompass all resources that are required for the system so there is only one template to version-control.
- Create multiple separate templates for each logical part of the system, use cross-stack references in CloudFormation, and maintain several templates in version control.
- Create many separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon EC2 instance running SDK for granular control.
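To illustrate the cross-stack reference pattern from the third option, the sketch below creates a small networking stack that exports its VPC ID and a second stack that imports it with Fn::ImportValue. The CIDR block, stack names, and export name are placeholders, not a prescribed layout.

```python
import boto3

cfn = boto3.client("cloudformation")

network_template = """
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: core-network-VpcId
"""

app_template = """
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Application tier security group
      VpcId: !ImportValue core-network-VpcId
"""

# Create the networking stack first so its export exists, then the app stack.
cfn.create_stack(StackName="core-network", TemplateBody=network_template)
cfn.get_waiter("stack_create_complete").wait(StackName="core-network")
cfn.create_stack(StackName="app-tier", TemplateBody=app_template)
```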
-
A DevOps Engineer is deploying a new web application. The company chooses AWS Elastic Beanstalk for deploying and managing the web application, and Amazon RDS MySQL to handle persistent data. The company requires that new deployments have minimal impact if they fail. The application resources must be at full capacity during deployment, and rolling back a deployment must also be possible.
Which deployment sequence will meet these requirements?
- Deploy the application using Elastic Beanstalk and connect to an external RDS MySQL instance using Elastic Beanstalk environment properties. Use Elastic Beanstalk features for a blue/green deployment to deploy the new release to a separate environment, and then swap the CNAME in the two environments to redirect traffic to the new version.
- Deploy the application using Elastic Beanstalk, and include RDS MySQL as part of the environment. Use default Elastic Beanstalk behavior to deploy changes to the application, and let rolling updates deploy changes to the application.
- Deploy the application using Elastic Beanstalk, and include RDS MySQL as part of the environment. Use Elastic Beanstalk immutable updates for application deployments.
- Deploy the application using Elastic Beanstalk, and connect to an external RDS MySQL instance using Elastic Beanstalk environment properties. Use Elastic Beanstalk immutable updates for application deployments.
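The CNAME swap described in the first option maps to a single Elastic Beanstalk API call. A minimal sketch, assuming hypothetical environment names myapp-blue and myapp-green:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# After the new release has been deployed to the "green" environment and
# verified, swap the environment CNAMEs to shift traffic to it.
# Running the same call again reverses the release.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",
    DestinationEnvironmentName="myapp-green",
)
```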
-
An Amazon EC2 instance with no internet access is running in a Virtual Private Cloud (VPC) and needs to download an object from a restricted Amazon S3 bucket. When the DevOps Engineer tries to gain access to the object, an AccessDenied error is received.
What are the possible causes for this error? (Choose three.)
- The S3 bucket default encryption is enabled.
- There is an error in the S3 bucket policy.
- There is an error in the VPC endpoint policy.
- The object has been moved to Amazon Glacier.
- There is an error in the IAM role configuration.
- S3 versioning is enabled.
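Several of the listed causes are policy related. As an illustration of one of them, here is a minimal sketch of a VPC endpoint policy that would permit the download, applied with modify_vpc_endpoint; the bucket name and endpoint ID are placeholders, and the bucket policy and IAM role would still need matching allowances.

```python
import json
import boto3

ec2 = boto3.client("ec2")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::restricted-bucket/*"],  # placeholder bucket
        }
    ],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",  # placeholder endpoint ID
    PolicyDocument=json.dumps(endpoint_policy),
)
```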
-
An application has microservices spread across different AWS accounts and is integrated with an on-premises legacy system for some of its functionality. Because of the segmented architecture and missing logs, it takes too long to gather the logs needed to identify issues whenever the application experiences problems. A DevOps Engineer must fix the log aggregation process and provide a way to centrally analyze the logs.
Which is the MOST efficient and cost-effective solution?
- Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to export on-premises logs, and store the logs in an S3 bucket in a central account. Build an Amazon EMR cluster to reduce the logs and derive the root cause.
- Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to import on-premises logs. Store all logs in S3 buckets in individual accounts. Use Amazon Macie to write a query to search for the required specific event-related data point.
- Collect system logs and application logs using the Amazon CloudWatch Logs agent. Install the CloudWatch Logs agent on the on-premises servers. Transfer all logs from AWS to the on-premises data center. Use an Amazon Elasticsearch Logstash Kibana stack to analyze logs on premises.
- Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Install a CloudWatch Logs agent for on-premises resources. Store all logs in an S3 bucket in a central account. Set up an Amazon S3 trigger and an AWS Lambda function to analyze incoming logs and automatically identify anomalies. Use Amazon Athena to run ad hoc queries on the logs in the central account.
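The "ad hoc queries with Athena" piece of the last option could look like the following sketch; the database, table, partition column, and output location are assumptions for illustration.

```python
import boto3

athena = boto3.client("athena")

# Query the centralized logs (e.g., partitioned by date) for error events.
response = athena.start_query_execution(
    QueryString=(
        "SELECT timestamp, account_id, message "
        "FROM central_logs.application_logs "
        "WHERE level = 'ERROR' AND dt = '2021-06-01' LIMIT 100"
    ),
    QueryExecutionContext={"Database": "central_logs"},
    ResultConfiguration={"OutputLocation": "s3://central-log-query-results/"},
)
print(response["QueryExecutionId"])
```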
-
A DevOps Engineer is building a continuous deployment pipeline for a serverless application that uses AWS Lambda functions. The company wants to reduce the customer impact of an unsuccessful deployment. The company also wants to monitor for issues.
Which deploy stage configuration will meet these requirements?
- Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.
- Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.
- Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.
- Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.
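A sketch of the deployment preference from the third option, shown as a SAM template fragment embedded in a Python string so this section keeps to one example language; the handler, runtime, and referenced alarm are placeholders and the alarm is assumed to be defined elsewhere in the template.

```python
from pathlib import Path

# Fragment of an AWS SAM template (written out by a build step) that asks
# CodeDeploy to shift 10% of traffic to the new version, wait 15 minutes,
# and roll back if the referenced CloudWatch alarm fires.
sam_template = """
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Canary10Percent15Minutes
        Alarms:
          - !Ref FunctionErrorsAlarm  # defined elsewhere in the template
"""

Path("template.yaml").write_text(sam_template)
```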
-
A DevOps Engineer must track the health of a stateless RESTful service sitting behind a Classic Load Balancer. The deployment of new application revisions is through a CI/CD pipeline. If the service’s latency increases beyond a defined threshold, deployment should be stopped until the service has recovered.

Which of the following methods allow for the QUICKEST detection time?
- Use Amazon CloudWatch metrics provided by Elastic Load Balancing to calculate average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
- Use AWS Lambda and Elastic Load Balancing access logs to detect average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
- Use AWS CodeDeploy’s MinimumHealthyHosts setting to define thresholds for rolling back deployments. If these thresholds are breached, roll back the deployment.
- Use Metric Filters to parse application logs in Amazon CloudWatch Logs. Create a filter for latency. Alarm and stop deployment when latency increases beyond the defined threshold.
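As a sketch of the first option, the built-in Latency metric that the Classic Load Balancer publishes can be alarmed on directly; the load balancer name, threshold, and SNS topic are placeholders, and the alarm action could equally invoke automation that halts the pipeline.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="service-latency-high",
    Namespace="AWS/ELB",            # metrics published by the Classic Load Balancer
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-classic-elb"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0.5,                  # seconds; the defined latency threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:stop-deployment"],
)
```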
-
A DevOps Engineer is leading the implementation of automated patching for Windows-based workstations in a hybrid cloud environment by using AWS Systems Manager (SSM).
What steps should the Engineer follow to set up Systems Manager to automate patching in this environment? (Choose two.)
- Create multiple IAM service roles for Systems Manager so that the ssm.amazonaws.com service can execute the AssumeRole operation on every instance. Register the role on a per-resource level to enable the creation of a service token. Perform managed-instance activation with the newly created service role attached to each managed instance.
- Create an IAM service role for Systems Manager so that the ssm.amazonaws.com service can execute the AssumeRole operation. Register the role to enable the creation of a service token. Perform managed-instance activation with the newly created service role.
- Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with an “mi-” prefix in the SSM console.
- Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with an “i-” prefix in the SSM console as if they were provisioned as a regular Amazon EC2 instance.
- Run AWS Config to create a list of instances that are unpatched and not compliant. Create an instance scheduler job, and through an AWS Lambda function, perform the instance patching to bring them up to compliance.
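A sketch of the managed-instance activation step that several of the options reference: create an activation against an SSM service role, then use the returned ID and code when installing and registering the SSM Agent on the on-premises machines; the role name and registration limit are assumptions.

```python
import boto3

ssm = boto3.client("ssm")

# The IAM role must trust ssm.amazonaws.com so Systems Manager can assume it.
activation = ssm.create_activation(
    Description="Hybrid Windows workstation patching",
    IamRole="SSMServiceRole",     # assumed service role name
    RegistrationLimit=50,
)

# These values are supplied to the SSM Agent install/registration command on
# each on-premises machine; registered machines appear with an "mi-" prefix.
print("Activation ID:  ", activation["ActivationId"])
print("Activation code:", activation["ActivationCode"])
```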
-
A company needs to introduce automatic DNS failover for a distributed web application to a disaster recovery or standby installation. The DevOps Engineer plans to configure Amazon Route 53 to provide DNS routing to an alternate endpoint in the event of an application failure.
What steps should the Engineer take to accomplish this? (Choose two.)
- Create Amazon Route 53 health checks for each endpoint that cannot be entered as alias records. Ensure firewall and routing rules allow Amazon Route 53 to send requests to the endpoints that are specified in the health checks.
- Create alias records that route traffic to AWS resources and set the value of the Evaluate Target Health option to Yes, then create all the non-alias records.
- Create a governing Amazon Route 53 record set, set it to failover, and associate it with the primary and secondary Amazon Route 53 record sets to distribute traffic to healthy DNS entries.
- Create an Amazon CloudWatch alarm to monitor the primary Amazon Route 53 DNS entry. Then create an associated AWS Lambda function to execute the failover API call to Route 53 to the secondary DNS entry.
- Map the primary and secondary Amazon Route 53 record sets to an Amazon CloudFront distribution using primary and secondary origins.
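A sketch of failover routing for non-alias records, combining a health check with PRIMARY/SECONDARY record sets; the hosted zone ID, domain names, and IP addresses are placeholders.

```python
import uuid

import boto3

route53 = boto3.client("route53")

health_check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",
        "Port": 443,
        "ResourcePath": "/health",
    },
)["HealthCheck"]


def failover_record(role, value, health_check_id=None):
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,            # PRIMARY or SECONDARY
        "TTL": 60,
        "ResourceRecords": [{"Value": value}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}


route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "203.0.113.10", health_check["Id"]),
        failover_record("SECONDARY", "198.51.100.10"),
    ]},
)
```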
-
A company is implementing an Amazon ECS cluster to run its workload. The company architecture will run multiple ECS services on the cluster, with an Application Load Balancer on the front end, using multiple target groups to route traffic. The Application Development team has been struggling to collect the application logs that must be sent to an Amazon S3 bucket for near-real-time analysis.
What must the DevOps Engineer configure in the deployment to meet these requirements? (Choose three.)
- Install the Amazon CloudWatch Logs logging agent on the ECS instances. Change the logging driver in the ECS task definition to ‘awslogs’.
- Download the Amazon CloudWatch Logs container instance from AWS and configure it as a task. Update the application service definitions to include the logging task.
- Use Amazon CloudWatch Events to schedule an AWS Lambda function that runs every 60 seconds and executes the create-export-task CloudWatch Logs command, then point the output to the logging S3 bucket.
- Enable access logging on the Application Load Balancer, then point it directly to the S3 logging bucket.
- Enable access logging on the target groups that are used by the ECS services, then point it directly to the S3 logging bucket.
- Create an Amazon Kinesis Data Firehose with a destination of the S3 logging bucket, then create an Amazon CloudWatch Logs subscription filter for Kinesis.
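The Kinesis Data Firehose piece of the last option corresponds to a CloudWatch Logs subscription filter like the sketch below; the log group, delivery stream, and role are placeholders, and the Firehose delivery stream itself would be created separately with the S3 bucket as its destination.

```python
import boto3

logs = boto3.client("logs")

# Stream every event from the ECS application log group to a Firehose
# delivery stream whose destination is the central S3 logging bucket.
logs.put_subscription_filter(
    logGroupName="/ecs/web-service",
    filterName="to-s3-via-firehose",
    filterPattern="",   # empty pattern forwards all events
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/ecs-logs",
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
)
```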
-
A Development team is currently using AWS CodeDeploy to deploy an application revision to an Auto Scaling group. If the deployment process fails, it must be rolled back automatically and a notification must be sent.
What is the MOST effective configuration that can satisfy all of the requirements?
- Create Amazon CloudWatch Events rules for CodeDeploy operations. Configure a CloudWatch Events rule to send out an Amazon SNS message when the deployment fails. Configure CodeDeploy to automatically roll back when the deployment fails.
- Use available Amazon CloudWatch metrics for CodeDeploy to create CloudWatch alarms. Configure CloudWatch alarms to send out an Amazon SNS message when the deployment fails. Use AWS CLI to redeploy a previously deployed revision.
- Configure a CodeDeploy agent to create a trigger that will send notification to Amazon SNS topics when the deployment fails. Configure CodeDeploy to automatically roll back when the deployment fails.
- Use AWS CloudTrail to monitor API calls made by or on behalf of CodeDeploy in the AWS account. Send an Amazon SNS message when deployment fails. Use AWS CLI to redeploy a previously deployed revision.
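The first option breaks down into two configuration steps, sketched below: enable automatic rollback on the deployment group, and route CodeDeploy failure events to an SNS topic through a CloudWatch Events rule. Application, deployment group, rule, and topic names are placeholders.

```python
import json

import boto3

codedeploy = boto3.client("codedeploy")
events = boto3.client("events")

# 1. Roll back automatically when a deployment fails.
codedeploy.update_deployment_group(
    applicationName="web-app",
    currentDeploymentGroupName="asg-deployment-group",
    autoRollbackConfiguration={"enabled": True, "events": ["DEPLOYMENT_FAILURE"]},
)

# 2. Notify on failure via a CloudWatch Events rule targeting an SNS topic.
events.put_rule(
    Name="codedeploy-deployment-failed",
    EventPattern=json.dumps({
        "source": ["aws.codedeploy"],
        "detail-type": ["CodeDeploy Deployment State-change Notification"],
        "detail": {"state": ["FAILURE"]},
    }),
)
events.put_targets(
    Rule="codedeploy-deployment-failed",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:123456789012:deploy-alerts"}],
)
```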
-
A large enterprise is deploying a web application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application stores data in an Amazon RDS Oracle DB instance and Amazon DynamoDB. There are separate environments for development, testing, and production.
What is the MOST secure and flexible way to obtain password credentials during deployment?
- Retrieve an access key from an AWS Systems Manager SecureString parameter to access AWS services. Retrieve the database credentials from a Systems Manager SecureString parameter.
- Launch the EC2 instances with an EC2 IAM role to access AWS services. Retrieve the database credentials from AWS Secrets Manager.
- Retrieve an access key from an AWS Systems Manager plaintext parameter to access AWS services. Retrieve the database credentials from a Systems Manager SecureString parameter.
- Launch the EC2 instances with an EC2 IAM role to access AWS services. Store the database passwords in an encrypted config file with the application artifacts.
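With an EC2 instance profile providing AWS access, the database credential lookup in the second option reduces to a single call; the secret name and its JSON layout are assumptions.

```python
import json

import boto3

# Credentials for the boto3 client come from the EC2 instance profile role,
# so no access keys are stored with the application.
secrets = boto3.client("secretsmanager")

secret = secrets.get_secret_value(SecretId="prod/oracle/app-user")
credentials = json.loads(secret["SecretString"])  # e.g. {"username": ..., "password": ...}
db_user, db_password = credentials["username"], credentials["password"]
```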
-
A DevOps Engineer is designing a deployment strategy for a web application. The application will use an Auto Scaling group to launch Amazon EC2 instances using an AMI. The same infrastructure will be deployed in multiple environments (development, test, and quality assurance). The deployment strategy should meet the following requirements:
• Minimize the startup time for the instance
• Allow the same AMI to work in multiple environments
• Store secrets for multiple environments securely
How should this be accomplished?
- Preconfigure the AMI using an AWS Lambda function that launches an Amazon EC2 instance, and then runs a script to install the software and create the AMI. Configure an Auto Scaling lifecycle hook to determine which environment the instance is launched in, and, based on that finding, run a configuration script. Save the secrets on an .ini file and store them in Amazon S3. Retrieve the secrets using a configuration script in EC2 user data.
- Preconfigure the AMI by installing all the software using AWS Systems Manager automation and configure Auto Scaling to tag the instances at launch with their specific environment. Then use a bootstrap script in user data to read the tags and configure settings for the environment. Use the AWS Systems Manager Parameter Store to store the secrets using AWS KMS.
- Use a standard AMI from the AWS Marketplace. Configure Auto Scaling to detect the current environment. Install the software using a script in Amazon EC2 user data. Use AWS Secrets Manager to store the credentials for all environments.
- Preconfigure the AMI by installing all the software and configuration for all environments. Configure Auto Scaling to tag the instances at launch with their environment. Use the Amazon EC2 user data to trigger an AWS Lambda function that reads the instance ID and then reconfigures the setting for the proper environment. Use the AWS Systems Manager Parameter Store to store the secrets using AWS KMS.
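A sketch of the bootstrap logic described in the second option: at launch, read the instance's Environment tag, then pull that environment's secret from a SecureString parameter. The metadata endpoint usage (IMDSv1 shown for brevity), the tag key, and the parameter path are assumptions.

```python
import urllib.request

import boto3

# Instance metadata provides this instance's own ID.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

ec2 = boto3.client("ec2")
tags = ec2.describe_tags(Filters=[
    {"Name": "resource-id", "Values": [instance_id]},
    {"Name": "key", "Values": ["Environment"]},
])["Tags"]
environment = tags[0]["Value"] if tags else "development"

# SecureString parameters are decrypted with the KMS key they were stored under.
ssm = boto3.client("ssm")
db_password = ssm.get_parameter(
    Name=f"/{environment}/app/db-password",
    WithDecryption=True,
)["Parameter"]["Value"]
```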
-
A Developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group, and also use Elastic Load Balancing for load balancing.
Occasionally, some application servers are terminated after failing ELB HTTP health checks. The Developer would like to perform a root cause analysis on the issue, but the servers are terminated before the application logs can be accessed.
How can log collection be automated?
- Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch Alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that executes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
- Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create a Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that executes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
- Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that executes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
- Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch Events rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that executes a SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
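A sketch of the Lambda function in the last option: it receives the EC2 Instance-terminate Lifecycle Action event, runs a log-collection script on the instance through SSM Run Command, and then completes the lifecycle action. The S3 bucket, the script contents, and the simplification of not waiting for the command to finish are assumptions.

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")


def handler(event, context):
    """Triggered by a CloudWatch Events rule for
    'EC2 Instance-terminate Lifecycle Action'."""
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Run a script on the instance while it is held in Terminating:Wait.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": [
            "aws s3 cp /var/log/app/ s3://central-instance-logs/$(hostname)/ --recursive"
        ]},
    )

    # In practice you would wait for the command to finish (or complete the
    # lifecycle action from a second function) before releasing the instance.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```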
-
A company uses AWS Elastic Beanstalk, Amazon S3, and Amazon DynamoDB to develop a web application. The web application has increased dramatically in popularity, resulting in unpredictable spikes in traffic. A DevOps Engineer has noted that 90% of the requests are duplicate read requests to the DynamoDB table and the images stored in an S3 bucket.
How can the Engineer improve the performance of the website?
- Use Amazon ElastiCache for Redis to cache repeated read requests to DynamoDB and AWS Elemental MediaStore to cache images stored in S3.
- Use Amazon ElastiCache for Memcached to cache repeated read requests to DynamoDB and Amazon EFS to cache images stored in S3.
- Use DynamoDB Accelerator to cache repeated read requests to DynamoDB and Amazon CloudFront to cache images stored in S3.
- Use DynamoDB Streams to cache repeated read requests to DynamoDB and API Gateway to cache images stored in S3.
-
A company is creating a software solution that executes a specific parallel-processing mechanism. The software can scale to tens of servers in some special scenarios. This solution uses a proprietary library that is license-based, requiring that each individual server have a single, dedicated license installed. The company has 200 licenses and is planning to run 200 server nodes concurrently at most.
The company has requested the following features:
• A mechanism to automate the use of the licenses at scale.
• Creation of a dashboard to use in the future to verify which licenses are available at any moment.
What is the MOST effective way to accomplish these requirements?
- Upload the licenses to a private Amazon S3 bucket. Create an AWS CloudFormation template with a Mappings section for the licenses. In the template, create an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the Mappings section. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
- Upload the licenses to an Amazon DynamoDB table. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the DynamoDB table. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
- Upload the licenses to a private Amazon S3 bucket. Populate an Amazon SQS queue with the list of licenses stored in S3. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script acquire an available license from SQS. Create an Auto Scaling lifecycle hook, then use it to put the license back in SQS after the instance is terminated.
- Upload the licenses to an Amazon DynamoDB table. Create an AWS CLI script to launch the servers by using the parameter –count, with min:max instances to launch. In the user data script, acquire an available license from the DynamoDB table. Monitor each instance and, in case of failure, replace the instance, then manually update the DynamoDB table.
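A sketch of the SQS-based license pool from the third option: each instance's user data pulls one license message off the queue at boot, and the lifecycle-hook handler puts it back at termination. The queue URL and message handling are simplified placeholders. The queue's ApproximateNumberOfMessages attribute also gives a dashboard a ready-made count of available licenses.

```python
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/license-pool"
sqs = boto3.client("sqs")


def acquire_license():
    """Called from instance user data at boot."""
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    message = messages["Messages"][0]  # assumes a license is available
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
    return message["Body"]             # e.g. the license key or its S3 key


def release_license(license_id):
    """Called from the Auto Scaling lifecycle-hook handler on termination."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=license_id)
```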
-
A company has developed a static website hosted on an Amazon S3 bucket. The website is deployed using AWS CloudFormation. The CloudFormation template defines an S3 bucket and a custom resource that copies content into the bucket from a source location.
The company has decided that it needs to move the website to a new location, so the existing CloudFormation stack must be deleted and re-created. However, CloudFormation reports that the stack could not be deleted cleanly.
What is the MOST likely cause and how can the DevOps Engineer mitigate this problem for this and future versions of the website?
- Deletion has failed because the S3 bucket has an active website configuration. Modify the CloudFormation template to remove the WebsiteConfiguration property from the S3 bucket resource.
- Deletion has failed because the S3 bucket is not empty. Modify the custom resource’s AWS Lambda function code to recursively empty the bucket when RequestType is Delete.
- Deletion has failed because the custom resource does not define a deletion policy. Add a DeletionPolicy property to the custom resource definition with a value of RemoveOnDeletion.
- Deletion has failed because the S3 bucket is not empty. Modify the S3 bucket resource in the CloudFormation template to add a DeletionPolicy property with a value of Empty.
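A sketch of the fix in the second option: inside the custom resource's Lambda function, empty the bucket when the stack is being deleted before signaling success. The cfnresponse helper shown is the one available to inline (ZipFile) Python custom resources, and the DestinationBucket property name is an assumption.

```python
import boto3
import cfnresponse  # available to inline (ZipFile) CloudFormation custom resources

s3 = boto3.resource("s3")


def handler(event, context):
    try:
        bucket_name = event["ResourceProperties"]["DestinationBucket"]  # assumed property
        if event["RequestType"] == "Delete":
            bucket = s3.Bucket(bucket_name)
            bucket.objects.all().delete()
            bucket.object_versions.all().delete()  # in case versioning was ever enabled
        else:
            copy_site_content(bucket_name)         # existing Create/Update behavior
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})


def copy_site_content(bucket_name):
    # Placeholder for the existing logic that copies the website content.
    pass
```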
-
A company is deploying a new mobile game on AWS for its customers around the world. The Development team uses AWS Code services and must meet the following requirements:
– Clients need to send/receive real-time playing data from the backend frequently and with minimal latency
– Game data must meet the data residency requirement
Which strategy can a DevOps Engineer implement to meet their needs?
- Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build and deployment pipeline. A successful deployment in one region invokes an AWS Lambda function to copy the build artifacts to an Amazon S3 bucket in another region. After the artifact is copied, it triggers a deployment pipeline in the new region.
- Deploy the backend application to multiple Availability Zones in a single region. Create an Amazon CloudFront distribution to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline. The pipeline deploys the backend application to all Availability Zones.
- Deploy the backend application to multiple regions. Use AWS Direct Connect to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline continues to deploy the artifact to another region.
- Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline invokes the pipeline in another region and passes the build artifact location. The pipeline uses the artifact location and deploys applications in the new region.
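A sketch of the hand-off described in the last option: a small function at the end of the first region's pipeline copies the build artifact to the second region's bucket and starts that region's pipeline. Bucket names, object key, pipeline name, and regions are placeholders.

```python
import boto3

SECONDARY_REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=SECONDARY_REGION)
codepipeline = boto3.client("codepipeline", region_name=SECONDARY_REGION)


def handler(event, context):
    # Copy the finished build artifact into the secondary region's artifact bucket.
    s3.copy_object(
        Bucket="game-backend-artifacts-eu-west-1",
        Key="builds/backend.zip",
        CopySource={"Bucket": "game-backend-artifacts-us-east-1",
                    "Key": "builds/backend.zip"},
    )
    # Kick off the deployment pipeline in the secondary region; its source
    # stage reads the artifact from the bucket above.
    codepipeline.start_pipeline_execution(name="game-backend-deploy")
```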
-
A Development team is working on a serverless application in AWS. To quickly identify and remediate potential production issues, the team decides to roll out changes to a small number of users as a test before the full release. The DevOps Engineer must develop a solution to minimize downtime and impact.
Which of the following solutions should be used to meet the requirements? (Choose two.)
- Create an Application Load Balancer with two target groups. Set up the Application Load Balancer for Amazon API Gateway private integration. Associate one target group to the current version and the other target group to the new version. Configure API Gateway to route 10% of incoming traffic to the new version. As the new version becomes stable, configure API Gateway to send all traffic to the new version and detach the old version from the load balancer.
- Create an alias for an AWS Lambda function pointing to both the current and new versions. Configure the alias to route 10% of incoming traffic to the new version. As the new version is considered stable, update the alias to route all traffic to the new version.
- Create a failover record set in AWS Route 53 pointing to the AWS Lambda endpoints for the old and new versions. Configure Route 53 to route 10% of incoming traffic to the new version. As the new version becomes stable, update the DNS record to route all traffic to the new version.
- Create an ELB Network Load Balancer with two target groups. Set up the Network Load Balancer for Amazon API Gateway private integration. Associate one target group with the current version and the other target group with the new version. Configure the load balancer to route 10% of incoming traffic to the new version. As the new version becomes stable, detach the old version from the load balancer.
- In Amazon API Gateway, create a canary release deployment by adding canary settings to the stage of a regular deployment. Configure API Gateway to route 10% of the incoming traffic to the canary release. As the canary release is considered stable, promote it to a production release.
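A sketch of the weighted Lambda alias approach from the second option: route 10% of invocations to the new version, then promote it once it proves stable. The function name, alias, and version numbers are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Send 10% of traffic on the "live" alias to version 2 while version 1
# continues to serve the remaining 90%.
lambda_client.update_alias(
    FunctionName="orders-service",
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)

# Once the new version is considered stable, shift all traffic to it.
lambda_client.update_alias(
    FunctionName="orders-service",
    Name="live",
    FunctionVersion="2",
    RoutingConfig={"AdditionalVersionWeights": {}},
)
```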