DOP-C01 : AWS DevOps Engineer Professional : Part 22



  1. Your CTO is very worried about the security of your AWS account. How best can you prevent hackers from completely hijacking your account?

    • Use a short but complex password on the root account and any administrator accounts.
    • Use AWS IAM Geo-Lock and disallow anyone from logging in except for in your city.
    • Use MFA on all users and accounts, especially on the root account.
    • Don’t write down or remember the root account password after creating the AWS account.
    Explanation:

    For increased security, we recommend that you configure multi-factor authentication (MFA) to help protect your AWS resources. MFA adds extra security because it requires users to enter a unique authentication code from an approved authentication device or SMS text message when they access AWS websites or services.

  2. If you are configuring an AWS Elastic Beanstalk worker tier for easy debugging when there are problems finishing queue jobs, what should you configure?

    • Configure Rolling Deployments
    • Configure Enhanced Health Reporting
    • Configure Blue-Green Deployments
    • Configure a Dead Letter Queue
    Explanation:

    Elastic Beanstalk worker environments support Amazon Simple Queue Service (SQS) dead letter queues. A dead letter queue is a queue where other (source) queues can send messages that for some reason could not be successfully processed. A primary benefit of using a dead letter queue is the ability to sideline and isolate the unsuccessfully processed messages. You can then analyze any messages sent to the dead letter queue to try to determine why they were not successfully processed.
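    The wiring for this can be sketched as the SQS redrive policy a worker environment's source queue carries. A minimal sketch, assuming a hypothetical queue ARN and receive count (neither appears in the question):

```python
import json

# Sketch: the redrive policy a worker's source queue would carry to route
# unprocessable messages to a dead letter queue. The ARN is a placeholder.
def build_redrive_policy(dlq_arn: str, max_receive_count: int = 5) -> dict:
    """Return SQS queue attributes that enable a dead letter queue.

    After `max_receive_count` failed receives, SQS moves the message to
    the dead letter queue instead of redelivering it to the worker.
    """
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": max_receive_count,
        })
    }

attrs = build_redrive_policy("arn:aws:sqs:us-east-1:123456789012:worker-dlq")
# With real credentials this dict would be passed to
# sqs.set_queue_attributes(QueueUrl=..., Attributes=attrs).
print(json.loads(attrs["RedrivePolicy"])["maxReceiveCount"])  # → 5
```

    Messages that land in the dead letter queue can then be inspected at leisure, which is the debugging benefit the explanation describes.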

  3. You have a high security requirement for your AWS accounts.

    What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account?

    • Subscription to AWS Config via an SNS Topic. Use a Lambda Function to perform in-flight analysis and reactivity to changes as they occur.
    • Global AWS CloudTrail setup delivering to S3 with an SNS subscription to the delivery notifications, pushing into a Lambda, which inserts records into an ELK stack for analysis.
    • Use a CloudWatch Rule ScheduleExpression to periodically analyze IAM credential logs. Push the deltas for events into an ELK stack and perform ad-hoc analysis there.
    • CloudWatch Events Rules which trigger based on all AWS API calls, submitting all events to an AWS Kinesis Stream for arbitrary downstream analysis.
  4. What method should you use to author automation if you want a script to wait for a CloudFormation stack to finish deploying?

    • Event subscription using SQS.
    • Event subscription using SNS.
    • Poll using <code>ListStacks</code> / <code>list-stacks</code>
    • Poll using <code>GetStackStatus</code> / <code>get-stack-status</code>
    Explanation:

    Event-driven systems are good for IFTTT-style logic, but only polling will make a script wait for completion. ListStacks / list-stacks is a real API method; GetStackStatus / get-stack-status is not.
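    A minimal sketch of the polling approach, with the status lookup injected so the loop can run without AWS credentials (with boto3 it would wrap `describe_stacks(StackName=...)["Stacks"][0]["StackStatus"]`):

```python
import time

# Terminal statuses at which a script can stop waiting (illustrative subset).
TERMINAL = {"CREATE_COMPLETE", "CREATE_FAILED",
            "ROLLBACK_COMPLETE", "ROLLBACK_FAILED"}

def wait_for_stack(get_status, interval=0.0, max_polls=100):
    """Poll until the stack reports a terminal status, then return it."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)  # back off between polls
    raise TimeoutError("stack did not reach a terminal status")

# Simulated status sequence standing in for a real stack creation.
statuses = iter(["CREATE_IN_PROGRESS", "CREATE_IN_PROGRESS", "CREATE_COMPLETE"])
print(wait_for_stack(lambda: next(statuses)))  # → CREATE_COMPLETE
```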

  5. Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time.

    What is the simplest and cheapest way to reduce costs and scale with spikes like this?

    • Create an S3 bucket and asynchronously replicate common request responses into S3 objects. When a request comes in for a precomputed response, redirect to AWS S3.
    • Create another ELB and Auto Scaling Group layer mounted on top of the other system, adding a tier to the system. Serve most read requests out of the top layer.
    • Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviors to proxy cache requests which can be served late.
    • Create a Memcached cluster in AWS ElastiCache. Create cache logic to serve requests which can be served late from the in-memory cache for increased performance.
    Explanation:

    CloudFront is ideal for scenarios in which entire requests can be served out of a cache and usage patterns involve heavy reads and spikiness in demand. A cache behavior is the set of rules you configure for a given URL pattern based on file extensions, file names, or any portion of a URL path on your website (e.g., *.jpg). You can configure multiple cache behaviors for your web distribution. Amazon CloudFront will match incoming viewer requests with your list of URL patterns, and if there is a match, the service will honor the cache behavior you configure for that URL pattern. Each cache behavior can include the following Amazon CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, cookies, and trusted signers for private content.
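    As a sketch only, one cache behavior for this scenario might map to properties like the following; the path pattern, origin ID, and TTL are invented for the example, not taken from the question:

```python
# Hypothetical CloudFront cache behavior, expressed as the
# CloudFormation-style properties it would map to.
cache_behavior = {
    "PathPattern": "/articles/*",         # the heavy-read content to cache
    "TargetOriginId": "news-elb-origin",  # the ELB registered as the origin
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 60,                         # serve cached copies for at least 60s
    "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
}
print(cache_behavior["PathPattern"])  # → /articles/*
```

    During a traffic spike, matching read requests are then answered from CloudFront's edge caches instead of reaching the ELB and Auto Scaling Group at all.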

  6. You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?

    • Kinesis Firehose + RDS
    • Kinesis Firehose + RedShift
    • EMR using Hive
    • EMR running Apache Spark
    Explanation:

    Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily.

  7. You are building a game high score table in DynamoDB. You will store each user’s highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game.

    What is the best DynamoDB key structure?

    • HighestScore as the hash / only key.
    • GameID as the hash key, HighestScore as the range key.
    • GameID as the hash / only key.
    • GameID as the range / only key.
    Explanation:

    Since access and storage for games is uniform, and you need to have ordering within each game for the scores (to access the highest value), your hash (partition) key should be the GameID, and there should be a range key for HighestScore.
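    The reasoning can be sketched with an in-memory stand-in for the table: items in one partition are kept in range-key order, so the top score is a single descending-order query. Game and user names below are made up; with boto3 the equivalent call is `table.query(KeyConditionExpression=Key("GameID").eq(game), ScanIndexForward=False, Limit=1)`.

```python
from collections import defaultdict

# Simulated table: partition (GameID) -> items sorted by range key.
table = defaultdict(list)

def put_score(game_id, user, score):
    table[game_id].append({"HighestScore": score, "UserId": user})
    # DynamoDB keeps items in a partition sorted by the range key.
    table[game_id].sort(key=lambda item: item["HighestScore"])

def top_score(game_id):
    # Equivalent of ScanIndexForward=False, Limit=1: read from the high end.
    return table[game_id][-1]

put_score("chess", "alice", 1200)
put_score("chess", "bob", 1450)
put_score("go", "carol", 900)
print(top_score("chess")["UserId"])  # → bob
```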

  8. What is server immutability?

    • Not updating a server after creation.
    • The ability to change server counts.
    • Updating a server after creation.
    • The inability to change server counts.
    Explanation:

    … disposable upgrades offer a simpler way to know if your application has unknown dependencies. The underlying EC2 instance usage is considered temporary or ephemeral for the period of deployment until the current release is active. During a new release, a new set of EC2 instances is rolled out and the older instances are terminated. This type of upgrade technique is more common in an immutable infrastructure.

  9. You run a clustered NoSQL database on AWS EC2 using AWS EBS. You need to reduce latency for database response times. Performance is the most important concern, not availability. You did not perform the initial setup; someone without much AWS knowledge did, so you are not sure if they configured everything optimally. Which of the following is NOT likely to be an issue contributing to increased latency?

    • The EC2 instances are not EBS Optimized.
    • The database and requesting system are both in the wrong Availability Zone.
    • The EBS Volumes are not using PIOPS.
    • The database is not running in a placement group.
    Explanation:
    For the highest possible performance, all instances in a clustered database like this one should be in a single Availability Zone in a placement group, using EBS optimized instances, and using PIOPS SSD EBS Volumes. The particular Availability Zone the system is running in should not be important, as long as it is the same as the requesting resources.
  10. Fill in the blanks: __________ helps us track AWS API calls and transitions, _________ helps to understand what resources we have now, and ________ allows auditing credentials and logins.

    • AWS Config, CloudTrail, IAM Credential Reports
    • CloudTrail, IAM Credential Reports, AWS Config
    • CloudTrail, AWS Config, IAM Credential Reports
    • AWS Config, IAM Credential Reports, CloudTrail
    Explanation:
    You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This includes calls made by using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services.
  11. You are creating an application which stores extremely sensitive financial information. All information in the system must be encrypted at rest and in transit.

    Which of these is a violation of this policy?

    • ELB SSL termination.
    • ELB Using Proxy Protocol v1.
    • CloudFront Viewer Protocol Policy set to HTTPS redirection.
    • Telling S3 to use AES256 on the server-side.
    Explanation:

    Terminating SSL at the ELB means traffic continues to the back-end instances over plain HTTP, removing the “S” for “Secure” in HTTPS. This violates the “encryption in transit” requirement in the scenario.

  12. You need to scale an RDS deployment. You are operating at 10% writes and 90% reads, based on your logging. How best can you scale this in a simple way?

    • Create a second master RDS instance and peer the RDS groups.
    • Cache all the database responses on the read side with CloudFront.
    • Create read replicas for RDS since the load is mostly reads.
    • Create a Multi-AZ RDS install and route read traffic to the standby.
    Explanation:

    The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To service read-only traffic, you should use a Read Replica. For more information, see Working with PostgreSQL, MySQL, and MariaDB Read Replicas.
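    In application code, the read/write split might look like the following sketch; the endpoint hostnames are hypothetical placeholders, not real RDS endpoints:

```python
import random

# Hypothetical endpoints: one writable master, several Read Replicas.
WRITER = "mydb.abc123.us-east-1.rds.amazonaws.com"
READERS = [
    "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.abc123.us-east-1.rds.amazonaws.com",
]

def endpoint_for(query: str) -> str:
    """Send writes to the master; spread reads over the replicas."""
    verb = query.lstrip().split()[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE"):
        return WRITER
    return random.choice(READERS)

print(endpoint_for("UPDATE orders SET paid = 1") == WRITER)  # → True
```

    With a 90% read workload, most traffic then lands on the replicas, which can be added or removed as load changes.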

  13. When thinking of AWS Elastic Beanstalk, the ‘Swap Environment URLs’ feature most directly aids in what?

    • Immutable Rolling Deployments
    • Mutable Rolling Deployments
    • Canary Deployments
    • Blue-Green Deployments
    Explanation:

    Simply upload the new version of your application and let your deployment service (AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks) deploy a new version (green). To cut over to the new version, you simply replace the ELB URLs in your DNS records. Elastic Beanstalk has a Swap Environment URLs feature to facilitate a simpler cutover process.

  14. You need to create a simple, holistic check for your system’s general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with?

    • Route53 Health Checks
    • CloudWatch Health Checks
    • AWS ELB Health Checks
    • EC2 Health Checks
    Explanation:

    You can create a health check that will run in perpetuity using Route53, in one API call, which will ping your service via HTTP every 10 or 30 seconds. Amazon Route 53 must be able to establish a TCP connection with the endpoint within four seconds. In addition, the endpoint must respond with an HTTP status code of 200 or greater and less than 400 within two seconds after connecting.
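    Those criteria can be restated as a small predicate (a local sketch of the rules above, not an AWS API):

```python
def passes_route53_http_check(status_code: int,
                              connect_secs: float,
                              response_secs: float) -> bool:
    """Apply Route53's HTTP health criteria: TCP connect within 4 seconds,
    response within 2 seconds of connecting, status in [200, 400)."""
    return (
        connect_secs <= 4.0
        and response_secs <= 2.0
        and 200 <= status_code < 400
    )

print(passes_route53_http_check(200, 1.0, 0.5))  # → True
print(passes_route53_http_check(500, 1.0, 0.5))  # → False
```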

  15. What is the scope of an EC2 security group?

    • Availability Zone
    • Placement Group
    • Region
    • VPC
    Explanation:

    A security group is tied to a region and can be assigned only to instances in the same region. You can’t enable an instance to communicate with an instance outside its region using security group rules.

  16. You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost?

    • Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
    • Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
    • Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
    • Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
    Explanation:

    Because the instance will always be online during the day, in a predictable manner, and there is a sequence of batch jobs to perform at any time, we should run the batch jobs when the accounting software is off. We can achieve Heavy Utilization by alternating these times, so we should purchase the reservation as such, as this represents the lowest cost. There is no such thing as a “Full” utilization level for EC2 Reserved Instance purchases.
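    The cost reasoning can be illustrated with made-up rates; none of the numbers below are real AWS prices, only an arithmetic sketch of why near-continuous use favors a Heavy Utilization reservation:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def yearly_cost(upfront: float, hourly_rate: float, hours_used: int) -> float:
    """Total yearly cost: one-time upfront fee plus metered hourly usage."""
    return upfront + hourly_rate * hours_used

# Hypothetical rates: on-demand has no upfront fee but a high hourly rate;
# a Heavy Utilization RI trades an upfront fee for a much lower hourly rate.
on_demand = yearly_cost(upfront=0, hourly_rate=0.10, hours_used=HOURS_PER_YEAR)
heavy_ri = yearly_cost(upfront=300, hourly_rate=0.03, hours_used=HOURS_PER_YEAR)
print(heavy_ri < on_demand)  # → True
```

    The key point is that the accounting software by day plus batch jobs by night keeps the instance busy nearly all 8,760 hours, which is exactly the usage profile a Heavy Utilization reservation prices for.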

  17. Which EBS volume type is best for high performance NoSQL cluster deployments?

    • io1
    • gp1
    • standard
    • gp2
    Explanation:

    io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for: Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, like large database workloads, such as MongoDB.

  18. You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB.

    Which application code deployment method should you use?

    • SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes.
    • Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration.
    • Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch.
    • Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.
    Explanation:
    The bootstrapping process can be slower if you have a complex application or multiple applications to install. Managing a fleet of applications with several build tools and dependencies can be a challenging task during rollouts. Furthermore, your deployment service should be designed to do faster rollouts to take advantage of Auto Scaling.
  19. You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?

    • AWS Elasticsearch Service
    • AWS RedShift
    • AWS EMR
    • AWS DynamoDB
    Explanation:

    Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics.

  20. Which status represents a failure state in AWS CloudFormation?

    • <code>UPDATE_COMPLETE_CLEANUP_IN_PROGRESS</code>
    • <code>DELETE_COMPLETE_WITH_ARTIFACTS</code>
    • <code>ROLLBACK_IN_PROGRESS</code>
    • <code>ROLLBACK_FAILED</code>
    Explanation:

    <code>ROLLBACK_IN_PROGRESS</code> does not mean CloudFormation failed – it can simply mean the template specified a configuration that could not be created, and CloudFormation is successfully rolling it back.
    <code>ROLLBACK_FAILED</code> means CloudFormation itself failed to carry out a valid operation – it could not complete the rollback of the changes it attempted to introduce.
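    In automation, the distinction might be encoded like this sketch; the set of failure states shown is illustrative, not an exhaustive list of CloudFormation statuses:

```python
# Statuses that leave a stack stuck and needing manual intervention,
# as opposed to statuses where CloudFormation is still recovering on its own.
FAILURE_STATES = {
    "ROLLBACK_FAILED",
    "CREATE_FAILED",
    "DELETE_FAILED",
    "UPDATE_ROLLBACK_FAILED",
}

def needs_intervention(stack_status: str) -> bool:
    """True when the stack cannot recover without operator action."""
    return stack_status in FAILURE_STATES

print(needs_intervention("ROLLBACK_IN_PROGRESS"))  # → False
print(needs_intervention("ROLLBACK_FAILED"))       # → True
```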