DOP-C01 : AWS DevOps Engineer Professional : Part 23

  1. What is the scope of an EC2 EIP?

    • Placement Group
    • Availability Zone
    • Region
    • VPC
    Explanation:

    An Elastic IP address is tied to a region and can be associated only with an instance in the same region.

  2. For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?

    • Detaching
    • Terminating:Wait
    • Pending
    • EnteringStandby
    Explanation:

    You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service. When you move an InService instance toward Standby, the first transition state it enters is EnteringStandby.

  3. You want to pass queue messages that are 1GB each.

    How should you achieve this?

    • Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis Stream in SQS.
    • Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.
    • Use SQS’s support for message partitioning and multi-part uploads on Amazon S3.
    • Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.
    Explanation:

    You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and retrieving messages with a message size of up to 2 GB. To manage Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java.

  4. You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spikey, geographically distributed, high-scale, and unpredictable.

    How should you design this system?

    • Use a large RedShift cluster to perform the analysis, and a fleet of Lambdas to perform record inserts into the RedShift tables. Lambda will scale rapidly enough for the traffic spikes.
    • Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as querystring GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3.
    • Use API Gateway invoking Lambdas which PutRecords into Kinesis, and EMR running Spark performing GetRecords on Kinesis to scale with spikes. Spark on EMR outputs the analysis to S3, which are sent out via email.
    • Use AWS Elasticsearch service and EC2 Auto Scaling groups. The Autoscaling groups scale based on click throughput and stream into the Elasticsearch domain, which is also scalable. Use Kibana to generate reports periodically.
    Explanation:

    Because the reports are only needed in weekly batches, anything built around streaming is a waste of money. CloudFront is a gigabit-scale HTTP(S) global request distribution service, so it can handle the scale, geographic spread, spikes, and unpredictability. The access logs delivered to S3 contain the querystring GET data and work well for batch analysis with EMR, with the resulting reports sent by email. Per the CloudFront FAQ, if you expect usage peaks higher than 10 Gbps or 15,000 RPS, you can request higher limits and AWS will add more capacity to your account within two business days.

  5. Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once. You have two of them per VPC. On your new AWS account, your attempt to create a Development environment failed, after successfully creating Staging and Production environments in the same region. What happened?

    • You didn’t choose the Development version of the AMI you are using.
    • You didn’t set the Development flag to true when deploying EC2 instances.
    • You hit the soft limit of 5 EIPs per region and requested a 6th.
    • You hit the soft limit of 2 VPCs per region and requested a 3rd.
    Explanation:
    There is a soft limit of 5 EIPs per Region for VPC on new accounts. The third environment could not allocate the 6th EIP.
  6. To monitor API calls against our AWS account by different users and entities, we can use ________ to create a history of calls in bulk for later review, and use ___________ for reacting to AWS API calls in real-time.

    • AWS Config; AWS Inspector
    • AWS CloudTrail; AWS Config
    • AWS CloudTrail; CloudWatch Events
    • AWS Config; AWS Lambda
    Explanation:
    CloudTrail collects a history of API calls in bulk for later review, while CloudWatch Events enables real-time reaction to AWS API calls through its Rules interface.
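
    For illustration, a minimal CloudFormation sketch of a CloudWatch Events rule that reacts to API calls recorded by CloudTrail (the event source, event name, and the AlertFunction Lambda target are assumptions for the example):

      Resources:
        ApiCallRule:
          Type: AWS::Events::Rule
          Properties:
            EventPattern:
              source:
                - aws.ec2
              detail-type:
                - AWS API Call via CloudTrail
              detail:
                eventName:
                  - RunInstances
            Targets:
              - Arn: !GetAtt AlertFunction.Arn   # hypothetical Lambda; a matching AWS::Lambda::Permission is also needed in practice
                Id: AlertLambdaTarget
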
  7. How does the Amazon RDS Multi-Availability Zone (Multi-AZ) model work?

    • A second, standby database is deployed and maintained in a different availability zone from master, using synchronous replication.
    • A second, standby database is deployed and maintained in a different availability zone from master using asynchronous replication.
    • A second, standby database is deployed and maintained in a different region from master using asynchronous replication.
    • A second, standby database is deployed and maintained in a different region from master using synchronous replication.
    Explanation:

    In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.

  8. Which of these is not an intrinsic function in AWS CloudFormation?

    • Fn::Equals
    • Fn::If
    • Fn::Not
    • Fn::Parse
    Explanation:

    The intrinsic functions include Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, and Fn::Select, among others. Fn::Parse is not an intrinsic function.
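
    For illustration, a minimal template sketch using Fn::Equals and Fn::If in short form (the parameter name, AMI ID, and instance types are assumptions for the example):

      Parameters:
        EnvType:
          Type: String
          Default: dev
      Conditions:
        IsProd: !Equals [!Ref EnvType, prod]               # Fn::Equals in short form
      Resources:
        WebServer:
          Type: AWS::EC2::Instance
          Properties:
            ImageId: ami-12345678                          # placeholder AMI ID
            InstanceType: !If [IsProd, m5.large, t3.micro] # Fn::If in short form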

  9. Which one of the following is a restriction of AWS EBS Snapshots?

    • Snapshot restorations are restricted to the region in which the snapshots are created.
    • You cannot share unencrypted snapshots.
    • To share a snapshot with a user in other region the snapshot has to be created in that region first.
    • You cannot share a snapshot containing sensitive data such as an AWS Access Key ID or AWS Secret Access Key.
    Explanation:

    Snapshots shared with other users are usable in full by the recipient, including but not limited to the ability to create volumes and new snapshots from them. For this reason, a snapshot containing sensitive data such as an AWS Access Key ID or AWS Secret Access Key should not be shared.

  10. What option below is the geographic limit of an EC2 security group?

    • Security groups are global.
    • They are confined to Placement Groups.
    • They are confined to Regions.
    • They are confined to Availability Zones.
    Explanation:

    A security group is tied to a region and can be assigned only to instances in the same region.
    You can’t enable an instance to communicate with an instance outside its region using security group rules. Traffic from an instance in another region is seen as WAN bandwidth.

  11. If designing a single playbook to run across multiple Linux distributions that have distribution-specific commands, what would be the best method?

    • Enable fact gathering and use the `when’ conditional to match the distribution to the task.
    • This is not possible, a separate playbook for each target Linux distribution is required.
    • Use `ignore_errors: true’ in the tasks.
    • Use the `shell’ module to write your own checks for each command that is ran.
    Explanation:

    Ansible provides a way to run a task only when a condition is met, using the `when' conditional. With fact gathering enabled, the play has access to the distribution name of the Linux system, so tasks can be tailored to a specific distribution and run only when the condition is met, e.g. `when: ansible_os_family == "Debian"'.
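
    For illustration, a minimal sketch of such a play (the package names are assumptions for the example):

      ---
      - hosts: all
        gather_facts: true                    # required so ansible_os_family is populated
        tasks:
          - name: Install Apache on Debian-based systems
            apt:
              name: apache2
              state: present
            when: ansible_os_family == "Debian"

          - name: Install Apache on RedHat-based systems
            yum:
              name: httpd
              state: present
            when: ansible_os_family == "RedHat"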

  12. Which is the proper syntax for referencing a variable’s value in an Ansible task?

    • ${variable_name}
    • { variable_name }
    • “{{ variable_name }}”
    • @variable_name
    Explanation:
    We use the variable’s name to reference the variable which we encapsulate in curly brackets `{{ }}’; however, the YAML syntax dictates that a string beginning with a curly bracket denotes a dictionary value. To get around this, it is proper to wrap the variable declaration in quotes.
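
    A short illustrative task (the variable name and module here are assumptions for the example):

      - name: Install the requested package
        yum:
          name: "{{ package_name }}"    # quoted so YAML does not treat the leading brace as a dictionary
          state: present
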
  13. Erin has three clusters of server types, all managed by Ansible, and she needs to provision each cluster so that it is configured with the appropriate NTP server address. What is the best method Erin should use in Ansible for managing this?

    • Write a task that scans the network in the target hosts’ region for the NTP server, register the resulting address so that the next task can write the NTP configuration.
    • Break down the hosts by region in the Ansible inventory file and assign an inventory group variable the NTP address value for the respective region. The playbook can contain just the single play referencing the NTP variable from the inventory.
    • Create a playbook for each different region and store the NTP address in a variable in the play in the event the NTP server changes.
    • Create three plays, each one has the hosts for their respective regions and set the NTP server address in each task.
    Explanation:

    While all four approaches could work, the second option (group variables in the inventory) is the best choice. Ansible offers the ability to assign variables to groups of hosts in the inventory file. When the playbook is run, it uses the variables assigned to each group, even when all of the groups are targeted in a single playbook run, and the respective variables are available to the play. This is the easiest method to write, run, and maintain.
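
    A minimal sketch of this approach, using a YAML-format inventory (the group names, hostnames, NTP addresses, and template file are assumptions for the example):

      # inventory.yml
      all:
        children:
          us_east:
            hosts:
              web1.example.com:
            vars:
              ntp_server: 0.us.pool.ntp.org
          eu_west:
            hosts:
              web2.example.com:
            vars:
              ntp_server: 0.europe.pool.ntp.org

      # playbook.yml: a single play that covers every group
      - hosts: all
        tasks:
          - name: Write the NTP configuration
            template:
              src: ntp.conf.j2          # hypothetical template that references {{ ntp_server }}
              dest: /etc/ntp.conf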

  14. Which of the following is an invalid variable name in Ansible?

    • host1st_ref
    • host-first-ref
    • Host1stRef
    • host_first_ref
    Explanation:

    Variable names can contain letters, numbers, and underscores and should always start with a letter. Invalid examples include `host first ref', `host-first-ref', and `1st_host_ref'.

  15. What are the bare minimum requirements for a valid Ansible playbook?

    • The hosts, connection type, fact gathering, vars and tasks.
    • The hosts declaration and tasks
    • A YAML file with a single line containing `—‘.
    • At least one play with at least a hosts declaration
    Explanation:
    Ansible playbooks are a series of plays and must contain, at a minimum, one play. A play generally consists of the hosts to run on, a list of tasks, variables and roles, and any additional instructions, such as connection type, fact gathering, and remote username, that the tasks need to complete. The only requirement for a valid play is a hosts declaration.
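
    For reference, the smallest valid playbook could look like this (the `all' host pattern is an assumption; any host or group works):

      ---
      - hosts: all
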
  16. When running a playbook on a remote target host you receive a Python error similar to "[Errno 13] Permission denied: `/home/nick/.ansible/tmp'". What would be the most likely cause of this problem?

    • The user’s home or `.ansible’ directory on the Ansible system is not writeable by the user running the play.
    • The specified user does not exist on the remote system.
    • The user running `ansible-playbook’ must run it from their own home directory.
    • The user’s home or `.ansible’ directory on the Ansible remote host is not writeable by the user running the play
    Explanation:
    Each task that Ansible runs calls a module. When Ansible uses modules, it copies the module to the remote target system. In the error above, Ansible attempted to copy the module to the remote user's home directory and found that either the home directory or the `.ansible' directory was not writeable, so it could not continue.
  17. When Ansible’s connection state is set to `remote’, what method of communication does Ansible utilize to run commands on the remote target host?

    • SSH
    • RSH
    • PSExec
    • API call to Ansible client on host
    Explanation:
    Ansible does not require a client/server architecture and makes all remote connections over SSH. Ansible utilizes the Paramiko Python libraries for SSH when the native OpenSSH libraries do not meet the requirements. Also note that Ansible requires Python to be installed on the target host. When the target host is Windows, Ansible connects over WinRM instead.
  18. Which resource cannot be defined in an Ansible Playbook?

    • Fact Gathering State
    • Host Groups
    • Inventory File
    • Variables
    Explanation:
    Ansible’s inventory can only be specified on the command line, in the Ansible configuration file, or in environment variables; it cannot be defined inside a playbook itself.
  19. When specifying multiple variable names and values for a playbook on the command line, which of the following is the correct syntax?

    • ansible-playbook playbook.yml -e `host="foo" pkg="bar"'
    • ansible-playbook playbook.yml -e `host: "foo", pkg: "bar"'
    • ansible-playbook playbook.yml -e `host="foo"' -e `pkg="bar"'
    • ansible-playbook playbook.yml --extra-vars "host=foo", "pkg=bar"
    Explanation:
    Variables are passed in a single command line parameter, `-e' or `--extra-vars'. They are sent as a single string to the playbook and are space delimited. Because of the space delimiter, variable values must be encapsulated in quotes. Additionally, proper JSON or YAML can be passed, such as: -e '{"key": "name", "array": ["value1", "value2"]}'.
  20. Ansible provides some methods for controlling how or when a task is run. Which of the following is a valid method for controlling a task with a loop?

    • – with: <value>
    • – with_items: <value>
    • – only_when: <conditional>
    • – items: <value>
    Explanation:

    Ansible provides two methods for controlling tasks: loops and conditionals. The `with_items' keyword lets a task loop through a list of items, while the `when' keyword requires a condition to be met for the task to run. Both can be used at the same time, as in the sketch below.
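
    A brief illustrative sketch combining a loop and a conditional (the package list is an assumption for the example):

      - name: Install common packages on Debian-based hosts
        apt:
          name: "{{ item }}"
          state: present
        with_items:
          - git
          - curl
          - htop
        when: ansible_os_family == "Debian"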