DOP-C01 : AWS DevOps Engineer Professional : Part 24
-
Which difference between core modules and extra modules is not correct?
- Extra modules may one day become core modules
- Core modules are supported by the Ansible team
- Core modules are shipped by default with Ansible
- Extra modules have no support
Explanation: While extra modules are not official modules and thus not supported by the Ansible team, they are supported by their writers and the community.
-
What is the proper (best practice) way to begin a playbook?
- - hosts: all
- ...
- ###
- ---
Explanation: All YAML files can begin with '---' and end with '...' to indicate where the YAML document starts and ends. While this is optional, it is considered best practice.
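For illustration, a minimal playbook skeleton using these markers might look like this (the hosts pattern and task are just placeholders):
---
- hosts: all
  tasks:
    - name: Verify connectivity to every targeted host
      ping:
...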
-
You have a playbook that includes a task to install a package for a service, put a configuration file for that package on the system and restart the service. The playbook is then run twice in a row. What would you expect Ansible to do on the second run?
- Remove the old package and config file and reinstall and then restart the service.
- Take no action on the target host.
- Check if the package is installed, check if the file matches the source file, if not reinstall it; restart the service.
- Attempt to reinstall the package, copy the file and restart the service.
Explanation: Ansible follows an idempotence model and will not touch or change the system unless a change is warranted.
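As a sketch of the scenario in the question (the package name, file paths and service name are placeholders), a handler is one common way to express the restart so that it only fires when a task actually reports a change:
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install the package (no change if it is already installed)
      yum:
        name: nginx
        state: present
    - name: Put the configuration file in place (no change if the content already matches)
      copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: restart service
  handlers:
    - name: restart service
      service:
        name: nginx
        state: restarted
...
On the second run both tasks report 'ok' rather than 'changed', so the handler never triggers and nothing on the host is modified.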
-
Which tool will Ansible not use, even if available, to gather facts?
- facter
- lsb_release
- Ansible setup module
- ohai
Explanation: Ansible will use its own 'setup' module to gather facts for the local system. Additionally, if ohai or facter are installed, those will also be used, and their variables will be prefixed with 'ohai_' or 'facter_' respectively. 'lsb_release' is a Linux tool for determining distribution information.
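A quick way to compare these fact sources is a pair of debug tasks; 'ansible_distribution' is a standard setup-module fact, while the facter fact shown is only an example and assumes facter is installed on the target:
- name: Show a fact gathered by the setup module
  debug:
    var: ansible_distribution
- name: Show a fact contributed by facter, if it is installed
  debug:
    var: facter_kernel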
-
If a variable is assigned in the `vars’ section of a playbook, where is the proper place to override that variable?
- Inventory group var
- playbook host_vars
- role defaults
- extra vars
Explanation: In Ansible's variable precedence, the highest precedence is the extra vars option on the command line.
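As an illustration (the variable name 'pkg_name' is made up), the value set in 'vars' below loses to an extra var passed on the command line, e.g. ansible-playbook playbook.yml -e pkg_name=httpd:
---
- hosts: all
  vars:
    pkg_name: nginx   # overridden by -e pkg_name=httpd on the command line
  tasks:
    - name: Show which value won
      debug:
        var: pkg_name
...
-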
If Ansible encounters a resource that does not meet the requirements specified in the play it makes the necessary changes to the resource; however if the resource is already in the desired state Ansible will do nothing. This is an example of which methodology?
- Idempotency
- Immutability
- Convergence
- Infrastructure as Code
Explanation: Idempotency states that changes are only made if a resource does not meet the required specifications. If a change is made, it is made in place and will not break existing resources.
-
When writing plays, tasks and playbooks, Ansible fully supports which high level language to describe these?
- YAML
- Python
- XML
- JSON
Explanation: This can be a bit of a trick question. While Ansible Playbooks in this course are written in YAML, Ansible will accept plays, tasks and playbooks in JSON, as JSON is a subset of YAML. However, the preferred and fully supported method is YAML.
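Since JSON parses as YAML, the same task can be written in block style or in JSON-like flow style; a small illustrative pair (the module and package are arbitrary):
- name: Ensure nginx is installed
  yum:
    name: nginx
    state: present
- {"name": "Ensure nginx is installed", "yum": {"name": "nginx", "state": "present"}}
-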
What is the expected behavior if Ansible is called with ‘ansible-playbook -i localhost playbook.yml’?
- Ansible will attempt to read the inventory file named ‘localhost’
- Ansible will run the plays locally.
- Ansible will run the playbook on the host named ‘localhost’
- Ansible won’t run, this is invalid command line syntax
Explanation: Ansible expects an inventory filename with the '-i' option, regardless of whether the argument is also a valid hostname. To have Ansible treat 'localhost' as a literal host rather than an inventory file, a trailing comma must be appended (e.g. '-i localhost,').
-
The Ansible Inventory system allows many attributes to be defined within it. Which item below is not one of these?
- Group variables
- Host groups
- Include vars
- Children groups
Explanation: Ansible inventory files cannot reference other files for additional data. If this functionality is needed, it must be done with a script that creates a dynamic inventory.
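As an illustration of what an inventory itself can express (host groups, group variables and children groups), here is a sketch in Ansible's YAML inventory format; the group and host names are invented, and note there is no directive for including variables from other files:
all:
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
      vars:
        http_port: 8080
    production:
      children:
        webservers: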
-
When writing custom Ansible modules, which language is not supported?
- Python
- C++
- Bash
- All of the languages listed are supported
Explanation: Ansible modules can be written in any language that is executable on the target system. The only requirement is that the module can write its results as JSON output to STDOUT for Ansible to consume.
-
When specifying more than one conditional requirement for a task, what is the proper method?
- - when: foo == "hello" and bar == "world"
- - when: foo == "hello" - when: bar == "world"
- - when: foo == "hello" && bar == "world"
- - when: foo is "hello" and bar is "world"
Explanation: Ansible will allow you to stack conditionals using 'and' and 'or'. The conditions must be in the same 'when' statement, comparisons must use '==' for equals or '!=' for not equals, and 'and'/'or' must be written as such, not '&&'/'||'.
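A minimal task showing both conditions stacked in a single 'when' statement (the variable names are placeholders):
- name: Run only when both conditions hold
  command: /bin/true
  when: foo == "hello" and bar == "world"
-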
Ansible supports running a Playbook on the host directly or via SSH. How can Ansible be told to run its playbooks directly on the host?
- Setting ‘connection: local’ in the tasks that run locally.
- Specifying ‘-type local’ on the command line.
- It does not need to be specified; it is the default.
- Setting ‘connection: local’ in the Playbook.
Explanation: Ansible can be told to run locally on the command line with the '-c' option or via the 'connection: local' declaration in the playbook. The default connection method is 'remote'.
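A sketch of a play that runs directly on the control machine instead of over SSH, regardless of the inventory:
---
- hosts: all
  connection: local
  tasks:
    - name: Runs entirely on the local host, no SSH involved
      command: uname -a
...
-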
What is the main difference between calling the commands ‘ansible’ and ‘ansible-playbook’ on the command line?
- ‘ansible’ is for setting configuration and environment variables which ‘ansible-playbook’ will use when running plays.
- ‘ansible-playbook’ is for running entire Playbooks while ‘ansible’ is for calling ad-hoc commands.
- ‘ansible-playbook’ runs the playbooks by using the ‘ansible’ command to run the individual plays
- ‘ansible’ is for running individual plays and ‘ansible-playbook’ is for running the entire playbook.
Explanation: The 'ansible' command is for running Ansible ad-hoc commands remotely via SSH. 'ansible-playbook' is for running Ansible Playbook projects.
-
Which answer is the proper syntax for specifying two target hosts on the command line when running an Ansible Playbook?
- ansible-playbook -h host1.example.com -i all playbook.yml
- ansible-playbook -i host1.example.com playbook.yml
- ansible-playbook -h host1.example.com,host2.example.com playbook.yml
- ansible-playbook -i host1.example.com,host2.example.com playbook.yml
Explanation: Ansible uses the '-i' flag for accepting an inventory file or host. To allow Ansible to determine whether you are passing a host list or an inventory file, the list must be comma-separated. If a single host is specified, a trailing comma must be present.
-
What is the purpose of a Docker swarm worker node?
- scheduling services
- serving swarm mode HTTP API endpoints
- executing containers
- maintaining cluster state
Explanation: Manager nodes handle cluster management tasks: maintaining cluster state, scheduling services, and serving swarm mode HTTP API endpoints. Worker nodes are also instances of Docker Engine whose sole purpose is to execute containers. Worker nodes don't participate in the Raft distributed state, make scheduling decisions, or serve the swarm mode HTTP API.
-
You are building a Docker image with the following Dockerfile. How many layers will the resulting image have?
FROM scratch
CMD /app/hello.sh
- 2
- 4
- 1
- 3
Explanation: FROM scratch
CMD /app/hello.sh
The image contains all the layers from the base image (only one in this case, since we're building from scratch), plus a new layer with the CMD instruction, and a read-write container layer.
-
What storage driver does Docker generally recommend that you use if it is available?
- zfs
- btrfs
- aufs
- overlay
Explanation: After you have read the storage driver overview, the next step is to choose the best storage driver for your workloads. If multiple storage drivers are supported in your kernel, Docker has a prioritized list of which storage driver to use if no storage driver is explicitly configured, assuming that the prerequisites for that storage driver are met: if aufs is available, default to it, because it is the oldest storage driver. However, it is not universally available.
-
In which Docker Swarm model does the swarm manager distribute a specific number of replica tasks among the nodes based upon the scale you set in the desired state?
- distributed services
- scaled services
- replicated services
- global services
Explanation: A service is the definition of the tasks to execute on the worker nodes. It is the central structure of the swarm system and the primary root of user interaction with the swarm. When you create a service, you specify which container image to use and which commands to execute inside running containers. In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state. For global services, the swarm runs one task for the service on every available node in the cluster. A task carries a Docker container and the commands to run inside the container. It is the atomic scheduling unit of swarm. Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale. Once a task is assigned to a node, it cannot move to another node. It can only run on the assigned node or fail.
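Both models can be seen side by side in a stack file (Compose format, which is YAML); the service names and images below are placeholders:
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      mode: replicated    # the manager spreads the requested number of tasks across nodes
      replicas: 3
  node-agent:
    image: prom/node-exporter
    deploy:
      mode: global        # exactly one task on every available node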
-
On which local address does the Docker DNS server listen?
- 127.0.0.1
- 127.0.0.111
- 127.0.0.254
- 127.0.0.11
Explanation: The DNS server is always at 127.0.0.11. Note: If you need access to a host's localhost resolver, you must modify your DNS service on the host to listen on a non-localhost address that is reachable from within the container.
-
What are the default memory limit policies for a Docker container?
- Limited memory, limited kernel memory
- Unlimited memory, limited kernel memory
- Limited memory, unlimited kernel memory
- Unlimited memory, unlimited kernel memory
Explanation: Kernel memory limits are expressed in terms of the overall memory allocated to a container. Consider the following scenarios:
Unlimited memory, unlimited kernel memory: This is the default behavior.
Unlimited memory, limited kernel memory: This is appropriate when the amount of memory needed by all cgroups is greater than the amount of memory that actually exists on the host machine. You can configure the kernel memory to never go over what is available on the host machine, and containers which need more memory need to wait for it.
Limited memory, unlimited kernel memory: The overall memory is limited, but the kernel memory is not.
Limited memory, limited kernel memory: Limiting both user and kernel memory can be useful for debugging memory-related problems. If a container is using an unexpected amount of either type of memory, it will run out of memory without affecting other containers or the host machine. Within this setting, if the kernel memory limit is lower than the user memory limit, running out of kernel memory will cause the container to experience an OOM error. If the kernel memory limit is higher than the user memory limit, the kernel limit will not cause the container to experience an OOM.
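Because the default is no limit at all, any limit has to be set explicitly. As a hedged sketch, user memory can be capped per service in a Compose/stack file (kernel memory has no Compose equivalent; it was set with the now-deprecated 'docker run --kernel-memory' flag):
version: "3.8"
services:
  app:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          memory: 256M    # cap user memory; without this the container may use all host memory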