DOP-C01 : AWS DevOps Engineer Professional : Part 25

  1. What needs to be done in order to remotely access a Docker daemon running on Linux?

    • add certificate authentication to the docker API
    • change the encryption level to TLS
    • enable the TCP socket
    • bind the Docker API to a unix socket
    Explanation:

    The Docker daemon can listen for Docker Remote API requests via three different types of socket: unix, tcp, and fd. By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock, requiring either root permission or docker group membership. If you need to access the Docker daemon remotely, you need to enable the tcp socket. Beware that the default setup provides unencrypted and unauthenticated direct access to the Docker daemon, so it should be secured either by using the built-in HTTPS encrypted socket or by putting a secure web proxy in front of it.
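    As a minimal sketch of what that looks like (host name, port, and certificate paths are placeholders, and the TLS flags are included only because an unsecured TCP socket should never be exposed):

    $ sudo dockerd \
        -H unix:///var/run/docker.sock \
        -H tcp://0.0.0.0:2376 \
        --tlsverify \
        --tlscacert=/etc/docker/ca.pem \
        --tlscert=/etc/docker/server-cert.pem \
        --tlskey=/etc/docker/server-key.pem

    A remote client with matching client certificates could then run, for example, docker --tlsverify -H tcp://docker-host.example.com:2376 info.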

  2. Which of the following Dockerfile commands cannot be overridden at runtime?

    • VOLUME
    • USER
    • ADD
    • CMD
    Explanation:
    When a developer builds an image from a Dockerfile or when she commits it, the developer can set a number of default parameters that take effect when the image starts up as a container. Four of the Dockerfile commands cannot be overridden at runtime: FROM, MAINTAINER, RUN, and ADD. Everything else has a corresponding override in docker run. We’ll go through what the developer might have set in each Dockerfile instruction and how the operator can override that setting.
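    For example (the image name is hypothetical, and this assumes the image defines no ENTRYPOINT):

    $ docker run myimage             # runs the default CMD baked into the image
    $ docker run myimage /bin/sh     # arguments after the image name replace CMD for this run

    There is no equivalent docker run flag for ADD, because ADD is executed at build time and its result is already part of the image layers.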
  3. When deploying to a Docker swarm, which section of the docker-compose file defines configuration related to the deployment and running of services?

    • services
    • build
    • deploy
    • args
    Explanation:
    Specify configuration related to the deployment and running of services. This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
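    As an illustration (the service name, image, and values are placeholders), a compose file might carry a deploy section like the following, which only takes effect via docker stack deploy:

    $ cat docker-compose.yml
    version: "3.8"
    services:
      web:
        image: nginx:alpine
        deploy:
          replicas: 3
          restart_policy:
            condition: on-failure

    $ docker stack deploy -c docker-compose.yml mystack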
  4. You are running a Docker daemon on a Linux host and it becomes unresponsive. Which signal, when sent to a Docker process with the kill command, forces the full stack trace to be logged for debugging purposes?

    • -TRACE
    • -IOTRACE
    • -SIGUSR1
    • -KILLTRACE
    Explanation:

    If the daemon is unresponsive, you can force a full stack trace to be logged by sending a SIGUSR1 signal to the daemon.
    Linux:
    $ sudo kill -SIGUSR1 $(pidof dockerd)
    Windows Server:
    Download docker-signal.
    Run the executable with the flag --pid=<PID of daemon>
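    On a systemd-based Linux host, the resulting stack trace (or the path of the file it was written to) ends up in the daemon's logs, so a rough way to capture it might be (the docker.service unit name is an assumption):

    $ sudo kill -SIGUSR1 $(pidof dockerd)
    $ sudo journalctl -u docker.service --since "5 minutes ago"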

  5. Which of the following is NOT an advantage of Docker’s content addressable storage model?

    • random UUIDs improve filesystem performance
    • improved security
    • guarantees data integrity after push, pull, load, and save operations
    • avoids content ID collisions
    Explanation:
    Docker 1.10 introduced a new content addressable storage model. This is a completely new way to address image and layer data on disk. Previously, image and layer data was referenced and stored using a randomly generated UUID. In the new model this is replaced by a secure content hash. The new model improves security, provides a built-in way to avoid ID collisions, and guarantees data integrity after pull, push, load, and save operations. It also enables better sharing of layers by allowing many images to freely share their layers even if they did not come from the same build.
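    You can see the content hashes in practice (the image and tag here are just examples):

    $ docker pull alpine:3.19
    $ docker images --digests alpine
    $ docker inspect --format '{{.Id}}' alpine:3.19    # prints the sha256 content hash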
  6. What flag would you use to limit a Docker container’s memory usage to 128 megabytes?

    • -memory 128m
    • -m 128m 
    • --memory-reservation 128m
    • -m 128MB
    Explanation:

    Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set. Most of these options take a positive integer, followed by a suffix of b, k, m, g, to indicate bytes, kilobytes, megabytes, or gigabytes.
    Option: -m or --memory=
    Description: The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 4m (4 megabytes).
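    A brief sketch (the image name is a placeholder):

    $ docker run -m 128m --memory-reservation 64m nginx:alpine
    # -m / --memory sets the hard limit; --memory-reservation sets a lower, soft limit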

  7. What is the only layer in a Docker image that is not read-only?

    • they are all read-only
    • none are read-only
    • the first layer
    • the last layer
    Explanation:

    A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only.
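    To see the read-only layers that make up an image (the image name is only an example):

    $ docker history alpine:3.19
    # each row corresponds to a Dockerfile instruction; the writable layer is only
    # added on top when a container is started from the image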

  8. When building a Docker image, you are searching through a persistent data volume’s logs to provide parameters for the next build. You execute the following command. Which of the operations will cause a failure of the Docker RUN command? RUN cat ./data/log/*.error | grep service_status | grep ERROR

    • the first grep command
    • any one of them
    • the second grep command
    • the cat command
    Explanation:

    Some RUN commands depend on the ability to pipe the output of one command into another, using the pipe character (|), as in the following example:
    RUN wget -O - https://some.site | wc -l > /number
    Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success. In the example above this build step succeeds and produces a new image so long as the wc -l command succeeds, even if the wget command fails.
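    If you want the step to fail when any command in the pipe fails, one common approach (a sketch, assuming bash is available in the image) is to run the pipe with pipefail enabled:

    RUN ["/bin/bash", "-c", "set -o pipefail && wget -O - https://some.site | wc -l > /number"]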

  9. What does the Docker network docker_gwbridge do?

    • allows communication between containers on the same host
    • allows communication between containers on the same host
    • allows communication between swarm nodes on the same host
    • allows communication between containers on different hosts
    Explanation:

    The docker_gwbridge is a local bridge network which is automatically created by Docker in two different circumstances: When you initialize or join a swarm, Docker creates the docker_gwbridge network and uses it for communication among swarm nodes on different hosts. When none of a container’s networks can provide external connectivity, Docker connects the container to the docker_gwbridge network in addition to the container’s other networks, so that the container can connect to external networks or other swarm nodes.
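    You can see this network on any swarm node (a quick sketch; the fields in the output will vary by host):

    $ docker network ls --filter name=docker_gwbridge
    $ docker network inspect docker_gwbridge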

  10. Which services can be used as optional components of setting up a new Trail in CloudTrail?

    • KMS, SNS and SES
    • CloudWatch, S3 and SNS
    • KMS, CloudWatch and SNS
    • KMS, S3 and CloudWatch
    Explanation:
    Key Management Service: The use of AWS KMS is an optional element of CloudTrail, but it allows additional encryption to be added to your log files when stored on S3.
    Simple Notification Service: Amazon SNS is also an optional component for CloudTrail, but it allows you to create notifications, for example when a new log file is delivered to S3. SNS could notify someone or a team via e-mail, or it could be used in conjunction with CloudWatch when metric thresholds have been reached.
    CloudWatch Logs: Again, this is another optional component, but AWS CloudTrail allows you to deliver its logs to AWS CloudWatch Logs as well as S3 so that specific monitoring can take place.
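    As an illustration (the trail, key, topic, log group, and role names are placeholders), these optional integrations map onto flags of the CloudTrail CLI:

    $ aws cloudtrail update-trail \
        --name my-trail \
        --kms-key-id alias/my-cloudtrail-key \
        --sns-topic-name my-cloudtrail-topic \
        --cloud-watch-logs-log-group-arn arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/logs:* \
        --cloud-watch-logs-role-arn arn:aws:iam::111122223333:role/CloudTrail_CWLogs_Role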
  11. What is AWS CloudTrail Processing Library?

    • A static library with CloudTrail log files in a movable format machine code that is directly executable
    • An object library with CloudTrail log files in a movable format machine code that is usually not directly executable
    • A Java library that makes it easy to build an application that reads and processes CloudTrail log files
    • A PHP library that renders various generic containers needed for CloudTrail log files
    Explanation:
    AWS CloudTrail Processing Library is a Java library that makes it easy to build an application that reads and processes CloudTrail log files. You can download CloudTrail Processing Library from GitHub.
  12. Using the AWS CLI, which command retrieves CloudTrail trail settings, including the status of the trail itself?

    • aws cloudtrail return-trails
    • aws cloudtrail validate-settings
    • aws cloudtrail get-settings
    • aws cloudtrail describe-trails
    Explanation:
    You can retrieve trail settings and status using the aws cloudtrail describe-trails command.
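    A quick sketch of how it might be invoked (the trail name is a placeholder):

    $ aws cloudtrail describe-trails --trail-name-list my-trail
    $ aws cloudtrail get-trail-status --name my-trail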
  13. You are running Amazon CloudTrail on an Amazon S3 bucket and look at your most recent log. You notice that the entries include the ListThings and CreateThings actions and wonder if your devices have been hacked. Based on these entries, what service would you be concerned may have been hacked?

    • Amazon Inspector
    • AWS IoT
    • AWS CodePipeline
    • Amazon Glacier
    Explanation:

    AWS IoT (Internet of Things) is integrated with CloudTrail to capture API calls from the AWS IoT console or from your code to the AWS IoT APIs. AWS IoT provides secure, bi-directional communication between Internet-connected things (such as sensors, actuators, embedded devices, or smart appliances) and the AWS cloud. Using the information collected by CloudTrail, you can determine the request that was made to AWS IoT, the source IP address from which the request was made, who made the request, when it was made, and so on.
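    If you saw such entries and wanted to dig in, CloudTrail event history could be queried for the recorded event name (a sketch; the event name must match what AWS IoT records):

    $ aws cloudtrail lookup-events \
        --lookup-attributes AttributeKey=EventName,AttributeValue=ListThings \
        --max-results 10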

  14. When logging with Amazon CloudTrail, API call information for services with regional end points is ____.

    • captured and processed in the same region as to which the API call is made and delivered to the region associated with your Amazon S3 bucket
    • captured, processed, and delivered to the region associated with your Amazon S3 bucket
    • captured in the same region as to which the API call is made and processed and delivered to the region associated with your Amazon S3 bucket
    • captured in the region where the end point is located, processed in the region where the CloudTrail trail is configured, and delivered to the region associated with your Amazon S3 bucket
    Explanation:
    When logging with Amazon CloudTrail, API call information for services with regional end points (EC2, RDS etc.) is captured and processed in the same region as to which the API call is made and delivered to the region associated with your Amazon S3 bucket. API call information for services with single end points (IAM, STS etc.) is captured in the region where the end point is located, processed in the region where the CloudTrail trail is configured, and delivered to the region associated with your Amazon S3 bucket.
  15. When logging with Amazon CloudTrail, API call information for services with single end points is ____.

    • captured and processed in the same region as to which the API call is made and delivered to the region associated with your Amazon S3 bucket
    • captured, processed, and delivered to the region associated with your Amazon S3 bucket
    • captured in the same region as to which the API call is made and processed and delivered to the region associated with your Amazon S3 bucket
    • captured in the region where the end point is located, processed in the region where the CloudTrail trail is configured, and delivered to the region associated with your Amazon S3 bucket
    Explanation:

    When logging with Amazon CloudTrail, API call information for services with regional end points (EC2, RDS etc.) is captured and processed in the same region as to which the API call is made and delivered to the region associated with your Amazon S3 bucket. API call information for services with single end points (IAM, STS etc.) is captured in the region where the end point is located, processed in the region where the CloudTrail trail is configured, and delivered to the region associated with your Amazon S3 bucket.

  16. What is the correct syntax for the AWS command to create a single region trail?

    • aws create-trail --name trailname --s3-object objectname
    • aws cloudtrail --s3-regionname IPaddress create-trail --name trailname
    • aws cloudtrail create-trail --name trailname --s3-bucket-name bucketname
    • aws cloudtrail create-trail --name trailname --s3-portnumber IPaddress
    Explanation:

    The command aws cloudtrail create-trail --name trailname --s3-bucket-name bucketname will create a single region trail. You must create an S3 bucket with the proper CloudTrail permissions applied to it before you execute the command, and you must have the AWS command line tools (CLI) installed on your system.
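    A rough end-to-end sketch (bucket and trail names are placeholders; the CloudTrail bucket policy that must be attached to the bucket is omitted here):

    $ aws s3 mb s3://my-cloudtrail-bucket
    $ aws cloudtrail create-trail --name my-trail --s3-bucket-name my-cloudtrail-bucket
    $ aws cloudtrail start-logging --name my-trail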

  17. You want to set up the CloudTrail Processing Library to log your bucket operations. Which command will build a .jar file from the CloudTrail Processing Library source code?

    • mvn javac mvn -install processor
    • jar install processor
    • build jar -Dgpg.processor
    • mvn clean install -Dgpg.skip=true
    Explanation:

    The CloudTrail Processing Library is a Java library that provides an easy way to process AWS CloudTrail logs in a fault-tolerant, scalable and flexible way. To set up the CloudTrail Processing Library, you first need to download CloudTrail Processing Library source from GitHub. You can then create the .jar file using this command.
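    Roughly, the steps might look like this (assumes Git, Maven and a JDK are installed, and that the source lives in the aws/aws-cloudtrail-processing-library repository on GitHub):

    $ git clone https://github.com/aws/aws-cloudtrail-processing-library.git
    $ cd aws-cloudtrail-processing-library
    $ mvn clean install -Dgpg.skip=true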

  18. By default, Amazon CloudTrail logs ____ actions defined by the CloudTrail ____ APIs.

    • bucket-level; RESTful
    • object-level; RESTful
    • object-level; SDK
    • bucket-level; SDK
    Explanation:
    By default, CloudTrail logs bucket-level actions. Amazon S3 records are written together with other AWS service records in a log file. Amazon S3 bucket-level actions supported for logging by CloudTrail are defined in its RESTful API.
  19. You want to build an application that coordinates work across distributed components, and you find Amazon Simple Workflow Service (Amazon SWF) does this easily. You have enabled logging in CloudTrail, but you are unsure about Amazon SWF actions supported.

    Which of the following actions is NOT supported?

    • RegisterDomain
    • RegisterWorkflowActivity
    • RegisterActivityType
    • RegisterWorkflowType
    Explanation:

    Amazon SWF is integrated with AWS CloudTrail, a service that captures API calls made by or on behalf of Amazon SWF and delivers the log files to an Amazon S3 bucket that you specify. The API calls can be made indirectly by using the Amazon SWF console or directly by using the Amazon SWF API. When CloudTrail logging is enabled, calls made to Amazon SWF actions are tracked in log files. Amazon SWF records are written together with any other AWS service records in a log file. CloudTrail determines when to create and write to a new file based on a specified time period and file size.
    The following actions are supported:
    DeprecateActivityType
    DeprecateDomain
    DeprecateWorkflowType
    RegisterActivityType
    RegisterDomain
    RegisterWorkflowType

  20. Consider the portion of a CloudTrail log file below. Which type of event is being captured?

    "eventTime": "2016-07-16T17:35:32Z",
    "eventSource": "signin.amazonaws.com",
    "eventName": "ConsoleLogin",
    "awsRegion": "us-west-1",
    "sourceIPAddress": "192.1.2.10",

    • AWS console sign-in
    • AWS log off
    • AWS error
    • AWS deployment
    Explanation:

    CloudTrail records attempts to sign into the AWS Management Console, the AWS Discussion Forums and the AWS Support Center. Note, however, that CloudTrail does not record root sign-in failures.