Last Updated on September 11, 2021 by InfraExam
SOA-C01 : AWS-SysOps : Part 15
An organization has set up Auto Scaling with ELB. Due to a manual error, one of the instances was rebooted and thus failed the Auto Scaling health check. Auto Scaling has marked it for replacement. How can the system admin ensure that the instance does not get terminated?
- Update the Auto Scaling group to ignore the instance reboot event
- It is not possible to change the status once it is marked for replacement
- Manually add that instance to the Auto Scaling group after reboot to avoid replacement
- Change the health of the instance to healthy using the Auto Scaling commands
After an instance has been marked unhealthy by Auto Scaling as a result of an Amazon EC2 or ELB health check, it is almost immediately scheduled for replacement, as it will never automatically recover its health. If the user knows that the instance is healthy, he can manually call the SetInstanceHealth action (or the as-set-instance-health command from the CLI) to set the instance’s health status back to healthy. Auto Scaling will throw an error if the instance is already terminating; otherwise, it will mark it healthy.
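With the modern AWS CLI, the same action can be sketched as follows (the instance ID is a placeholder):

```shell
# Set the instance's health status back to Healthy so Auto Scaling
# does not replace it. Fails if the instance is already terminating.
aws autoscaling set-instance-health \
    --instance-id i-0123456789abcdef0 \
    --health-status Healthy
```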
A system admin wants to add more zones to the existing ELB. The system admin wants to perform this activity from CLI. Which of the below mentioned command helps the system admin to add new zones to the existing ELB?
- It is not possible to add more zones to the existing ELB
The user has created an Elastic Load Balancer with an availability zone and wants to add more zones to the existing ELB. The user can do so in two ways: from the console, or from the CLI using the elb-enable-zones-for-lb command to add new zones to the existing ELB.
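The modern AWS CLI equivalent for a Classic Load Balancer looks like this (the load balancer name and zones are placeholders):

```shell
# Add availability zones to an existing Classic Load Balancer.
aws elb enable-availability-zones-for-load-balancer \
    --load-balancer-name my-elb \
    --availability-zones us-east-1b us-east-1c
```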
An organization is planning to create a user with IAM. They are trying to understand the limitations of IAM so that they can plan accordingly. Which of the below mentioned statements is not true with respect to the limitations of IAM?
- One IAM user can be a part of a maximum of 5 groups
- The organization can create 100 groups per AWS account
- One AWS account can have a maximum of 5000 IAM users
- One AWS account can have 250 roles
AWS Identity and Access Management (IAM) is a web service which allows organizations to manage users and user permissions for various AWS services. The default maximums for the IAM entities are given below:
Groups per AWS account: 100
Users per AWS account: 5000
Roles per AWS account: 250
Number of groups per user: 10 (that is, one user can be a member of up to 10 groups).
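The account's current IAM entity counts and quotas can be inspected with the CLI:

```shell
# Returns a SummaryMap including keys such as Users, UsersQuota,
# Groups, GroupsQuota, Roles and RolesQuota.
aws iam get-account-summary
```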
A user is planning to scale up an application by 8 AM and scale down by 7 PM daily using Auto Scaling. What should the user do in this case?
- Setup the scaling policy to scale up and down based on the CloudWatch alarms
- The user should increase the desired capacity at 8 AM and decrease it by 7 PM manually
- The user should setup a batch process which launches the EC2 instance at a specific time
- Setup scheduled actions to scale up or down at a specific time
Scale based on a schedule
Sometimes you know exactly when you will need to increase or decrease the number of instances in your group, simply because that need arises on a predictable schedule. Scaling by schedule means that scaling actions are performed automatically as a function of time and date.
For more information, see Scheduled Scaling.
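A scheduled-scaling setup like the one described could be sketched with the CLI as follows (the group name, capacities, and cron expressions are placeholders; recurrence times are in UTC):

```shell
# Scale up every day at 8 AM.
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-asg \
    --scheduled-action-name scale-up-8am \
    --recurrence "0 8 * * *" \
    --desired-capacity 10

# Scale down every day at 7 PM.
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-asg \
    --scheduled-action-name scale-down-7pm \
    --recurrence "0 19 * * *" \
    --desired-capacity 2
```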
A user has created a VPC with two subnets: one public and one private. The user is planning to run the patch update for the instances in the private subnet. How can the instances in the private subnet connect to the Internet?
- Use the internet gateway with a private IP
- Allow outbound traffic in the security group for port 80 to allow internet updates
- The private subnet can never connect to the internet
- Use NAT with an elastic IP
A Virtual Private Cloud (VPC) is a virtual network dedicated to the user’s AWS account. A user can create a subnet within a VPC and launch instances inside that subnet. If the user has created two subnets (one private and one public), he would need a Network Address Translation (NAT) instance with an elastic IP address. This enables the instances in the private subnet to send requests to the Internet (for example, to perform software updates).
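Wiring a NAT instance into the private subnet's route table could be sketched like this (all IDs are placeholders; the NAT instance must also have its source/destination check disabled):

```shell
# A NAT instance forwards traffic on behalf of other instances,
# so its source/destination check must be turned off.
aws ec2 modify-instance-attribute \
    --instance-id i-0nat1234567890abc \
    --no-source-dest-check

# Send the private subnet's internet-bound traffic to the NAT instance.
aws ec2 create-route \
    --route-table-id rtb-0abc1234 \
    --destination-cidr-block 0.0.0.0/0 \
    --instance-id i-0nat1234567890abc
```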
A user has configured an EC2 instance in the US-East-1a zone. The user has enabled detailed monitoring of the instance. The user is trying to get the data from CloudWatch using a CLI. Which of the below mentioned CloudWatch endpoint URLs should the user use?
CloudWatch resources are always region specific, and their endpoints are region specific as well. If the user is trying to access a metric in the US-East-1 region, the endpoint URL will be: monitoring.us-east-1.amazonaws.com
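With the modern CLI the regional endpoint is derived from the `--region` flag; passing the endpoint explicitly is equivalent:

```shell
# Both commands hit the US-East-1 CloudWatch endpoint.
aws cloudwatch list-metrics --region us-east-1
aws cloudwatch list-metrics \
    --endpoint-url https://monitoring.us-east-1.amazonaws.com
```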
A user has configured ELB with Auto Scaling. The user suspended the Auto Scaling AddToLoadBalancer process (which adds instances to the load balancer) for a while. What will happen to the instances launched during the suspension period?
- The instances will not be registered with ELB and the user has to manually register when the process is resumed
- The instances will be registered with ELB only once the process has resumed
- Auto Scaling will not launch the instance during this period due to process suspension
- It is not possible to suspend only the AddToLoadBalancer process
Auto Scaling performs various processes, such as Launch, Terminate, AddToLoadBalancer, etc. The user can also suspend individual processes. The AddToLoadBalancer process type adds instances to the load balancer when the instances are launched. If this process is suspended, Auto Scaling will launch the instances but will not add them to the load balancer. When the user resumes this process, Auto Scaling will resume adding new instances launched after resumption to the load balancer. However, it will not add running instances that were launched while the process was suspended; those instances must be added manually.
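Suspending and resuming only this one process could be sketched as follows (the group name is a placeholder):

```shell
# Suspend only the AddToLoadBalancer process; launches continue,
# but new instances are not registered with the load balancer.
aws autoscaling suspend-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes AddToLoadBalancer

# Resume it later; instances launched during the suspension
# still have to be registered manually.
aws autoscaling resume-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes AddToLoadBalancer
```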
A sys admin has enabled a log on ELB. Which of the below mentioned activities are not captured by the log?
- Response processing time
- Front end processing time
- Backend processing time
- Request processing time
Elastic Load Balancing access logs capture detailed information for all the requests made to the load balancer. Each request will have details, such as client IP, request path, ELB IP, time, and latencies. The time will have information, such as Request Processing time, Backend Processing time and Response Processing time.
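An illustrative access-log entry, modeled on the documented Classic ELB format, shows the three latency fields in order (request processing time, backend processing time, response processing time) right after the client and backend addresses:

```
2015-05-13T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 10.0.0.1:80 0.000073 0.001048 0.000057 200 200 0 29 "GET http://www.example.com:80/ HTTP/1.1" "curl/7.38.0" - -
```

Note that there is no separate "front end processing time" field, which is why that option is the one not captured by the log.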
A user has moved an object to Glacier using the life cycle rules. The user requests to restore the archive after 6 months. When the restore request is completed the user accesses that archive. Which of the below mentioned statements is not true in this condition?
- The archive will be available as an object for the duration specified by the user during the restoration request
- The restored object’s storage class will be RRS
- The user can modify the restoration period only by issuing a new restore request with the updated period
- The user needs to pay storage for both RRS (restored) and Glacier (archive) rates
AWS Glacier is an archival service offered by AWS. AWS S3 provides lifecycle rules to archive and restore objects from S3 to Glacier. Once the object is archived their storage class will change to Glacier. If the user sends a request for restore, the storage class will still be Glacier for the restored object. The user will be paying for both the archived copy as well as for the restored object. The object is available only for the duration specified in the restore request and if the user wants to modify that period, he has to raise another restore request with the updated duration.
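Issuing (or re-issuing) a restore request with an updated duration could be sketched as follows (the bucket and key are placeholders):

```shell
# Restore the archived object for 7 days; sending the request again
# with a different Days value updates the restoration period.
aws s3api restore-object \
    --bucket my-bucket \
    --key photos/archive.jpg \
    --restore-request Days=7
```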
A user is running a batch process on EBS backed EC2 instances. The batch process starts a few instances to process Hadoop MapReduce jobs, which can run between 50 and 600 minutes or sometimes longer. The user wants the instance to be terminated only when the process is completed. How can the user configure this with CloudWatch?
- Setup the CloudWatch action to terminate the instance when the CPU utilization is less than 5%
- Setup the CloudWatch with Auto Scaling to terminate all the instances
- Setup a job which terminates all instances after 600 minutes
- It is not possible to terminate instances automatically
An Amazon CloudWatch alarm watches a single metric over a time period that the user specifies and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The user can set up an action that terminates the instances when their CPU utilization is below a certain threshold for a certain period of time. The EC2 action can either stop or terminate the instance.
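Such an alarm could be sketched like this (the alarm name, instance ID, and thresholds are placeholders; the terminate-action ARN is region specific):

```shell
# Terminate the instance when average CPU stays below 5%
# for three consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name terminate-idle-batch-node \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 5 \
    --comparison-operator LessThanThreshold \
    --alarm-actions arn:aws:automate:us-east-1:ec2:terminate
```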
A user has enabled versioning on an S3 bucket. The user is using server side encryption for data at rest. If the user is supplying his own keys for encryption (SSE-C), what is recommended to the user for the purpose of security?
- The user should not use his own security key as it is not secure
- Configure S3 to rotate the user’s encryption key at regular intervals
- Configure S3 to store the user’s keys securely with SSL
- Keep rotating the encryption key manually at the client side
AWS S3 supports client side or server side encryption to encrypt all data at rest. With server side encryption the user can either use the S3-supplied AES-256 encryption key or send his own encryption key along with each API call (SSE-C). Since S3 does not store the encryption keys with SSE-C, it is recommended that the user manage the keys securely and keep rotating them regularly at the client side.
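An SSE-C upload could be sketched as follows (the bucket, key, and key file are placeholders; the exact key-encoding handling may vary between CLI versions):

```shell
# SSE-C: the key travels with every request and is never stored by S3.
# The same algorithm and key must be supplied again on every GET.
aws s3api put-object \
    --bucket my-bucket \
    --key data.bin \
    --body data.bin \
    --sse-customer-algorithm AES256 \
    --sse-customer-key "$(base64 < my-key.bin)"
```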
A user runs the command “dd if=/dev/xvdf of=/dev/null bs=1M” on an EBS volume created from a snapshot and attached to a Linux instance. Which of the below mentioned activities is the user performing with the step given above?
- Pre warming the EBS volume
- Initiating the device to mount on the EBS volume
- Formatting the volume
- Copying the data from a snapshot to the device
When the user creates an EBS volume and tries to access it for the first time, it will encounter reduced IOPS due to the wiping or initializing of the block storage. To avoid this and achieve the best performance, it is required to pre-warm the EBS volume. For a volume created from a snapshot and attached to a Linux instance, the “dd” command pre-warms the existing data on the EBS volume; because this operation is read-only, it does not pre-warm unused space that has never been written to on the original volume. In the command “dd if=/dev/xvdf of=/dev/null bs=1M”, the parameter “if” (input file) should be set to the drive that the user wishes to warm. The “of” (output file) parameter should be set to the Linux null virtual device, /dev/null. The “bs” parameter sets the block size of the read operation; for optimal performance, this should be set to 1 MB.
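The read-only pre-warm described above, sketched as a command (the device name is an example):

```shell
# Read every block once so first-access latency is paid up front.
# Safe on a volume restored from a snapshot: nothing is written.
sudo dd if=/dev/xvdf of=/dev/null bs=1M
```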
A user has launched an EC2 Windows instance from an instance store backed AMI. The user wants to convert the AMI to an EBS backed AMI. How can the user convert it?
- Attach an EBS volume to the instance and unbundle all the AMI bundled data inside the EBS
- A Windows based instance store backed AMI cannot be converted to an EBS backed AMI
- It is not possible to convert an instance store backed AMI to an EBS backed AMI
- Attach an EBS volume and use the copy command to copy all the ephemeral content to the EBS Volume
Generally, when a user has launched an EC2 instance from an instance store backed AMI, it can be converted to an EBS backed AMI provided the user has attached the EBS volume to the instance and unbundles the AMI data to it. However, if the instance is a Windows instance, AWS does not allow this. In this case, since the instance is a Windows instance, the user cannot convert it to an EBS backed AMI.
A user has created a VPC with public and private subnets using the VPC Wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24. Which of the below mentioned entries are required in the main route table to allow the instances in the VPC to communicate with each other?
- Destination : 20.0.0.0/24 and Target : VPC
- Destination : 20.0.0.0/16 and Target : ALL
- Destination : 0.0.0.0/0 and Target : ALL
- Destination : 20.0.0.0/16 and Target : Local
Option A does not use standard AWS terminology (a route’s target cannot be “VPC”), and because the mask is /24 it would only cover the instances in the private subnet, not all the instances in the VPC as the question asks. Every VPC route table contains a local route whose destination is the VPC CIDR and whose target is Local; this route is what allows all instances in the VPC to communicate with each other. Option D is the correct one.
A sysadmin has created the below mentioned policy on an S3 bucket named cloudacademy. The bucket has both AWS.jpg and index.html objects. What does this policy define?
- It will make all the objects as well as the bucket public
- It will throw an error for the wrong action and does not allow to save the policy
- It will make the AWS.jpg object as public
- It will make the AWS.jpg as well as the cloudacademy bucket as public
A sysadmin can grant permission to the S3 objects or the buckets to any user or make objects public using the bucket policy and user policy. Both use the JSON-based access policy language.
Generally, if a user defines an ACL on the bucket, the objects in the bucket do not inherit it, and vice versa. A bucket policy, however, is defined at the bucket level and can make both the bucket and its objects public with a single policy. In the policy shown, the action is “s3:ListBucket” with effect Allow, but because no bucket name is mentioned as part of the resource, S3 will throw an error and not allow the policy to be saved.
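For contrast, a policy that would save successfully names the bucket explicitly in the Resource. A hedged sketch using the bucket name from the question:

```shell
# A valid public-read bucket policy: the Resource ARN names the
# bucket, so S3 accepts the policy.
aws s3api put-bucket-policy --bucket cloudacademy --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::cloudacademy/*"
  }]
}'
```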
A user has launched an EC2 instance and deployed a production application in it. The user wants to prohibit any mistakes from the production team to avoid accidental termination. How can the user achieve this?
- The user can set the DisableApiTermination attribute to avoid accidental termination
- It is not possible to avoid accidental termination
- The user can set the Deletion termination flag to avoid accidental termination
- The user can set the InstanceInitiatedShutdownBehavior flag to avoid accidental termination
It is always possible that someone can terminate an EC2 instance using the Amazon EC2 console, command line interface or API by mistake. If the admin wants to prevent the instance from being accidentally terminated, he can enable termination protection for that instance. The DisableApiTermination attribute controls whether the instance can be terminated using the console, CLI or API. By default, termination protection is disabled for an EC2 instance. When it is set it will not allow the user to terminate the instance from CLI, API or the console.
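Enabling termination protection could be sketched as follows (the instance ID is a placeholder):

```shell
# Set the DisableApiTermination attribute so the instance cannot be
# terminated via the console, CLI or API until the flag is cleared.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --disable-api-termination
```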
A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled. The user wants to now enable detailed monitoring. How can the user achieve this?
- Update the Launch config with CLI to set InstanceMonitoringDisabled = false
- The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
- Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
- Create a new Launch Config with detail monitoring enabled and update the Auto Scaling group
CloudWatch is used to monitor AWS as well as custom services. To enable detailed instance monitoring for a new Auto Scaling group, the user does not need to take any extra steps: each launch configuration contains a flag named InstanceMonitoring.Enabled, whose default value is true. When the user has created a launch configuration with InstanceMonitoring.Enabled = false, enabling detailed monitoring involves multiple steps:
Create a new launch configuration with detailed monitoring enabled
Update the Auto Scaling group with the new launch configuration
Enable detailed monitoring on each running EC2 instance
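The first two steps above can be sketched with the CLI (all names and IDs are placeholders):

```shell
# Step 1: create a new launch configuration with detailed
# monitoring enabled.
aws autoscaling create-launch-configuration \
    --launch-configuration-name my-lc-detailed \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --instance-monitoring Enabled=true

# Step 2: point the Auto Scaling group at the new launch configuration.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc-detailed
```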
A user is trying to pre-warm a blank EBS volume attached to a Linux instance. Which of the below mentioned steps should be performed by the user?
- There is no need to pre-warm an EBS volume
- Contact AWS support to pre-warm
- Unmount the volume before pre-warming
- Format the device
When the user creates a new EBS volume or restores a volume from a snapshot, the back-end storage blocks are allocated immediately. However, the first time a block of storage is accessed, it must either be wiped (for new volumes) or instantiated from the snapshot (for restored volumes) before the user can access the block. This preliminary action takes time and can cause a 5 to 50 percent loss of IOPS for the volume when a block is accessed for the first time. To avoid this, it is recommended to pre-warm the volume. Pre-warming a blank EBS volume on a Linux instance requires that the user unmount the blank device first and then write to all the blocks on the device using a command such as “dd”.
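Pre-warming a blank volume (as opposed to one restored from a snapshot) is a write operation; a sketch, with the device name as an example:

```shell
# WARNING: writing zeros destroys any data on the device.
# Only run this against a new, blank, unmounted volume.
sudo umount /dev/xvdf   # only needed if the device was mounted
sudo dd if=/dev/zero of=/dev/xvdf bs=1M
```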
A user has launched an EC2 instance from an instance store backed AMI. The user has attached an additional instance store volume to the instance. The user wants to create an AMI from the running instance. Will the AMI have the additional instance store volume data?
- Yes, the block device mapping will have information about the additional instance store volume
- No, since the instance store backed AMI can have only the root volume bundled
- It is not possible to attach an additional instance store volume to the existing instance store backed AMI instance
- No, since this is ephemeral storage it will not be a part of the AMI
When the user has launched an EC2 instance from an instance store backed AMI and added an instance store volume to the instance in addition to the root device volume, the block device mapping for the new AMI contains the information for these volumes as well. In addition, the block device mappings for instances that are launched from the new AMI automatically contain information for these volumes.
A user has created an EBS volume of 10 GB and attached it to a running instance. The user is trying to access the EBS volume for the first time. Which of the below mentioned options is the correct statement with respect to first time EBS access?
- The volume will show a size of 8 GB
- The volume will show a loss of the IOPS performance the first time
- The volume will be blank
- If the EBS is mounted it will ask the user to create a file system
A user can create an EBS volume either from a snapshot or as a blank volume. If the volume is created from a snapshot, it will not be blank, and the volume reports its full provisioned size once a file system is created and it is mounted. When the user accesses the volume for the first time, EBS wipes out the block storage or instantiates it from the snapshot; thus, the volume will show a loss of IOPS. It is recommended that the user pre-warm the EBS volume before use to achieve better I/O performance.