SAP-C01 : AWS Certified Solutions Architect – Professional : Part 20
-
Which of the following rules must be added to a mount target security group to access Amazon Elastic File System (EFS) from an on-premises server?
- Configure an NFS proxy between Amazon EFS and the on-premises server to route traffic.
- Set up a Point-To-Point Tunneling Protocol Server (PPTP) to allow secure connection.
- Permit secure traffic to the Kerberos port 88 from the on-premises server.
- Allow inbound traffic to the Network File System (NFS) port (2049) from the on-premises server.
Explanation:
By mounting an Amazon EFS file system on an on-premises server, on-premises data can be migrated into the AWS Cloud. Any one of the mount targets in your VPC can be used as long as the subnet of the mount target is reachable by using the AWS Direct Connect connection. To access Amazon EFS from an on-premises server, a rule must be added to the mount target security group to allow inbound traffic to the NFS port (2049) from the on-premises server.
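For illustration only, a minimal boto3 sketch of such a rule (the security group ID and on-premises CIDR are placeholders, not values from the question): it allows inbound TCP 2049 on the mount target security group.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow NFS (TCP 2049) into the EFS mount target security group
# from the on-premises network (placeholder ID and CIDR)
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2049,
        "ToPort": 2049,
        "IpRanges": [{"CidrIp": "192.168.0.0/16",
                      "Description": "On-premises NFS clients over Direct Connect"}],
    }],
)
```
-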
Which of the following is true of Amazon EBS encryption keys?
- Amazon EBS encryption uses the Customer Master Key (CMK) to create an AWS Key Management Service (AWS KMS) master key.
- Amazon EBS encryption uses the EBS Magnetic key to create an AWS Key Management Service (AWS KMS) master key.
- Amazon EBS encryption uses the EBS Magnetic key to create a Customer Master Key (CMK).
- Amazon EBS encryption uses the AWS Key Management Service (AWS KMS) master key to create a Customer Master Key (CMK).
Explanation:
Amazon EBS encryption uses AWS Key Management Service (AWS KMS) master keys when creating encrypted volumes and any snapshots created from your encrypted volumes.
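For illustration only, a minimal boto3 sketch (the Availability Zone, size, and key alias are placeholders): creating an encrypted EBS volume; if KmsKeyId is omitted, the default aws/ebs KMS key is used.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Encrypted volume; omitting KmsKeyId would fall back to the default aws/ebs key
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/my-ebs-key",  # placeholder customer managed key alias
)
```
-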
A user is creating a snapshot of an EBS volume. Which of the below statements is incorrect in relation to the creation of an EBS snapshot?
- It is incremental
- It is a point in time backup of the EBS volume
- It can be used to create an AMI
- It is stored in the same AZ as the volume
Explanation:
EBS snapshots are point-in-time, incremental backups of an EBS volume. A snapshot is stored at the Region level (in Amazon S3), never in a single AZ, so the statement “It is stored in the same AZ as the volume” is incorrect.
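For illustration only, a minimal boto3 sketch under placeholder IDs and Regions: it creates a snapshot and then copies it to another Region, which is possible precisely because snapshots are Region-scoped rather than tied to an AZ.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Point-in-time, incremental snapshot of a volume (placeholder volume ID)
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="Nightly backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Snapshots live at the Region level, so they can even be copied to another Region
ec2_west = boto3.client("ec2", region_name="us-west-2")
ec2_west.copy_snapshot(SourceRegion="us-east-1",
                       SourceSnapshotId=snap["SnapshotId"],
                       Description="Cross-Region copy of the nightly backup")
```
-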
A user is running a critical batch process which runs for 1 hour and 50 mins every day at a fixed time. Which of the below mentioned options is the right instance type and costing model in this case if the user performs the same task for the whole year?
- Instance store backed instance with spot instance pricing.
- EBS backed instance with standard reserved upfront instance pricing.
- EBS backed scheduled reserved instance with partial instance pricing.
- EBS backed instance with on-demand instance pricing.
Explanation:
For Amazon Web Services, a Reserved Instance (Standard or Convertible) saves the user money when the same instance will run for a long period. As a rough guideline, Reserved Instances are recommended when an instance is used at least 30-40% of the year. Here the instance runs for only 1 hour 50 minutes a day, which is less than 8 percent of the year (a quick check of this figure is sketched below), so a Reserved Instance is not recommended; even at the highest discount level the user would still be paying roughly a quarter of the annual cost for an instance that runs less than 2 hours a day, and would therefore not be saving money. A Scheduled Reserved Instance also has a key limitation: its instances cannot be stopped or rebooted, only terminated, and then relaunched within the scheduled window. The user would have to terminate and restart the instance within the 1-hour-50-minute window or wait until the next day, which is likely not acceptable for a critical daily process. Spot Instances are not ideal either, because the process is critical and must run for a fixed length of time at a fixed time of day; Spot capacity can be interrupted as prices fluctuate, potentially leaving the process unfinished. The user should therefore use an EBS-backed instance with On-Demand pricing. It has the highest hourly cost, but also the greatest flexibility, which ensures that a critical process like this always completes.
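As a quick sanity check of the "less than 8 percent" figure (plain arithmetic, no AWS calls):

```python
# 1 hour 50 minutes of daily runtime as a fraction of each day (and of the year)
hours_per_day = 1 + 50 / 60        # about 1.83 hours
utilisation = hours_per_day / 24   # the same fraction applies across the whole year
print(f"{utilisation:.1%}")        # about 7.6%, i.e. under 8 percent
```
-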
A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled. The user wants to now enable detailed monitoring.
How can the user achieve this?
- Update the Launch config with CLI to set InstanceMonitoringDisabled = false
- The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
- Create a new Launch Config with detail monitoring enabled and update the Auto Scaling group
- Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
Explanation:
CloudWatch is used to monitor AWS services as well as custom services. To enable detailed instance monitoring for a new Auto Scaling group, the user does not need to take any extra steps: when the user creates the launch configuration (the first step in creating an Auto Scaling group), each launch configuration contains a flag named InstanceMonitoring.Enabled, and the default value of this flag is true. However, if the user has created a launch configuration with InstanceMonitoring.Enabled = false, enabling detailed monitoring involves multiple steps (a boto3 sketch follows the list):
– Create a new launch configuration with detailed monitoring enabled
– Update the Auto Scaling group to use the new launch configuration
– Enable detailed monitoring on each existing EC2 instance
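For illustration only, a minimal boto3 sketch of these steps (the launch configuration name, AMI ID, instance type, Auto Scaling group name, and instance IDs are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. New launch configuration with detailed (1-minute) monitoring enabled
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-detailed",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m4.large",
    InstanceMonitoring={"Enabled": True},
)

# 2. Point the Auto Scaling group at the new launch configuration
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-detailed",
)

# 3. Existing instances keep their old setting, so enable it on them directly
autoscaling_instances = ["i-0123456789abcdef0"]   # placeholder instance IDs
ec2.monitor_instances(InstanceIds=autoscaling_instances)
```
-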
A user is sending a custom metric to CloudWatch. If the call to the CloudWatch APIs has different dimensions, but the same metric name, how will CloudWatch treat all the requests?
- It will reject the request as there cannot be a separate dimension for a single metric.
- It will group all the calls into a single call.
- It will treat each unique combination of dimensions as a separate metric.
- It will overwrite the previous dimension data with the new dimension data.
Explanation:
A dimension is a key-value pair used to uniquely identify a metric. CloudWatch treats each unique combination of dimensions as a separate metric.
Thus, if the user makes 4 calls with the same metric name but different dimensions, CloudWatch will create 4 separate metrics.
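For illustration only, a minimal boto3 sketch (the namespace, metric name, and dimension values are placeholders): four PutMetricData calls share a metric name but differ in one dimension value, so CloudWatch records four separate metrics.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Same metric name, four different dimension values => four separate metrics
for server in ("web-1", "web-2", "web-3", "web-4"):
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{
            "MetricName": "RequestLatency",
            "Dimensions": [{"Name": "Server", "Value": server}],
            "Value": 120.0,
            "Unit": "Milliseconds",
        }],
    )
```
-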
What is the maximum number of data points that a user can include in one PutMetricData HTTP request to CloudWatch?
- 30
- 50
- 10
- 20
Explanation:
The size of a PutMetricData request of CloudWatch is limited to 8KB for the HTTP GET requests and 40KB for the HTTP POST requests. The user can include a maximum of 20 data points in one PutMetricData request. -
You have set up a huge amount of network infrastructure in AWS and you now need to think about monitoring all of this. You decide CloudWatch will best fit your needs but you are unsure of the pricing structure and the limitations of CloudWatch.
Which of the following statements is TRUE in relation to the limitations of CloudWatch?
- You get 10 CloudWatch metrics, 10 alarms, 1,000,000 API requests, and 1,000 Amazon SNS email notifications per customer per month for free.
- You get 100 CloudWatch metrics, 100 alarms, 10,000,000 API requests, and 10,000 Amazon SNS email notifications per customer per month for free.
- You get 10 CloudWatch metrics, 10 alarms, 1,000 API requests, and 100 Amazon SNS email notifications per customer per month for free.
- You get 100 CloudWatch metrics, 100 alarms, 1,000,000 API requests, and 1,000 Amazon SNS email notifications per customer per month for free.
Explanation:
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real-time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications.
CloudWatch has the following limits:
You get 10 CloudWatch metrics, 10 alarms, 1,000,000 API requests, and 1,000 Amazon SNS email notifications per customer per month for free.
You can assign up to 10 dimensions per metric.
You can create up to 5000 alarms per AWS account. Metric data is kept for 2 weeks.
The size of a PutMetricData request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests.
You can include a maximum of 20 MetricDatum items in one PutMetricData request. A MetricDatum can contain a single value or a StatisticSet representing many values.
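For illustration only, a minimal boto3 sketch of the StatisticSet variant mentioned in the last limit above (the namespace, metric name, and values are placeholders): a single MetricDatum summarising many samples.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# One MetricDatum carrying a StatisticSet that summarises 250 samples
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "RequestLatency",
        "StatisticValues": {
            "SampleCount": 250,
            "Sum": 31250.0,
            "Minimum": 12.0,
            "Maximum": 980.0,
        },
        "Unit": "Milliseconds",
    }],
)
```
-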
A user is trying to send custom metrics to CloudWatch using the PutMetricData API. Which of the below mentioned points should the user take care of while sending the data to CloudWatch?
- The size of a request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests
- The size of a request is limited to 16KB for HTTP GET requests and 80KB for HTTP POST requests
- The size of a request is limited to 128KB for HTTP GET requests and 64KB for HTTP POST requests
- The size of a request is limited to 40KB for HTTP GET requests and 8KB for HTTP POST requests
Explanation:
With AWS CloudWatch, the user can publish data points for a metric that share not only the same time stamp, but also the same namespace and dimensions. CloudWatch can accept multiple data points in the same PutMetricData call with the same time stamp. The only thing that the user needs to take care of is that the size of a PutMetricData request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests. -
You set up your first Lambda function and want to set up some CloudWatch metrics to monitor your function. Which of the following Lambda metrics can CloudWatch monitor?
- Total requests only
- Status Check Failed, total requests, and error rates
- Total requests and CPU utilization
- Total invocations, errors, duration, and throttles
Explanation:
AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. These metrics include total invocations, errors, duration, and throttles.
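For illustration only, a minimal boto3 sketch (the function name is a placeholder): pulling one of these Lambda metrics, Invocations, from CloudWatch for the last hour.

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

# Total invocations of the function over the last hour, in 5-minute buckets
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])
```
-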
How many metrics are supported by CloudWatch for Auto Scaling?
- 7 metrics and 5 dimensions
- 5 metrics and 1 dimension
- 1 metric and 5 dimensions
- 8 metrics and 1 dimension
Explanation:
AWS Auto Scaling supports both basic and detailed monitoring of CloudWatch metrics. Basic monitoring happens every 5 minutes, while detailed monitoring happens every minute. It supports 8 metrics and 1 dimension.
The metrics are: GroupMinSize, GroupMaxSize, GroupDesiredCapacity, GroupInServiceInstances, GroupPendingInstances, GroupStandbyInstances, GroupTerminatingInstances, and GroupTotalInstances.
The dimension is AutoScalingGroupName.
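For illustration only, a minimal boto3 sketch (the Auto Scaling group name is a placeholder): enabling these group metrics and then listing them under the AWS/AutoScaling namespace with the single AutoScalingGroupName dimension.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Enable all group metrics for the Auto Scaling group (1-minute granularity)
autoscaling.enable_metrics_collection(
    AutoScalingGroupName="web-asg",
    Granularity="1Minute",
)

# List the metrics published under the AutoScalingGroupName dimension
resp = cloudwatch.list_metrics(
    Namespace="AWS/AutoScaling",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
)
for metric in resp["Metrics"]:
    print(metric["MetricName"])
```
-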
A user has enabled detailed CloudWatch monitoring with the AWS Simple Notification Service. Which of the below mentioned statements helps the user understand detailed monitoring better?
- SNS cannot provide data every minute
- SNS will send data every minute after configuration
- There is no need to enable since SNS provides data every minute
- AWS CloudWatch does not support monitoring for SNS
Explanation:
CloudWatch is used to monitor AWS as well as the custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. The AWS SNS service sends data every 5 minutes. Thus, it supports only the basic monitoring. The user cannot enable detailed monitoring with SNS. -
An AWS account owner has set up multiple IAM users. One of these IAM users, named John, has CloudWatch access, but no access to EC2 services. John has set up an alarm action which stops EC2 instances when their CPU utilization is below the threshold limit.
When an EC2 instance’s CPU Utilization rate drops below the threshold John has set, what will happen and why?
- CloudWatch will stop the instance when the action is executed
- Nothing will happen. John cannot set an alarm on EC2 since he does not have the permission.
- Nothing will happen. John can setup the action, but it will not be executed because he does not have EC2 access through IAM policies.
- Nothing will happen because it is not possible to stop the instance using the CloudWatch alarm
Explanation:
Amazon CloudWatch alarms watch a single metric over a time period that the user specifies and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The user can set up an action which stops the instances when their CPU utilization is below a certain threshold for a certain period of time. The EC2 action can either stop or terminate the instance. If the IAM user has read/write permissions for Amazon CloudWatch but not for Amazon EC2, he can still create the alarm.
However, the stop or terminate actions will not be performed on the Amazon EC2 instance.
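For illustration only, a minimal boto3 sketch of such an alarm (the instance ID and Region are placeholders): the alarm is created with the built-in EC2 stop action, but with John's permissions the stop itself would not be carried out.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm that stops the instance when average CPU stays below 10% for 15 minutes.
# Creating the alarm only needs CloudWatch permissions; executing the stop
# action additionally requires EC2 permissions, which John does not have.
cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
)
```
-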
A user has configured an EC2 instance in the US-East-1a zone. The user has enabled detailed monitoring of the instance. The user is trying to get the data from CloudWatch using a CLI.
Which of the below mentioned CloudWatch endpoint URLs should the user use?
- monitoring.us-east-1a.amazonaws.com
- cloudwatch.us-east-1a.amazonaws.com
- monitoring.us-east-1.amazonaws.com
- monitoring.us-east-1-a.amazonaws.com
Explanation:
CloudWatch resources are always Region specific, and the endpoints are Region specific as well; there is no per-Availability-Zone endpoint. If the user is trying to access the metric in the US-East-1 region, the endpoint URL will be: monitoring.us-east-1.amazonaws.com
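For illustration only, a minimal boto3 sketch: the client is configured with the Region (not the Availability Zone), and the Region-specific endpoint shown above is spelled out explicitly here even though boto3 would normally resolve it on its own.

```python
import boto3

# Region-specific client; the AZ suffix (us-east-1a) never appears in the endpoint
cloudwatch = boto3.client(
    "cloudwatch",
    region_name="us-east-1",
    endpoint_url="https://monitoring.us-east-1.amazonaws.com",  # normally resolved automatically
)
print(cloudwatch.list_metrics(Namespace="AWS/EC2")["Metrics"][:3])
```
-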
Which of the following is not included in the metrics sent from Billing to Amazon CloudWatch?
- Recurring fees for AWS products and services
- Total AWS charges
- One-time charges and refunds
- Usage charges for AWS products and services
Explanation:
Usage charges and recurring fees for AWS products and services are included in the metrics sent from Billing to Amazon CloudWatch.
You will have a metric for total AWS charges, as well as one additional metric for each AWS product or service that you use.
However, one-time charges and refunds are not included.
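For illustration only, a minimal boto3 sketch: billing metrics are published to CloudWatch in us-east-1 under the AWS/Billing namespace, and the total-charges metric carries a Currency dimension (per-service metrics additionally carry a ServiceName dimension).

```python
import datetime

import boto3

# Billing metrics are published to CloudWatch in the us-east-1 Region
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

# Total estimated AWS charges over the last day
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=86400,
    Statistics=["Maximum"],
)
print(resp["Datapoints"])
```
-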
After your Lambda function has been running for some time, you need to look at some metrics to ascertain how your function is performing and decide to use the AWS CLI to do this.
Which of the following commands must be used to access these metrics using the AWS CLI?
- mon-list-metrics and mon-get-stats
- list-metrics and get-metric-statistics
- ListMetrics and GetMetricStatistics
- list-metrics and mon-get-stats
Explanation:
AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch.
To access metrics using the AWS CLI, use the list-metrics and get-metric-statistics commands.
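For illustration only, the boto3 equivalent of the first of these commands (the function name is a placeholder); a GetMetricStatistics sketch appears under the earlier Lambda metrics question.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Equivalent of `aws cloudwatch list-metrics --namespace AWS/Lambda`
resp = cloudwatch.list_metrics(
    Namespace="AWS/Lambda",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
)
for metric in resp["Metrics"]:
    print(metric["MetricName"], metric["Dimensions"])
```
-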
In Amazon CloudWatch, you can publish your own metrics with the put-metric-data command. When you create a new metric using the put-metric-data command, it can take up to two minutes before you can retrieve statistics on the new metric using the get-metric-statistics command.
How long does it take before the new metric appears in the list of metrics retrieved using the list-metrics command?
- After 2 minutes
- Up to 15 minutes
- More than an hour
- Within a minute
Explanation:
You can publish your own metrics to CloudWatch with the put-metric-data command (or its Query API equivalent PutMetricData). When you create a new metric using the put-metric-data command, it can take up to two minutes before you can retrieve statistics on the new metric using the get-metric-statistics command. However, it can take up to fifteen minutes before the new metric appears in the list of metrics retrieved using the list-metrics command. -
A company runs a legacy system on a single m4.2xlarge Amazon EC2 instance with Amazon EBS storage. The EC2 instance runs both the web server and a self-managed Oracle database. A snapshot is made of the EBS volume every 12 hours, and an AMI was created from the fully configured EC2 instance.
A recent event that terminated the EC2 instance led to several hours of downtime. The application was successfully launched from the AMI, but the age of the EBS snapshot and the repair of the database resulted in the loss of 8 hours of data. The system was also down for 4 hours while the Systems Operators manually performed these processes.
What architectural changes will minimize downtime and reduce the chance of lost data?
- Create an Amazon CloudWatch alarm to automatically recover the instance. Create a script that will check and repair the database upon reboot. Subscribe the Operations team to the Amazon SNS message generated by the CloudWatch alarm.
- Run the application on m4.xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of two. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
- Run the application on m4.2xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of one. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
- Increase the web server instance count to two m4.xlarge instances and use Amazon Route 53 round-robin load balancing to spread the load. Enable Route 53 health checks on the web servers. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
-
A Solutions Architect is working with a company that operates a standard three-tier web application in AWS. The web and application tiers run on Amazon EC2 and the database tier runs on Amazon RDS. The company is redesigning the web and application tiers to use Amazon API Gateway and AWS Lambda, and the company intends to deploy the new application within 6 months. The IT Manager has asked the Solutions Architect to reduce costs in the interim.
Which solution will be MOST cost effective while maintaining reliability?
- Use Spot Instances for the web tier, On-Demand Instances for the application tier, and Reserved Instances for the database tier.
- Use On-Demand Instances for the web and application tiers, and Reserved Instances for the database tier.
- Use Spot Instances for the web and application tiers, and Reserved Instances for the database tier.
- Use Reserved Instances for the web, application, and database tiers.
-
A company uses Amazon S3 to store documents that may only be accessible to an Amazon EC2 instance in a certain virtual private cloud (VPC). The company fears that a malicious insider with access to this instance could also set up an EC2 instance in another VPC to access these documents.
Which of the following solutions will provide the required protection?
- Use an S3 VPC endpoint and an S3 bucket policy to limit access to this VPC endpoint.
- Use EC2 instance profiles and an S3 bucket policy to limit access to the role attached to the instance profile.
- Use S3 client-side encryption and store the key in the instance metadata.
- Use S3 server-side encryption and protect the key with an encryption context.
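For illustration only, a hedged sketch of the first option above (the bucket name and VPC endpoint ID are placeholders): a bucket policy that denies any S3 action unless the request arrives through the expected gateway VPC endpoint, so an instance in another VPC cannot reach the documents.

```python
import json

import boto3

# Deny every request that does not come through the expected S3 VPC endpoint
# (bucket name and endpoint ID below are placeholders)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessOnlyViaVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-docs-bucket",
            "arn:aws:s3:::example-docs-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-docs-bucket",
    Policy=json.dumps(policy),
)
```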