SAA-C02 : AWS Certified Solutions Architect – Associate : Part 22
-
A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Choose two.)
- Configure a VPC gateway endpoint for Amazon S3 within the VPC.
- Create a bucket policy to make the objects in the S3 bucket public.
- Create a bucket policy that limits access to only the application tier running in the VPC.
- Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
- Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.
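For reference, the gateway-endpoint-plus-bucket-policy combination might be wired up as in this minimal boto3 sketch (the VPC, route table, and bucket names are hypothetical):

```python
import json
import boto3

# Hypothetical identifiers; replace with real values.
VPC_ID = "vpc-0123456789abcdef0"
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
BUCKET = "sensitive-user-data-bucket"

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Step 1: gateway endpoint so S3 traffic stays inside the VPC.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[ROUTE_TABLE_ID],
)
endpoint_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Step 2: bucket policy that denies access unless the request
# arrives through that endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessViaVpcEndpointOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": endpoint_id}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```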
-
A solutions architect plans to convert a company’s monolithic web application into a multi-tier application. The company wants to avoid managing its own infrastructure. The minimum requirements for the web application are high availability, scalability, and regional low latency during peak hours. The solution should also store and retrieve data with millisecond latency using the application’s API.
Which solution meets these requirements?
- Use AWS Fargate to host the web application with backend Amazon RDS Multi-AZ DB instances.
- Use Amazon API Gateway with an edge-optimized API endpoint, AWS Lambda for compute, and Amazon DynamoDB as the data store.
- Use an Amazon Route 53 routing policy with geolocation that points to an Amazon S3 bucket with static website hosting and Amazon DynamoDB as the data store.
- Use an Amazon CloudFront distribution that points to an Elastic Load Balancer with an Amazon EC2 Auto Scaling group, along with Amazon RDS Multi-AZ DB instances.
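As an illustration of the API Gateway, Lambda, and DynamoDB option, a Lambda handler serving single-item reads might look like this sketch (the table and key names are hypothetical):

```python
import boto3

# Minimal Lambda handler behind an API Gateway proxy integration;
# the table name is hypothetical.
table = boto3.resource("dynamodb").Table("app-data")

def handler(event, context):
    # API Gateway passes path parameters through the event payload.
    item_id = event["pathParameters"]["id"]
    result = table.get_item(Key={"id": item_id})  # single-digit-ms reads
    return {"statusCode": 200, "body": str(result.get("Item", {}))}
```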
-
A team has an application that detects new objects being uploaded into an Amazon S3 bucket. Each upload triggers an AWS Lambda function that writes object metadata into an Amazon DynamoDB table and an Amazon RDS for PostgreSQL database.
Which action should the team take to ensure high availability?
- Enable Cross-Region Replication in the S3 bucket.
- Create a Lambda function for each Availability Zone the application is deployed in.
- Enable Multi-AZ on the RDS for PostgreSQL database.
- Create a DynamoDB stream for the DynamoDB table.
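For reference, enabling Multi-AZ on an existing RDS for PostgreSQL instance is a single API call; a minimal boto3 sketch with a hypothetical instance identifier:

```python
import boto3

rds = boto3.client("rds")

# Convert the instance to Multi-AZ. ApplyImmediately triggers the change
# now rather than waiting for the next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="metadata-postgres",
    MultiAZ=True,
    ApplyImmediately=True,
)
```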
-
A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate with an on-premises storage solution to store application data. The application cannot be modified to use any protocol other than NFS for this purpose.
Which storage solution should a solutions architect recommend for use after the migration?
- AWS DataSync
- Amazon Elastic Block Store (Amazon EBS)
- Amazon Elastic File System (Amazon EFS)
- Amazon EMR File System (Amazon EMRFS)
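As a sketch of the Amazon EFS option, the file system and a mount target can be provisioned as below (the subnet and security group IDs are hypothetical); instances then mount it over standard NFSv4 with no application changes:

```python
import boto3

efs = boto3.client("efs")

# Create the file system, then a mount target in the application subnet.
# In practice, wait for the file system to become "available" before
# creating the mount target.
fs = efs.create_file_system(CreationToken="legacy-app-data")
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
```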
-
An application calls a service run by a vendor. The vendor charges based on the number of calls. The finance department needs to know the number of calls that are made to the service to validate the billing statements.
How can a solutions architect design a system to durably store the number of calls without requiring changes to the application?
- Call the service through an internet gateway.
- Decouple the application from the service with an Amazon Simple Queue Service (Amazon SQS) queue.
- Publish a custom Amazon CloudWatch metric that counts calls to the service.
- Call the service through a VPC peering connection.
Explanation:
There are two main types of CloudWatch monitoring for Amazon EC2 instances:
Basic Monitoring for Amazon EC2 instances: seven pre-selected metrics at five-minute frequency and three status check metrics at one-minute frequency, at no additional charge.
Detailed Monitoring for Amazon EC2 instances: all metrics available to Basic Monitoring at one-minute frequency, for an additional charge. Instances with Detailed Monitoring enabled allow data aggregation by Amazon EC2 AMI ID and instance type.
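For reference, publishing such a custom metric is a single boto3 call; a minimal sketch with a hypothetical namespace and dimension:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Emit a count of 1 per observed call to the vendor service (e.g., from
# a proxy or log processor). Finance can then sum the metric over the
# billing period without any application changes.
cloudwatch.put_metric_data(
    Namespace="VendorService",
    MetricData=[{
        "MetricName": "CallCount",
        "Dimensions": [{"Name": "Service", "Value": "vendor-api"}],
        "Value": 1,
        "Unit": "Count",
    }],
)
```
-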
A company wants to reduce its Amazon S3 storage costs in its production environment without impacting durability or performance of the stored objects.
What is the FIRST step the company should take to meet these objectives?
- Enable Amazon Macie on the business-critical S3 buckets to classify the sensitivity of the objects.
- Enable S3 analytics to identify S3 buckets that are candidates for transitioning to S3 Standard-Infrequent Access (S3 Standard-IA).
- Enable versioning on all business-critical S3 buckets.
- Migrate the objects in all S3 buckets to S3 Intelligent-Tiering.
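For reference, S3 storage class analysis can be enabled per bucket; a minimal boto3 sketch with hypothetical bucket names:

```python
import boto3

s3 = boto3.client("s3")

# Enable storage class analysis; the resulting report identifies objects
# that are good candidates for transition to S3 Standard-IA.
s3.put_bucket_analytics_configuration(
    Bucket="production-data",
    Id="whole-bucket",
    AnalyticsConfiguration={
        "Id": "whole-bucket",
        "StorageClassAnalysis": {
            "DataExport": {
                "OutputSchemaVersion": "V_1",
                "Destination": {
                    "S3BucketDestination": {
                        "Format": "CSV",
                        "Bucket": "arn:aws:s3:::analytics-reports",
                    }
                },
            }
        },
    },
)
```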
-
A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned about the overall cost of the solution.
Which storage solution meets these requirements MOST cost-effectively?
- Amazon Elastic Block Store (Amazon EBS)
- Amazon Elastic File System (Amazon EFS)
- Amazon Elasticsearch Service (Amazon ES)
- Amazon S3
-
A company hosts multiple production applications. One of the applications consists of resources from Amazon EC2, AWS Lambda, Amazon RDS, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regions. All company resources are tagged with a tag name of “application” and a value that corresponds to each application. A solutions architect must provide the quickest solution for identifying all of the tagged components.
Which solution meets these requirements?
- Use AWS CloudTrail to generate a list of resources with the application tag.
- Use the AWS CLI to query each service across all Regions to report the tagged components.
- Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag.
- Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag.
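As a sketch of querying by tag programmatically: the Resource Groups Tagging API is regional, so a global report iterates the Regions of interest (the Region list and tag value are illustrative):

```python
import boto3

for region in ["us-east-1", "eu-west-1", "ap-southeast-1"]:
    tagging = boto3.client("resourcegroupstaggingapi", region_name=region)
    paginator = tagging.get_paginator("get_resources")
    for page in paginator.paginate(
        TagFilters=[{"Key": "application", "Values": ["my-app"]}]
    ):
        for resource in page["ResourceTagMappingList"]:
            print(region, resource["ResourceARN"])
```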
-
A development team is deploying a new product on AWS and is using AWS Lambda as part of the deployment. The team allocates 512 MB of memory for one of the Lambda functions. With this memory allocation, the function completes in 2 minutes. The function runs millions of times monthly, and the development team is concerned about cost. The team conducts tests to see how different Lambda memory allocations affect the cost of the function.
Which steps will reduce the Lambda costs for the product? (Choose two.)
- Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 1 minute.
- Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 90 seconds.
- Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 4 minutes.
- Increase the memory allocation for this Lambda function to 2,048 MB if this change causes the execution time of each function to be less than 1 minute.
- Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 5 minutes.
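Because Lambda compute cost scales with memory × duration (GB-seconds), the options can be compared directly; a quick worked check with durations chosen just under each stated threshold:

```python
# Baseline: 512 MB for 120 s = 60 GB-seconds per invocation.
baseline = (512 / 1024) * 120   # 60.0 GB-seconds

# 1,024 MB finishing in under 60 s beats the baseline.
option_a = (1024 / 1024) * 59   # 59.0 GB-seconds

# 256 MB finishing in under 240 s also beats the baseline.
option_c = (256 / 1024) * 239   # 59.75 GB-seconds

# The other thresholds can all exceed 60 GB-seconds:
option_b = (1024 / 1024) * 89   # up to 89.0 GB-seconds
option_d = (2048 / 1024) * 59   # up to 118.0 GB-seconds
option_e = (256 / 1024) * 299   # up to 74.75 GB-seconds

print(baseline, option_a, option_c, option_b, option_d, option_e)
```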
-
A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company’s internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
- Replace the current security group of the bastion host with one that only allows inbound access from the application instances.
- Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company.
- Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.
- Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host.
- Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host.
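For reference, the two security group changes might look like this boto3 sketch (the group IDs, company CIDR, and bastion private IP are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Bastion security group: allow SSH only from the company's external
# (public) IP range, since the traffic arrives over the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-bastion",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "Company external range"}],
    }],
)

# Application security group: allow SSH only from the bastion's private
# IP address, since bastion-to-app traffic stays inside the VPC.
ec2.authorize_security_group_ingress(
    GroupId="sg-application",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.0.1.10/32",
                      "Description": "Bastion private IP"}],
    }],
)
```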
-
A user owns a MySQL database that is accessed by various clients who expect, at most, 100 ms latency on requests. Once a record is stored in the database, it is rarely changed. Clients only access one record at a time.
Database access has been increasing exponentially due to increased client demand. The resultant load will soon exceed the capacity of the most expensive hardware available for purchase. The user wants to migrate to AWS, and is willing to change database systems.
Which service would alleviate the database load issue and offer virtually unlimited scalability for the future?
- Amazon RDS
- Amazon DynamoDB
- Amazon Redshift
- AWS Data Pipeline
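As an illustration of the single-record access pattern on DynamoDB, a key lookup is one call (the table and attribute names are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("records")

# Write once, read many: DynamoDB key lookups stay at consistent
# single-digit-millisecond latency at virtually any scale.
table.put_item(Item={"record_id": "r-1001", "payload": "rarely changes"})
response = table.get_item(Key={"record_id": "r-1001"})
print(response["Item"])
```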
-
A company designs a mobile app for its customers to upload photos to a website. The app needs a secure login with multi-factor authentication (MFA). The company wants to limit the initial build time and the maintenance of the solution.
Which solution should a solutions architect recommend to meet these requirements?
- Use Amazon Cognito Identity with SMS-based MFA.
- Edit IAM policies to require MFA for all users.
- Federate IAM against the corporate Active Directory that requires MFA.
- Use Amazon API Gateway and require server-side encryption (SSE) for photos.
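For reference, a Cognito user pool enforcing SMS-based MFA can be created as below (the pool name and the SNS caller role ARN are hypothetical):

```python
import boto3

cognito = boto3.client("cognito-idp")

# MfaConfiguration="ON" makes MFA mandatory; Cognito sends SMS codes
# through Amazon SNS using the supplied IAM role.
cognito.create_user_pool(
    PoolName="photo-app-users",
    MfaConfiguration="ON",
    SmsConfiguration={
        "SnsCallerArn": "arn:aws:iam::123456789012:role/cognito-sns-role",
    },
    AutoVerifiedAttributes=["phone_number"],
)
```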
-
A company has an application that uses overnight digital images of products on store shelves to analyze inventory data. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and retrieves the images from an Amazon S3 bucket so that their metadata can be processed by worker nodes for analysis. A solutions architect needs to ensure that every image is processed by the worker nodes.
What should the solutions architect do to meet this requirement in the MOST cost-efficient way?
- Send the image metadata from the application directly to a second ALB for the worker nodes that use an Auto Scaling group of EC2 Spot Instances as the target group.
- Process the image metadata by sending it directly to EC2 Reserved Instances in an Auto Scaling group. With a dynamic scaling policy, use an Amazon CloudWatch metric for average CPU utilization of the Auto Scaling group as soon as the front-end application obtains the images.
- Write messages to Amazon Simple Queue Service (Amazon SQS) when the front-end application obtains an image. Process the images with EC2 On-Demand instances in an Auto Scaling group with instance scale-in protection and a fixed number of instances with periodic health checks.
- Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.
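As a sketch of the queue-driven design: the front end enqueues one message per image, and a small publisher emits the queue depth as a custom CloudWatch metric for the scaling policy to track (the queue URL and namespace are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-queue"

# Producer side: the front-end application enqueues one message per image.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="s3://images/shelf-001.jpg")

# Scaling side: periodically publish the queue depth as a custom metric
# that the Auto Scaling group's dynamic policy can track.
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
cloudwatch.put_metric_data(
    Namespace="ImagePipeline",
    MetricData=[{
        "MetricName": "QueueDepth",
        "Value": int(attrs["Attributes"]["ApproximateNumberOfMessages"]),
        "Unit": "Count",
    }],
)
```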
-
A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The workload requires access latency within 1 ms. After processing has completed, engineers will need access to the dataset for manual postprocessing.
Which solution will meet these requirements?
- Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.
- Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3 bucket.
- Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for postprocessing.
- Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all instances for processing and postprocessing.
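For reference, an FSx for Lustre file system linked to an S3 bucket might be created as below (the subnet ID and bucket paths are hypothetical):

```python
import boto3

fsx = boto3.client("fsx")

# ImportPath lazily loads the dataset from S3; ExportPath lets processed
# results be written back to the bucket for manual postprocessing.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "ImportPath": "s3://hpc-dataset",
        "ExportPath": "s3://hpc-dataset/results",
    },
)
```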
-
A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The application is hosted on redundant servers in the company’s on-premises data centers in the United States, Asia, and Europe. The company’s compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application.
What should a solutions architect do to meet these requirements?
- Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
- Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
- Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
- Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
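As a sketch of the NLB-plus-Global Accelerator option, the accelerator, a UDP listener, and one endpoint group per Region might be created as below (the port and NLB ARNs are hypothetical placeholders):

```python
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="udp-app", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
)

# Register one NLB per Region; each NLB addresses on-premises endpoints.
for region, nlb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/us/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/eu/abc"),
    ("ap-northeast-1", "arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:loadbalancer/net/ap/abc"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
    )
```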
-
A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations.
Which solution meets these requirements?
- Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
- Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.
- Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.
- Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
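For reference, an Aurora Serverless (MySQL-compatible) cluster might be created as below (the identifiers, credentials, and capacity bounds are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Capacity scales automatically between the configured ACU bounds,
# replacing manual replication and scaling work.
rds.create_db_cluster(
    DBClusterIdentifier="app-db",
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me",
    ScalingConfiguration={"MinCapacity": 1, "MaxCapacity": 16,
                          "AutoPause": True},
)
```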
-
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?
- Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
- Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
- Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
- Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
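As a sketch of the Firehose-to-S3 option with 14-day archival (the role ARN and bucket names are hypothetical):

```python
import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Delivery stream that batches alerts into S3 with no servers to manage.
firehose.create_delivery_stream(
    DeliveryStreamName="edge-alerts",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
        "BucketARN": "arn:aws:s3:::edge-alerts",
    },
)

# Lifecycle rule: archive objects to S3 Glacier after 14 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="edge-alerts",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-14-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
        }],
    },
)
```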
-
A company has two AWS accounts: Production and Development. There are code changes ready in the Development account to push to the Production account. In the alpha phase, only two senior developers on the development team need access to the Production account. In the beta phase, more developers might need access to perform testing as well.
What should a solutions architect recommend?
- Create two policy documents using the AWS Management Console in each account. Assign the policy to developers who need access.
- Create an IAM role in the Development account. Give one IAM role access to the Production account. Allow developers to assume the role.
- Create an IAM role in the Production account with the trust policy that specifies the Development account. Allow developers to assume the role.
- Create an IAM group in the Production account and add it as a principal in the trust policy that specifies the Production account. Add developers to the group.
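For reference, the cross-account role in the Production account might be created as below (the Development account ID and role name are hypothetical):

```python
import json
import boto3

iam = boto3.client("iam")  # run in the Production account

# Trust policy that lets principals from the Development account
# assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="cross-account-deployer",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# In the Development account, grant sts:AssumeRole on this role's ARN to
# only the two senior developers; widen that grant for the beta phase.
```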
-
A company is using an Amazon S3 bucket to store data uploaded by different departments from multiple locations. During an AWS Well-Architected review, the financial manager notices that 10 TB of S3 Standard storage data is billed each month. However, in the AWS Management Console for Amazon S3, selecting all files and folders shows a total size of 5 TB.
What are the possible causes for this difference? (Choose two.)
- Some files are stored with deduplication.
- The S3 bucket has versioning enabled.
- There are incomplete S3 multipart uploads.
- The S3 bucket has AWS Key Management Service (AWS KMS) enabled.
- The S3 bucket has Intelligent-Tiering enabled.
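As a sketch of diagnosing the gap, noncurrent versions and incomplete multipart uploads can both be listed directly (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "shared-uploads"

# Noncurrent object versions are billed but hidden from the console view.
versions = s3.list_object_versions(Bucket=BUCKET)
noncurrent = [v for v in versions.get("Versions", []) if not v["IsLatest"]]
print("noncurrent versions:", len(noncurrent))

# Incomplete multipart uploads also accrue storage charges invisibly.
parts = s3.list_multipart_uploads(Bucket=BUCKET)
print("incomplete uploads:", len(parts.get("Uploads", [])))
```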
-
A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit.
Which solution meets these requirements?
- Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
- Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets.
- Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads.
- Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
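For reference, if the bucket-policy approach were chosen, a policy denying unencrypted uploads and non-TLS requests might look like this sketch (the bucket name is hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "central-logs"

# Deny uploads that do not request SSE-S3, and deny any request that
# does not use TLS, covering encryption at rest and in transit.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireSSES3",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringNotEquals": {
                "s3:x-amz-server-side-encryption": "AES256"}},
        },
        {
            "Sid": "RequireTLS",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}",
                         f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```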