SAA-C02 : AWS Certified Solutions Architect – Associate : Part 11
-
A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancer. However, many of the web service clients can only reach IP addresses whitelisted on their firewalls.
What should a solutions architect recommend to meet the clients’ needs?
- A Network Load Balancer with an associated Elastic IP address
- An Application Load Balancer with an associated Elastic IP address
- An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address
- An EC2 instance with a public IP address running as a proxy in front of the load balancer
-
A company wants to host a web application on AWS that will communicate to a database within a VPC. The application should be highly available.
What should a solutions architect recommend?
- Create two Amazon EC2 instances to host the web servers behind a load balancer, and then deploy the database on a large instance.
- Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.
- Deploy a load balancer in the public subnet with an Auto Scaling group for the web servers, and then deploy the database on an Amazon EC2 instance in the private subnet.
- Deploy two web servers with an Auto Scaling group, configure a domain that points to the two web servers, and then deploy a database architecture in multiple Availability Zones.
-
A company’s packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application’s source code.
What should a solutions architect do to reduce costs?
- Use Lambda@Edge to compress the files as they are sent to users.
- Enable Amazon S3 Transfer Acceleration to reduce the response times.
- Enable caching on the CloudFront distribution to store generated files at the edge.
- Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.
-
A database is on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that experiences highly dynamic reads. Application developers notice a significant slowdown when testing read performance from a secondary AWS Region. The developers want a solution that provides less than 1 second of read replication latency.
What should the solutions architect recommend?
- Install MySQL on Amazon EC2 in the secondary Region.
- Migrate the database to Amazon Aurora with cross-Region replicas.
- Create another RDS for MySQL read replica in the secondary Region.
- Implement Amazon ElastiCache to improve database query performance.
-
A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora. The company has a backup retention policy requirement of 90 days. Which solution should a solutions architect recommend?
- Set the backup retention period to 90 days when creating the RDS DB instance.
- Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days.
- Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.
- Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
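The key constraint behind this question is that RDS automated backups can be retained for at most 35 days, so a 90-day policy needs AWS Backup. As a minimal sketch, the plan in the third option could look like the following boto3 `create_backup_plan` payload; the plan name, vault name, and schedule are illustrative, and note that the plan's rule already runs daily on its own, so no separate job scheduling is required.

```python
# Illustrative AWS Backup plan payload (shape used by boto3
# backup.create_backup_plan). The daily cron schedule lives inside the
# rule itself, so AWS Backup triggers the snapshot automatically.
backup_plan = {
    "BackupPlanName": "rds-90day-plan",  # hypothetical name
    "Rules": [
        {
            "RuleName": "daily-rds-snapshot",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # once a day, 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 90},  # beyond the 35-day RDS limit
        }
    ],
}
```

Passing this dictionary as the `BackupPlan` argument would create the plan; resources are then attached with a backup selection.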
-
A company currently has 250 TB of backup files stored in Amazon S3 in a vendor’s proprietary format. Using a Linux-based software application provided by the vendor, the company wants to retrieve files from Amazon S3, transform the files to an industry-standard format, and re-upload them to Amazon S3. The company wants to minimize the data transfer charges associated with this conversion.
What should a solutions architect do to accomplish this?
- Install the conversion software as an Amazon S3 batch operation so the data is transformed without leaving Amazon S3.
- Install the conversion software onto an on-premises virtual machine. Perform the transformation and re-upload the files to Amazon S3 from the virtual machine.
- Use AWS Snowball Edge devices to export the data and install the conversion software onto the devices. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball Edge devices.
- Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from the EC2 instance.
-
A company is migrating a NoSQL database cluster to Amazon EC2. The database automatically replicates data to maintain at least three copies of the data. I/O throughput of the servers is the highest priority. Which instance type should a solutions architect recommend for the migration?
- Storage optimized instances with instance store
- Burstable general purpose instances with an Amazon Elastic Block Store (Amazon EBS) volume
- Memory optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
- Compute optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
-
A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control.
Which solution will satisfy these requirements?
- Configure Amazon Elastic File System (Amazon EFS) storage and set the Active Directory domain for authentication.
- Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.
- Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.
- Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
-
A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.
Which solution will meet these requirements?
- Amazon DynamoDB
- Amazon RDS for MySQL
- MySQL-compatible Amazon Aurora Serverless
- MySQL deployed on Amazon EC2 in an Auto Scaling group
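What makes Aurora Serverless cost-effective here is its cluster-level scaling configuration, which lets capacity follow the sporadic load and pause entirely when idle. A hedged sketch of that setting, in the shape of the `ScalingConfiguration` parameter accepted by boto3's `rds.create_db_cluster` (the capacity values below are illustrative, not from the question):

```python
# Illustrative Aurora Serverless scaling settings (ScalingConfiguration
# parameter of rds.create_db_cluster). Capacity is measured in Aurora
# Capacity Units (ACUs).
scaling_configuration = {
    "MinCapacity": 1,              # baseline for quiet periods
    "MaxCapacity": 16,             # headroom for the month-start spike
    "AutoPause": True,             # stop compute billing when idle
    "SecondsUntilAutoPause": 300,  # pause after 5 idle minutes
}
```

Because the engine is MySQL-compatible, the application needs no database modifications, which is the other half of the requirement.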
-
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data. Which combination of storage and caching should the solutions architect use?
- Amazon S3 with Amazon CloudFront
- Amazon S3 Glacier with Amazon ElastiCache
- Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
- AWS Storage Gateway with Amazon ElastiCache
-
A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.
What should the solutions architect do to reduce the overall data transfer costs?
- Place all the EC2 instances in an Auto Scaling group.
- Place all the EC2 instances in the same AWS Region.
- Place all the EC2 instances in the same Availability Zone.
- Place all the EC2 instances in private subnets in multiple Availability Zones.
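The cost reasoning can be made concrete with a rough back-of-the-envelope calculation. Traffic between instances over private IPs within one Availability Zone is free, while cross-AZ traffic is billed in both directions; the $0.01/GB figure below is the commonly cited per-direction inter-AZ rate and varies by Region, so treat it as an assumption:

```python
# Rough cost comparison for shuffling batch data between EC2 instances.
# Assumes the commonly cited $0.01/GB inter-AZ rate, charged in each
# direction; actual pricing varies by Region.
data_gb = 10_000                 # e.g. 10 TB moved between instances
inter_az_rate = 0.01             # USD per GB, per direction

inter_az_cost = data_gb * inter_az_rate * 2  # out of one AZ + into the other
same_az_cost = 0.0               # private-IP traffic within one AZ is free

print(inter_az_cost)  # 200.0
```

The trade-off is availability: a single AZ is cheapest for transfer, but offers no AZ-level fault tolerance, which is acceptable for a restartable batch job.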
-
A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services.
What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?
- Create a DX connection in each new account. Route the network traffic to the on-premises servers.
- Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers.
- Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.
- Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
-
A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.
What should a solutions architect recommend?
- Deploy Amazon Inspector and associate it with the ALB.
- Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
- Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
- Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
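A rate-based rule is what distinguishes the WAF option: it blocks only source IPs exceeding a request threshold, leaving legitimate users untouched even as the attackers rotate addresses. As a minimal sketch, here is such a rule in the shape that boto3's `wafv2.create_web_acl` expects in its `Rules` list; the rule name and the 2000-request threshold are illustrative:

```python
# Sketch of a WAF rate-based rule (element of the Rules list passed to
# wafv2.create_web_acl). WAF counts requests per source IP over a
# 5-minute window and blocks IPs above the limit.
rate_limit_rule = {
    "Name": "limit-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,             # max requests per 5-minute window
            "AggregateKeyType": "IP",  # evaluated per source IP
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "limit-per-ip",
    },
}
```

Network ACL rules, by contrast, would require maintaining an ever-changing IP deny list, which the changing source addresses defeat.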
-
A company receives structured and semi-structured data from various sources once every day. A solutions architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools.
What should the solutions architect recommend to build the MOST high-performing solution?
- Use AWS Glue to process data and Amazon S3 to store data.
- Use Amazon EMR to process data and Amazon Redshift to store data.
- Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data.
- Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.
-
A company is hosting an election reporting website on AWS for users around the world. The website uses Amazon EC2 instances for the web and application tiers in an Auto Scaling group with Application Load Balancers. The database tier uses an Amazon RDS for MySQL database. The website is updated with election results once an hour and has historically observed hundreds of users accessing the reports.
The company is expecting a significant increase in demand because of upcoming elections in different countries. A solutions architect must improve the website’s ability to handle additional demand while minimizing the need for additional EC2 instances.
Which solution will meet these requirements?
- Launch an Amazon ElastiCache cluster to cache common database queries.
- Launch an Amazon CloudFront web distribution to cache commonly requested website content.
- Enable disk-based caching on the EC2 instances to cache commonly requested website content.
- Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly requested website content.
-
A company is building a website that relies on reading and writing to an Amazon DynamoDB database. The traffic associated with the website predictably peaks during business hours on weekdays and declines overnight and during weekends. A solutions architect needs to design a cost-effective solution that can handle the load.
What should the solutions architect do to meet these requirements?
- Enable DynamoDB Accelerator (DAX) to cache the data.
- Enable Multi-AZ replication for the DynamoDB database.
- Enable DynamoDB auto scaling when creating the tables.
-
A company uses Amazon Redshift for its data warehouse. The company wants to ensure high durability for its data in case of any component failure.
What should a solutions architect recommend?
- Enable concurrency scaling.
- Enable cross-Region snapshots.
- Increase the data retention period.
- Deploy Amazon Redshift in Multi-AZ.
-
A company has data stored in an on-premises data center that is used by several on-premises applications. The company wants to maintain its existing application environment and be able to use AWS services for data analytics and future visualizations.
Which storage service should a solutions architect recommend?
- Amazon Redshift
- AWS Storage Gateway for files
- Amazon Elastic Block Store (Amazon EBS)
- Amazon Elastic File System (Amazon EFS)
-
A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company’s security policy requires that all website traffic be inspected by AWS WAF.
How should the solutions architect comply with these requirements?
- Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only.
- Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin.
- Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront.
- Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the distribution.
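The OAI approach works because the bucket policy grants read access only to the CloudFront identity, so viewers cannot fetch objects directly from S3 and bypass the WAF attached to the distribution. A hedged sketch of that policy follows; the OAI ID and bucket name are placeholders:

```python
import json

# Sketch of an S3 bucket policy that allows reads only through a
# CloudFront origin access identity (OAI). With this in place, all
# viewer traffic must pass through CloudFront, and therefore through
# the WAF web ACL enabled on the distribution.
oai_id = "E2EXAMPLE"            # hypothetical OAI ID
bucket = "example-static-site"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/"
                       f"CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```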
-
A company has a 143 TB MySQL database that it wants to migrate to AWS. The plan is to use Amazon Aurora MySQL as the platform going forward. The company has a 100 Mbps AWS Direct Connect connection to Amazon VPC.
Which solution meets the company’s needs and takes the LEAST amount of time?
- Use a gateway endpoint for Amazon S3. Migrate the data to Amazon S3. Import the data into Aurora.
- Upgrade the Direct Connect link to 500 Mbps. Copy the data to Amazon S3. Import the data into Aurora.
- Order an AWS Snowmobile and copy the database backup to it. Have AWS import the data into Amazon S3. Import the backup into Aurora.
- Order four 50-TB AWS Snowball devices and copy the database backup onto them. Have AWS import the data into Amazon S3. Import the data into Aurora.
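The arithmetic behind this question is worth making explicit: even at 100% utilization, the existing link cannot move the database in a reasonable time, which is why a physical transfer device wins. A quick sketch (decimal TB and ideal line rate assumed, no protocol overhead):

```python
# Time to push 143 TB over a 100 Mbps Direct Connect link at full
# line rate, ignoring protocol overhead.
db_bytes = 143 * 10**12   # 143 TB, decimal
link_bps = 100 * 10**6    # 100 Mbps

seconds = db_bytes * 8 / link_bps
days = seconds / 86400
print(round(days, 1))  # 132.4 days at 100% utilization
```

Even a 500 Mbps upgrade would still take roughly 26 days of sustained transfer, while Snowball devices typically turn around in about a week.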