SAA-C02 : AWS Certified Solutions Architect – Associate : Part 23
-
A company’s HTTP application is behind a Network Load Balancer (NLB). The NLB’s target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application’s availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?
- Enable HTTP health checks on the NLB, supplying the URL of the company’s application.
- Add a cron job to the EC2 instances to check the local application’s logs once each minute. If HTTP errors are detected, the application will restart.
- Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances.
- Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.
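For context on the Application Load Balancer option: HTTP health checks are defined on the target group, and the Auto Scaling group can be told to trust them. A minimal boto3 sketch, assuming a hypothetical target group ARN and a /health endpoint:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Point the target group's health check at the application's HTTP endpoint.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",  # hypothetical
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=3,
    Matcher={"HttpCode": "200"},
)
```

Setting the Auto Scaling group's health check type to ELB then causes instances that fail this check to be terminated and replaced without any custom code.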
-
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.
What is the MOST cost-effective solution to connect these VPCs?
- Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication.
- Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication.
- Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication.
- Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
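To illustrate the peering option: a same-account, same-Region peering connection still needs an explicit accept and a route in each VPC's route table. A sketch with hypothetical VPC, route table, and CIDR values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Request and accept the peering connection (IDs are hypothetical).
pcx = ec2.create_vpc_peering_connection(VpcId="vpc-0aaa111", PeerVpcId="vpc-0bbb222")
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-0aaa111", DestinationCidrBlock="10.1.0.0/16", VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0bbb222", DestinationCidrBlock="10.0.0.0/16", VpcPeeringConnectionId=pcx_id)
```

Unlike a transit gateway, VPC peering carries no hourly attachment charge, which matters at 500 GB per month.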
-
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.
Which combination of network solutions will meet these requirements? (Choose two.)
- Enable and configure enhanced networking on each EC2 instance.
- Group the EC2 instances in separate accounts.
- Run the EC2 instances in a cluster placement group.
- Attach multiple elastic network interfaces to each EC2 instance.
- Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
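A sketch of the cluster placement group approach (AMI and instance type are hypothetical; current-generation instance types ship with ENA enhanced networking enabled by default):

```python
import boto3

ec2 = boto3.client("ec2")

# A cluster placement group packs instances close together in one AZ
# to minimize inter-node latency.
ec2.create_placement_group(GroupName="stream-nodes", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="c5n.9xlarge",       # network-optimized type chosen for illustration
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "stream-nodes"},
)
```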
-
A company is running a global application. The application’s users submit multiple videos that are then merged into a single video file. The application uses a single Amazon S3 bucket in the us-east-1 Region to receive uploads from users. The same S3 bucket provides the download location of the single video file that is produced. The final video file output has an average size of 250 GB.
The company needs to develop a solution that delivers faster uploads and downloads of the video files that are stored in Amazon S3. The company will offer the solution as a subscription to users who want to pay for the increased speed.
What should a solutions architect do to meet these requirements?
- Enable AWS Global Accelerator for the S3 endpoint. Adjust the application’s upload and download links to use the Global Accelerator S3 endpoint for users who have a subscription.
- Enable S3 Cross-Region Replication to S3 buckets in all other AWS Regions. Use an Amazon Route 53 geolocation routing policy to route S3 requests based on the location of users who have a subscription.
- Create an Amazon CloudFront distribution and use the S3 bucket in us-east-1 as an origin. Adjust the application to use the CloudFront URL as the upload and download links for users who have a subscription.
- Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket’s S3-accelerate endpoint domain name for the upload and download links for users who have a subscription.
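To ground the Transfer Acceleration option: acceleration is a one-time bucket setting, and subscribed clients opt in by using the s3-accelerate endpoint. A boto3 sketch with a hypothetical bucket name:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="video-merge-uploads",  # hypothetical
    AccelerateConfiguration={"Status": "Enabled"},
)

# A client configured this way routes requests through the accelerate endpoint,
# so presigned upload/download URLs it generates do too.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
url = s3_accel.generate_presigned_url(
    "put_object",
    Params={"Bucket": "video-merge-uploads", "Key": "uploads/video.mp4"},
    ExpiresIn=3600,
)
```

Non-subscribers can keep using the bucket's regular endpoint unchanged.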
-
The following IAM policy is attached to an IAM group. This is the only policy applied to the group.
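The policy exhibit is not reproduced in this copy. Inferring from the answer choices, it likely resembled the following reconstruction; treat it as a plausible stand-in rather than the original exhibit:

```python
import json

# Hypothetical reconstruction inferred from the answer choices, not the original
# exhibit: an Allow on all EC2 actions scoped to us-east-1, plus a Deny on
# stop/terminate whenever MFA is absent.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"ec2:Region": "us-east-1"}},
        },
        {
            "Effect": "Deny",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
    ],
}
print(json.dumps(policy, indent=2))
```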
What are the effective IAM permissions of this policy for group members?
- Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied.
- Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA).
- Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action.
- Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
-
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?
- Add a set of VPNs between the Management and Production VPCs.
- Add a second virtual private gateway and attach it to the Management VPC.
- Add a second set of VPNs to the Management VPC from a second customer gateway device.
- Add a second VPC peering connection between the Management VPC and the Production VPC.
-
A company is using AWS Organizations with two AWS accounts: Logistics and Sales. The Logistics account operates an Amazon Redshift cluster. The Sales account includes Amazon EC2 instances. The Sales account needs to access the Logistics account’s Amazon Redshift cluster.
What should a solutions architect recommend to meet this requirement MOST cost-effectively?
- Set up VPC sharing with the Logistics account as the owner and the Sales account as the participant to transfer the data.
- Create an AWS Lambda function in the Logistics account to transfer data to the Amazon EC2 instances in the Sales account.
- Create a snapshot of the Amazon Redshift cluster, and share the snapshot with the Sales account. In the Sales account, restore the cluster by using the snapshot ID that is shared by the Logistics account.
- Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.
-
A company is using Amazon Redshift for analytics and to generate customer reports. The company recently acquired 50 TB of additional customer demographic data. The data is stored in .csv files in Amazon S3. The company needs a solution that joins the data and visualizes the results with the least possible cost and effort.
What should a solutions architect recommend to meet these requirements?
- Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations.
- Use Amazon Athena to query the data in Amazon S3. Use Amazon QuickSight to join the data from Athena with the existing data in Amazon Redshift and to build the visualizations.
- Increase the size of the Amazon Redshift cluster, and load the data from Amazon S3. Use Amazon EMR Notebooks to query the data and build the visualizations in Amazon Redshift.
- Export the data from the Amazon Redshift cluster into Apache Parquet files in Amazon S3. Use Amazon Elasticsearch Service (Amazon ES) to query the data. Use Kibana to visualize the results.
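To illustrate the Spectrum option: an external schema lets the cluster query the .csv files in place and join them with local tables, with no load step. A sketch using the Redshift Data API; the cluster, database, role, and schema names are hypothetical:

```python
import boto3

rsd = boto3.client("redshift-data")

# Register the S3 data as an external schema backed by the AWS Glue Data Catalog.
rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical
    Database="dev",
    DbUser="awsuser",
    Sql=(
        "CREATE EXTERNAL SCHEMA demographics "
        "FROM DATA CATALOG DATABASE 'demographics' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole' "
        "CREATE EXTERNAL DATABASE IF NOT EXISTS;"
    ),
)
```

External tables created in that schema can then be joined against existing Redshift tables in ordinary SQL, and Amazon QuickSight can visualize the combined result.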
-
A solutions architect must provide a fully managed replacement for an on-premises solution that allows employees and partners to exchange files. The solution must be easily accessible to employees connecting from on-premises systems, remote employees, and external partners.
Which solution meets these requirements?
- Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.
- Use AWS Snowball Edge for local storage and large-scale data transfers.
- Use Amazon FSx to store and transfer files to make them available remotely.
- Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.
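For reference, standing up a managed SFTP endpoint backed by Amazon S3 is a single API call (service-managed users shown; the identity provider choice is an assumption):

```python
import boto3

transfer = boto3.client("transfer")

# Create a fully managed SFTP server; users and their S3 home directories
# are added afterward with create_user.
server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
)
print(server["ServerId"])
```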
-
A company’s database is hosted on an Amazon Aurora MySQL DB cluster in the us-east-1 Region. The database is 4 TB in size. The company needs to expand its disaster recovery strategy to the us-west-2 Region. The company must have the ability to fail over to us-west-2 with a recovery time objective (RTO) of 15 minutes.
What should a solutions architect recommend to meet these requirements?
- Create a Multi-Region Aurora MySQL DB cluster in us-east-1 and us-west-2. Use an Amazon Route 53 health check to monitor us-east-1 and fail over to us-west-2 upon failure.
- Take a snapshot of the DB cluster in us-east-1. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to copy the snapshot to us-west-2 and restore the snapshot in us-west-2 when failure is detected.
- Create an AWS CloudFormation script to create another Aurora MySQL DB cluster in us-west-2 in case of failure. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to deploy the AWS CloudFormation stack in us-west-2 when failure is detected.
- Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to promote the DB cluster in us-west-2 when failure is detected.
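A sketch of the global database approach with hypothetical identifiers; the secondary cluster itself is created separately in us-west-2 with create_db_cluster and the same GlobalClusterIdentifier:

```python
import boto3

# Convert the existing us-east-1 cluster into the primary of a global database.
rds = boto3.client("rds", region_name="us-east-1")
rds.create_global_cluster(
    GlobalClusterIdentifier="crm-global",  # hypothetical
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:prod-cluster",
)

# On failure, a Lambda function can detach the secondary, which promotes it
# to a standalone cluster that accepts writes.
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.remove_from_global_cluster(
    GlobalClusterIdentifier="crm-global",
    DbClusterIdentifier="arn:aws:rds:us-west-2:123456789012:cluster:prod-cluster-west",
)
```

Aurora global databases typically replicate with sub-second lag, which sits comfortably inside a 15-minute RTO.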
-
A company is migrating its applications to AWS. Currently, applications that run on premises generate hundreds of terabytes of data that is stored on a shared file system. The company is running an analytics application in the cloud that runs hourly to generate insights from this data.
The company needs a solution to handle the ongoing data transfer between the on-premises shared file system and Amazon S3. The solution also must be able to handle occasional interruptions in internet connectivity.
Which solution should the company use for the data transfer to meet these requirements?
- AWS DataSync
- AWS Migration Hub
- AWS Snowball Edge Storage Optimized
- AWS Transfer for SFTP
-
A solutions architect is designing the architecture for a new web application. The application will run on AWS Fargate containers with an Application Load Balancer (ALB) and an Amazon Aurora PostgreSQL database. The web application will perform primarily read queries against the database.
What should the solutions architect do to ensure that the website can scale with increasing traffic? (Choose two.)
- Enable auto scaling on the ALB to scale the load balancer horizontally.
- Configure Aurora Auto Scaling to adjust the number of Aurora Replicas in the Aurora cluster dynamically.
- Enable cross-zone load balancing on the ALB to distribute the load evenly across containers in all Availability Zones.
- Configure an Amazon Elastic Container Service (Amazon ECS) cluster in each Availability Zone to distribute the load across multiple Availability Zones.
- Configure Amazon Elastic Container Service (Amazon ECS) Service Auto Scaling with a target tracking scaling policy that is based on CPU utilization.
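For the read-heavy database tier, Aurora replica auto scaling is driven through Application Auto Scaling. A sketch with a hypothetical cluster name and illustrative thresholds:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Let the Aurora cluster scale between 1 and 15 Aurora Replicas.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:web-aurora-cluster",  # hypothetical
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Add or remove replicas to hold average reader CPU near 60%.
aas.put_scaling_policy(
    PolicyName="aurora-replica-cpu",
    ServiceNamespace="rds",
    ResourceId="cluster:web-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```

ECS Service Auto Scaling for the Fargate tasks is configured through the same API, using the "ecs" service namespace and the ECSServiceAverageCPUUtilization metric.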
-
A company captures ordered clickstream data from multiple websites and uses batch processing to analyze the data. The company receives 100 million event records, all approximately 1 KB in size, each day. The company loads the data into Amazon Redshift each night, and business analysts consume the data.
The company wants to move toward near-real-time data processing for timely insights. The solution should process the streaming data while requiring the least possible operational overhead.
Which combination of AWS services will meet these requirements MOST cost-effectively? (Choose two.)
- Amazon EC2
- AWS Batch
- Amazon Simple Queue Service (Amazon SQS)
- Amazon Kinesis Data Firehose
- Amazon Kinesis Data Analytics
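To ground the Kinesis option: producers write each clickstream event to a delivery stream, and Firehose handles buffering and delivery without any servers to manage. A sketch with a hypothetical stream name:

```python
import boto3

firehose = boto3.client("firehose")

# Each event record is roughly 1 KB; Firehose batches them for delivery
# (for example, to Amazon Redshift via S3).
firehose.put_record(
    DeliveryStreamName="clickstream-events",  # hypothetical
    Record={"Data": b'{"page": "/home", "ts": 1700000000}\n'},
)
```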
-
A company has a customer relationship management (CRM) application that stores data in an Amazon RDS DB instance that runs Microsoft SQL Server. The company’s IT staff has administrative access to the database. The database contains sensitive data. The company wants to ensure that the data is not accessible to the IT staff and that only authorized personnel can view the data.
What should a solutions architect do to secure the data?
- Use client-side encryption with an Amazon RDS managed key.
- Use client-side encryption with an AWS Key Management Service (AWS KMS) customer managed key.
- Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) default encryption key.
- Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) customer managed key.
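For reference, RDS encryption at rest is chosen when the DB instance is created and cannot be switched on in place. A sketch with hypothetical identifiers; with a customer managed key, the key policy determines who can use the key:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="crm-sqlserver",  # hypothetical
    DBInstanceClass="db.m5.large",
    Engine="sqlserver-se",
    LicenseModel="license-included",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,         # store the master password in Secrets Manager
    StorageEncrypted=True,
    KmsKeyId="alias/crm-data-key",         # hypothetical customer managed key
)
```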
-
A company with a single AWS account runs its internet-facing containerized web application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster is placed in a private subnet of a VPC. System administrators access the EKS cluster through a bastion host on a public subnet.
A new corporate security policy requires the company to avoid the use of bastion hosts. The company also must not allow internet connectivity to the EKS cluster.
Which solution meets these requirements MOST cost-effectively?
- Set up an AWS Direct Connect connection.
- Create a transit gateway.
- Establish a VPN connection.
- Use AWS Storage Gateway.
-
A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.
The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game’s user base is increasing rapidly.
What should a solutions architect do to improve the performance of the data tier?
- Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
- Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana.
- Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
- Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
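One reason ElastiCache fits here is that Redis has native geospatial commands for exactly this access pattern. A redis-py sketch against a hypothetical cluster endpoint (GEOSEARCH requires Redis 6.2 or later):

```python
import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="players.abc123.use1.cache.amazonaws.com", port=6379)

# Record a player's position; repeated GEOADD calls update it in place.
r.execute_command("GEOADD", "player:locations", -122.33, 47.60, "player:1001")

# Find players within 5 km of a point, nearest first.
nearby = r.execute_command(
    "GEOSEARCH", "player:locations",
    "FROMLONLAT", -122.33, 47.60,
    "BYRADIUS", 5, "km", "ASC",
)
```

Writes and radius queries are served from memory, taking the hot read/write path off the PostgreSQL instance.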
-
A company is migrating a large, mission-critical database to AWS. A solutions architect has decided to use an Amazon RDS for MySQL Multi-AZ DB instance that is deployed with 80,000 Provisioned IOPS for storage. The solutions architect is using AWS Database Migration Service (AWS DMS) to perform the data migration. The migration is taking longer than expected, and the company wants to speed up the process. The company’s network team has ruled out bandwidth as a limiting factor.
Which actions should the solutions architect take to speed up the migration? (Choose two.)
- Disable Multi-AZ on the target DB instance.
- Create a new DMS instance that has a larger instance size.
- Turn off logging on the target DB instance until the initial load is complete.
- Restart the DMS task on a new DMS instance with transfer acceleration enabled.
- Change the storage type on the target DB instance to Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2).
-
A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.
What should a solutions architect do to meet these requirements?
- Use Amazon EC2 instances, and install Docker on the instances.
- Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.
- Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.
- Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
-
A company is designing a new application that runs in a VPC on Amazon EC2 instances. The application stores data in Amazon S3 and uses Amazon DynamoDB as its database. For compliance reasons, the company prohibits all traffic between the EC2 instances and other AWS services from passing over the public internet.
What can a solutions architect do to meet this requirement?
- Configure gateway VPC endpoints to Amazon S3 and DynamoDB.
- Configure interface VPC endpoints to Amazon S3 and DynamoDB.
- Configure a gateway VPC endpoint to Amazon S3. Configure an interface VPC endpoint to DynamoDB.
- Configure a gateway VPC endpoint to DynamoDB. Configure an interface VPC endpoint to Amazon S3.
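Both Amazon S3 and DynamoDB are reachable through gateway endpoints, which add prefix-list routes to the selected route tables and carry no charge. A sketch with hypothetical VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One gateway endpoint per service keeps this traffic on the AWS network.
for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",            # hypothetical
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=["rtb-0123456789abcdef0"],  # hypothetical
    )
```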
-
A company’s security team requests that network traffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then accessed intermittently.
What should a solutions architect do to meet these requirements when configuring the logs?
- Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days.
- Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days.
- Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.
- Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
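To ground the lifecycle option: a single rule transitions the logs once the 90-day frequent-access window ends. A sketch with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Move flow logs to S3 Standard-IA 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="vpc-flow-logs-archive",  # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "flow-logs-to-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```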