SAA-C02 : AWS Certified Solutions Architect – Associate SAA-C02 : Part 10
-
A company is hosting multiple websites for several lines of business under its registered parent domain. Users accessing these websites will be routed to appropriate backend Amazon EC2 instances based on the subdomain. The websites host static webpages, images, and server-side scripts like PHP and JavaScript.
Some of the websites experience peak access during the first two hours of business with constant usage throughout the rest of the day. A solutions architect needs to design a solution that will automatically adjust capacity to these traffic patterns while keeping costs low.
Which combination of AWS services or features will meet these requirements? (Choose two.)
- AWS Batch
- Network Load Balancer
- Application Load Balancer
- Amazon EC2 Auto Scaling
- Amazon S3 website hosting
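If an Application Load Balancer is used, routing by subdomain is implemented with host-based listener rules. A minimal boto3 sketch, assuming hypothetical listener and target group ARNs and an example subdomain:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs for an existing HTTPS listener and a per-business target group.
LISTENER_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "listener/app/parent-domain-alb/50dc6c495c0c9188/f2f7dc8efc522ab2")
TARGET_GROUP_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "targetgroup/sales-backend/73e2d6bc24d8a067")

# Forward requests for one subdomain to the EC2 instances behind its target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{
        "Field": "host-header",
        "HostHeaderConfig": {"Values": ["sales.example.com"]},  # hypothetical subdomain
    }],
    Actions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
)
```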
-
A company uses an Amazon S3 bucket to store static images for its website. The company configured permissions to allow access to Amazon S3 objects by privileged users only.
What should a solutions architect do to protect against data loss? (Choose two.)
- Enable versioning on the S3 bucket.
- Enable access logging on the S3 bucket.
- Enable server-side encryption on the S3 bucket.
- Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier.
- Use MFA Delete to require multi-factor authentication to delete an object.
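Two of the options above, versioning and MFA Delete, are configured together through the bucket's versioning configuration. A minimal boto3 sketch, assuming a hypothetical bucket and root MFA device (MFA Delete can only be enabled with the root account's credentials):

```python
import boto3

s3 = boto3.client("s3")

# Versioning preserves prior object versions; MFA Delete additionally requires a
# one-time MFA code to permanently delete a version or change the versioning state.
s3.put_bucket_versioning(
    Bucket="example-static-images",  # hypothetical bucket
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    # Format: "<mfa-device-serial> <current-token-code>" (placeholder values).
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```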
-
An operations team has a standard that states IAM policies should not be applied directly to users. Some new team members have not been following this standard. The operations manager needs a way to easily identify the users with attached policies.
What should a solutions architect do to accomplish this?
- Monitor using AWS CloudTrail.
- Create an AWS Config rule to run daily.
- Publish IAM user changes to Amazon SNS.
- Run AWS Lambda when a user is modified.
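For the AWS Config option, AWS provides a managed rule, IAM_USER_NO_POLICIES_CHECK, that flags IAM users with directly attached policies. A minimal boto3 sketch:

```python
import boto3

config = boto3.client("config")

# Managed rule: flags any IAM user that has a policy attached directly instead of
# inheriting permissions from groups or roles. It evaluates on configuration changes.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "iam-user-no-policies-check",
        "Source": {"Owner": "AWS", "SourceIdentifier": "IAM_USER_NO_POLICIES_CHECK"},
    }
)
```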
-
A company wants to use an AWS Region as a disaster recovery location for its on-premises infrastructure. The company has 10 TB of existing data, and the on-premises data center has a 1 Gbps internet connection. A solutions architect must find a solution so the company can have its existing data on AWS in 72 hours without transmitting it over an unencrypted channel.
Which solution should the solutions architect select?
- Send the initial 10 TB of data to AWS using FTP.
- Send the initial 10 TB of data to AWS using AWS Snowball.
- Establish a VPN connection between Amazon VPC and the company’s data center.
- Establish an AWS Direct Connect connection between Amazon VPC and the company’s data center.
-
A company is building applications in containers. The company wants to migrate its development and operations services from its on-premises data center to AWS. Management states that the production systems must be cloud agnostic and use the same configuration and administration tools across all production systems. A solutions architect needs to design a managed solution that aligns with open-source software.
Which solution meets these requirements?
- Launch the containers on Amazon EC2 with EC2 instance worker nodes.
- Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) with EKS worker nodes.
- Launch the containers on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
- Launch the containers on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 instance worker nodes.
-
A company hosts its website on AWS. To address the highly variable demand, the company has implemented Amazon EC2 Auto Scaling. Management is concerned that the company is over-provisioning its infrastructure, especially at the front end of the three-tier application. A solutions architect needs to ensure costs are optimized without impacting performance.
What should the solutions architect do to accomplish this?
- Use Auto Scaling with Reserved Instances.
- Use Auto Scaling with a scheduled scaling policy.
- Use Auto Scaling with the suspend-resume feature.
- Use Auto Scaling with a target tracking scaling policy.
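A target tracking policy (the last option) holds a metric at a target value, adding capacity when demand rises and removing it when demand drops, which counters over-provisioning. A minimal boto3 sketch; the group name and target value are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU across the group near 50%; Auto Scaling scales out when the
# metric rises above the target and scales in when it falls below.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="front-end-asg",  # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```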
-
A solutions architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solutions architect must improve the security posture and minimize the impact of a DDoS attack on resources.
Which solution is MOST effective?
- Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution.
- Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access.
- Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
- Enable Amazon GuardDuty and configure findings to be written to Amazon CloudWatch. Create an event with CloudWatch Events for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs, looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.
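For the rate-based AWS WAF approach in the first option, a web ACL scoped to CloudFront can block source IPs that exceed a request threshold. A minimal boto3 sketch; the name and limit are assumptions, and CLOUDFRONT-scoped web ACLs must be created in us-east-1:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Block any single IP that sends more than 2,000 requests in a 5-minute window.
wafv2.create_web_acl(
    Name="ddos-rate-limit-acl",  # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIP",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "DdosRateLimitAcl",
    },
)
```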
-
A company has multiple AWS accounts for various departments. One of the departments wants to share an Amazon S3 bucket with all other departments.
Which solution will require the LEAST amount of effort?
- Enable cross-account S3 replication for the bucket.
- Create a pre-signed URL for the bucket and share it with other departments.
- Set the S3 bucket policy to allow cross-account access to other departments.
- Create IAM users for each of the departments and configure a read-only IAM policy.
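For the bucket policy option, cross-account read access is a single policy on the bucket. A minimal boto3 sketch with hypothetical account IDs and bucket name:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "shared-department-bucket"  # hypothetical bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Hypothetical account IDs for the other departments.
        "Principal": {"AWS": ["arn:aws:iam::111111111111:root",
                              "arn:aws:iam::222222222222:root"]},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```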
-
A company needs to share an Amazon S3 bucket with an external vendor. The bucket owner must be able to access all objects.
Which action should be taken to share the S3 bucket?
- Update the bucket to be a Requester Pays bucket.
- Update the bucket to enable cross-origin resource sharing (CORS).
- Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
- Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.
Explanation:
By default, an S3 object is owned by the AWS account that uploaded it, even when the bucket is owned by another account. To get access to the object, the object owner must explicitly grant the bucket owner access. The object owner can grant the bucket owner full control of the object by updating the object's access control list (ACL), either during a put or copy operation or after the object is added to the bucket. The resolution is to add a bucket policy that grants users access to put objects in the bucket only when they grant the bucket owner full control of the object. Similar: Require access to S3 objects uploaded from another AWS account (amazon.com).
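A minimal boto3 sketch of such a policy, following the pattern described above; the bucket name is hypothetical:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "shared-vendor-bucket"  # hypothetical bucket name
# Deny PutObject unless the uploader grants the bucket owner full control of the object.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireBucketOwnerFullControl",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```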
-
A company is developing a real-time multiplayer game that uses UDP for communications between clients and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solutions architect recommend?
- Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
- Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
- Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
- Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
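For the non-relational storage side, DynamoDB's on-demand capacity mode scales with traffic and needs no capacity management. A minimal boto3 sketch; the table name and key schema are assumptions:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# PAY_PER_REQUEST (on-demand) removes the need to provision read/write capacity.
dynamodb.create_table(
    TableName="GameScores",  # hypothetical table
    AttributeDefinitions=[{"AttributeName": "PlayerId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PlayerId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```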
-
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company’s weather forecasting applications are based in a single Region and analyze the data daily.
What is the FASTEST way to aggregate data from all of these global sites?
- Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
- Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
- Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
- Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.
Explanation:
Step 1: To transfer data to Amazon S3 from the global sites, use Amazon S3 Transfer Acceleration, which enables fast, easy, and secure transfers of files over long distances between a client and an S3 bucket. Transfer Acceleration leverages Amazon CloudFront's globally distributed edge locations to accelerate object uploads over long distances (reducing latency), and it is as secure as a direct upload to S3.
Step 2: When the application analyzes and aggregates the data from S3 and uploads the results again, use multipart upload.
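A minimal boto3 sketch of both steps; the bucket, key, file name, and tuning values are assumptions:

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# One-time setup: enable Transfer Acceleration on the destination bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="weather-aggregation-bucket",  # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Each site then uploads through the accelerate endpoint; files above the threshold
# are split into concurrent multipart uploads automatically.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file(
    Filename="site-data.tar.gz",  # hypothetical local file
    Bucket="weather-aggregation-bucket",
    Key="sites/paris/2020-06-01/site-data.tar.gz",
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024, max_concurrency=8),
)
```
-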
A company has a custom application running on an Amazon EC2 instance that:
• Reads a large amount of data from Amazon S3
• Performs a multi-stage analysis
• Writes the results to Amazon DynamoDB
The application writes a significant number of large, temporary files during the multi-stage analysis. The performance of the process depends on the performance of the temporary storage.
What would be the fastest storage option for holding the temporary files?
- Multiple Amazon S3 buckets with Transfer Acceleration for storage.
- Multiple Amazon Elastic Block Store (Amazon EBS) drives with Provisioned IOPS and EBS optimization.
- Multiple Amazon Elastic File System (Amazon EFS) volumes using the Network File System version 4.1 (NFSv4.1) protocol.
- Multiple instance store volumes with software RAID 0.
-
A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size. Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements.
What is the MOST cost-effective storage solution for this situation?
- Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day.
- Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.
- Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days.
- Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
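Lifecycle transitions like the ones in these options are a single bucket configuration. A minimal boto3 sketch of a 30-day transition to S3 Glacier; the bucket name and prefix are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Move statements to Glacier once the 30-day download window has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket="customer-statements",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "statements/"},  # hypothetical key prefix
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```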
-
A company recently released a new type of internet-connected sensor. The company is expecting to sell thousands of sensors, which are designed to stream high volumes of data each second to a central location. A solutions architect must design a solution that ingests and stores data so that engineering teams can analyze it in near-real time with millisecond responsiveness.
Which solution should the solutions architect recommend?
- Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
- Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
- Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
- Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
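For the Kinesis-plus-DynamoDB options, the consumer is typically an AWS Lambda function subscribed to the stream. A minimal handler sketch; the table name and payload fields are assumptions:

```python
import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical table

def handler(event, context):
    """Consume a batch of Kinesis records and persist each reading to DynamoDB."""
    with table.batch_writer() as batch:
        for record in event["Records"]:
            # Kinesis delivers record payloads base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(Item={
                "sensor_id": payload["sensor_id"],  # assumed partition key
                "timestamp": payload["timestamp"],  # assumed sort key
                "measurements": json.dumps(payload["measurements"]),
            })
```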
-
A website runs a web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initiate upon boot up before responding to user requests.
How should a solutions architect redesign the architecture to better respond to changing traffic?
- Configure a Network Load Balancer with a slow start configuration.
- Configure Amazon ElastiCache for Redis to offload direct requests to the servers.
- Configure an Auto Scaling step scaling policy with an instance warmup condition.
- Configure Amazon CloudFront to use an Application Load Balancer as the origin.
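The instance warmup condition in the step scaling option tells Auto Scaling how long a new instance needs before it counts toward the group's metrics, which matches the application's 1-minute boot-up time. A minimal boto3 sketch; the group name and step thresholds are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-upload-asg",  # hypothetical group name
    PolicyName="scale-out-in-steps",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,  # seconds; covers the 1-minute application start-up
    StepAdjustments=[
        # Bounds are relative to the threshold of the associated CloudWatch alarm.
        {"MetricIntervalLowerBound": 0.0, "MetricIntervalUpperBound": 20.0,
         "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20.0, "ScalingAdjustment": 3},
    ],
)
```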
-
A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company’s application. A solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable.
What should the solutions architect recommend?
- Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone.
- Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones.
- Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.
- Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.
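For the NAT gateway options, one gateway is created per Availability Zone from a public subnet and an Elastic IP; each private route table then points at the gateway in its own AZ. A minimal boto3 sketch with hypothetical subnet and allocation IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# One managed, automatically scaling NAT gateway per Availability Zone.
for subnet_id, allocation_id in [
    ("subnet-0aaa0000000000001", "eipalloc-0aaa0000000000001"),  # public subnet / EIP, AZ a
    ("subnet-0bbb0000000000002", "eipalloc-0bbb0000000000002"),  # public subnet / EIP, AZ b
]:
    ec2.create_nat_gateway(SubnetId=subnet_id, AllocationId=allocation_id)
```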
-
A company operates a website on Amazon EC2 Linux instances. Some of the instances are failing. Troubleshooting points to insufficient swap space on the failed instances. The operations team lead needs a solution to monitor this.
What should a solutions architect recommend?
- Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch.
- Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch.
- Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch.
- Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch.
-
A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server accessible from everywhere on port 443.
Which combination of steps will accomplish this task? (Choose two.)
- Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
- Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
- Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
- Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
- Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.
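As a reference for the distinction these options are testing: security groups are stateful, so one inbound rule suffices, while network ACLs are stateless and need explicit inbound and outbound entries. A minimal boto3 sketch with hypothetical resource IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Stateful security group: allowing inbound 443 implicitly allows the response traffic.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Stateless network ACL: allow inbound 443 and outbound ephemeral ports separately.
ec2.create_network_acl_entry(NetworkAclId="acl-0123456789abcdef0", RuleNumber=100,
                             Protocol="6", RuleAction="allow", Egress=False,
                             CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443})
ec2.create_network_acl_entry(NetworkAclId="acl-0123456789abcdef0", RuleNumber=100,
                             Protocol="6", RuleAction="allow", Egress=True,
                             CidrBlock="0.0.0.0/0", PortRange={"From": 32768, "To": 65535})
```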
-
A company must re-evaluate its need for the Amazon EC2 instances it currently has provisioned in an Auto Scaling group. At present, the Auto Scaling group is configured for a minimum of two instances and a maximum of four instances across two Availability Zones. A solutions architect reviewed Amazon CloudWatch metrics and found that CPU utilization is consistently low for all the EC2 instances.
What should the solutions architect recommend to maximize utilization while ensuring the application remains fault tolerant?
- Remove some EC2 instances to increase the utilization of remaining instances.
- Increase the Amazon Elastic Block Store (Amazon EBS) capacity of instances with less CPU utilization.
- Modify the Auto Scaling group scaling policy to scale in and out based on a higher CPU utilization metric.
- Create a new launch configuration that uses smaller instance types. Update the existing Auto Scaling group.
Explanation:
A launch configuration cannot be modified once created, so the only way to update the launch configuration for an Auto Scaling group is to create a new one and associate it with the Auto Scaling group.
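A minimal boto3 sketch of that replace-and-associate flow; the names, AMI ID, and instance type are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create a new launch configuration with a smaller instance type...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",  # hypothetical name
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="t3.small",              # smaller type to raise utilization
)

# ...then associate it with the existing group. Instances launched from now on use
# the new configuration; running instances keep the old one until replaced.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    LaunchConfigurationName="web-lc-v2",
)
```
-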
A company has an application that posts messages to Amazon SQS. Another application polls the queue and processes the messages in an I/O-intensive operation. The company has a service level agreement (SLA) that specifies the maximum amount of time that can elapse between receiving the messages and responding to the users. Due to an increase in the number of messages, the company has difficulty meeting its SLA consistently.
What should a solutions architect do to help improve the application’s processing time and ensure it can handle the load at any level?
- Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size.
- Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance.
- Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.
- Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.
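The last option's queue-age policy can be expressed as target tracking on a customized CloudWatch metric. A minimal boto3 sketch; the group, queue, and target value are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the worker fleet so the oldest message in the queue stays under ~30 seconds,
# which ties capacity directly to the SLA rather than to CPU load.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="queue-worker-asg",  # hypothetical group name
    PolicyName="sqs-oldest-message-age",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateAgeOfOldestMessage",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],  # hypothetical queue
            "Statistic": "Maximum",
        },
        "TargetValue": 30.0,
    },
)
```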