SAP-C01 : AWS Certified Solutions Architect – Professional : Part 33

  1. A company is planning to migrate an existing high performance computing (HPC) solution to the AWS Cloud. The existing solution consists of a 12-node cluster running Linux with high speed interconnectivity deployed on a single rack. A solutions architect needs to optimize the performance of the HPC cluster.

    Which combination of steps will meet these requirements? (Choose two.) A brief boto3 sketch follows the answer choices.

    • Deploy instances across at least three Availability Zones.
    • Deploy Amazon EC2 instances in a placement group.
    • Use Amazon EC2 instances that support Elastic Fabric Adapter (EFA).
    • Use Amazon EC2 instances that support burstable performance.
    • Enable CPU hyperthreading.
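
    A minimal boto3 sketch of the placement group and Elastic Fabric Adapter options; the AMI ID, subnet, security group, and instance type are hypothetical placeholders:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A cluster placement group packs the nodes close together in one
    # Availability Zone, mirroring the single-rack, low-latency layout of the
    # on-premises cluster.
    ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

    # Launch the 12 nodes on an EFA-capable instance type with an EFA network
    # interface for OS-bypass, low-latency node-to-node communication.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # hypothetical AMI
        InstanceType="c5n.18xlarge",                 # EFA-capable instance type
        MinCount=12,
        MaxCount=12,
        Placement={"GroupName": "hpc-cluster"},
        NetworkInterfaces=[{
            "DeviceIndex": 0,
            "InterfaceType": "efa",
            "SubnetId": "subnet-0123456789abcdef0",  # hypothetical subnet
            "Groups": ["sg-0123456789abcdef0"],      # hypothetical security group
        }],
    )
    ```
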
  2. A company hosts a game player-matching service on a public facing, physical, on-premises instance that all users are able to access over the internet. All traffic to the instance uses UDP. The company wants to migrate the service to AWS and provide a high level of security. A solutions architect needs to design a solution for the player-matching service using AWS.

    Which combination of steps should the solutions architect take to meet these requirements? (Choose three.) A brief boto3 sketch follows the answer choices.

    • Use a Network Load Balancer (NLB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB’s Elastic IP address.
    • Use an Application Load Balancer (ALB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB’s internet-facing fully qualified domain name (FQDN).
    • Define an AWS WAF rule to explicitly drop non-UDP traffic, and associate the rule with the load balancer.
    • Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
    • Use Amazon CloudFront with an Elastic Load Balancer as an origin.
    • Enable AWS Shield Advanced on all public-facing resources.
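
    A minimal boto3 sketch of the UDP Network Load Balancer and Shield Advanced options; the port, VPC ID, and load balancer ARN are hypothetical placeholders (an ALB or CloudFront cannot front this workload because they only handle HTTP/HTTPS):

    ```python
    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")
    shield = boto3.client("shield")

    nlb_arn = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
               "loadbalancer/net/player-matching/0123456789abcdef")  # hypothetical NLB ARN

    # UDP target group and listener that forward game traffic to the
    # player-matching instance.
    tg = elbv2.create_target_group(
        Name="player-matching-udp",
        Protocol="UDP",
        Port=27015,                           # hypothetical game port
        VpcId="vpc-0123456789abcdef0",        # hypothetical VPC
        TargetType="instance",
    )
    elbv2.create_listener(
        LoadBalancerArn=nlb_arn,
        Protocol="UDP",
        Port=27015,
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
    )

    # Shield Advanced protection for the public-facing NLB (requires an active
    # Shield Advanced subscription on the account).
    shield.create_protection(Name="player-matching-nlb", ResourceArn=nlb_arn)
    ```
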
  3. A company has multiple AWS accounts and manages these accounts with AWS Organizations. A developer was given IAM user credentials to access AWS resources. The developer should have read-only access to all Amazon S3 buckets in the account. However, when the developer tries to access the S3 buckets from the console, they receive an access denied error message with no buckets listed.

    A solutions architect reviews the permissions and finds that the developer’s IAM user is listed as having read-only access to all S3 buckets in the account.

    Which additional steps should the solutions architect take to troubleshoot the issue? (Choose two.)

    • Check the bucket policies for all S3 buckets.
    • Check the ACLs for all S3 buckets.
    • Check the SCPs set at the organizational units (OUs).
    • Check for the permissions boundaries set for the IAM user.
    • Check if an appropriate IAM role is attached to the IAM user.
  4. A large company recently experienced an unexpected increase in Amazon RDS and Amazon DynamoDB costs. The company needs to increase visibility into details of AWS Billing and Cost Management. There are various accounts associated with AWS Organizations, including many development and production accounts. There is no consistent tagging strategy across the organization, but there are guidelines in place that require all infrastructure to be deployed using AWS CloudFormation with consistent tagging. Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances.

    Which strategy should the solutions architect provide to meet these requirements?

    • Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to propagate to existing resources.
    • Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role.
    • Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID on the resource.
    • Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict privileges to provision resources that do not include the cost center and project ID on the resource.
  5. A company has an application that generates reports and stores them in an Amazon S3 bucket. When a user accesses their report, the application generates a signed URL to allow the user to download the report. The company’s security team has discovered that the files are public and that anyone can download them without authentication. The company has suspended the generation of new reports until the problem is resolved.

    Which set of actions will immediately remediate the security issue without impacting the application’s normal workflow? A brief boto3 sketch follows the answer choices.

    • Create an AWS Lambda function that applies a deny policy for users who are not authenticated. Create a scheduled event to invoke the Lambda function.
    • Review the AWS Trusted Advisor bucket permissions check and implement the recommended actions.
    • Run a script that puts a private ACL on all of the objects in the bucket.
    • Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
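
    A minimal boto3 sketch of the S3 Block Public Access option; the bucket name is a hypothetical placeholder. Signed URLs keep working because they are authorized by the signing IAM identity, not by public ACLs:

    ```python
    import boto3

    s3 = boto3.client("s3")

    # Turn on Block Public Access for the report bucket. IgnorePublicAcls makes
    # S3 ignore any public ACLs on existing and new objects immediately.
    s3.put_public_access_block(
        Bucket="example-report-bucket",       # hypothetical bucket name
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    ```
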
  6. A company hosts a legacy application that runs on an Amazon EC2 instance inside a VPC without internet access. Users access the application with a desktop program installed on their corporate laptops. Communication between the laptops and the VPC flows through AWS Direct Connect (DX). A new requirement states that all data in transit must be encrypted between users and the VPC.

    Which strategy should a solutions architect use to maintain consistent network performance while meeting this new requirement?

    • Create a client VPN endpoint and configure the laptops to use an AWS client VPN to connect to the VPC over the internet.
    • Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
    • Create a new Site-to-Site VPN that connects to the VPC over the internet.
    • Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.
  7. A company is creating a centralized logging service running on Amazon EC2 that will receive and analyze logs from hundreds of AWS accounts. AWS PrivateLink is being used to provide connectivity between the client services and the logging service.

    In each AWS account with a client, an interface endpoint has been created for the logging service and is available. The logging service, which runs on EC2 instances behind a Network Load Balancer (NLB), is deployed in different subnets. The clients are unable to submit logs using the VPC endpoint.

    Which combination of steps should a solutions architect take to resolve this issue? (Choose two.)

    • Check that the NACL is attached to the logging service subnet to allow communications to and from the NLB subnets. Check that the NACL is attached to the NLB subnet to allow communications to and from the logging service subnets running on EC2 instances.
    • Check that the NACL is attached to the logging service subnets to allow communications to and from the interface endpoint subnets. Check that the NACL is attached to the interface endpoint subnet to allow communications to and from the logging service subnets running on EC2 instances.
    • Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the NLB subnets.
    • Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the clients.
    • Check the security group for the NLB to ensure it allows ingress from the interface endpoint subnets.
  8. A company is refactoring an existing web service that provides read and write access to structured data. The service must respond to short but significant spikes in the system load. The service must be fault tolerant across multiple AWS Regions.

    Which actions should be taken to meet these requirements?

    • Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized Amazon API Gateway and AWS Lambda. Assign the company’s domain as an alternate domain for the distribution, and configure Amazon Route 53 with an alias to the CloudFront distribution.
    • Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company’s domain as an alternate domain for both distributions, and configure Amazon Route 53 with a failover routing policy between them.
    • Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record in the company’s domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two ALBs.
    • Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. Configure the instances to download the web service code in the user data. In Amazon Route 53, configure an alias record for the company’s domain and a multi-value routing policy.
  9. A company plans to migrate to AWS. A solutions architect uses AWS Application Discovery Service over the fleet and discovers that there is an Oracle data warehouse and several PostgreSQL databases.

    Which combination of migration patterns will reduce licensing costs and operational overhead? (Choose two.)

    • Lift and shift the Oracle data warehouse to Amazon EC2 using AWS DMS.
    • Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
    • Lift and shift the PostgreSQL databases to Amazon EC2 using AWS DMS.
    • Migrate the PostgreSQL databases to Amazon RDS for PostgreSQL using AWS DMS.
    • Migrate the Oracle data warehouse to an Amazon EMR managed cluster using AWS DMS.
  10. A solutions architect needs to define a reference architecture for a solution for three-tier applications with web, application, and NoSQL data layers. The reference architecture must meet the following requirements:

    – High availability within an AWS Region
    – Able to fail over in 1 minute to another AWS Region for disaster recovery
    – Provide the most efficient solution while minimizing the impact on the user experience

    Which combination of steps will meet these requirements? (Choose three.) A brief boto3 sketch follows the answer choices.

    • Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 1 hour.
    • Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
    • Use a global table within Amazon DynamoDB so data can be accessed in the two selected Regions.
    • Back up data from an Amazon DynamoDB table in the primary Region every 60 minutes and then write the data to Amazon S3. Use S3 cross-Region replication to copy the data from the primary Region to the disaster recovery Region. Have a script import the data into DynamoDB in a disaster recovery scenario.
    • Implement a hot standby model using Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
    • Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
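
    A minimal boto3 sketch of the Route 53 failover records with a 30-second TTL; the hosted zone ID, health check ID, record name, and load balancer DNS names are hypothetical placeholders. A DynamoDB global table then keeps the NoSQL layer writable in both Regions:

    ```python
    import boto3

    route53 = boto3.client("route53")

    # Primary/secondary failover records with a 30-second TTL so clients
    # re-resolve quickly after a Regional failover.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",    # hypothetical hosted zone
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "TTL": 30,
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # hypothetical
                "ResourceRecords": [{"Value": "primary-alb.us-east-1.elb.amazonaws.com"}],
            }},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "TTL": 30,
                "ResourceRecords": [{"Value": "standby-alb.us-west-2.elb.amazonaws.com"}],
            }},
        ]},
    )
    ```
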
  11. A company has a Microsoft SQL Server database in its data center and plans to migrate data to Amazon Aurora MySQL. The company has already used the AWS Schema Conversion Tool to migrate triggers, stored procedures and other schema objects to Aurora MySQL. The database contains 1 TB of data and grows less than 1 MB per day. The company’s data center is connected to AWS through a dedicated 1Gbps AWS Direct Connect connection.

    The company would like to migrate data to Aurora MySQL and perform reconfigurations with minimal downtime to the applications.

    Which solution meets the company’s requirements? A brief boto3 sketch follows the answer choices.

    • Shut down applications over the weekend. Create an AWS DMS replication instance and task to migrate existing data from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint.
    • Create an AWS DMS replication instance and task to migrate existing data and ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint.
    • Create a database snapshot of SQL Server on Amazon S3. Restore the database snapshot from Amazon S3 to Aurora MySQL. Create an AWS DMS replication instance and task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint.
    • Create a SQL Server native backup file on Amazon S3. Create an AWS DMS replication instance and task to restore the SQL Server backup file to Aurora MySQL. Create another AWS DMS task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint.
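
    A minimal boto3 sketch of a single AWS DMS task that performs the full load and then ongoing replication (CDC); the endpoint and replication instance ARNs and the catch-all table mapping are hypothetical placeholders:

    ```python
    import json

    import boto3

    dms = boto3.client("dms", region_name="us-east-1")

    # Catch-all table mapping (hypothetical; narrow it to the real schemas).
    table_mappings = {"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]}

    # One task handles the initial 1 TB full load and then streams ongoing
    # changes, so application downtime is limited to the final cutover.
    dms.create_replication_task(
        ReplicationTaskIdentifier="sqlserver-to-aurora-mysql",
        SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:source",    # hypothetical
        TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:target",    # hypothetical
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:instance",  # hypothetical
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps(table_mappings),
    )
    ```
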
  12. A company runs an application on a fleet of Amazon EC2 instances. The application requires low latency and random access to 100 GB of data. The application must be able to access the data at up to 3,000 IOPS. A Development team has configured the EC2 launch template to provision a 100-GB Provisioned IOPS (PIOPS) Amazon EBS volume with 3,000 IOPS provisioned. A Solutions Architect is tasked with lowering costs without impacting performance and durability.

    Which action should be taken? A brief boto3 sketch follows the answer choices.

    • Create an Amazon EFS file system with the performance mode set to Max I/O. Configure the EC2 operating system to mount the EFS file system.
    • Create an Amazon EFS file system with the throughput mode set to Provisioned. Configure the EC2 operating system to mount the EFS file system.
    • Update the EC2 launch template to allocate a new 1-TB EBS General Purpose SSD (gp2) volume.
    • Update the EC2 launch template to exclude the PIOPS volume. Configure the application to use local instance storage.
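
    For context, gp2 delivers a baseline of 3 IOPS per GiB, so a 1,024-GiB gp2 volume provides roughly 3,000 IOPS without a separate per-IOPS charge. A minimal boto3 sketch of the launch template change; the template name and device name are hypothetical placeholders:

    ```python
    import boto3

    ec2 = boto3.client("ec2")

    # New launch template version that swaps the 100-GB PIOPS volume for a
    # 1-TiB gp2 volume (3 IOPS/GiB baseline, about 3,000 IOPS).
    ec2.create_launch_template_version(
        LaunchTemplateName="app-fleet",       # hypothetical launch template
        SourceVersion="1",
        LaunchTemplateData={
            "BlockDeviceMappings": [{
                "DeviceName": "/dev/xvdf",    # hypothetical device name
                "Ebs": {"VolumeSize": 1024, "VolumeType": "gp2"},
            }],
        },
    )
    ```
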
  13. A company recently transformed its legacy infrastructure provisioning scripts to AWS CloudFormation templates. The newly developed templates are hosted in the company’s private GitHub repository. Since adopting CloudFormation, the company has encountered several issues with updates to the CloudFormation templates, causing errors during execution or environment creation. Management is concerned by the increase in errors and has asked a Solutions Architect to design the automated testing of CloudFormation template updates.

    What should the Solutions Architect do to meet these requirements? A brief boto3 sketch follows the answer choices.

    • Use AWS CodePipeline to create a change set from the CloudFormation templates stored in the private GitHub repository. Execute the change set using AWS CodeDeploy. Include a CodePipeline action to test the deployment with testing scripts run by AWS CodeBuild.
    • Mirror the GitHub repository to AWS CodeCommit using AWS Lambda. Use AWS CodeDeploy to create a change set from the CloudFormation templates and execute it. Have CodeDeploy test the deployment with testing scripts run by AWS CodeBuild.
    • Use AWS CodePipeline to create and execute a change set from the CloudFormation templates stored in the GitHub repository. Include a CodePipeline action to test the deployment with testing scripts run by AWS CodeBuild.
    • Mirror the GitHub repository to AWS CodeCommit using AWS Lambda. Use AWS CodeBuild to create a change set from the CloudFormation templates and execute it. Have CodeBuild test the deployment with testing scripts.
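
    A minimal boto3 sketch of the change set workflow that such a pipeline automates between its source, test, and deploy stages; the stack name, change set name, and template URL are hypothetical placeholders:

    ```python
    import boto3

    cfn = boto3.client("cloudformation")

    # Create a change set from the updated template, wait for it to be ready,
    # then execute it; in the pipeline, a CodeBuild test action would run
    # validation scripts before the execute step.
    cfn.create_change_set(
        StackName="app-stack",                # hypothetical stack
        ChangeSetName="template-update",
        TemplateURL="https://example-bucket.s3.amazonaws.com/template.yaml",  # hypothetical
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    cfn.get_waiter("change_set_create_complete").wait(
        StackName="app-stack", ChangeSetName="template-update",
    )
    cfn.execute_change_set(StackName="app-stack", ChangeSetName="template-update")
    ```
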
  14. A company has several Amazon EC2 instances in both public and private subnets within a VPC that is not connected to the corporate network. A security group associated with the EC2 instances allows the company to use the Windows Remote Desktop Protocol (RDP) over the internet to access the instances. The security team has noticed connection attempts from unknown sources. The company wants to implement a more secure solution to access the EC2 instances.

    Which strategy should a solutions architect implement?

    • Deploy a Linux bastion host on the corporate network that has access to all instances in the VPC.
    • Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances by using Session Manager, restricting access to users with the required permissions.
    • Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.
    • Establish a Site-to-Site VPN connecting the corporate network to the VPC. Update the security groups to allow access from the corporate network only.
  15. A retail company has a custom .NET web application running on AWS that uses Microsoft SQL Server for the database. The application servers maintain a user’s session locally.

    Which combination of architecture changes are needed to ensure all tiers of the solution are highly available? (Choose three.)

    • Refactor the application to store the user’s session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances.
    • Set up the database to generate hourly snapshots using Amazon EBS. Configure an Amazon CloudWatch Events rule to launch a new database instance if the primary one fails.
    • Migrate the database to Amazon RDS for SQL Server. Configure the RDS instance to use a Multi-AZ deployment.
    • Move the .NET content to an Amazon S3 bucket. Configure the bucket for static website hosting.
    • Put the application instances in an Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
    • Deploy Amazon CloudFront in front of the application tier. Configure CloudFront to serve content from healthy application instances only.
  16. A company is using an existing orchestration tool to manage thousands of Amazon EC2 instances. A recent penetration test found a vulnerability in the company’s software stack. This vulnerability has prompted the company to perform a full evaluation of its current production environment. The analysis determined that the following vulnerabilities exist within the environment:

    – Operating systems with outdated libraries and known vulnerabilities are being used in production.
    – Relational databases hosted and managed by the company are running unsupported versions with known vulnerabilities.
    – Data stored in databases is not encrypted.

    The solutions architect intends to use AWS Config to continuously audit and assess the compliance of the company’s AWS resource configurations with the company’s policies and guidelines.

    What additional steps will enable the company to secure its environments and track resources while adhering to best practices?

    • Use AWS Application Discovery Service to evaluate all running EC2 instances. Use the AWS CLI to modify each instance, and use EC2 user data to install the AWS Systems Manager Agent during boot. Schedule patching to run as a Systems Manager Maintenance Windows task. Migrate all relational databases to Amazon RDS and enable AWS KMS encryption.
    • Create an AWS CloudFormation template for the EC2 instances. Use EC2 user data in the CloudFormation template to install the AWS Systems Manager Agent, and enable AWS KMS encryption on all Amazon EBS volumes. Have CloudFormation replace all running instances. Use Systems Manager Patch Manager to establish a patch baseline and deploy a Systems Manager Maintenance Windows task to execute AWS-RunPatchBaseline using the patch baseline.
    • Install the AWS Systems Manager Agent on all existing instances using the company’s current orchestration tool. Use the Systems Manager Run Command to execute a list of commands to upgrade software on each instance using operating system-specific tools. Enable AWS KMS encryption on all Amazon EBS volumes.
    • Install the AWS Systems Manager Agent on all existing instances using the company’s current orchestration tool. Migrate all relational databases to Amazon RDS and enable AWS KMS encryption. Use Systems Manager Patch Manager to establish a patch baseline and deploy a Systems Manager Maintenance Windows task to execute AWS-RunPatchBaseline using the patch baseline.
  17. A company wants to improve cost awareness for its Amazon EMR platform. The company has allocated budgets for each team’s Amazon EMR usage. When a budgetary threshold is reached, a notification should be sent by email to the budget office’s distribution list. Teams should be able to view their EMR cluster expenses to date. A solutions architect needs to create a solution that ensures the policy is proactively and centrally enforced in a multi-account environment.

    Which combination of steps should the solutions architect take to meet these requirements? (Choose two.) A brief boto3 sketch follows the answer choices.

    • Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
    • Implement Amazon CloudWatch dashboards for Amazon EMR usage.
    • Create an EMR bootstrap action that runs at startup that calls the Cost Explorer API to set the budget on the cluster with the GetCostForecast and NotificationsWithSubscribers actions.
    • Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon EMR cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product.
    • Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
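
    A minimal boto3 sketch of the budget-with-notification shape that the AWS::Budgets::Budget CloudFormation resource declares through its NotificationsWithSubscribers property; the account ID, budget limit, threshold, and email address are hypothetical placeholders:

    ```python
    import boto3

    budgets = boto3.client("budgets")

    # Per-team EMR cost budget that emails the budget office's distribution
    # list when actual spend crosses 80% of the monthly limit.
    budgets.create_budget(
        AccountId="111122223333",             # hypothetical account ID
        Budget={
            "BudgetName": "team-a-emr",
            "BudgetLimit": {"Amount": "1000", "Unit": "USD"},          # hypothetical limit
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
            "CostFilters": {"Service": ["Amazon Elastic MapReduce"]},  # EMR as named in billing
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,            # hypothetical threshold (percent)
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL",
                             "Address": "budget-office@example.com"}],  # hypothetical address
        }],
    )
    ```
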
  18. A company is migrating its on-premises systems to AWS. The user environment consists of the following systems:

    – Windows and Linux virtual machines running on VMware.
    – Physical servers running Red Hat Enterprise Linux.

    The company wants to be able to perform the following steps before migrating to AWS:

    – Identify dependencies between on-premises systems.
    – Group systems together into applications to build migration plans.
    – Review performance data using Amazon Athena to ensure that Amazon EC2 instances are right-sized.

    How can these requirements be met?

    • Populate the AWS Application Discovery Service import template with information from an on-premises configuration management database (CMDB). Upload the completed import template to Amazon S3, then import the data into Application Discovery Service.
    • Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect data for a period of time.
    • Install the AWS Application Discovery Service Discovery Connector on each of the on-premises systems and in VMware vCenter. Allow the Discovery Connector to collect data for one week.
    • Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Allow the Discovery Agent to collect data for a period of time.
  19. A company hosts a web application on AWS in the us-east-1 Region. The application servers are distributed across three Availability Zones behind an Application Load Balancer. The database is a MySQL database hosted on an Amazon EC2 instance. A solutions architect needs to design a cross-Region data recovery solution using AWS services with an RTO of less than 5 minutes and an RPO of less than 1 minute. The solutions architect is deploying application servers in us-west-2, and has configured Amazon Route 53 health checks and DNS failover to us-west-2.

    Which additional step should the solutions architect take? A brief boto3 sketch follows the answer choices.

    • Migrate the database to an Amazon RDS for MySQL instance with a cross-Region read replica in us-west-2.
    • Migrate the database to an Amazon Aurora global database with the primary in us-east-1 and the secondary in us-west-2.
    • Migrate the database to an Amazon RDS for MySQL instance with a Multi-AZ deployment.
    • Create a MySQL standby database on an Amazon EC2 instance in us-west-2.
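
    A minimal boto3 sketch of the Aurora global database option; the cluster identifiers and source cluster ARN are hypothetical placeholders. Aurora global databases typically replicate cross-Region with sub-second lag, and the secondary can be promoted in about a minute, which is what makes the sub-1-minute RPO and sub-5-minute RTO achievable:

    ```python
    import boto3

    rds_east = boto3.client("rds", region_name="us-east-1")
    rds_west = boto3.client("rds", region_name="us-west-2")

    # Wrap the migrated primary Aurora MySQL cluster in a global database.
    rds_east.create_global_cluster(
        GlobalClusterIdentifier="app-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-primary",  # hypothetical
    )

    # Add a read-only secondary cluster in us-west-2; its engine version must
    # match the primary. Promote it during a Regional failover.
    rds_west.create_db_cluster(
        DBClusterIdentifier="app-secondary",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="app-global",
    )
    ```
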
  20. A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MySQL, and Oracle databases. There are many dependent services hosted either in the same data center or externally. The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration.

    Which tools or services should the solutions architect use to plan the cloud migration? (Choose three.)

    • AWS Application Discovery Service
    • AWS SMS
    • AWS X-Ray
    • AWS Cloud Adoption Readiness Tool (CART)
    • Amazon Inspector
    • AWS Migration Hub