SAP-C01 : AWS Certified Solutions Architect – Professional : Part 05
-
Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents.
Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose three.)
- Setting up a federation proxy or identity provider
- Using AWS Security Token Service to generate temporary tokens
- Tagging each folder in the bucket
- Configuring IAM role
- Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
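The federation approach combines the first, second, and fourth options: a proxy (or identity provider) authenticates the user against corporate AD/LDAP and then requests temporary credentials from AWS STS that are scoped to that user's folder. Below is a minimal sketch of that STS step using boto3; the bucket name, folder layout, and proxy integration are assumptions for illustration only.

import json
import boto3

def temporary_credentials_for(username, bucket="corp-user-docs"):
    # Scoped-down policy: only the caller's own "folder" (key prefix) is reachable.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/home/{username}/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"home/{username}/*"]}},
            },
        ],
    }
    sts = boto3.client("sts")
    # The proxy exchanges the authenticated directory identity for short-lived keys.
    response = sts.get_federation_token(
        Name=username[:32], Policy=json.dumps(policy), DurationSeconds=3600
    )
    return response["Credentials"]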
-
A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team.
How would you optimize this scenario to solve performance issues and automate the process as much as possible?
- Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
- Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
- Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
- Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
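The read replica removes the batch load from the transactional database, and SNS closes the automation gap: when the batch finishes, it publishes to a topic whose email or HTTP/HTTPS subscription reaches the on-premises dashboard system without modifying that system. A minimal sketch of the notification step (the topic ARN is a placeholder):

import boto3

def notify_dashboard_update(batch_run_id):
    sns = boto3.client("sns")
    # Hypothetical topic; the on-premises side subscribes via email or an
    # HTTP/HTTPS endpoint, so the unmodifiable system only has to receive it.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:dashboard-update",
        Subject="Data warehouse refresh complete",
        Message=f"Batch run {batch_run_id} finished; dashboard data is ready.",
    )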
-
You are running a successful multitier web application on AWS and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application’s database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You also have implemented ElastiCache as a database caching layer between the application tier and database tier.
Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.
- Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
- Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
- Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
- Generate the reports by querying the ElastiCache database caching tier.
Explanation:
Amazon RDS allows you to use read replicas with Multi-AZ deployments. In Multi-AZ deployments for MySQL, Oracle, SQL Server, and PostgreSQL, the data in your primary DB Instance is synchronously replicated to a standby instance in a different Availability Zone (AZ). Because of their synchronous replication, Multi-AZ deployments for these engines offer greater data durability benefits than do read replicas. (In all Amazon RDS for Aurora deployments, your data is automatically replicated across 3 Availability Zones.)
You can use Multi-AZ deployments and read replicas in conjunction to enjoy the complementary benefits of each. You can simply specify that a given Multi-AZ deployment is the source DB Instance for your Read replicas. That way you gain both the data durability and availability benefits of Multi-AZ deployments and the read scaling benefits of read replicas.
Note that for Multi-AZ deployments, you have the option to create your read replica in an AZ other than that of the primary and the standby for even more redundancy. You can identify the AZ corresponding to your standby by looking at the “Secondary Zone” field of your DB Instance in the AWS Management Console.
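As a concrete illustration of the recommended option, a read replica can be created from the existing Multi-AZ primary and the reporting tier pointed at the replica's endpoint. A minimal boto3 sketch, with instance identifiers assumed for illustration:

import boto3

rds = boto3.client("rds")

# Create a read replica of the (hypothetical) Multi-AZ primary "webapp-db".
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-reporting",
    SourceDBInstanceIdentifier="webapp-db",
)

# Once the replica is available, the reporting jobs connect to its endpoint,
# leaving the primary (and its synchronous standby) untouched.
replica = rds.describe_db_instances(DBInstanceIdentifier="webapp-db-reporting")
print(replica["DBInstances"][0]["Endpoint"]["Address"])
-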
You are designing a data leak prevention solution for your VPC environment. You want your VPC Instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third party CDNs by their URLs.
You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet. Which of the following options would you consider?
- Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
- Implement security groups and configure outbound rules to only permit traffic to software depots.
- Move all your instances into private VPC subnets, remove default routes from all route tables, and add specific routes to the software depots and distributions only.
- Implement network access control lists to all specific destinations, with an implicit deny-all rule.
Explanation:
Organizations usually implement proxy solutions to provide URL and web content filtering, IDS/IPS, data loss prevention, monitoring, and advanced threat protection.
-
You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3.
How should the application use AWS credentials to access the S3 bucket securely?
- Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
- Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role’s credentials from the EC2 instance metadata.
- Create an IAM user for the application with permissions that allow list access to the S3 bucket; the application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
- Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user, and retrieve the IAM user’s credentials from the EC2 instance user data.
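A minimal sketch of the role-based option: the instance is launched with an IAM role, boto3 resolves the role's temporary credentials from the instance metadata automatically, the application checks that the object exists, and only then generates a pre-signed URL. Bucket and key names are placeholders.

import boto3
from botocore.exceptions import ClientError

# No keys in code or on disk: boto3 picks up the instance role's temporary
# credentials from the EC2 instance metadata service.
s3 = boto3.client("s3")

def presigned_download_url(bucket, key, expires=300):
    try:
        s3.head_object(Bucket=bucket, Key=key)  # verify the file exists first
    except ClientError as err:
        if err.response["Error"]["Code"] in ("404", "NoSuchKey"):
            return None  # the requested file is not in the bucket
        raise
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )

url = presigned_download_url("example-private-bucket", "reports/summary.pdf")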
-
Your system recently experienced downtime. During the troubleshooting process, you found that a new administrator mistakenly terminated several production EC2 instances.
Which of the following strategies will help prevent a similar situation in the future?
The administrator still must be able to:
– launch, start, stop, and terminate development resources.
– launch and start production instances.
- Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.
- Leverage resource-based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources.
- Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances
- Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.
Explanation:
Working with volumes
When an API action requires a caller to specify multiple resources, you must create a policy statement that allows users to access all required resources. If you need to use a Condition element with one or more of these resources, you must create multiple statements as shown in this example.
The following policy allows users to attach volumes with the tag “volume_user=iam-user-name” to instances with the tag “department=dev”, and to detach those volumes from those instances. If you attach this policy to an IAM group, the aws:username policy variable gives each IAM user in the group permission to attach or detach volumes from the instances with a tag named volume_user that has his or her IAM user name as a value.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/department": "dev"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/volume_user": "${aws:username}"
        }
      }
    }
  ]
}
Launching instances (RunInstances)
The RunInstances API action launches one or more instances. RunInstances requires an AMI and creates an instance; users can specify a key pair and security group in the request. Launching into EC2-VPC requires a subnet, and creates a network interface. Launching from an Amazon EBS-backed AMI creates a volume. Therefore, the user must have permission to use these Amazon EC2 resources. The caller can also configure the instance using optional parameters to RunInstances, such as the instance type and a subnet. You can create a policy statement that requires users to specify an optional parameter, or restricts users to particular values for a parameter. The examples in this section demonstrate some of the many possible ways that you can control the configuration of an instance that a user can launch.
Note that by default, users don’t have permission to describe, start, stop, or terminate the resulting instances. One way to grant the users permission to manage the resulting instances is to create a specific tag for each instance, and then create a statement that enables them to manage instances with that tag. For more information, see 2: Working with instances.
a. AMI
The following policy allows users to launch instances using only the AMIs that have the specified tag, “department=dev”, associated with them. The users can’t launch instances using other AMIs because the Condition element of the first statement requires that users specify an AMI that has this tag. The users also can’t launch into a subnet, as the policy does not grant permissions for the subnet and network interface resources. They can, however, launch into EC2-Classic. The second statement uses a wildcard to enable users to create instance resources, and requires users to specify the key pair project_keypair and the security group sg-1a2b3c4d. Users are still able to launch instances without a key pair.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/department": "dev"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/project_keypair",
        "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
      ]
    }
  ]
}
Alternatively, the following policy allows users to launch instances using only the specified AMIs, ami-9e1670f7 and ami-45cf5c3c. The users can’t launch an instance using other AMIs (unless another statement grants the users permission to do so), and the users can’t launch an instance into a subnet.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-9e1670f7",
        "arn:aws:ec2:region::image/ami-45cf5c3c",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
Alternatively, the following policy allows users to launch instances from all AMIs owned by Amazon. The Condition element of the first statement tests whether ec2:Owner is amazon. The users can’t launch an instance using other AMIs (unless another statement grants the users permission to do so). The users are able to launch an instance into a subnet.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:Owner": "amazon"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
b. Instance type
The following policy allows users to launch instances using only the t2.micro or t2.small instance type, which you might do to control costs. The users can’t launch larger instances because the Condition element of the first statement tests whether ec2:InstanceType is either t2.micro or t2.small.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": ["t2.micro", "t2.small"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
Alternatively, you can create a policy that denies users permission to launch any instances except t2.micro and t2.small instance types.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "ec2:InstanceType": ["t2.micro", "t2.small"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
c. Subnet
The following policy allows users to launch instances using only the specified subnet, subnet-12345678. The group can’t launch instances into any other subnet (unless another statement grants the users permission to do so). Users are still able to launch instances into EC2-Classic.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:subnet/subnet-12345678",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
Alternatively, you could create a policy that denies users permission to launch an instance into any other subnet. The statement does this by denying permission to create a network interface, except where subnet subnet-12345678 is specified. This denial overrides any other policies that are created to allow launching instances into other subnets. Users are still able to launch instances into EC2-Classic.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:network-interface/*"
      ],
      "Condition": {
        "ArnNotEquals": {
          "ec2:Subnet": "arn:aws:ec2:region:account:subnet/subnet-12345678"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
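Tying the example policies back to the question: resource tags plus IAM can let the new administrator manage development instances and start production instances while making production terminations impossible for that user. A hedged sketch (tag values, group name, region, and account ID are assumptions; launch permissions would be granted separately, as in the RunInstances examples above):

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Start, stop, and terminate instances tagged as development.
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances",
                       "ec2:TerminateInstances"],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/environment": "dev"}},
        },
        {   # Production instances may only be started, never terminated.
            "Effect": "Allow",
            "Action": "ec2:StartInstances",
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/environment": "production"}},
        },
    ],
}

iam.put_group_policy(
    GroupName="junior-admins",                  # hypothetical IAM group
    PolicyName="tag-scoped-instance-admin",
    PolicyDocument=json.dumps(policy),
)
-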
A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?
- Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
- Web servers: store read-only data in an EC2 NFS server; mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
- Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
- Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Benefits
Enhanced Durability
Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.
Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and replaced automatically.
Increased Availability
You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details).
The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
No Administrative Intervention
DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
Failover conditions
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:
– Loss of availability in primary Availability Zone
– Loss of network connectivity to primary
– Compute unit failure on primary
– Storage failure on primary
Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.
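The selected architecture also replaces the IP-multicast clustering with DynamoDB for shared session state. A minimal, hypothetical sketch of how the app servers might read and write session items (table name and schema are assumptions):

import time
import boto3

# Hypothetical "Sessions" table with a string partition key "session_id".
sessions = boto3.resource("dynamodb").Table("Sessions")

def save_session(session_id, data, ttl_seconds=1800):
    sessions.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + ttl_seconds,  # for TTL-based cleanup
    })

def load_session(session_id):
    return sessions.get_item(Key={"session_id": session_id}).get("Item")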
-
Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes.
To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?
- Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
- Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
- Amazon ElastiCache to store the writes until the writes are committed to the database.
- Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
Explanation:
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed application components performing different tasks, without losing messages or requiring each component to be always available. Amazon SQS makes it easy to build a distributed, decoupled application, working in close conjunction with the Amazon Elastic Compute Cloud (Amazon EC2) and the other AWS infrastructure web services.
What can I do with Amazon SQS?
Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them. This allows you to quickly build message queuing applications that can be run on any computer on the internet. Since Amazon SQS is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability. This lets you focus on building sophisticated message-based applications, without worrying about how the messages are stored and managed. You can use Amazon SQS with software applications in various ways. For example, you can:
Integrate Amazon SQS with other AWS infrastructure web services to make applications more reliable and flexible.
Use Amazon SQS to create a queue of work where each message is a task that needs to be completed by a process. One or many computers can read tasks from the queue and perform them.
Build a microservices architecture, using queues to connect your microservices.
Keep notifications of significant events in a business process in an Amazon SQS queue. Each event can have a corresponding message in a queue, and applications that need to be aware of the event can read and process the messages.
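A hedged sketch of that queue-based pattern applied to the donation site: the web tier enqueues every write immediately, and worker processes drain the queue into the database at a pace the database can sustain. The queue URL and the persist_to_database helper are hypothetical.

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/donation-writes"

def enqueue_donation(donation):
    # Web tier: buffer the write durably in SQS so nothing is dropped.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(donation))

def drain_queue(persist_to_database):
    # Worker tier: pull batches and commit them to the database.
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            persist_to_database(json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
-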
You need persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call durations are mostly in the 2-3 minute range. Each traced call can be either active or terminated. An external application needs to know, each minute, the list of currently active calls. Usually there are a few calls/second, but once per month there is a periodic peak of up to 1,000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project.
What database implementation would best fit this scenario, keeping costs as low as possible?
- Use DynamoDB with a “Calls” table and a Global Secondary Index on a “State” attribute that can be equal to “active” or “terminated”. In this way the Global Secondary Index can be used for all items in the table.
- Use RDS Multi-AZ with a “CALLS” table and an indexed “STATE” field that can be equal to “ACTIVE” or “TERMINATED”. In this way the SQL query is optimized by the use of the index.
- Use RDS Multi-AZ with two tables, one for “ACTIVE_CALLS” and one for “TERMINATED_CALLS”. In this way the “ACTIVE_CALLS” table is always small and effective to access.
- Use DynamoDB with a “Calls” table and a Global Secondary Index on an “IsActive” attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.
Explanation:
Q: Can a global secondary index key be defined on non-unique attributes?
Yes. Unlike the primary key on a table, a GSI does not require the indexed attributes to be unique.
Q: Are GSI key attributes required in all items of a DynamoDB table?
No. GSIs are sparse indexes. Unlike the requirement of having a primary key, an item in a DynamoDB table does not have to contain any of the GSI keys. If a GSI key has both hash and range elements, and a table item omits either of them, then that item will not be indexed by the corresponding GSI. In such cases, a GSI can be very useful in efficiently locating items that have an uncommon attribute.
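To make the sparse-index idea concrete, here is a hedged sketch: active calls carry an IsActive attribute (so they are projected into the GSI), terminated calls have it removed (so they drop out of the index), and the per-minute poll reads only the small index. Table, index, and attribute names are assumptions.

import boto3

calls = boto3.resource("dynamodb").Table("Calls")  # hypothetical table

def start_call(call_id):
    # The IsActive attribute is present, so the item appears in the sparse GSI.
    calls.put_item(Item={"call_id": call_id, "state": "active", "IsActive": "Y"})

def terminate_call(call_id):
    # Removing IsActive drops the item out of the GSI automatically.
    calls.update_item(
        Key={"call_id": call_id},
        UpdateExpression="SET #s = :t REMOVE IsActive",
        ExpressionAttributeNames={"#s": "state"},
        ExpressionAttributeValues={":t": "terminated"},
    )

def list_active_calls():
    # Scanning the sparse index touches only the currently active calls.
    return calls.scan(IndexName="IsActive-index")["Items"]
-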
Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the web site. The most latency-sensitive component of the application involves reading user preferences to support web site personalization and ad selection.
In addition to running your application in multiple regions, which option will support this application’s requirements?
- Serve user content from S3/CloudFront and use Route 53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers for propagating updates to each table.
- Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3/CloudFront with dynamic content and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region.
- Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3/CloudFront and Route 53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers for propagating DynamoDB updates.
- Serve user content from S3/CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.
-
You’ve been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC.
The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-208bc448
Subnets and Route Tables:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9
Route Tables:
rtb-218bc449
rtb-238bc44b
Associations:
subnet-258bc44d : rtb-218bc449
subnet-248bc44c : rtb-238bc44b
subnet-9189c6f9 : rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet. Application and database servers cannot have direct access to the Internet.
Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?
- Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance.
- Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
- Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d.
- Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.
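For the recommended option, the missing piece is a default route in the private route table that points at the NAT instance launched in the public web subnet. A hedged boto3 sketch (the NAT instance ID is an assumption; the bastion needs only a security-group path, not a route change):

import boto3

ec2 = boto3.client("ec2")

# A NAT instance forwards traffic it did not originate, so source/destination
# checking must be disabled on it first.
ec2.modify_instance_attribute(
    InstanceId="i-0abc1234def567890",           # hypothetical NAT instance
    SourceDestCheck={"Value": False},
)

# Send Internet-bound traffic from the private subnets (rtb-238bc44b) through
# the NAT instance running in the public subnet subnet-258bc44d.
ec2.create_route(
    RouteTableId="rtb-238bc44b",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-0abc1234def567890",
)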
-
You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets, and smartphones. Supported platforms are Windows, macOS, iOS, and Android. Separate sticky-session and SSL certificate setups are required for the different platform types.
Which of the following describes the most cost-effective and performance-efficient architecture setup?
- Set up a hybrid architecture to handle session state and SSL certificates on-premises, and separate EC2 instance groups running web applications for the different platform types in a VPC.
- Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform.
- Set up two ELBs. The first ELB handles SSL certificates for all platforms and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform.
- Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.
Explanation:
One ELB cannot handle different SSL certificates, but since we are using sticky sessions, it must be handled at the ELB level. SSL could be handled on the EC2 instances only with a TCP-configured ELB, and ELB supports sticky sessions only in HTTP/HTTPS configurations.
The way the Elastic Load Balancer does session stickiness on an HTTP/HTTPS listener is by utilizing an HTTP cookie. If SSL traffic is not terminated on the Elastic Load Balancer and is terminated on the back-end instance, the Elastic Load Balancer has no visibility into the HTTP headers and therefore cannot set or read any of the HTTP headers being passed back and forth.
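To illustrate why stickiness forces an HTTP/HTTPS listener, here is a hedged sketch of enabling duration-based (load-balancer-generated cookie) stickiness on a Classic Load Balancer; the load balancer name is an assumption:

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# The ELB can only issue and read its stickiness cookie when it terminates
# HTTP/HTTPS on the listener, not when it merely forwards raw TCP.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="ios-platform-elb",        # hypothetical per-platform ELB
    PolicyName="sticky-30min",
    CookieExpirationPeriod=1800,
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="ios-platform-elb",
    LoadBalancerPort=443,
    PolicyNames=["sticky-30min"],
)
-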
An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage.
When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API credentials?
- Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the application instances by referencing an instance profile.
- Use the Parameters section in the CloudFormation template to have the user input Access and Secret Keys from an already-created IAM user that has the permissions required to read and write from the required DynamoDB table.
- Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.
- Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret Keys, and pass them to the application instance through user data.
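For reference, a hedged sketch of the relevant CloudFormation resources, expressed here as a Python dictionary for brevity: a role, an instance profile that references the role, and an instance whose IamInstanceProfile property references the profile. Resource names, the table ARN, and the AMI ID are placeholders.

import json

template_fragment = {
    "Resources": {
        "AppRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{"Effect": "Allow",
                                   "Principal": {"Service": "ec2.amazonaws.com"},
                                   "Action": "sts:AssumeRole"}],
                },
                "Policies": [{
                    "PolicyName": "dynamodb-access",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{"Effect": "Allow",
                                       "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                                  "dynamodb:Query", "dynamodb:UpdateItem"],
                                       "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AppTable"}],
                    },
                }],
            },
        },
        "AppInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "AppRole"}]},
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",              # placeholder AMI
                "InstanceType": "t2.micro",
                "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
            },
        },
    },
}

print(json.dumps(template_fragment, indent=2))  # merge into the full template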
-
Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don’t want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console.
Which option below will meet the needs for your NOC members?
- Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.
- Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.
- Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
- Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.
-
You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3.
How should the application use AWS credentials to access the S3 bucket securely?
- Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
- Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user, and retrieve the IAM user’s credentials from the EC2 instance user data.
- Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role’s credentials from the EC2 instance metadata.
- Create an IAM user for the application with permissions that allow list access to the S3 bucket; the application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
-
A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS which includes a NAT (Network Address Translation) instance in the public Web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefit enrollment period, plus some extra overhead. Enrollment proceeds nicely for two days and then the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80 from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them.
Which activity would be useful in defending against this attack?
- Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (Internet Gateway)
- Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP
- Create 15 Security Group rules to block the attacking IP addresses over port 80
- Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny rules to block the attacking IP addresses
Explanation:
Use AWS Identity and Access Management (IAM) to control who in your organization has permission to create and manage security groups and network ACLs (NACLs). Isolate the responsibilities and roles for better defense. For example, you can give only your network administrators or security admins the permission to manage the security groups and restrict other roles.
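A hedged sketch of the NACL remediation itself: add inbound deny entries for the attacking addresses to the network ACL associated with the web-tier subnet. The ACL ID and addresses are placeholders, with one rule number per address (fifteen in total); deny rules must be numbered lower than the existing allow rules so they are evaluated first.

import boto3

ec2 = boto3.client("ec2")

attacking_ips = ["198.51.100.7", "198.51.100.23"]   # ... up to the 15 addresses

for rule_number, ip in enumerate(attacking_ips, start=10):
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0abc1234def67890",   # hypothetical web-tier NACL
        RuleNumber=rule_number,
        Protocol="6",                          # TCP
        RuleAction="deny",
        Egress=False,                          # inbound rule
        CidrBlock=f"{ip}/32",
        PortRange={"From": 80, "To": 80},
    )
-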
You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50 KB in size. Additionally, 5 million customers are expected to use the application on a regular basis.
The solution needs to be cost-effective, highly available, scalable, and secure. How would you design a solution to meet the above requirements?
- Set up an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials.
- Set up a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.
- Set up an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
- Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user’s S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
Explanation:
Here are some of the things that you can build using fine-grained access control:
A mobile app that displays information for nearby airports, based on the user’s location. The app can access and display attributes such as airline names, arrival times, and flight numbers. However, it cannot access or display pilot names or passenger counts.
A mobile game which stores high scores for all users in a single table. Each user can update their own scores, but has no access to the other ones.
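A hedged sketch of the fine-grained access control piece of the recommended option: the role assumed through web identity federation carries a policy whose dynamodb:LeadingKeys condition limits each federated user to items keyed by their own identity. The role name, table ARN, and identity-provider variable are assumptions.

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                   "dynamodb:Query", "dynamodb:UpdateItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserPreferences",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Each user may touch only items whose partition key equals
                # the identity asserted by the web identity provider.
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}

iam.put_role_policy(
    RoleName="mobile-app-federated-role",       # hypothetical federated role
    PolicyName="per-user-preferences",
    PolicyDocument=json.dumps(policy),
)
-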
You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job is periodically analyzing the logs on S3 to build a usage dashboard that you share with your CIO.
You recently improved overall performance of the website using CloudFront for dynamic content delivery, with your website as the origin.
After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard?
- Enable CloudFront to deliver access logs to S3 and use them as input to the Elastic MapReduce job.
- Turn on CloudTrail and use trail log files on S3 as input to the Elastic MapReduce job.
- Change your log collection process to use CloudWatch ELB metrics as input to the Elastic MapReduce job.
- Use the Elastic Beanstalk “Rebuild Environment” option to update log delivery to the Elastic MapReduce job.
- Use the Elastic Beanstalk “Restart App server(s)” option to update log delivery to the Elastic MapReduce job.
-
A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as data store. The main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week.
Recently, a new chat feature has been implemented in Node.js and waits to be integrated into the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles.
What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?
- Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe
- Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe
- Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe
- Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes
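A hedged sketch of the single-stack, two-layer answer using boto3 (names, ARNs, and the custom recipe are placeholders): one stack holds both the existing Java application-server layer and a new custom layer for the Node.js chat component, so each layer can run an instance type sized for its own bottleneck.

import boto3

opsworks = boto3.client("opsworks")

stack = opsworks.create_stack(
    Name="social-news",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
)

# Existing memory-bound web application (m2.xlarge instances).
opsworks.create_layer(StackId=stack["StackId"], Type="java-app",
                      Name="Web app", Shortname="webapp")

# New CPU-bound Node.js chat component on a custom layer with one custom recipe.
opsworks.create_layer(
    StackId=stack["StackId"], Type="custom",
    Name="Chat", Shortname="chat",
    CustomRecipes={"Deploy": ["chat::deploy"]},  # hypothetical Chef recipe
)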
-
Select the correct set of options. These are the initial settings for the default security group:
- Allow no inbound traffic, Allow all outbound traffic and Allow instances associated with this security group to talk to each other
- Allow all inbound traffic, Allow no outbound traffic and Allow instances associated with this security group to talk to each other
- Allow no inbound traffic, Allow all outbound traffic and Does NOT allow instances associated with this security group to talk to each other
- Allow all inbound traffic, Allow all outbound traffic and Does NOT allow instances associated with this security group to talk to each other
Explanation:
A default security group is named default, and it has an ID assigned by AWS. The following are the initial settings for each default security group:
Allow inbound traffic only from other instances associated with the default security group
Allow all outbound traffic from the instance
The default security group specifies itself as a source security group in its inbound rules. This is what allows instances associated with the default security group to communicate with other instances associated with the default security group.
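A quick way to confirm these defaults is to inspect the default group: its single inbound permission references the group itself, and its egress rule allows all traffic. A minimal sketch (assumes credentials for the account are already configured):

import boto3

ec2 = boto3.client("ec2")

group = ec2.describe_security_groups(
    Filters=[{"Name": "group-name", "Values": ["default"]}]
)["SecurityGroups"][0]

# Inbound: one rule whose source is the default security group itself.
for permission in group["IpPermissions"]:
    print("inbound from:", permission.get("UserIdGroupPairs"))

# Outbound: allow all traffic to 0.0.0.0/0.
for permission in group["IpPermissionsEgress"]:
    print("outbound to:", permission.get("IpRanges"))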