SAP-C01 : AWS Certified Solutions Architect – Professional : Part 02

  1. Which of the following are AWS storage services? (Choose two.)

    • AWS Relational Database Service (AWS RDS)
    • AWS ElastiCache
    • AWS Glacier
    • AWS Import/Export
  2. How is AWS readily distinguished from other vendors in the traditional IT computing landscape?

    • Experienced. Scalable and elastic. Secure. Cost-effective. Reliable
    • Secure. Flexible. Cost-effective. Scalable and elastic. Global
    • Secure. Flexible. Cost-effective. Scalable and elastic. Experienced
    • Flexible. Cost-effective. Dynamic. Secure. Experienced.
  3. You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all.

    What is the problem and a valid solution?

    • The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput.
    • Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and filesystem to use 64KB blocks to increase throughput.
    • The standard EBS Instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.
    • Larger storage volumes support higher Provisioned IOPS rates; increase the provisioned volume storage of each of the 6 EBS volumes to 1TB.
    • RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
  4. Your company is storing millions of sensitive transactions across thousands of 100-GB files that must be encrypted in transit and at rest. Analysts concurrently depend on subsets of files, which can consume up to 5 TB of space, to generate simulations that can be used to steer business decisions.

    You are required to design an AWS solution that can cost effectively accommodate the long-term storage and in-flight subsets of data.

    Which approach can satisfy these objectives?

    • Use Amazon Simple Storage Service (S3) with server-side encryption, and run simulations on subsets in ephemeral drives on Amazon EC2.
    • Use Amazon S3 with server-side encryption, and run simulations on subsets in-memory on Amazon EC2.
    • Use HDFS on Amazon EMR, and run simulations on subsets in ephemeral drives on Amazon EC2.
    • Use HDFS on Amazon Elastic MapReduce (EMR), and run simulations on subsets in-memory on Amazon Elastic Compute Cloud (EC2).
    • Store the full data set in encrypted Amazon Elastic Block Store (EBS) volumes, and regularly capture snapshots that can be cloned to EC2 workstations.
  5. Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into a single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours.

    What is the best approach to meet your customer’s requirements?

    • Send all the log events to Amazon SQS, set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    • Send all the log events to Amazon Kinesis, develop a client process to apply heuristics on the logs.
    • Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics to the logs.
    • Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, use EMR to apply heuristics on the logs.
    Explanation:
    The throughput of an Amazon Kinesis stream is designed to scale without limits via increasing the number of shards within a stream. However, there are certain limits you should keep in mind while using Amazon Kinesis Streams:
    By default, Records of a stream are accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to up to 7 days by enabling extended data retention.
    The maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB).
    Each shard can support up to 1000 PUT records per second.
    For more information about other API level limits, see Amazon Kinesis Streams Limits.
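    As a rough sketch of the consumer side of the Kinesis option, a minimal boto3 polling loop could look like the following. The stream name, single-shard handling, and apply_heuristics function are placeholders; a production consumer would typically use the Kinesis Client Library with checkpointing across all shards.

```python
import time
import boto3

kinesis = boto3.client("kinesis")
STREAM = "consolidated-logs"  # placeholder stream name

def apply_heuristics(data: bytes) -> None:
    # Placeholder for the real-time heuristic analysis of one log event.
    pass

# For brevity this reads a single shard; a real consumer enumerates every shard
# and checkpoints its position (e.g. with the Kinesis Client Library).
shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="LATEST"
)["ShardIterator"]

while True:
    resp = kinesis.get_records(ShardIterator=iterator, Limit=1000)
    for record in resp["Records"]:
        apply_heuristics(record["Data"])  # Data is the raw blob (up to 1 MB per record)
    iterator = resp["NextShardIterator"]
    time.sleep(1)  # stay under the per-shard read limits
```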
  6. A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS and produce a cost-efficient architecture that is still designed for availability and durability.

    Which is the most appropriate?

    • Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 instances, and configure with auto scaling and an Elastic Load Balancer.
    • Model the environment using CloudFormation, use an EC2 instance running an Apache web server and an open source search application, and stripe multiple standard EBS volumes together to store the JPEGs and search index.
    • Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.
    • Use a single-AZ RDS MySQL instance to store the search index and the JPEG images, and use an EC2 instance to serve the website and translate user queries into SQL.
    • Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java container for the website, on EC2 instances and use Route53 with DNS round-robin.
    Explanation:
    There is no such thing as “Most appropriate” without knowing all your goals. I find your scenarios very fuzzy, since you can obviously mix-n-match between them. I think you should decide by layers instead:
    Load Balancer Layer: ELB or just DNS, or roll-your-own. (Using DNS+EIPs is slightly cheaper, but less reliable than ELB.)
    Storage Layer for 17TB of Images: This is the perfect use case for S3. Off-load all the web requests directly to the relevant JPEGs in S3. Your EC2 boxes just generate links to them.
    If your app already serves its own images (not links to images), you might start with EFS. But more than likely, you can just set up a web server to re-write or re-direct all JPEG links to S3 pretty easily.
    If you use S3, don’t serve directly from the bucket – serve via a CNAME in a domain you control. That way, you can switch in CloudFront easily.
    EBS will be way more expensive, and you’ll need 2x the drives if you need 2 boxes. Yuck.
    Consider a smaller storage format. For example, JPEG 2000 or WebP or other tools might make for smaller images. There is also the DjVu format from a while back.
    Cache Layer: Adding CloudFront in front of S3 will help people on the other side of the world — well, possibly. Typical archives follow a power law. The long tail of requests means that most JPEGs won’t be requested enough to be in the cache. So you are only speeding up the most popular objects. You can always wait, and switch in CF later after you know your costs better. (In some cases, it can actually lower costs.)
    You can also put CloudFront in front of your app, since your archive search results should be fairly static. This will also allow you to run with a smaller instance type, since CF will handle much of the load if you do it right.
    Database Layer: A few options:
    Use whatever your current server does for now, and replace with something else down the road. Don’t under-estimate this approach, sometimes it’s better to start now and optimize later.
    Use RDS to run MySQL/Postgres
    I’m not as familiar with Elasticsearch / CloudSearch, but obviously CloudSearch will be less maintenance and setup.
    App Layer:
    When creating the app layer from scratch, consider CloudFormation and/or OpsWorks. It’s extra stuff to learn, but helps down the road.
    Java+Tomcat is right up the alley of Elastic Beanstalk (basically EC2 + Auto Scaling + ELB).
    Preventing Abuse: When you put something in a public S3 bucket, people will hot-link it from their web pages. If you want to prevent that, your app on the EC2 box can generate signed links to S3 that expire in a few hours. Now everyone will be forced to go thru the app, and the app can apply rate limiting, etc.
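    A minimal sketch of generating such an expiring signed link with boto3 (the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Generate a link to a scanned page that expires after a few hours, so users
# must go through the application instead of hot-linking the bucket directly.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "newspaper-archive", "Key": "1923/05/page-0042.jpg"},
    ExpiresIn=3 * 60 * 60,  # three hours, in seconds
)
print(url)
```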
    Saving money: If you don’t mind having downtime:
    run everything in one AZ (both DBs and EC2s). You can always add servers and AZs down the road, as long as it’s architected to be stateless. In fact, you should use multiple regions if you want it to be really robust.
    use Reduced Redundancy in S3 to save a few hundred bucks per month (Someone will have to “go fix it” every time it breaks, including having an off-line copy to repair S3.)
    Buy Reserved Instances on your EC2 boxes to make them cheaper. (Start with the RI market and buy a partially used one to get started.) It’s just a coupon saying “if you run this type of box in this AZ, you will save on the per-hour costs.” You can get 1/2 to 1/3 off easily.
    Rewrite the application to use less memory and CPU – that way you can run on fewer/smaller boxes. (May or may not be worth the investment.)
    If your app will be used very infrequently, you will save a lot of money by using Lambda. I’d be worried that it would be quite slow if you tried to run a Java application on it though.
    We’re missing some information like load, latency expectations from search, indexing speed, size of the search index, etc. But with what you’ve given us, I would go with S3 as the storage for the files (S3 rocks. It is really, really awesome). If you’re stuck with the commercial search application, then on EC2 instances with autoscaling and an ELB. If you are allowed an alternative search engine, Elasticsearch is probably your best bet. I’d run it on EC2 instead of the AWS Elasticsearch service, as IMHO it’s not ready yet. Don’t autoscale Elasticsearch automatically though, it’ll cause all sorts of issues. I have zero experience with CloudSearch so I can’t comment on that. Regardless of which option, I’d use CloudFormation for all of it.
  7. A company has a complex web application that leverages Amazon CloudFront for global scalability and performance. Over time, users report that the web application is slowing down.
    The company’s operations team reports that the CloudFront cache hit ratio has been dropping steadily. The cache metrics report indicates that query strings on some URLs are inconsistently ordered and are specified sometimes in mixed-case letters and sometimes in lowercase letters.

    Which set of actions should the solutions architect take to increase the cache hit ratio as quickly as possible?

    • Deploy a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
    • Update the CloudFront distribution to disable caching based on query string parameters.
    • Deploy a reverse proxy after the load balancer to post process the emitted URLs in the application to force the URL strings to be lowercase.
    • Update the CloudFront distribution to specify case-insensitive query string processing.
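    For illustration, a Lambda@Edge viewer-request handler along the lines of the first option could normalize the query string roughly as follows. The handler name and normalization details are a sketch, not a reference implementation; the event shape is the standard CloudFront Lambda@Edge request event.

```python
from urllib.parse import parse_qsl, urlencode

def handler(event, context):
    # Viewer-request trigger: normalize the query string before CloudFront
    # computes the cache key, so equivalent URLs map to the same cached object.
    request = event["Records"][0]["cf"]["request"]
    params = parse_qsl(request["querystring"], keep_blank_values=True)
    normalized = sorted((k.lower(), v.lower()) for k, v in params)
    request["querystring"] = urlencode(normalized)
    return request
```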
  8. You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account’s bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts.

    Identify which option will allow you to achieve this goal.

    • Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
    • Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
    • Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
    • Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts
    Explanation:
    Bucket Owner Granting Cross-account Permission to objects It Does Not Own
    In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects. That is, your bucket can have objects that other AWS accounts own.
    Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:
    The bucket owner has no permissions on those objects created by other AWS accounts. So for the bucket owner to grant permissions on objects it does not own, the object owner, the AWS account that created the objects, must first grant permission to the bucket owner. The bucket owner can then delegate those permissions.
    Bucket owner account can delegate permissions to users in its own account but it cannot delegate permissions to other AWS accounts, because cross-account delegation is not supported.
    In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects, and grant another AWS account permission to assume the role temporarily enabling it to access objects in the bucket.
    Background: Cross-Account Permissions and Using IAM Roles
    IAM roles enable several scenarios to delegate access to your resources, and cross-account access is one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role you create has two policies attached to it:
    A trust policy identifying another AWS account that can assume the role.
    An access policy defining what permissions—for example, s3:GetObject—are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see Amazon S3 actions – Amazon Simple Storage Service.
    The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects:
    Assume the role and, in response, get temporary security credentials.
    Using the temporary security credentials, access the objects in the bucket.
    For more information about IAM roles, go to IAM roles – AWS Identity and Access Management (amazon.com) in IAM User Guide.
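    A minimal boto3 sketch of those two steps (the role ARN, bucket, and key are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Step 1: assume the cross-account role and receive temporary security credentials.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/examplerole",  # placeholder role ARN
    RoleSessionName="cross-account-object-access",
)["Credentials"]

# Step 2: use the temporary credentials to access objects in the bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
obj = s3.get_object(Bucket="examplebucket", Key="example-object")
```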
    The following is a summary of the walkthrough steps:
    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q08 004

    Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.
    Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what user in Account C can do when the user accesses Account A.
    Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.
    Account C administrator creates a user and attaches a user policy that allows the user to assume the role.
    User in Account C first assumes the role, which returns the user temporary security credentials. Using those temporary credentials, the user then accesses objects in the bucket.
    For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. Per IAM guidelines (see Example walkthroughs: Managing access to your Amazon S3 resources – Amazon Simple Storage Service) we do not use the account root credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials in creating resources and granting them permissions

    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q08 005
  9. You’re running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place.

    What backup solution would be most appropriate for this use case?

    • Use Storage Gateway and configure it to use Gateway Cached volumes.
    • Configure your backup software to use S3 as the target for your data backups.
    • Configure your backup software to use Glacier as the target for your data backups.
    • Use Storage Gateway and configure it to use Gateway Stored volumes.
    Explanation:
    Volume gateway provides an iSCSI target, which enables you to create volumes and mount them as iSCSI devices from your on-premises application servers. The volume gateway runs in either a cached or stored mode.

    In the cached mode, your primary data is written to S3, while you retain some portion of it locally in a cache for frequently accessed data.
    In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up to AWS.
    In either mode, you can take point-in-time snapshots of your volumes and store them in Amazon S3, enabling you to make space-efficient versioned copies of your volumes for data protection and various data reuse needs.
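    For illustration, taking one of those point-in-time snapshots of a gateway volume with boto3 might look like this (the volume ARN is a placeholder):

```python
import boto3

storagegateway = boto3.client("storagegateway")

# Snapshot a Volume Gateway volume; the snapshot is stored in Amazon S3 as a
# space-efficient, versioned copy of the volume.
resp = storagegateway.create_snapshot(
    VolumeARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABB",  # placeholder
    SnapshotDescription="Nightly snapshot of the on-premises backup volume",
)
print(resp["SnapshotId"])
```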

  10. To serve web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large heavy utilization Reserved Instances (RIs), evenly spread across two availability zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity that’s unused.

    Which option is the most cost effective and uses EC2 capacity most effectively?

    • Configure Autoscaling group and Launch Configuration with ELB to add up to 10 more on-demand m1.large instances when triggered by Cloudwatch. Shut off c3.2xlarge instances.
    • Configure ELB with two c3.2xlarge instances and use on-demand Autoscaling group for up to two additional c3.2xlarge instances. Shut off m1.large instances.
    • Route traffic to EC2 m1.large and c3.2xlarge instances directly using Route 53 latency based routing and health checks. Shut off ELB.
    • Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin. 
  11. You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route53 Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime, you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test, you notice that when you disable all web servers in one of the regions, Route53 does not automatically direct all users to the other region.

    What could be happening? (Choose two.)

    • Latency resource record sets cannot be used in combination with weighted resource record sets.
    • You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
    • The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
    • One of the two working web servers in the other region did not pass its HTTP health check.
    • You did not set “Evaluate Target Health” to “Yes” on the latency alias resource record set associated with example.com in the region where you disabled the servers.
    Explanation:
    How Health Checks Work in Complex Amazon Route 53 Configurations
    Checking the health of resources in complex configurations works much the same way as in simple configurations. However, in complex configurations, you use a combination of alias resource record sets (including weighted alias, latency alias, and failover alias) and nonalias resource record sets to build a decision tree that gives you greater control over how Amazon Route 53 responds to requests. For more information, see How Health Checks Work in Simple Amazon Route 53 Configurations.
    For example, you might use latency alias resource record sets to select a region close to a user and use weighted resource record sets for two or more resources within each region to protect against the failure of a single endpoint or an Availability Zone. The following diagram shows this configuration.
    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q11 006

    Here’s how Amazon EC2 and Amazon Route 53 are configured:
    You have Amazon EC2 instances in two regions, us-east-1 and ap-southeast-2. You want Amazon Route 53 to respond to queries by using the resource record sets in the region that provides the lowest latency for your customers, so you create a latency alias resource record set for each region. (You create the latency alias resource record sets after you create resource record sets for the individual Amazon EC2 instances.)
    Within each region, you have two Amazon EC2 instances. You create a weighted resource record set for each instance. The name and the type are the same for both of the weighted resource record sets in each region.
    When you have multiple resources in a region, you can create weighted or failover resource record sets for your resources. You can also create even more complex configurations by creating weighted alias or failover alias resource record sets that, in turn, refer to multiple resources.
    Each weighted resource record set has an associated health check. The IP address for each health check matches the IP address for the corresponding resource record set. This isn’t required, but it’s the most common configuration.
    For both latency alias resource record sets, you set the value of Evaluate Target Health to Yes.
    You use the Evaluate Target Health setting for each latency alias resource record set to make Amazon Route 53 evaluate the health of the alias targets—the weighted resource record sets—and respond accordingly.
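    As a sketch, creating one such latency alias record set with Evaluate Target Health enabled might look like this with boto3 (the hosted zone IDs, record names, and alias target are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z3M3LMPEXAMPLE",  # placeholder hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "SetIdentifier": "us-east-1-latency",
                "Region": "us-east-1",
                "AliasTarget": {
                    "HostedZoneId": "Z3M3LMPEXAMPLE",   # zone containing the weighted record sets
                    "DNSName": "us-east-1.example.com", # name of the weighted record sets
                    "EvaluateTargetHealth": True,       # have Route 53 check the alias targets
                },
            },
        }]
    },
)
```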

    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q11 007

    The preceding diagram illustrates the following sequence of events:
    Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
    Amazon Route 53 selects a weighted resource record set based on weight. Evaluate Target Health is Yes for the latency alias resource record set, so Amazon Route 53 checks the health of the selected weighted resource record set.
    The health check failed, so Amazon Route 53 chooses another weighted resource record set based on weight and checks its health. That resource record set also is unhealthy.
    Amazon Route 53 backs out of that branch of the tree, looks for the latency alias resource record set with the next-best latency, and chooses the resource record set for ap-southeast-2.
    Amazon Route 53 again selects a resource record set based on weight, and then checks the health of the selected resource record set. The health check passed, so Amazon Route 53 returns the applicable value in response to the query.
    What Happens When You Associate a Health Check with an Alias Resource Record Set?
    You can associate a health check with an alias resource record set instead of or in addition to setting the value of Evaluate Target Health to Yes. However, it’s generally more useful if Amazon Route 53 responds to queries based on the health of the underlying resources—the HTTP servers, database servers, and other resources that your alias resource record sets refer to. For example, suppose the following configuration:
    You assign a health check to a latency alias resource record set for which the alias target is a group of weighted resource record sets.
    You set the value of Evaluate Target Health to Yes for the latency alias resource record set.
    In this configuration, both of the following must be true before Amazon Route 53 will return the applicable value for a weighted resource record set:
    The health check associated with the latency alias resource record set must pass.
    At least one weighted resource record set must be considered healthy, either because it’s associated with a health check that passes or because it’s not associated with a health check. In the latter case, Amazon Route 53 always considers the weighted resource record set healthy.

    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q11 008

    If the health check for the latency alias resource record set fails, Amazon Route 53 stops responding to queries using any of the weighted resource record sets in the alias target, even if they’re all healthy. Amazon Route 53 doesn’t know the status of the weighted resource record sets because it never looks past the failed health check on the alias resource record set.
    What Happens When You Omit Health Checks?
    In a complex configuration, it’s important to associate health checks with all of the non-alias resource record sets. Let’s return to the preceding example, but assume that a health check is missing on one of the weighted resource record sets in the us-east-1 region:

    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q11 009

    Here’s what happens when you omit a health check on a non-alias resource record set in this configuration:
    Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
    Amazon Route 53 looks up the alias target for the latency alias resource record set, and checks the status of the corresponding health checks. The health check for one weighted resource record set failed, so that resource record set is omitted from consideration.
    The other weighted resource record set in the alias target for the us-east-1 region has no health check. The corresponding resource might or might not be healthy, but without a health check, Amazon Route 53 has no way to know. Amazon Route 53 assumes that the resource is healthy and returns the applicable value in response to the query.
    What Happens When You Set Evaluate Target Health to No?
    In general, you also want to set Evaluate Target Health to Yes for all of the alias resource record sets. In the following example, all of the weighted resource record sets have associated health checks, but Evaluate Target Health is set to No for the latency alias resource record set for the us-east-1 region:

    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q11 010

    Here’s what happens when you set Evaluate Target Health to No for an alias resource record set in this configuration:
    Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
    Amazon Route 53 determines what the alias target is for the latency alias resource record set, and checks the corresponding health checks. They’re both failing.
    Because the value of Evaluate Target Health is No for the latency alias resource record set for the us-east-1 region, Amazon Route 53 must choose one resource record set in this branch instead of backing out of the branch and looking for a healthy resource record set in the ap-southeast-2 region.

  12. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders per day after 12 months.
    Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.
    Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.

    How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

    • Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
    • Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
    • Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
    • Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
  13. A read-only news reporting site with a combined web and application tier and a database tier receives large and unpredictable traffic demands and must be able to respond to these traffic fluctuations automatically.

    What AWS services should be used to meet these requirements?

    • Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and RDS with read replicas.
    • Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and RDS with read replicas.
    • Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and multi-AZ RDS.
    • Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and multi-AZ RDS.
  14. You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket.
    Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3.
    You want to configure security to handle potentially millions of users in the most secure manner possible.

    What should your server-side application do when a new user registers on the photo-sharing mobile application?

    • Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
    • Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
    • Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
    • Record the user’s information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service “AssumeRole” function. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
    • Record the user’s information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
    Explanation:
    We can use either RDS or DynamoDB; however, in the given answers, an IAM role is mentioned only with RDS, so I would go with the RDS option. The question was explicitly focused on security, so IAM with RDS is the best choice.
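    A rough sketch of the server-side AssumeRole call, using a session policy to scope each user's temporary credentials to their own S3 prefix (the role ARN, bucket name, and per-user prefix layout are assumptions for illustration):

```python
import json
import boto3

sts = boto3.client("sts")

def credentials_for_user(user_id: str) -> dict:
    # Session policy restricting the temporary credentials to this user's photos.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::photo-app-bucket/{user_id}/*",
        }],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/photo-app-user",  # placeholder role
        RoleSessionName=f"user-{user_id}",
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,  # short-lived; the app requests new credentials on next launch
    )
    return resp["Credentials"]
```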
  15. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there’s no documentation for it.

    What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose three.)

    • An AWS Direct Connect link between the VPC and the network housing the internal services.
    • An Internet Gateway to allow a VPN connection.
    • An Elastic IP address on the VPC instance
    • An IP address space that does not conflict with the one on-premises
    • Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses
    • A VM Import of the current virtual machine
    Explanation:
    AWS Direct Connect
    AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or collocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
    AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (Amazon VPC) using private IP space, while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.
    What is AWS Direct Connect?
    AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection in place, you can create virtual interfaces directly to the AWS cloud (for example, to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3)) and to Amazon Virtual Private Cloud (Amazon VPC), bypassing Internet service providers in your network path. An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with, as well as access to other US regions. For example, you can provision a single connection to any AWS Direct Connect location in the US and use it to access public AWS services in all US Regions and AWS GovCloud (US).
    The following diagram shows how AWS Direct Connect interfaces with your network.
    SAP-C01 AWS Certified Solutions Architect – Professional Part 02 Q15 011

    Requirements
    To use AWS Direct Connect, your network must meet one of the following conditions:
    Your network is collocated with an existing AWS Direct Connect location. For more information on available AWS Direct Connect locations, go to AWS Direct Connect | Hybrid Cloud Networking | AWS (amazon.com).
    You are working with an AWS Direct Connect partner who is a member of the AWS Partner Network (APN). For a list of AWS Direct Connect partners who can help you connect, go to AWS Direct Connect | Hybrid Cloud Networking | AWS (amazon.com).
    You are working with an independent service provider to connect to AWS Direct Connect.
    In addition, your network must meet the following conditions:
    Connections to AWS Direct Connect require single mode fiber, 1000BASE-LX (1310nm) for 1 gigabit Ethernet, or 10GBASE-LR (1310nm) for 10 gigabit Ethernet. Auto Negotiation for the port must be disabled. You must support 802.1Q VLANs across these connections.
    Your network must support Border Gateway Protocol (BGP) and BGP MD5 authentication. Optionally, you may configure Bidirectional Forwarding Detection (BFD).
    To connect to Amazon Virtual Private Cloud (Amazon VPC), you must first do the following:
    Provide a private Autonomous System Number (ASN). Amazon allocates a private IP address in the 169.x.x.x range to you.
    Create a virtual private gateway and attach it to your VPC. For more information about creating a virtual private gateway, see What is AWS Site-to-Site VPN? – AWS Site-to-Site VPN (amazon.com) User Guide.
    To connect to public AWS products such as Amazon EC2 and Amazon S3, you need to provide the following:
    A public ASN that you own (preferred) or a private ASN.
    Public IP addresses (/31) (that is, one for each end of the BGP session) for each BGP session. If you do not have public IP addresses to assign to this connection, log on to AWS and then Amazon Web Services Sign-In.
    The public routes that you will advertise over BGP.

  16. You have a periodic image analysis application that gets some files as input, analyzes them, and for each file writes some data in output to a text file. The number of files in input per day is high and concentrated in a few hours of the day.
    Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results. It takes almost 20 hours per day to complete the process.

    What services could be used to reduce the elaboration time and improve the availability of the solution?

    • S3 to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto scaling to dynamically size the group of hosts depending on the length of the SQS queue
    • EBS with Provisioned IOPS (PIOPS) to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
    • S3 to store I/O files, SNS to distribute elaboration commands to a group of hosts working in parallel. Auto scaling to dynamically size the group of hosts depending on the number of SNS notifications.
    • EBS with Provisioned IOPS (PIOPS) to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
    Explanation:
    Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once attached, you can create a file system on top of these volumes, run a database, or use them in any other way you would use a block device. Amazon EBS volumes are placed in a specific Availability Zone, where they are automatically replicated to protect you from the failure of a single component.
    Amazon EBS provides three volume types: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. The three volume types differ in performance characteristics and cost, so you can choose the right storage performance and price for the needs of your applications. All EBS volume types offer the same durable snapshot capabilities and are designed for 99.999% availability.
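    As a sketch of the first option's scaling trigger, the SQS queue length can drive the Auto Scaling group through a CloudWatch alarm (the group name, queue name, and threshold are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scale-out policy on the worker group (the group itself is created separately).
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-analysis-workers",  # placeholder group name
    PolicyName="scale-out-on-queue-depth",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
)

# Fire the policy when the backlog of elaboration commands grows.
cloudwatch.put_metric_alarm(
    AlarmName="image-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "image-jobs"}],  # placeholder queue name
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```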
  17. You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB.

    Which of the following designs will meet these objectives?

    • Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes.
    • Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes.
    • Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume.
    • Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance.
    • Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous, block-level replication to an identically configured instance in us-east-1b.
  18. You are the new IT architect in a company that operates a mobile sleep tracking application.
    When activated at night, the mobile app is sending collected data points of 1 kilobyte every 5 minutes to your backend.
    The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table.
    Every morning, you scan the table to extract and aggregate last night’s data on a per user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app.
    Currently you have around 100k users who are mostly based out of North America.
    You have been tasked to optimize the architecture of the backend system to lower cost.

    What would you recommend? (Choose two.)

    • Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
    • Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.
    • Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
    • Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
    • Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
  19. A large real-estate brokerage is exploring the option of adding a cost-effective location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count. The existing mobile app has 5 million users across the US.

    Which one of the following architectural suggestions would you make to the customer?

    • The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
    • Use AWS Direct Connect or VPN to establish connectivity with mobile carriers. EC2 instances will receive the mobile application’s location through the carrier connection; RDS will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.
    • The mobile application will send device location using SQS. EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application.
    • The mobile application will send device location using AWS Mobile Push. EC2 instances will retrieve the relevant offers from DynamoDB. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
  20. You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data.

    Which of these solutions would you recommend?

    • Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
    • Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
    • Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
    • Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.