SAP-C01 : AWS Certified Solutions Architect – Professional : Part 11

  1. True or False: “In the context of Amazon ElastiCache, from the application’s point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node.”

    • True, from the application’s point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node, since each has a unique node identifier.
    • True, from the application’s point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node.
    • False, you can connect to a cache node, but not to a cluster configuration endpoint.
    • False, you can connect to a cluster configuration endpoint, but not to a cache node.
    Explanation:
    This is true. From the application’s point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node. In the process of connecting to cache nodes, the application resolves the configuration endpoint’s DNS name. Because the configuration endpoint maintains CNAME entries for all of the cache nodes, the DNS name resolves to one of the nodes; the client can then connect to that node.
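    As an illustration of the DNS behavior described above, here is a minimal Python sketch; the endpoint name is hypothetical and assumes a Memcached cluster listening on the default port:

```python
import socket

# Hypothetical ElastiCache (Memcached) cluster configuration endpoint.
CONFIG_ENDPOINT = "mycluster.abc123.cfg.use1.cache.amazonaws.com"
PORT = 11211

# Resolving the configuration endpoint's DNS name returns one of the cache
# nodes; the client then connects to that node exactly as it would connect
# to an individual node endpoint.
node_ip = socket.gethostbyname(CONFIG_ENDPOINT)
with socket.create_connection((node_ip, PORT), timeout=2) as conn:
    conn.sendall(b"version\r\n")             # simple Memcached text-protocol command
    print(conn.recv(1024).decode().strip())  # e.g. "VERSION 1.6.17"
```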
  2. An organization is setting up a highly scalable application using Elastic Beanstalk.
    They are using Elastic Load Balancing (ELB) as well as a Virtual Private Cloud (VPC) with public and private subnets. They have the following requirements:

    – All the EC2 instances should have a private IP
    – All the EC2 instances should receive data via the ELB.

    Which of these will not be needed in this setup?

    • Launch the EC2 instances with only the public subnet.
    • Create routing rules which will route all inbound traffic from ELB to the EC2 instances.
    • Configure ELB and NAT as a part of the public subnet only.
    • Create routing rules which will route all outbound traffic from the EC2 instances through NAT.
    Explanation:
    The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. If the organization wants the Amazon EC2 instances to have a private IP address, it should create a public and a private subnet for the VPC in each Availability Zone (this is an AWS Elastic Beanstalk requirement). The organization should add its public resources, such as the ELB and NAT, to the public subnet, and AWS Elastic Beanstalk will assign them unique Elastic IP addresses (static, public IP addresses). The organization should launch the Amazon EC2 instances in a private subnet so that AWS Elastic Beanstalk assigns them non-routable private IP addresses. The organization should then configure the route tables with the following rules:
    – route all inbound traffic from the ELB to the EC2 instances
    – route all outbound traffic from the EC2 instances through NAT
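    A minimal boto3 sketch of those two rules; all IDs are hypothetical. The ELB-to-instance path is typically permitted with a security group rule rather than a literal route entry, while the outbound path goes through a route to the NAT:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs for the private subnet's route table, the NAT instance,
# the instances' security group, and the ELB's security group.
PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
NAT_INSTANCE_ID = "i-0123456789abcdef0"
INSTANCE_SG_ID = "sg-0123456789abcdef0"
ELB_SG_ID = "sg-0fedcba9876543210"

# Route all outbound traffic from the private-subnet instances through NAT.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=NAT_INSTANCE_ID,
)

# Allow inbound traffic from the ELB to the EC2 instances on port 80.
ec2.authorize_security_group_ingress(
    GroupId=INSTANCE_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": ELB_SG_ID}],
    }],
)
```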
  3. An EC2 instance that performs source/destination checks by default is launched in a private VPC subnet. All security, NACL, and routing definitions are configured as expected. A custom NAT instance is launched.

    Which of the following must be done for the custom NAT instance to work?

    • The source/destination checks should be disabled on the NAT instance.
    • The NAT instance should be launched in public subnet.
    • The NAT instance should be configured with a public IP address.
    • The NAT instance should be configured with an elastic IP address.
    Explanation:
    Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, you must disable source/destination checks on the NAT instance.
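    A minimal boto3 sketch of that change; the instance ID is hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical NAT instance ID.
NAT_INSTANCE_ID = "i-0123456789abcdef0"

# Disable source/destination checking so the instance can forward traffic
# that is neither sourced from nor destined for itself.
ec2.modify_instance_attribute(
    InstanceId=NAT_INSTANCE_ID,
    SourceDestCheck={"Value": False},
)
```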
  4. An organization has created multiple components of a single application for compartmentalization. Currently all the components are hosted on a single EC2 instance. For security reasons, the organization wants to implement two separate SSL certificates for the separate modules, although it is already using a VPC.

    How can the organization achieve this with a single instance?

    • You have to launch two instances each in a separate subnet and allow VPC peering for a single IP.
    • Create a VPC instance which will have multiple network interfaces with multiple elastic IP addresses.
    • Create a VPC instance which will have both the ACL and the security group attached to it and have separate rules for each IP address.
    • Create a VPC instance which will have multiple subnets attached to it and each will have a separate IP address.
    Explanation:
    A Virtual Private Cloud (VPC) is a virtual network dedicated to the user’s AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. With VPC the user can specify multiple private IP addresses for his instances.
    The number of network interfaces and private IP addresses that a user can specify for an instance depends on the instance type. An Elastic IP (EIP) can be associated with each network interface. This helps when the user wants to host multiple websites on a single EC2 instance by using multiple SSL certificates on a single server and associating each certificate with a specific EIP. It also helps when operating network appliances, such as firewalls or load balancers, that have multiple private IP addresses for each network interface.
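    A minimal boto3 sketch of adding a second network interface with its own EIP to a running instance; all IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs for the subnet and the already-running instance.
SUBNET_ID = "subnet-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"

# Create a second network interface and attach it to the instance.
eni = ec2.create_network_interface(SubnetId=SUBNET_ID)["NetworkInterface"]
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId=INSTANCE_ID,
    DeviceIndex=1,
)

# Allocate an Elastic IP and associate it with the new interface, so the
# second SSL site can be served from its own public address.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=eni["NetworkInterfaceId"],
)
```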
  5. An organization is making software for the CIA in USA. CIA agreed to host the application on AWS but in a secure environment. The organization is thinking of hosting the application on the AWS GovCloud region. Which of the below mentioned difference is not correct when the organization is hosting on the AWS GovCloud in comparison with the AWS standard region?

    • The billing for the AWS GovCloud will be in a different account than the Standard AWS account.
    • GovCloud region authentication is isolated from Amazon.com.
    • Physical and logical administrative access only to U.S. persons.
    • It is physically isolated and has logical network isolation from all the other regions.
    Explanation:
    AWS GovCloud (US) is an isolated AWS region designed to allow U.S. government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. The AWS GovCloud (US) Region adheres to the U.S. International Traffic in Arms Regulations (ITAR) requirements. It has added advantages, such as:
    – Physical and logical administrative access is restricted to U.S. persons only.
    – There are separate AWS GovCloud (US) credentials, such as the access key and secret access key, from the standard AWS account.
    – The user signs in with the IAM user name and password.
    – AWS GovCloud (US) Region authentication is completely isolated from Amazon.com.
    If the organization is planning to host on EC2 in AWS GovCloud, it will be billed to the organization’s standard AWS account, since AWS GovCloud billing is linked with the standard AWS account and is not billed separately.
  6. How does in-memory caching improve the performance of applications in ElastiCache?

    • It improves application performance by deleting the requests that do not contain frequently accessed data.
    • It improves application performance by implementing good database indexing strategies.
    • It improves application performance by using a part of instance RAM for caching important data.
    • It improves application performance by storing critical pieces of data in memory for low-latency access.
    Explanation:
    In Amazon ElastiCache, in-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.
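    The pattern the explanation describes is often implemented as cache-aside. A minimal, self-contained Python sketch (the in-process cache and the slow query below are stand-ins for a real ElastiCache client and database call):

```python
import json
import time

def get_report(cache, run_query, report_id, ttl=300):
    """Cache-aside: check the in-memory cache first, fall back to the database."""
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # low-latency in-memory hit
    result = run_query(report_id)           # I/O-intensive query on a miss
    cache.set(key, json.dumps(result), ttl) # store the result for later requests
    return result

# Tiny in-process stand-ins so the sketch runs without a real cluster.
class DictCache:
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, ttl):
        self._store[key] = value             # TTL ignored in this stand-in

def slow_query(report_id):
    time.sleep(0.5)                          # simulate an expensive database query
    return {"id": report_id, "rows": [1, 2, 3]}

cache = DictCache()
print(get_report(cache, slow_query, 42))     # slow: populates the cache
print(get_report(cache, slow_query, 42))     # fast: served from memory
```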
  7. A user is thinking of using an EBS PIOPS volume.

    Which of the below mentioned options is a right use case for the PIOPS EBS volume?

    • Analytics
    • System boot volume
    • Mongo DB
    • Log processing
    Explanation:
    Provisioned IOPS volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput. They are well suited to business applications and database workloads, such as NoSQL DBs and RDBMS.
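    A minimal boto3 sketch of creating a Provisioned IOPS volume for such a workload; the AZ, size, and IOPS values are illustrative:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Provisioned IOPS (io1) volume sized for a database workload.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="io1",
    Iops=10000,          # provisioned IOPS (subject to the IOPS-to-size ratio limit)
)
print(volume["VolumeId"])
```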
  8. How can a user list the IAM Role configured as a part of the launch config?

    • as-describe-launch-configs --iam-profile
    • as-describe-launch-configs --show-long
    • as-describe-launch-configs --iam-role
    • as-describe-launch-configs --role
    Explanation:
    as-describe-launch-configs describes all the launch config parameters created by the AWS account in the specified region. Generally, it returns values such as the launch config name, instance type, and AMI ID. If the user wants additional parameters, such as the IAM profile used in the config, he has to run the command:
    as-describe-launch-configs --show-long
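    as-describe-launch-configs belongs to the legacy Auto Scaling command line tools; the same information is also available through the API. A minimal boto3 sketch (the launch configuration name is hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Hypothetical launch configuration name.
resp = autoscaling.describe_launch_configurations(
    LaunchConfigurationNames=["my-launch-config"]
)
for lc in resp["LaunchConfigurations"]:
    # IamInstanceProfile is returned alongside the instance type and AMI ID.
    print(lc["LaunchConfigurationName"], lc.get("IamInstanceProfile"))
```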
  9. An organization is setting up a multi-site solution where the application runs on premise as well as on AWS to achieve the minimum recovery time objective (RTO).

    Which of the below mentioned configurations will not meet the requirements of the multi-site solution scenario?

    • Configure data replication based on RTO.
    • Keep an application running on premise as well as in AWS with full capacity.
    • Setup a single DB instance which will be accessed by both sites.
    • Setup a weighted DNS service like Route 53 to route traffic across sites.
  10. Which of the following is true of an instance profile when an IAM role is created using the console?

    • The instance profile uses a different name.
    • The console gives the instance profile the same name as the role it corresponds to.
    • The instance profile should be created manually by a user.
    • The console creates the role and instance profile as separate actions.
    Explanation:
    Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using the console, the console creates an instance profile automatically and gives it the same name as the role it corresponds to. If you use the AWS CLI, API, or an AWS SDK to create a role, you create the role and instance profile as separate actions, and you might give them different names.
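    A minimal boto3 sketch of the SDK path, where the role and the instance profile are created as separate actions; the names are illustrative:

```python
import boto3
import json

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# With the API/SDK, the role and the instance profile are separate objects
# and may even be given different names.
iam.create_role(
    RoleName="app-role",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)
iam.create_instance_profile(InstanceProfileName="app-instance-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-instance-profile",
    RoleName="app-role",
)
```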
  11. In the context of policies and permissions in AWS IAM, the Condition element is ____________.

    • crucial while writing the IAM policies
    • an optional element
    • always set to null
    • a mandatory element
    Explanation:
    The Condition element (or Condition block) lets you specify conditions for when a policy is in effect. The Condition element is optional.
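    A minimal sketch of a policy document that uses the optional Condition element, built as a Python dict; the bucket name and IP range are hypothetical:

```python
import json

# The Condition block is optional; this statement only takes effect when the
# request comes from the given source IP range.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```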
  12. Which of the following is true while using an IAM role to grant permissions to applications running on Amazon EC2 instances?

    • All applications on the instance share the same role, but different permissions.
    • All applications on the instance share multiple roles and permissions.
    • Multiple roles are assigned to an EC2 instance at a time.
    • Only one role can be assigned to an EC2 instance at a time.
    Explanation:
    Only one role can be assigned to an EC2 instance at a time, and all applications on the instance share the same role and permissions.
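    A minimal boto3 sketch of attaching a role to an instance by associating its instance profile; the profile name and instance ID are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An instance can have only one instance profile (and therefore one role)
# associated at a time.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-instance-profile"},
    InstanceId="i-0123456789abcdef0",
)
```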
  13. When using string conditions within IAM, short versions of the available comparators can be used instead of the more verbose ones.

    streqi is the short version of the _______ string condition.

    • StringEqualsIgnoreCase
    • StringNotEqualsIgnoreCase
    • StringLikeStringEquals
    • StringNotEquals
    Explanation:
    When using string conditions within IAM, short versions of the available comparators can be used instead of the more verbose versions. For instance, streqi is the short version of StringEqualsIgnoreCase that checks for the exact match between two strings ignoring their case.
  14. Attempts, one of the three types of items associated with the schedule pipeline in the AWS Data Pipeline, provides robust data management.

    Which of the following statements is NOT true about Attempts?

    • Attempts provide robust data management.
    • AWS Data Pipeline retries a failed operation until the count of retries reaches the maximum number of allowed retry attempts.
    • An AWS Data Pipeline Attempt object compiles the pipeline components to create a set of actionable instances.
    • AWS Data Pipeline Attempt objects track the various attempts, results, and failure reasons if applicable.
    Explanation:
    Attempts, one of the three types of items associated with a schedule pipeline in AWS Data Pipeline, provides robust data management. AWS Data Pipeline retries a failed operation. It continues to do so until the task reaches the maximum number of allowed retry attempts. Attempt objects track the various attempts, results, and failure reasons if applicable. Essentially, it is the instance with a counter. AWS Data Pipeline performs retries using the same resources from the previous attempts, such as Amazon EMR clusters and EC2 instances.
  15. Select the correct statement about Amazon ElastiCache.

    • It makes it easy to set up, manage, and scale a distributed in-memory cache environment in the cloud.
    • It allows you to quickly deploy your cache environment only if you install software.
    • It does not integrate with other Amazon Web Services.
    • It cannot run in the Amazon Virtual Private Cloud (Amazon VPC) environment.
    Explanation:
    ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment. With ElastiCache, you can quickly deploy your cache environment, without having to provision hardware or install software.
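    A minimal boto3 sketch of deploying a small cache cluster; the identifier, engine, node type, and node count are illustrative:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Quickly deploy a small Memcached cluster; no hardware to provision or
# software to install.
elasticache.create_cache_cluster(
    CacheClusterId="example-cache",
    Engine="memcached",
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=2,
)
```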
  16. In Amazon RDS for PostgreSQL, you can provision up to 3TB storage and 30,000 IOPS per database instance. For a workload with 50% writes and 50% reads running on a cr1.8xlarge instance, you can realize over 25,000 IOPS for PostgreSQL. However, by provisioning more than this limit, you may be able to achieve:

    • higher latency and lower throughput.
    • lower latency and higher throughput.
    • higher throughput only.
    • higher latency only.
    Explanation:
    You can provision up to 3TB storage and 30,000 IOPS per database instance. For a workload with 50% writes and 50% reads running on a cr1.8xlarge instance, you can realize over 25,000 IOPS for PostgreSQL. However, by provisioning more than this limit, you may be able to achieve lower latency and higher throughput. Your actual realized IOPS may vary from the amount you provisioned based on your database workload, instance type, and database engine choice.
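    A minimal boto3 sketch of provisioning an RDS for PostgreSQL instance with Provisioned IOPS storage; the identifier, instance class, size, and credentials are illustrative:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Provision an RDS for PostgreSQL instance with io1 (Provisioned IOPS) storage.
rds.create_db_instance(
    DBInstanceIdentifier="example-postgres",
    Engine="postgres",
    DBInstanceClass="db.r5.4xlarge",
    AllocatedStorage=3000,        # GiB
    StorageType="io1",
    Iops=30000,
    MasterUsername="adminuser",
    MasterUserPassword="change-me-please",
)
```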
  17. Which of the following cannot be done using AWS Data Pipeline?

    • Create complex data processing workloads that are fault tolerant, repeatable, and highly available.
    • Regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to another AWS service.
    • Generate reports over data that has been stored.
    • Move data between different AWS compute and storage services as well as on premise data sources at specified intervals.
    Explanation:
    AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services as well as on premise data sources at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to another AWS service.
    AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. AWS Data Pipeline also allows you to move and process data that was previously locked up in on premise data silos.
  18. AWS Direct Connect itself has NO specific resources for you to control access to. Therefore, there are no AWS Direct Connect Amazon Resource Names (ARNs) for you to use in an Identity and Access Management (IAM) policy.

    With that in mind, how is it possible to write a policy to control access to AWS Direct Connect actions?

    • You can leave the resource name field blank.
    • You can choose the name of the AWS Direct Connection as the resource.
    • You can use an asterisk (*) as the resource.
    • You can create a name for the resource.
    Explanation:
    AWS Direct Connect itself has no specific resources for you to control access to. Therefore, there are no AWS Direct Connect ARNs for you to use in an IAM policy. You use an asterisk (*) as the resource when writing a policy to control access to AWS Direct Connect actions.
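    A minimal sketch of such a policy as a Python dict; the actions listed are illustrative, and the Resource element is the asterisk:

```python
import json

# Because there are no Direct Connect ARNs, the Resource element is "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "directconnect:Describe*",
            "directconnect:CreateConnection",
        ],
        "Resource": "*",
    }],
}
print(json.dumps(policy, indent=2))
```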
  19. Identify an application that polls AWS Data Pipeline for tasks and then performs those tasks.

    • A task executor
    • A task deployer
    • A task runner
    • A task optimizer
    Explanation:
    A task runner is an application that polls AWS Data Pipeline for tasks and then performs those tasks. You can either use Task Runner as provided by AWS Data Pipeline, or create a custom Task Runner application.
    Task Runner is a default implementation of a task runner that is provided by AWS Data Pipeline. When Task Runner is installed and configured, it polls AWS Data Pipeline for tasks associated with pipelines that you have activated. When a task is assigned to Task Runner, it performs that task and reports its status back to AWS Data Pipeline. If your workflow requires non-default behavior, you’ll need to implement that functionality in a custom task runner.
  20. With respect to the AWS Lambda permissions model, at the time you create a Lambda function, you specify an IAM role that AWS Lambda can assume to execute your Lambda function on your behalf. This role is also referred to as the ________ role.

    • configuration
    • execution
    • delegation
    • dependency
    Explanation:
    Regardless of how your Lambda function is invoked, AWS Lambda always executes the function. At the time you create a Lambda function, you specify an IAM role that AWS Lambda can assume to execute your Lambda function on your behalf. This role is also referred to as the execution role.
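    A minimal boto3 sketch of supplying the execution role when creating a function; the role ARN, function name, and zip path are illustrative:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# The Role parameter is the execution role that AWS Lambda assumes when it
# runs the function on your behalf.
with open("function.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="example-function",
        Runtime="python3.12",
        Role="arn:aws:iam::123456789012:role/lambda-execution-role",
        Handler="handler.lambda_handler",
        Code={"ZipFile": f.read()},
    )
```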