DBS-C01 : AWS Certified Database – Specialty : Part 08

  1. A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

    Which actions can a database specialist perform to resolve this issue? (Choose two.)

    • Restart the application tool used to execute queries.
    • Change to a database instance class with higher throughput.
    • Convert from Single-AZ to Multi-AZ.
    • Increase the I/O parameter in Amazon RDS Enhanced Monitoring.
    • Convert from General Purpose to Provisioned IOPS (PIOPS).
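
    For context on the storage option above, converting from General Purpose to Provisioned IOPS is a single ModifyDBInstance call. A minimal boto3 sketch, with an illustrative instance identifier and illustrative storage values:

    ```python
    import boto3

    rds = boto3.client("rds")

    # Convert the instance's storage from General Purpose to Provisioned
    # IOPS (io1). Identifier and values are illustrative; io1 requires a
    # minimum of 100 GiB and an IOPS-to-storage ratio of at most 50:1.
    rds.modify_db_instance(
        DBInstanceIdentifier="dev-mysql-db",
        StorageType="io1",
        AllocatedStorage=100,
        Iops=3000,
        ApplyImmediately=True,
    )
    ```
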
  2. A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.

    What is the MOST operationally efficient solution to meet these requirements?

    • Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.
    • Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.
    • Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to */30.
    • Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.
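
    For reference, the rotation rule that the AWS::SecretsManager::RotationSchedule resource expresses can also be attached through the Secrets Manager API. A minimal boto3 sketch, assuming the secret and the rotation Lambda function already exist (both names are placeholders):

    ```python
    import boto3

    secretsmanager = boto3.client("secretsmanager")

    # Attach a 30-day rotation schedule to an existing secret. The secret
    # name and Lambda ARN below are illustrative placeholders.
    secretsmanager.rotate_secret(
        SecretId="prod/mysql/master-password",
        RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql",
        RotationRules={"AutomaticallyAfterDays": 30},
    )
    ```
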
  3. A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:

    – The networks and routes affected if a particular component fails.
    – The networks that have redundant routes between them.
    – The networks that do not have redundant routes between them.
    – The fastest path between two networks.

    Which database engine meets these requirements?

    • Amazon Aurora MySQL
    • Amazon Neptune
    • Amazon ElastiCache for Redis
    • Amazon DynamoDB
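
    As an illustration of why this is a graph workload, the "fastest path" requirement maps naturally onto a Gremlin traversal against Neptune. A sketch using gremlinpython, assuming hypothetical network vertices joined by "route" edges:

    ```python
    # pip install gremlinpython
    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal
    from gremlin_python.process.graph_traversal import __

    # Hypothetical Neptune cluster endpoint.
    conn = DriverRemoteConnection(
        "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin", "g"
    )
    g = traversal().withRemote(conn)

    # Fewest-hop path between two networks, assuming vertices "net-a" and
    # "net-b" connected by "route" edges.
    path = (
        g.V("net-a")
        .repeat(__.out("route").simplePath())
        .until(__.hasId("net-b"))
        .path()
        .limit(1)
        .next()
    )
    conn.close()
    ```
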
  4. An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and the exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. Read/write queries must complete in under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that scales automatically and is highly available.

    Which solution meets these requirements?

    • Amazon DynamoDB with on-demand capacity mode
    • Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled
    • Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)
    • Amazon Aurora with one writer node and two cross-Region Aurora Replicas
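
    A minimal sketch of the on-demand table from the first option, with an illustrative table name and key:

    ```python
    import boto3

    dynamodb = boto3.client("dynamodb")

    # On-demand mode scales with the flash-sale spikes automatically and
    # bills per request, so the quiet days outside the sale cost almost
    # nothing. Table name and key are illustrative.
    dynamodb.create_table(
        TableName="Orders",
        AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )
    ```
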
  5. A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. The application is very popular, and the company expects a tenfold increase in its user base over the next few months. The application experiences more traffic during the morning and evening hours.

    This application has two parts:

    – An in-house booking component that accepts online bookings, which arrive as simultaneous requests from users.
    – A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.

    A database specialist needs to design a cost-effective database solution to handle this workload.

    Which solution meets these requirements?

    • Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
    • Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
    • Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
    • Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.
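
    A sketch of the stream-to-SQS hand-off described in the second option. The queue URL is a placeholder, and the function assumes the stream is configured to include new images:

    ```python
    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/booking-sync"  # placeholder

    def handler(event, context):
        """Triggered by the DynamoDB stream; forwards new and updated
        bookings to SQS for the downstream CRM writer."""
        entries = [
            {"Id": r["eventID"], "MessageBody": json.dumps(r["dynamodb"]["NewImage"])}
            for r in event["Records"]
            if r["eventName"] in ("INSERT", "MODIFY")
        ]
        for i in range(0, len(entries), 10):  # SQS batch limit is 10 messages
            sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries[i:i + 10])
    ```
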
  6. An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster.

    During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, in Amazon CloudWatch, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric keeps growing.

    What is the MOST likely reason for this occurrence?

    • A VPC endpoint was not added to access DynamoDB.
    • Strongly consistent reads are always passed through DAX to DynamoDB.
    • DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
    • A VPC endpoint was not added to access CloudWatch.
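
    The observed behavior can be reproduced from the client side. A sketch using the amazondax Python client (the endpoint and table name are illustrative); note the ConsistentRead flag:

    ```python
    from boto3.dynamodb.conditions import Key
    from amazondax import AmazonDaxClient  # pip install amazon-dax-client

    # Endpoint and table name are illustrative.
    dax = AmazonDaxClient.resource(
        endpoint_url="dax://my-dax.abc123.dax-clusters.us-east-1.amazonaws.com:8111"
    )
    table = dax.Table("AdImpressions")

    # ConsistentRead=True makes DAX pass every request straight through to
    # DynamoDB rather than serving it from the query cache, which matches
    # the QueryCacheHits metric staying at 0.
    resp = table.query(
        KeyConditionExpression=Key("ad_id").eq("ad-123"),
        ConsistentRead=True,
    )
    ```
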
  7. A financial company recently launched a portfolio management solution. The backend of the application is powered by Amazon Aurora with MySQL compatibility. The company requires an RTO of 5 minutes and an RPO of 5 minutes. A database specialist must configure an efficient disaster recovery solution with minimal replication lag.

    Which approach should the database specialist take to meet these requirements?

    • Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.
    • Configure an Amazon Aurora global database and add a different AWS Region.
    • Configure a binlog and create a replica in a different AWS Region.
    • Configure a cross-Region read replica.
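
    A sketch of building an Aurora global database from an existing cluster via boto3; Aurora global databases typically replicate with sub-second lag, which fits a 5-minute RPO. All identifiers and Regions are illustrative:

    ```python
    import boto3

    # Promote the existing primary cluster into a global database...
    rds = boto3.client("rds", region_name="us-east-1")
    rds.create_global_cluster(
        GlobalClusterIdentifier="portfolio-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:portfolio-primary",
    )

    # ...then attach a secondary cluster in another Region.
    boto3.client("rds", region_name="us-west-2").create_db_cluster(
        DBClusterIdentifier="portfolio-secondary",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="portfolio-global",
    )
    ```
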
  8. A company hosts an internal file-sharing application running on Amazon EC2 instances in VPC_A. This application is backed by an Amazon ElastiCache cluster, which is in VPC_B and peered with VPC_A. The company migrates its application instances from VPC_A to VPC_B. Logs indicate that the file-sharing application can no longer connect to the ElastiCache cluster.

    What should a database specialist do to resolve this issue?

    • Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.
    • Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.
    • Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC_B’s CIDR blocks from the ElastiCache cluster.
    • Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances’ security group to the ElastiCache cluster.
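
    A sketch of the security group change from the last option via boto3; both group IDs are placeholders, and port 6379 assumes a Redis cluster:

    ```python
    import boto3

    ec2 = boto3.client("ec2")

    # Allow the application instances' security group to reach the cluster.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0cache0000000000a",  # ElastiCache cluster security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 6379,
            "ToPort": 6379,
            "UserIdGroupPairs": [{"GroupId": "sg-0app00000000000b"}],  # EC2 app SG
        }],
    )
    ```
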
  9. A database specialist must load 25 GB of data files from a company’s on-premises storage to an Amazon Neptune database.

    Which approach to load the data is FASTEST?

    • Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.
    • Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.
    • Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.
    • Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.
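
    The Neptune bulk loader is invoked over HTTP on the cluster endpoint. A sketch assuming the files have already been staged in S3 and the cluster can assume an S3-read role (all names are placeholders; request signing for IAM-auth clusters is omitted):

    ```python
    import requests

    # Kick off a bulk load from S3. Endpoint, bucket, and role ARN are
    # illustrative placeholders.
    resp = requests.post(
        "https://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader",
        json={
            "source": "s3://my-bucket/neptune-load/",
            "format": "csv",
            "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
            "region": "us-east-1",
        },
    )
    print(resp.json())  # returns a loadId that can be polled for status
    ```
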
  10. A finance company needs to make sure that its MySQL database backups are available for the most recent 90 days. All of the MySQL databases are hosted on Amazon RDS for MySQL DB instances. A database specialist must implement a solution that meets the backup retention requirement with the least possible development effort.

    Which approach should the database specialist take?

    • Use AWS Backup to build a backup plan for the required retention period. Assign the DB instances to the backup plan.
    • Modify the DB instances to enable the automated backup option. Select the required backup retention period.
    • Automate a daily cron job on an Amazon EC2 instance to create MySQL dumps, transfer to Amazon S3, and implement an S3 Lifecycle policy to meet the retention requirement.
    • Use AWS Lambda to schedule a daily manual snapshot of the DB instances. Delete snapshots that exceed the retention requirement.
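
    RDS automated backups retain at most 35 days, which is why a 90-day requirement points to AWS Backup. A sketch of such a backup plan via boto3, with illustrative plan and vault names:

    ```python
    import boto3

    backup = boto3.client("backup")

    # Daily backups, kept for 90 days.
    backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "mysql-90-day-retention",
            "Rules": [{
                "RuleName": "daily",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 ? * * *)",  # 05:00 UTC daily
                "Lifecycle": {"DeleteAfterDays": 90},
            }],
        }
    )
    ```
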
  11. An online advertising company uses an Amazon DynamoDB table as its data store. The table has Amazon DynamoDB Streams enabled and has a global secondary index on one of the keys. The table is encrypted using an AWS Key Management Service (AWS KMS) customer managed key.

    The company has decided to expand its operations globally and wants to replicate the database in a different AWS Region by using DynamoDB global tables. Upon review, an administrator notices the following:

    – No role with the dynamodb:CreateGlobalTable permission exists in the account.
    – An empty table with the same name exists in the new Region where replication is desired.
    – A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.

    Which configurations will block the creation of a global table or the creation of a replica in the new Region? (Choose two.)

    • A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.
    • An empty table with the same name exists in the Region where replication is desired.
    • No role with the dynamodb:CreateGlobalTable permission exists in the account.
    • DynamoDB Streams is enabled for the table.
    • The table is encrypted using a KMS customer managed key.
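
    For context, a sketch of the legacy (version 2017.11.29) CreateGlobalTable call: every replica table must share the same name, key schema, and indexes, and the caller needs the dynamodb:CreateGlobalTable permission. The table name is illustrative:

    ```python
    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    # Fails if the replica tables' names, key schemas, or indexes do not
    # match across Regions, or if the caller lacks the
    # dynamodb:CreateGlobalTable permission.
    dynamodb.create_global_table(
        GlobalTableName="AdEvents",
        ReplicationGroup=[
            {"RegionName": "us-east-1"},
            {"RegionName": "eu-west-1"},
        ],
    )
    ```
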
  12. A large automobile company is migrating the database of a critical financial application to Amazon DynamoDB. The company’s risk and compliance policy requires that every change in the database be recorded as a log entry for audits. The system anticipates more than 500,000 log entries each minute. Log entries must be stored as Apache Parquet files, with at least 100,000 records in each file.

    How should a database specialist implement these requirements with DynamoDB?

    • Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.
    • Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.
    • Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.
    • Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.
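
    A sketch of the stream-triggered Lambda function from the last option; the delivery stream name is a placeholder, and the stream is assumed to be configured with Parquet record-format conversion and an S3 destination so that it does the batching:

    ```python
    import json
    import boto3

    firehose = boto3.client("firehose")
    STREAM = "ddb-audit-log"  # placeholder delivery stream name

    def handler(event, context):
        """Triggered by the DynamoDB stream; forwards each change record
        to Kinesis Data Firehose, which buffers the records and writes
        them to S3 in batches."""
        records = [
            {"Data": (json.dumps(r["dynamodb"]) + "\n").encode()}
            for r in event["Records"]
        ]
        for i in range(0, len(records), 500):  # PutRecordBatch limit is 500
            firehose.put_record_batch(
                DeliveryStreamName=STREAM, Records=records[i:i + 500]
            )
    ```
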
  13. A company released a mobile game that quickly grew to 10 million daily active users in North America. The game’s backend is hosted on AWS and makes extensive use of an Amazon DynamoDB table that is configured with a TTL attribute.

    When an item is added or updated, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on old data being purged so that it can calculate rewards points accurately. Occasionally, items are read from the table that are several hours past their TTL expiry.

    How should a database specialist fix this issue?

    • Use a client library that supports the TTL functionality for DynamoDB.
    • Include a query filter expression to ignore items with an expired TTL.
    • Set the ConsistentRead parameter to true when querying the table.
    • Create a local secondary index on the TTL attribute.
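
    TTL deletion is best-effort and can lag expiry (typically by up to 48 hours), so expired items must be filtered out at read time. A sketch of the filter-expression approach, with illustrative table, key, and attribute names:

    ```python
    import time
    import boto3
    from boto3.dynamodb.conditions import Key, Attr

    table = boto3.resource("dynamodb").Table("GameState")  # illustrative

    # Drop items whose TTL attribute ("ttl" here) is already in the past,
    # even if the TTL background process has not deleted them yet.
    resp = table.query(
        KeyConditionExpression=Key("player_id").eq("player-42"),
        FilterExpression=Attr("ttl").gt(int(time.time())),
    )
    ```
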
  14. A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier. The lead developer created a single DynamoDB table for the events with the following schema:

    – Partition key: game name
    – Sort key: event identifier
    – Local secondary index: player identifier
    – Event time

    The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

    Which design change should a database specialist recommend to the development team?

    • Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.
    • Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.
    • Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.
    • Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
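
    The error occurs because a table with a local secondary index caps each item collection (all items sharing a partition key) at 10 GB. A sketch of the per-game table design from the last option, with illustrative names:

    ```python
    import boto3

    dynamodb = boto3.client("dynamodb")

    # One table per game; keying on the player spreads writes across many
    # partitions, and dropping the LSI removes the 10 GB item-collection
    # limit per partition-key value.
    dynamodb.create_table(
        TableName="events-game-alpha",
        AttributeDefinitions=[
            {"AttributeName": "player_id", "AttributeType": "S"},
            {"AttributeName": "event_time", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "player_id", "KeyType": "HASH"},
            {"AttributeName": "event_time", "KeyType": "RANGE"},
        ],
        BillingMode="PAY_PER_REQUEST",
    )
    ```
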
  15. An ecommerce company recently migrated one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition DB instance. The company expects a spike in read traffic due to an upcoming sale. A database specialist must create a read replica of the DB instance to serve the anticipated read traffic.

    Which actions should the database specialist take before creating the read replica? (Choose two.)

    • Identify a potential downtime window and stop the application calls to the source DB instance.
    • Ensure that automatic backups are enabled for the source DB instance.
    • Ensure that the source DB instance is a Multi-AZ deployment with Always On Availability Groups.
    • Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).
    • Modify the read replica parameter group setting and set the value to 1.
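
    Read replicas require automated backups on the source instance, which are enabled by setting a nonzero backup retention period. A boto3 sketch with an illustrative identifier:

    ```python
    import boto3

    rds = boto3.client("rds")

    # Any retention period from 1 to 35 days enables automated backups,
    # a prerequisite for creating read replicas.
    rds.modify_db_instance(
        DBInstanceIdentifier="sqlserver-prod",
        BackupRetentionPeriod=7,
        ApplyImmediately=True,
    )
    ```
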
  16. A company is running a two-tier ecommerce application in one AWS account. The application is backed by an Amazon RDS for MySQL Multi-AZ DB instance. A developer mistakenly deleted the DB instance in the production environment. The company restores the database, but this event results in hours of downtime and lost revenue.

    Which combination of changes would minimize the risk of this mistake occurring in the future? (Choose three.)

    • Grant least privilege to groups, IAM users, and roles.
    • Allow all users to restore a database from a backup.
    • Enable deletion protection on existing production DB instances.
    • Use an ACL policy to restrict users from DB instance deletion.
    • Enable AWS CloudTrail logging and Enhanced Monitoring.
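
    A sketch of enabling deletion protection on an existing instance via boto3 (the identifier is illustrative); with the flag set, DeleteDBInstance calls fail until it is explicitly turned off:

    ```python
    import boto3

    rds = boto3.client("rds")

    # Block accidental deletion of the production instance.
    rds.modify_db_instance(
        DBInstanceIdentifier="ecommerce-prod",
        DeletionProtection=True,
    )
    ```
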
  17. A financial services company uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The company is required to encrypt its data at rest at all times. The key required to decrypt the data has to be highly available, and access to the key must be limited. As a regulatory requirement, the company must have the ability to rotate the encryption key on demand. The company must be able to make the key unusable if a potential security breach is detected. The company also needs to accomplish these tasks with minimum overhead.

    What should the database administrator use to set up the encryption to meet these requirements?

    • AWS CloudHSM
    • AWS Key Management Service (AWS KMS) with an AWS managed key
    • AWS Key Management Service (AWS KMS) with server-side encryption
    • AWS Key Management Service (AWS KMS) CMK with customer-provided material
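
    A sketch of the imported-key-material pattern in boto3: the company supplies its own material, rotates on demand by re-importing new material, and can delete the material at once to make the key unusable. The calls illustrate the flow only; the offline encryption of the material and the ImportKeyMaterial upload are omitted:

    ```python
    import boto3

    kms = boto3.client("kms")

    # Create a CMK whose key material will be supplied by the company.
    key_id = kms.create_key(
        Origin="EXTERNAL", Description="RDS Oracle TDE key"
    )["KeyMetadata"]["KeyId"]

    # GetParametersForImport returns a public wrapping key and an import
    # token; the customer encrypts its material with the wrapping key and
    # uploads it with ImportKeyMaterial (not shown here).
    params = kms.get_parameters_for_import(
        KeyId=key_id,
        WrappingAlgorithm="RSAES_OAEP_SHA_256",
        WrappingKeySpec="RSA_2048",
    )

    # On a suspected breach, deleting the imported material makes the key
    # unusable immediately; re-importing new material rotates it on demand.
    kms.delete_imported_key_material(KeyId=key_id)
    ```
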