DVA-C01 : AWS Certified Developer Associate : Part 03
-
The Lambda function below is being called through an API using Amazon API Gateway. The average execution time for the Lambda function is about 1 second. The pseudocode for the Lambda function is as shown in the exhibit.
What two actions can be taken to improve the performance of this Lambda function without increasing the cost of the solution? (Choose two.)
- Package only the modules the Lambda function requires
- Use Amazon DynamoDB instead of Amazon RDS
- Move the initialization of the variable Amazon RDS connection outside of the handler function
- Implement custom database connection pooling with the Lambda function
- Implement local caching of Amazon RDS data so Lambda can re-use the cache
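For example, a minimal Python sketch (the actual connection call is a placeholder) of initializing the database connection outside the handler so that warm invocations reuse it instead of reconnecting on every request:

```python
# Hypothetical sketch: create the database connection once per execution
# environment, outside the handler, so warm invocations reuse it.

_connection = None  # lives for the lifetime of the Lambda container


def get_connection():
    """Lazily create the connection on cold start, then reuse it."""
    global _connection
    if _connection is None:
        # Placeholder for a real call such as pymysql.connect(host=..., user=...)
        _connection = {"connected": True}
    return _connection


def handler(event, context):
    conn = get_connection()  # no reconnect cost on warm invocations
    # ... query Amazon RDS using `conn` ...
    return {"statusCode": 200}
```

Because the execution environment persists between invocations, the expensive connection setup is paid only on cold starts.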
-
An application will ingest data at a very high throughput from many sources and must store the data in an Amazon S3 bucket. Which service would BEST accomplish this task?
- Amazon Kinesis Firehose
- Amazon S3 Transfer Acceleration
- Amazon SQS
- Amazon SNS
-
A Developer has set up an Amazon Kinesis Stream with 4 shards to ingest a maximum of 2500 records per second. A Lambda function has been configured to process these records.
In which order will these records be processed?
- Lambda will receive each record in the reverse order it was placed into the stream following a LIFO (last-in, first-out) method
- Lambda will receive each record in the exact order it was placed into the stream following a FIFO (first-in, first-out) method.
- Lambda will receive each record in the exact order it was placed into the shard following a FIFO (first-in, first-out) method. There is no guarantee of order across shards.
- The Developer can select FIFO (first-in, first-out), LIFO (last-in, first-out), random, or request a specific record using the getRecords API.
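Kinesis routes each record to a shard based on the MD5 hash of its partition key, and ordering is guaranteed only within a shard. A simplified, hypothetical sketch of that routing (real Kinesis maps the 128-bit hash into each shard's hash-key range rather than using a modulo):

```python
import hashlib

NUM_SHARDS = 4


def shard_for(partition_key: str) -> int:
    """Simplified stand-in for Kinesis shard routing: hash the partition key.

    All records sharing a partition key land on the same shard, so their
    relative order is preserved there; there is no ordering across shards.
    """
    digest = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return digest % NUM_SHARDS
```

To preserve ordering for a given entity (e.g. one user's events), producers should use a stable partition key for that entity.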
-
A static website is hosted in an Amazon S3 bucket. Several HTML pages on the site use JavaScript to download images from another Amazon S3 bucket. These images are not displayed when users browse the site.
What is the possible cause for the issue?
- The referenced Amazon S3 bucket is in another region.
- The images must be stored in the same Amazon S3 bucket.
- Port 80 must be opened on the security group in which the Amazon S3 bucket is located.
- Cross-Origin Resource Sharing (CORS) must be enabled on the Amazon S3 bucket.
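Enabling CORS means attaching a CORS configuration to the bucket that serves the images. A minimal sketch (the allowed origin below is a placeholder for the website's actual URL):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["http://example-website-bucket.s3-website-us-east-1.amazonaws.com"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```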
-
Amazon S3 has the following structure: s3://BUCKET/FOLDERNAME/FILENAME.zip
Which S3 best practice would optimize performance with thousands of PUT requests each second to a single bucket?
- Prefix folder names with user id; for example, s3://BUCKET/2013-FOLDERNAME/FILENAME.zip
- Prefix file names with timestamps; for example, s3://BUCKET/FOLDERNAME/2013-26-05-15-00-00-FILENAME.zip
- Prefix file names with random hex hashes; for example, s3://BUCKET/FOLDERNAME/23a6-FILENAME.zip
- Prefix folder names with random hex hashes; for example, s3://BUCKET/23a6-FOLDERNAME/FILENAME.zip
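Under the (2013-era) guidance this question is based on, randomizing the start of the key name spreads writes across S3's index partitions. A hypothetical sketch that derives a short hex prefix from the filename for determinism:

```python
import hashlib


def randomized_key(folder: str, filename: str) -> str:
    """Prepend a short hex hash to the filename so that hot write
    workloads spread across S3 key-name (index) partitions."""
    prefix = hashlib.md5(filename.encode()).hexdigest()[:4]
    return f"{folder}/{prefix}-{filename}"
```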
-
For a deployment using AWS CodeDeploy, what is the run order of the hooks for in-place deployments?
- Before Install -> Application Stop -> Application Start -> After Install
- Application Stop -> Before Install -> After Install -> Application Start
- Before Install -> Application Stop -> Validate Service -> Application Start
- Application Stop -> Before Install -> Validate Service -> Application Start
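For an in-place deployment, the main lifecycle events run in the order ApplicationStop → (DownloadBundle) → BeforeInstall → (Install) → AfterInstall → ApplicationStart → ValidateService. The hooks are declared in the appspec.yml file; a hedged sketch (script paths are placeholders):

```yaml
version: 0.0
os: linux
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh      # runs first
  BeforeInstall:
    - location: scripts/before_install.sh
  AfterInstall:
    - location: scripts/after_install.sh
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/validate.sh         # runs last
```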
-
A Developer is developing an application that manages financial transactions. To improve security, multi-factor authentication (MFA) will be required as part of the login process.
What services can the Developer use to meet these requirements?
- Amazon DynamoDB to store MFA session data, and Amazon SNS to send MFA codes
- Amazon Cognito with MFA
- AWS Directory Service
- AWS IAM with MFA enabled
-
A game stores user game data in an Amazon DynamoDB table. Individual users should not have access to other users’ game data. How can this be accomplished?
- Encrypt the game data with individual user keys.
- Restrict access to specific items based on certain primary key values.
- Stage data in SQS queues to inject metadata before accessing DynamoDB.
- Read records from DynamoDB and discard irrelevant data client-side.
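DynamoDB fine-grained access control can restrict each user to items whose partition key matches their own identity, using the `dynamodb:LeadingKeys` condition key. A hedged sketch (the table ARN and the Login with Amazon policy variable are placeholder assumptions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameData",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
        }
      }
    }
  ]
}
```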
-
A company developed a set of APIs that are being served through the Amazon API Gateway. The API calls need to be authenticated based on OpenID identity providers such as Amazon or Facebook. The APIs should allow access based on a custom authorization model.
Which is the simplest and MOST secure design to use to build an authentication and authorization model for the APIs?
- Use Amazon Cognito user pools and a custom authorizer to authenticate and authorize users based on JSON Web Tokens.
- Build an OpenID token broker with Amazon and Facebook. Users will authenticate with these identity providers and pass the JSON Web Token to the API to authenticate each API call.
- Store user credentials in Amazon DynamoDB and have the application retrieve temporary credentials from AWS STS. Make API calls by passing user credentials to the APIs for authentication and authorization.
- Use Amazon RDS to store user credentials and pass them to the APIs for authentication and authorization.
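After validating a JSON Web Token, a custom (Lambda) authorizer returns an IAM policy document that tells API Gateway whether to allow the call. A hypothetical sketch of building that response:

```python
# Hypothetical sketch of the response a custom (Lambda) authorizer
# returns to API Gateway after validating a JSON Web Token.


def build_auth_response(principal_id: str, effect: str, method_arn: str) -> dict:
    """Build the authorizer output: a principal plus an IAM policy
    allowing or denying execute-api:Invoke on the requested method."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,  # "Allow" or "Deny"
                    "Resource": method_arn,
                }
            ],
        },
    }
```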
-
A supplier is writing a new RESTful API for customers to query the status of orders. The customers requested the following API endpoint.
http://www.supplierdomain.com/status/customerID
Which of the following application designs meet the requirements? (Choose two.)
- Amazon SQS; Amazon SNS
- Elastic Load Balancing; Amazon EC2
- Amazon ElastiCache; Amazon Elasticsearch Service
- Amazon API Gateway; AWS Lambda
- Amazon S3; Amazon CloudFront
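With API Gateway and Lambda, the requested endpoint maps naturally to a resource such as /status/{customerID} with Lambda proxy integration. A hypothetical handler sketch (the path-parameter name and the hardcoded status are assumptions):

```python
import json


# Hypothetical Lambda handler behind an API Gateway resource defined as
# /status/{customerID} with Lambda proxy integration.
def handler(event, context):
    customer_id = event["pathParameters"]["customerID"]
    # A real implementation would look the order status up in a data store.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"customerID": customer_id, "status": "SHIPPED"}),
    }
```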
-
A development team consists of 10 team members. Similar to a home directory for each team member, the manager wants to grant access to user-specific folders in an Amazon S3 bucket. For the team member with the username “TeamMemberX”, the snippet of the IAM policy looks like this:
Instead of creating distinct policies for each team member, what approach can be used to make this policy snippet generic for all team members?
- Use IAM policy condition
- Use IAM policy principal
- Use IAM policy variables
- Use IAM policy resource
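With IAM policy variables, a single policy can reference the requesting user's name via `${aws:username}`, so the same statement grants each team member access only to their own folder. A sketch (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::BUCKET/home/${aws:username}/*"
    }
  ]
}
```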
-
A legacy service has an XML-based SOAP interface. The Developer wants to expose the functionality of the service to external clients with the Amazon API Gateway. Which technique will accomplish this?
- Create a RESTful API with the API Gateway; transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates.
- Create a RESTful API with the API Gateway; pass the incoming JSON to the SOAP interface through an Application Load Balancer.
- Create a SOAP API with the API Gateway; pass the incoming XML to the SOAP interface through an Application Load Balancer.
- Create a SOAP API with the API Gateway; transform the incoming XML into a valid message for the SOAP interface using mapping templates.
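A request mapping template can transform the incoming JSON body into the XML envelope the SOAP back end expects. A hypothetical VTL sketch (element names and the JSON field are assumptions, not part of the question):

```xml
<!-- Hypothetical request mapping template (VTL): wraps an incoming
     JSON field in the SOAP envelope the legacy service expects. -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <GetOrderStatus>
      <OrderId>$input.path('$.orderId')</OrderId>
    </GetOrderStatus>
  </soapenv:Body>
</soapenv:Envelope>
```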
-
A company is using AWS CodeBuild to compile a website from source code stored in AWS CodeCommit. A recent change to the source code has resulted in the CodeBuild project being unable to successfully compile the website.
How should the Developer identify the cause of the failures?
- Modify the buildspec.yml file to include steps to send the output of build commands to Amazon CloudWatch.
- Use a custom Docker image that includes the AWS X-Ray agent in the AWS CodeBuild project configuration.
- Check the build logs of the failed phase in the last build attempt in the AWS CodeBuild project build history.
- Manually re-run the build process on a local machine so that the output can be visualized.
-
A web application is using Amazon Kinesis Streams for clickstream data that may not be consumed for up to 12 hours.
How can the Developer implement encryption at rest for data within the Kinesis Streams?
- Enable SSL connections to Kinesis
- Use Amazon Kinesis Consumer Library
- Encrypt the data once it is at rest with a Lambda function
- Enable server-side encryption in Kinesis Streams
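Server-side encryption can be enabled on an existing stream with an AWS KMS key. A hedged CLI sketch (the stream name is a placeholder; the AWS managed key is one possible choice):

```shell
aws kinesis start-stream-encryption \
    --stream-name clickstream \
    --encryption-type KMS \
    --key-id alias/aws/kinesis
```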
-
A Developer wants to use AWS X-Ray to trace a user request end-to-end through the software stack. The Developer made the necessary changes in the application, tested it, and found that the application is able to send the traces to AWS X-Ray. However, when the application is deployed to an EC2 instance, the traces are not available.
Which of the following could create this situation? (Choose two.)
- The traces are reaching X-Ray, but the Developer does not have access to view the records.
- The X-Ray daemon is not installed on the EC2 instance.
- The X-Ray endpoint specified in the application configuration is incorrect.
- The instance role does not have “xray:BatchGetTraces” and “xray:GetTraceGraph” permissions.
- The instance role does not have “xray:PutTraceSegments” and “xray:PutTelemetryRecords” permissions.
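On EC2, the X-Ray daemon relays segments on behalf of the application, and the instance role needs the two write actions. A minimal identity policy granting them:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords"
      ],
      "Resource": "*"
    }
  ]
}
```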
-
A Developer executed an AWS CLI command and received the error shown below:
What action should the Developer perform to make this error human-readable?
- Make a call to AWS KMS to decode the message.
- Use the AWS STS decode-authorization-message API to decode the message.
- Use an open source decoding library to decode the message.
- Use the AWS IAM decode-authorization-message API to decode this message.
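For example (the encoded authorization failure message is a placeholder value):

```shell
aws sts decode-authorization-message \
    --encoded-message <encoded-message-value>
```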
-
A company is using Amazon API Gateway to manage access to a set of microservices implemented as AWS Lambda functions. Following a bug report, the company makes a minor breaking change to one of the APIs.
In order to avoid impacting existing clients when the new API is deployed, the company wants to allow clients six months to migrate from v1 to v2.
Which approach should the Developer use to handle this change?
- Update the underlying Lambda function and provide clients with the new Lambda invocation URL.
- Use API Gateway to automatically propagate the change to clients, specifying 180 days in the phased deployment parameter.
- Use API Gateway to deploy a new stage named v2 to the API and provide users with its URL.
- Update the underlying Lambda function, create an Amazon CloudFront distribution with the updated Lambda function as its origin.
-
A company has written a Java AWS Lambda function to be triggered whenever a user uploads an image to an Amazon S3 bucket. The function converts the original image to several different formats and then copies the resulting images to another Amazon S3 bucket.
The Developers find that no images are being copied to the second Amazon S3 bucket. They have tested the code on an Amazon EC2 instance with 1 GB of RAM, and it takes an average of 500 seconds to complete.
What is the MOST likely cause of the problem?
- The Lambda function has insufficient memory and needs to be increased to 1 GB to match the Amazon EC2 instance
- Files need to be copied to the same Amazon S3 bucket for processing, so the second bucket needs to be deleted.
- Lambda functions have a maximum execution limit of 300 seconds, therefore the function is not completing.
- There is a problem with the Java runtime for Lambda, and the function needs to be converted to node.js.
-
An application stops working with the following error: The specified bucket does not exist. Where is the BEST place to start the root cause analysis?
- Check the Elastic Load Balancer logs for DeleteBucket requests.
- Check the application logs in Amazon CloudWatch Logs for Amazon S3 DeleteBucket errors.
- Check AWS X-Ray for Amazon S3 DeleteBucket alarms.
- Check AWS CloudTrail for a DeleteBucket event.
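CloudTrail records the management-plane call that deleted the bucket. A hedged CLI sketch filtering events by name:

```shell
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=DeleteBucket
```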
-
An organization must store thousands of sensitive audio and video files in an Amazon S3 bucket. Organizational security policies require that all data written to this bucket be encrypted.
How can compliance with this policy be ensured?
- Use AWS Lambda to send notifications to the security team if unencrypted objects are put in the bucket.
- Configure an Amazon S3 bucket policy to prevent the upload of objects that do not contain the x-amz-server-side-encryption header.
- Create an Amazon CloudWatch event rule to verify that all objects stored in the Amazon S3 bucket are encrypted.
- Configure an Amazon S3 bucket policy to prevent the upload of objects that contain the x-amz-server-side-encryption header.
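A hedged sketch of such a bucket policy (the bucket name is a placeholder): the Deny statement matches any PutObject request in which the x-amz-server-side-encryption header is absent.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::BUCKET/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
```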