MLS-C01 : AWS Certified Machine Learning – Specialty : Part 07
-
A data scientist uses an Amazon SageMaker notebook instance to conduct data exploration and analysis. This requires certain Python packages that are not natively available on Amazon SageMaker to be installed on the notebook instance.
How can a machine learning specialist ensure that required packages are automatically available on the notebook instance for the data scientist to use?
- Install AWS Systems Manager Agent on the underlying Amazon EC2 instance and use Systems Manager Automation to execute the package installation commands.
- Create a Jupyter notebook file (.ipynb) with cells containing the package installation commands to execute and place the file under the /etc/init directory of each Amazon SageMaker notebook instance.
- Use the conda package manager from within the Jupyter notebook console to apply the necessary conda packages to the default kernel of the notebook.
- Create an Amazon SageMaker lifecycle configuration with package installation commands and assign the lifecycle configuration to the notebook instance.
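For reference, the lifecycle-configuration approach can be wired up with a few boto3 calls: the on-start script runs each time the instance boots, so the packages are reinstalled automatically. A minimal sketch, assuming a stopped notebook instance named ds-notebook; the configuration name and package list are placeholders.

```python
import base64

import boto3

sm = boto3.client("sagemaker")

# Shell commands that run every time the notebook instance starts.
# The package list is a placeholder; the script targets the default
# python3 conda environment used by the notebook kernels.
on_start = """#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
source /home/ec2-user/anaconda3/bin/activate python3
pip install --upgrade scikit-optimize imbalanced-learn
source /home/ec2-user/anaconda3/bin/deactivate
EOF
"""

sm.create_notebook_instance_lifecycle_config(
    NotebookInstanceLifecycleConfigName="install-extra-packages",
    OnStart=[{"Content": base64.b64encode(on_start.encode()).decode()}],
)

# Attach the configuration (the instance must be stopped to update it).
sm.update_notebook_instance(
    NotebookInstanceName="ds-notebook",
    LifecycleConfigName="install-extra-packages",
)
```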
-
A data scientist needs to identify fraudulent user accounts for a company’s ecommerce platform. The company wants the ability to determine if a newly created account is associated with a previously known fraudulent user. The data scientist is using AWS Glue to cleanse the company’s application logs during ingestion.
Which strategy will allow the data scientist to identify fraudulent accounts?
- Execute the built-in FindDuplicates Amazon Athena query.
- Create a FindMatches machine learning transform in AWS Glue.
- Create an AWS Glue crawler to infer duplicate accounts in the source data.
- Search for duplicate accounts in the AWS Glue Data Catalog.
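For context, a FindMatches transform is created against a table already registered in the AWS Glue Data Catalog and is then trained with labeled match/no-match examples. A minimal boto3 sketch; the database, table, role, and key column are placeholders.

```python
import boto3

glue = boto3.client("glue")

# Create a FindMatches ML transform over the cataloged accounts table.
glue.create_ml_transform(
    Name="fraud-account-matcher",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",  # placeholder
    GlueVersion="2.0",
    MaxCapacity=10.0,
    InputRecordTables=[
        {"DatabaseName": "ecommerce_logs", "TableName": "user_accounts"}
    ],
    Parameters={
        "TransformType": "FIND_MATCHES",
        "FindMatchesParameters": {
            "PrimaryKeyColumnName": "account_id",
            # Values closer to 0.0 favor recall, so fewer fraudulent
            # duplicates slip through at the cost of more false matches.
            "PrecisionRecallTradeoff": 0.3,
        },
    },
)
```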
-
A Data Scientist is developing a machine learning model to classify whether a financial transaction is fraudulent. The labeled data available for training consists of 100,000 non-fraudulent observations and 1,000 fraudulent observations.
The Data Scientist applies the XGBoost algorithm to the data and evaluates the trained model on a previously unseen validation dataset. The resulting confusion matrix shows an overall accuracy of 99.1%, but the Data Scientist needs to reduce the number of false negatives.
Which combination of steps should the Data Scientist take to reduce the number of false negative predictions by the model? (Choose two.)
- Change the XGBoost eval_metric parameter to optimize based on Root Mean Square Error (RMSE).
- Increase the XGBoost scale_pos_weight parameter to adjust the balance of positive and negative weights.
- Increase the XGBoost max_depth parameter because the model is currently underfitting the data.
- Change the XGBoost eval_metric parameter to optimize based on Area Under the ROC Curve (AUC).
- Decrease the XGBoost max_depth parameter because the model is currently overfitting the data.
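To illustrate the class-weighting and evaluation-metric options: scale_pos_weight is conventionally set to the ratio of negative to positive examples (here 100,000 / 1,000 = 100), and eval_metric="auc" makes the validation score insensitive to the 99:1 class imbalance that inflates accuracy. A runnable sketch on synthetic data standing in for the real transactions:

```python
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in for the imbalanced fraud data (~1% positives).
X, y = make_classification(n_samples=101_000, weights=[0.99], random_state=0)

# Conventional heuristic: scale_pos_weight = (# negatives) / (# positives).
spw = (y == 0).sum() / (y == 1).sum()

clf = xgb.XGBClassifier(
    scale_pos_weight=spw,  # up-weight the rare fraud class
    eval_metric="auc",     # rank-based metric, robust to class imbalance
)
clf.fit(X, y)
```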
-
A data scientist has developed a machine learning translation model for English to Japanese by using Amazon SageMaker’s built-in seq2seq algorithm with 500,000 aligned sentence pairs. While testing with sample sentences, the data scientist finds that the translation quality is reasonable for an example as short as five words. However, the quality becomes unacceptable if the sentence is 100 words long.
Which action will resolve the problem?
- Change preprocessing to use n-grams.
- Add more nodes to the recurrent neural network (RNN) than the largest sentence’s word count.
- Adjust hyperparameters related to the attention mechanism.
- Choose a different weight initialization type.
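The built-in seq2seq algorithm exposes attention-related hyperparameters (for example, rnn_attention_type) alongside the maximum source and target sequence lengths, which matter here because quality degrades only on long sentences. A sketch of setting them through the SageMaker Python SDK; the role, output path, and values are placeholders, and the exact hyperparameter names should be checked against the seq2seq documentation.

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
container = image_uris.retrieve("seq2seq", session.boto_region_name)

est = Estimator(
    image_uri=container,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://my-bucket/seq2seq-output/",  # placeholder
    sagemaker_session=session,
)

# Attention mechanism and sequence-length hyperparameters
# (values are illustrative, not tuned).
est.set_hyperparameters(
    rnn_attention_type="mlp",
    max_seq_len_source=120,
    max_seq_len_target=120,
)
```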
-
A financial company is trying to detect credit card fraud. The company observed that, on average, 2% of credit card transactions were fraudulent. A data scientist trained a classifier on a year’s worth of credit card transactions data. The model needs to identify the fraudulent transactions (positives) from the regular ones (negatives). The company’s goal is to accurately capture as many positives as possible.
Which metrics should the data scientist use to optimize the model? (Choose two.)
- Specificity
- False positive rate
- Accuracy
- Area under the precision-recall curve
- True positive rate
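As a quick illustration of the recall-oriented metrics: the true positive rate is recall, and the area under the precision-recall curve summarizes the precision/recall trade-off on a dataset where only about 2% of labels are positive. A sketch on synthetic data; the classifier is a stand-in for the company's model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: roughly 2% of transactions are fraudulent.
X, y = make_classification(n_samples=50_000, weights=[0.98], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# True positive rate = recall; AP approximates area under the PR curve.
print("True positive rate (recall):", recall_score(y_te, scores > 0.5))
print("Area under the PR curve:   ", average_precision_score(y_te, scores))
```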
-
A machine learning specialist is developing a proof of concept for government users whose primary concern is security. The specialist is using Amazon SageMaker to train a convolutional neural network (CNN) model for a photo classifier application. The specialist wants to protect the data so that it cannot be accessed and transferred to a remote host by malicious code accidentally installed on the training container.
Which action will provide the MOST secure protection?
- Remove Amazon S3 access permissions from the SageMaker execution role.
- Encrypt the weights of the CNN model.
- Encrypt the training and validation dataset.
- Enable network isolation for training jobs.
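For reference, network isolation is a single flag on a training job: the container loses all outbound network access, while SageMaker itself still stages the S3 input and output outside the container. A sketch with placeholder image and role.

```python
from sagemaker.estimator import Estimator

est = Estimator(
    image_uri="<training-image-uri>",                     # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    # The training container gets no outbound network access, so malicious
    # code inside it cannot exfiltrate data to a remote host.
    enable_network_isolation=True,
)
```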
-
A medical imaging company wants to train a computer vision model to detect areas of concern on patients’ CT scans. The company has a large collection of unlabeled CT scans that are linked to each patient and stored in an Amazon S3 bucket. The scans must be accessible to authorized users only. A machine learning engineer needs to build a labeling pipeline.
Which set of steps should the engineer take to build the labeling pipeline with the LEAST effort?
- Create a workforce with AWS Identity and Access Management (IAM). Build a labeling tool on Amazon EC2. Queue images for labeling by using Amazon Simple Queue Service (Amazon SQS). Write the labeling instructions.
- Create an Amazon Mechanical Turk workforce and manifest file. Create a labeling job by using the built-in image classification task type in Amazon SageMaker Ground Truth. Write the labeling instructions.
- Create a private workforce and manifest file. Create a labeling job by using the built-in bounding box task type in Amazon SageMaker Ground Truth. Write the labeling instructions.
- Create a workforce with Amazon Cognito. Build a labeling web application with AWS Amplify. Build a labeling workflow backend using AWS Lambda. Write the labeling instructions.
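For context, a Ground Truth bounding box job over a private workteam is a single create_labeling_job call once the manifest, UI template, label categories, and workteam exist. A heavily abbreviated boto3 sketch; all ARNs and bucket names are placeholders, and the built-in pre-labeling and consolidation Lambda ARNs are region-specific values taken from the Ground Truth documentation.

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_labeling_job(
    LabelingJobName="ct-scan-areas-of-concern",
    LabelAttributeName="areas-of-concern",
    RoleArn="arn:aws:iam::123456789012:role/GroundTruthRole",  # placeholder
    LabelCategoryConfigS3Uri="s3://ct-scans/label-categories.json",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": "s3://ct-scans/manifest.json"}
        }
    },
    OutputConfig={"S3OutputPath": "s3://ct-scans/labels/"},
    HumanTaskConfig={
        # Private workteam keeps the scans restricted to authorized users.
        "WorkteamArn": "arn:aws:sagemaker:<region>:<acct>:workteam/private-crowd/radiologists",
        "UiConfig": {"UiTemplateS3Uri": "s3://ct-scans/bounding-box-template.html"},
        "PreHumanTaskLambdaArn": "<region-specific bounding box pre-labeling Lambda>",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "<region-specific consolidation Lambda>"
        },
        "TaskTitle": "Label areas of concern",
        "TaskDescription": "Draw a bounding box around each area of concern in the CT scan.",
        "NumberOfHumanWorkersPerDataObject": 1,
        "TaskTimeLimitInSeconds": 600,
    },
)
```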
-
A company is using Amazon Textract to extract textual data from thousands of scanned text-heavy legal documents daily. The company uses this information to process loan applications automatically. Some of the documents fail business validation and are returned to human reviewers, who investigate the errors. This activity increases the time to process the loan applications.
What should the company do to reduce the processing time of loan applications?
- Configure Amazon Textract to route low-confidence predictions to Amazon SageMaker Ground Truth. Perform a manual review on those words before performing a business validation.
- Use an Amazon Textract synchronous operation instead of an asynchronous operation.
- Configure Amazon Textract to route low-confidence predictions to Amazon Augmented AI (Amazon A2I). Perform a manual review on those words before performing a business validation.
- Use Amazon Rekognition’s text detection feature to extract the data from the scanned images. Use this information to process the loan applications.
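To illustrate the Amazon A2I integration: the synchronous AnalyzeDocument call accepts a HumanLoopConfig, and the referenced flow definition (created in Amazon A2I) carries the confidence conditions that route low-confidence results to human reviewers. The bucket, document name, and flow-definition ARN below are placeholders.

```python
import boto3

textract = boto3.client("textract")

# Document analysis with an Amazon A2I human loop attached; Textract
# starts the loop when the flow definition's conditions are met.
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "loan-docs", "Name": "application-001.png"}},
    FeatureTypes=["FORMS"],
    HumanLoopConfig={
        "HumanLoopName": "loan-doc-review-001",
        "FlowDefinitionArn": "arn:aws:sagemaker:<region>:<acct>:flow-definition/loan-review",
        "DataAttributes": {
            "ContentClassifiers": ["FreeOfPersonallyIdentifiableInformation"]
        },
    },
)
```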
-
A company ingests machine learning (ML) data from web advertising clicks into an Amazon S3 data lake. Click data is added to an Amazon Kinesis data stream by using the Kinesis Producer Library (KPL). The data is loaded into the S3 data lake from the data stream by using an Amazon Kinesis Data Firehose delivery stream. As the data volume increases, an ML specialist notices that the rate of data ingested into Amazon S3 is relatively constant. There is also an increasing backlog of data for Kinesis Data Streams and Kinesis Data Firehose to ingest.
Which next step is MOST likely to improve the data ingestion rate into Amazon S3?
- Increase the number of S3 prefixes for the delivery stream to write to.
- Decrease the retention period for the data stream.
- Increase the number of shards for the data stream.
- Add more consumers using the Kinesis Client Library (KCL).
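For reference, adding shards is a single UpdateShardCount call; each shard supports roughly 1 MB/s or 1,000 records/s of writes, which is the capacity the growing backlog points to. The stream name and counts below are placeholders.

```python
import boto3

kinesis = boto3.client("kinesis")

# Raising the shard count increases the stream's aggregate write
# throughput, letting ingestion keep pace with the click volume.
kinesis.update_shard_count(
    StreamName="ad-clicks",   # placeholder
    TargetShardCount=8,       # e.g. doubling from 4 shards
    ScalingType="UNIFORM_SCALING",
)
```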
-
A data scientist must build a custom recommendation model in Amazon SageMaker for an online retail company. Due to the nature of the company’s products, customers buy only 4-5 products every 5-10 years, so the company relies on a steady stream of new customers. When a new customer signs up, the company collects data on the customer’s preferences; a sample of this interaction data is available to the data scientist.
How should the data scientist split the dataset into a training and test set for this use case?
- Shuffle all interaction data. Split off the last 10% of the interaction data for the test set.
- Identify the most recent 10% of interactions for each user. Split off these interactions for the test set.
- Identify the 10% of users with the least interaction data. Split off all interaction data from these users for the test set.
- Randomly select 10% of the users. Split off all interaction data from these users for the test set.
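To make the user-level split concrete: holding out all interactions from a random 10% of users means every test user is completely unseen at training time, matching the steady stream of brand-new customers the model must serve. A small pandas sketch on hypothetical interaction data:

```python
import numpy as np
import pandas as pd

# Hypothetical interaction log: one row per (user, product) interaction.
df = pd.DataFrame({
    "user_id": np.random.randint(0, 1_000, size=10_000),
    "product_id": np.random.randint(0, 50, size=10_000),
})

# Hold out ALL interactions from a random 10% of users, so the test set
# mimics new customers the model has never seen.
rng = np.random.default_rng(0)
users = df["user_id"].unique()
test_users = rng.choice(users, size=int(0.1 * len(users)), replace=False)

test = df[df["user_id"].isin(test_users)]
train = df[~df["user_id"].isin(test_users)]
```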
-
A financial services company wants to adopt Amazon SageMaker as its default data science environment. The company’s data scientists run machine learning (ML) models on confidential financial data. The company is worried about data egress and wants an ML engineer to secure the environment.
Which mechanisms can the ML engineer use to control data egress from SageMaker? (Choose three.)
- Connect to SageMaker by using a VPC interface endpoint powered by AWS PrivateLink.
- Use SCPs to restrict access to SageMaker.
- Disable root access on the SageMaker notebook instances.
- Enable network isolation for training jobs and models.
- Restrict notebook presigned URLs to specific IPs used by the company.
- Protect data with encryption at rest and in transit. Use AWS Key Management Service (AWS KMS) to manage encryption keys.
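As one concrete example of these controls, a VPC interface endpoint keeps SageMaker API traffic on the AWS network instead of the public internet. A boto3 sketch; the VPC, subnet, and the region embedded in the service name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint (AWS PrivateLink) for the SageMaker API, so calls
# from the VPC never traverse the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",           # placeholder
    ServiceName="com.amazonaws.us-east-1.sagemaker.api",
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
    PrivateDnsEnabled=True,
)
```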