DP-100 : Designing and Implementing a Data Science Solution on Azure : Part 06

  1. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You create an Azure Machine Learning service datastore in a workspace. The datastore contains the following files:

    – /data/2018/Q1.csv
    – /data/2018/Q2.csv
    – /data/2018/Q3.csv
    – /data/2018/Q4.csv
    – /data/2019/Q1.csv

    All files store data in the following format:

    id,f1,f2,I
    1,1,2,0
    2,1,1,1
    3,2,1,0
    4,2,2,1

    You run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q01 092

    You need to create a dataset named training_data and load the data from all files into a single data frame by using the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q01 093

    Solution: Run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q01 094

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    Use two file paths.
    Use Dataset.Tabular.from_delimited_files instead of Dataset.File.from_files, because the data is not yet cleansed.

    Note:
    A FileDataset references single or multiple files in your datastores or public URLs. If your data is already cleansed, and ready to use in training experiments, you can download or mount the files to your compute as a FileDataset object.

    A TabularDataset represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a TabularDataset object from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
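
    For illustration, a minimal sketch of creating the dataset and materializing all five files into a single pandas DataFrame; the default-datastore assumption and variable names are mine, not part of the question:

    from azureml.core import Workspace, Dataset

    ws = Workspace.from_config()
    datastore = ws.get_default_datastore()  # assumes the files live in the default datastore

    # Two path patterns cover the 2018 and 2019 folders
    training_data = Dataset.Tabular.from_delimited_files(path=[
        (datastore, 'data/2018/*.csv'),
        (datastore, 'data/2019/*.csv')
    ])

    df = training_data.to_pandas_dataframe()  # one DataFrame with columns id, f1, f2, I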

  2. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You create an Azure Machine Learning service datastore in a workspace. The datastore contains the following files:

    – /data/2018/Q1.csv
    – /data/2018/Q2.csv
    – /data/2018/Q3.csv
    – /data/2018/Q4.csv
    – /data/2019/Q1.csv

    All files store data in the following format:

    id,f1,f2,I
    1,1,2,0
    2,1,1,1
    3,2,1,0
    4,2,2,1

    You run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q02 095

    You need to create a dataset named training_data and load the data from all files into a single data frame by using the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q02 096

    Solution: Run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q02 097

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    Use two file paths.
    Use Dataset.Tabular.from_delimited_files because the data is not yet cleansed.

    Note:
    A TabularDataset represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a TabularDataset object from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.

  3. You plan to use the Hyperdrive feature of Azure Machine Learning to determine the optimal hyperparameter values when training a model.

    You must use Hyperdrive to try combinations of the following hyperparameter values:

    – learning_rate: any value between 0.001 and 0.1
    – batch_size: 16, 32, or 64

    You need to configure the search space for the Hyperdrive experiment.

    Which two parameter expressions should you use? Each correct answer presents part of the solution.

    NOTE: Each correct selection is worth one point.

    • a choice expression for learning_rate
    • a uniform expression for learning_rate
    • a normal expression for batch_size
    • a choice expression for batch_size
    • a uniform expression for batch_size

    Explanation:

    B: Continuous hyperparameters are specified as a distribution over a continuous range of values. Supported distributions include:
    – uniform(low, high) – Returns a value uniformly distributed between low and high

    D: Discrete hyperparameters are specified as a choice among discrete values. choice can be:
    – one or more comma-separated values
    – a range object
    – any arbitrary list object
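
    A minimal sketch of the corresponding search space using the azureml.train.hyperdrive parameter expressions (the script argument names are assumptions):

    from azureml.train.hyperdrive import RandomParameterSampling, choice, uniform

    param_sampling = RandomParameterSampling({
        '--learning_rate': uniform(0.001, 0.1),  # continuous: any value between 0.001 and 0.1
        '--batch_size': choice(16, 32, 64)       # discrete: one of the listed values
    })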

  4. HOTSPOT

    Your Azure Machine Learning workspace has a dataset named real_estate_data. A sample of the data in the dataset follows.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q04 098

    You want to use automated machine learning to find the best regression model for predicting the price column.

    You need to configure an automated machine learning experiment using the Azure Machine Learning SDK.

    How should you complete the code? To answer, select the appropriate options in the answer area.

    NOTE: Each correct selection is worth one point.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q04 099 Question
    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q04 099 Answer

    Explanation:

    Box 1: training_data
    The training data to be used within the experiment. It should contain both training features and a label column (optionally a sample weights column). If training_data is specified, then the label_column_name parameter must also be specified.

    Box 2: validation_data
    Provide validation data: In this case, you can either start with a single data file and split it into training and validation sets or you can provide a separate data file for the validation set. Either way, the validation_data parameter in your AutoMLConfig object assigns which data to use as your validation set.

    For example, the following code explicitly defines which portion of the provided data in dataset to use for training and validation.

    dataset = Dataset.Tabular.from_delimited_files(data)

    training_data, validation_data = dataset.random_split(percentage=0.8, seed=1)

    automl_config = AutoMLConfig(compute_target=aml_remote_compute,
                                 task='classification',
                                 primary_metric='AUC_weighted',
                                 training_data=training_data,
                                 validation_data=validation_data,
                                 label_column_name='Class')

    Box 3: label_column_name
    label_column_name:
    The name of the label column. If the input data is from a pandas.DataFrame which doesn’t have column names, column indices can be used instead, expressed as integers.

    This parameter is applicable to training_data and validation_data parameters.

    Incorrect Answers:
    X: The training features to use when fitting pipelines during an experiment. This setting is being deprecated. Please use training_data and label_column_name instead.

    y: The training labels to use when fitting pipelines during an experiment. This is the value your model will predict. This setting is being deprecated. Please use training_data and label_column_name instead.

    X_valid: Validation features to use when fitting pipelines during an experiment. If specified, then y_valid or sample_weight_valid must also be specified.

    y_valid: Validation labels to use when fitting pipelines during an experiment. Both X_valid and y_valid must be specified together.

    exclude_nan_labels: Whether to exclude rows with NaN values in the label. The default is True.

    y_max: y_max (float)
    Maximum value of y for a regression experiment. The combination of y_min and y_max are used to normalize test set metrics based on the input data range. If not specified, the maximum value is inferred from the data.

  5. HOTSPOT

    You have a multi-class image classification deep learning model that uses a set of labeled photographs. You create the following code to select hyperparameter values when training the model.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q05 100

    For each of the following statements, select Yes if the statement is true. Otherwise, select No.

    NOTE: Each correct selection is worth one point.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q05 101 Question
    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q05 101 Answer

    Explanation:

    Box 1: Yes
    Hyperparameters are adjustable parameters you choose to train a model that govern the training process itself. Azure Machine Learning allows you to automate hyperparameter exploration in an efficient manner, saving significant time and resources. You specify the range of hyperparameter values and a maximum number of training runs. The system then automatically launches multiple simultaneous runs with different parameter configurations and finds the configuration that results in the best performance, measured by the metric you choose. Poorly performing training runs are automatically terminated early, reducing wasted compute resources; those resources are instead used to explore other hyperparameter configurations.

    Box 2: Yes
    uniform(low, high) – Returns a value uniformly distributed between low and high

    Box 3: No
    Bayesian sampling does not currently support any early termination policy.
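
    To illustrate that constraint, a sketch of a HyperDriveConfig that uses Bayesian sampling and therefore leaves the early-termination policy unset; the ScriptRunConfig variable and metric name are assumptions:

    from azureml.train.hyperdrive import (BayesianParameterSampling, HyperDriveConfig,
                                          PrimaryMetricGoal, choice, uniform)

    param_sampling = BayesianParameterSampling({
        '--learning_rate': uniform(0.001, 0.1),
        '--batch_size': choice(16, 32, 64)
    })

    hd_config = HyperDriveConfig(run_config=script_run_config,  # assumed ScriptRunConfig
                                 hyperparameter_sampling=param_sampling,
                                 policy=None,  # no early termination policy with Bayesian sampling
                                 primary_metric_name='accuracy',  # assumed metric
                                 primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                                 max_total_runs=20)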

  6. You run an automated machine learning experiment in an Azure Machine Learning workspace. Information about the run is listed in the table below:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q06 102

    You need to write a script that uses the Azure Machine Learning SDK to retrieve the best iteration of the experiment run.

    Which Python code segment should you use?

    • DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q06 103
    • DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q06 104
    • DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q06 105
    • DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q06 106
    • DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q06 107

    Explanation:

    The get_output method on automl_classifier returns the best run and the fitted model for the last invocation. Overloads on get_output allow you to retrieve the best run and fitted model for any logged metric or for a particular iteration.

    best_run, fitted_model = local_run.get_output()
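
    The overloads look like this; the metric name and iteration number are assumptions:

    # Best run and fitted model for a specific logged metric
    best_run, fitted_model = local_run.get_output(metric='accuracy')

    # Best run and fitted model for a particular iteration
    best_run, fitted_model = local_run.get_output(iteration=3)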

  7. You have a comma-separated values (CSV) file containing data from which you want to train a classification model.

    You are using the Automated Machine Learning interface in Azure Machine Learning studio to train the classification model. You set the task type to Classification.

    You need to ensure that the Automated Machine Learning process evaluates only linear models.

    What should you do?

    • Add all algorithms other than linear ones to the blocked algorithms list.
    • Set the Exit criterion option to a metric score threshold.
    • Clear the option to perform automatic featurization.
    • Clear the option to enable deep learning.
    • Set the task type to Regression.

    Explanation:

    Adding all non-linear algorithms to the blocked algorithms list ensures that automated machine learning evaluates only linear models. Automatic featurization does not control which algorithm families are tried, so clearing it would not restrict the run to linear models.

  8. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:

    from azureml.core import Run
    import pandas as pd
    run = Run.get_context()
    data = pd.read_csv('data.csv')
    label_vals = data['label'].unique()
    # Add code to record metrics here
    run.complete()

    The experiment must record the unique labels in the data as metrics for the run that can be reviewed later.

    You must add code to the script to record the unique label values as run metrics at the point indicated by the comment.

    Solution: Replace the comment with the following code:

    run.upload_file('outputs/labels.csv', './data.csv')

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    label_vals contains the unique labels (from the statement label_vals = data['label'].unique()), and these values must be logged as run metrics. Uploading the data file does not record them as metrics.

    Note:
    Instead, use the run.log method to log each value in label_vals:

    for label_val in label_vals:
        run.log('Label Values', label_val)

  9. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:

    from azureml.core import Run
    import pandas as pd
    run = Run.get_context()
    data = pd.read_csv('data.csv')
    label_vals = data['label'].unique()
    # Add code to record metrics here
    run.complete()

    The experiment must record the unique labels in the data as metrics for the run that can be reviewed later.

    You must add code to the script to record the unique label values as run metrics at the point indicated by the comment.

    Solution: Replace the comment with the following code:

    run.log_table('Label Values', label_vals)

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    run.log_table expects a dictionary that maps column names to lists of values, so passing the label_vals array does not record the unique labels as individual run metrics. Instead, use the run.log method to log each value in label_vals:

    for label_val in label_vals:
        run.log('Label Values', label_val)

  10. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:

    from azureml.core import Run
    import pandas as pd
    run = Run.get_context()
    data = pd.read_csv('data.csv')
    label_vals = data['label'].unique()
    # Add code to record metrics here
    run.complete()

    The experiment must record the unique labels in the data as metrics for the run that can be reviewed later.

    You must add code to the script to record the unique label values as run metrics at the point indicated by the comment.

    Solution: Replace the comment with the following code:

    for label_val in label_vals:
        run.log('Label Values', label_val)

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    The run.log method logs each value in label_vals, which records the unique labels as run metrics:

    for label_val in label_vals:
        run.log('Label Values', label_val)

  11. HOTSPOT

    You publish a batch inferencing pipeline that will be used by a business application.

    The application developers need to know which information should be submitted to and returned by the REST interface for the published pipeline.

    You need to identify the information required in the REST request and returned as a response from the published pipeline.

    Which values should you use in the REST request and to expect in the response? To answer, select the appropriate options in the answer area.

    NOTE: Each correct selection is worth one point.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q11 108 Question
    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q11 108 Answer

    Explanation:

    Box 1: JSON containing an OAuth bearer token
    Specify your authentication header in the request.
    To run the pipeline from the REST endpoint, you need an OAuth2 Bearer-type authentication header.
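
    For reference, one way to obtain such a header with the SDK is interactive authentication (a sketch; other authentication classes also work):

    from azureml.core.authentication import InteractiveLoginAuthentication

    interactive_auth = InteractiveLoginAuthentication()
    auth_header = interactive_auth.get_authentication_header()  # OAuth2 Bearer-type header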

    Box 2: JSON containing the experiment name
    Add a JSON payload object that has the experiment name.

    Example:

    rest_endpoint = published_pipeline.endpoint
    response = requests.post(rest_endpoint,
                             headers=auth_header,
                             json={"ExperimentName": "batch_scoring",
                                   "ParameterAssignments": {"process_count_per_node": 6}})
    run_id = response.json()["Id"]

    Box 3: JSON containing the run ID
    Make the request to trigger the run. Include code to access the Id key from the response dictionary to get the value of the run ID.

  12. HOTSPOT

    You create an experiment in Azure Machine Learning Studio. You add a training dataset that contains 10,000 rows. The first 9,000 rows represent class 0 (90 percent), and the remaining 1,000 rows represent class 1 (10 percent).

    The training set is imbalanced between the two classes. You must increase the number of training examples for class 1 to 4,000 by using 5 data rows. You add the Synthetic Minority Oversampling Technique (SMOTE) module to the experiment.

    You need to configure the module.

    Which values should you use? To answer, select the appropriate options in the dialog box in the answer area.

    NOTE: Each correct selection is worth one point.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q12 109 Question
    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q12 109 Answer

    Explanation:

    Box 1: 300
    Typing 300 (%) makes the module generate 300 percent additional minority cases: 1,000 original rows + 3,000 synthetic rows = 4,000 examples of class 1, as required.

    Box 2: 5
    We should use 5 data rows.
    Use the Number of nearest neighbors option to determine the size of the feature space that the SMOTE algorithm uses when building new cases. A nearest neighbor is a row of data (a case) that is very similar to some target case. The distance between any two cases is measured by combining the weighted vectors of all features.

    By increasing the number of nearest neighbors, you get features from more cases.
    By keeping the number of nearest neighbors low, you use features that are more like those in the original sample.
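
    Outside of Studio, the same configuration can be illustrated with the imbalanced-learn library; this is a stand-in for the Studio SMOTE module, and the placeholder data is an assumption:

    from collections import Counter
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification

    # Placeholder data with roughly the 9,000 / 1,000 class split from the question
    X, y = make_classification(n_samples=10000, weights=[0.9], random_state=0)

    # SMOTE percentage 300 in Studio ~ grow class 1 to 4,000; nearest neighbors = 5
    smote = SMOTE(sampling_strategy={1: 4000}, k_neighbors=5, random_state=0)
    X_res, y_res = smote.fit_resample(X, y)
    print(Counter(y_res))  # class 1 grows to 4,000; class 0 stays at roughly 9,000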

  13. You are solving a classification task.

    You must evaluate your model on a limited data sample by using k-fold cross-validation. You start by configuring a k parameter as the number of splits.

    You need to configure the k parameter for the cross-validation.

    Which value should you use?

    • k=0.5
    • k=0.01
    • k=5
    • k=1

    Explanation:

    Leave One Out (LOO) cross-validation
    Setting K = n (the number of observations) yields n folds and is called leave-one-out cross-validation (LOO), a special case of the K-fold approach.

    LOO CV is sometimes useful but typically doesn't shake up the data enough: the estimates from each fold are highly correlated, so their average can have high variance. This is why the usual choice is K=5 or K=10, which provides a good compromise for the bias-variance tradeoff.
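
    A minimal scikit-learn sketch of 5-fold cross-validation; the model and data are placeholders:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=200, random_state=0)  # placeholder data
    model = LogisticRegression(max_iter=1000)

    # k=5: five splits, each fold serves once as the held-out validation set
    scores = cross_val_score(model, X, y, cv=5)
    print(scores.mean())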

  14. HOTSPOT

    You are running Python code interactively in a Conda environment. The environment includes all required Azure Machine Learning SDK and MLflow packages.

    You must use MLflow to log metrics in an Azure Machine Learning experiment named mlflow-experiment.

    How should you complete the code? To answer, select the appropriate options in the answer area.

    NOTE: Each correct selection is worth one point.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q14 110 Question
    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q14 110 Answer

    Explanation:

    Box 1: mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
    In the following code, the get_mlflow_tracking_uri() method assigns a unique tracking URI address to the workspace, ws, and set_tracking_uri() points the MLflow tracking URI to that address.

    mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())

    Box 2: mlflow.set_experiment(experiment_name)
    Set the MLflow experiment name with set_experiment() and start your training run with start_run().

    Box 3: mlflow.start_run()

    Box 4: mlflow.log_metric
    Then use log_metric() to activate the MLflow logging API and begin logging your training run metrics.
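
    Putting the four boxes together, a sketch of the completed script; the metric name and value are assumptions:

    import mlflow
    from azureml.core import Workspace

    ws = Workspace.from_config()
    mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())  # point MLflow at the workspace
    mlflow.set_experiment('mlflow-experiment')

    with mlflow.start_run():
        mlflow.log_metric('accuracy', 0.87)  # hypothetical metric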

  15. DRAG DROP

    You are creating a machine learning model that can predict the species of a penguin from its measurements. You have a file that contains measurements for three species of penguin in comma-delimited format.

    The model must be optimized for the area under the receiver operating characteristic curve performance metric, averaged for each class.

    You need to use the Automated Machine Learning user interface in Azure Machine Learning studio to run an experiment and find the best performing model.

    Which five actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q15 111 Question
    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q15 111 Answer

    Explanation:

    Step 1: Create and select a new dataset by uploading the comma-delimited file of penguin data.

    Step 2: Select the Classification task type

    Step 3: Set the Primary metric configuration setting to AUC_weighted.
    The available metrics you can select is determined by the task type you choose.
    Primary metrics for classification scenarios:
    Post thresholded metrics, like accuracy, average_precision_score_weighted, norm_macro_recall, and precision_score_weighted may not optimize as well for datasets which are very small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, AUC_weighted can be a better choice for the primary metric.

    Step 4: Configure the automated machine learning run by selecting the experiment name, target column, and compute target

    Step 5: Run the automated machine learning experiment and review the results.

  16. HOTSPOT

    You are tuning a hyperparameter for an algorithm. The following table shows a data set with different hyperparameter values and the corresponding training and validation errors.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q16 112

    Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic.

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q16 113 Question
    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q16 113 Answer

    Explanation:

    Box 1: 4
    Choose the hyperparameter value with low training and validation error where the two errors are also closest to each other; that is, minimize the variance (the gap between validation error and training error).

    Box 2: 5
    Again, minimize the variance (the gap between validation error and training error).

  17. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You create a model to forecast weather conditions based on historical data.

    You need to create a pipeline that runs a processing script to load data from a datastore and pass the processed data to a machine learning model training script.

    Solution: Run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q17 114

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    The two steps are present: process_step and train_step. However, the training data input is not set up correctly.

    Note:
    Data used in pipeline can be produced by one step and consumed in another step by providing a PipelineData object as an output of one step and an input of one or more subsequent steps.

    PipelineData objects are also used when constructing Pipelines to describe step dependencies. To specify that a step requires the output of another step as input, use a PipelineData object in the constructor of both steps.

    For example, the pipeline train step depends on the process_step_output output of the pipeline process step:

    from azureml.pipeline.core import Pipeline, PipelineData
    from azureml.pipeline.steps import PythonScriptStep

    datastore = ws.get_default_datastore()
    process_step_output = PipelineData("processed_data", datastore=datastore)
    process_step = PythonScriptStep(script_name="process.py",
                                    arguments=["--data_for_train", process_step_output],
                                    outputs=[process_step_output],
                                    compute_target=aml_compute,
                                    source_directory=process_directory)
    train_step = PythonScriptStep(script_name="train.py",
                                  arguments=["--data_for_train", process_step_output],
                                  inputs=[process_step_output],
                                  compute_target=aml_compute,
                                  source_directory=train_directory)

    pipeline = Pipeline(workspace=ws, steps=[process_step, train_step])

  18. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You create a model to forecast weather conditions based on historical data.

    You need to create a pipeline that runs a processing script to load data from a datastore and pass the processed data to a machine learning model training script.

    Solution: Run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q18 115

    Does the solution meet the goal?

    • Yes
    • No

  19. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You create a model to forecast weather conditions based on historical data.

    You need to create a pipeline that runs a processing script to load data from a datastore and pass the processed data to a machine learning model training script.

    Solution: Run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q19 116

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    Note: Data used in pipeline can be produced by one step and consumed in another step by providing a PipelineData object as an output of one step and an input of one or more subsequent steps.

    Compare with this example, the pipeline train step depends on the process_step_output output of the pipeline process step:

    from azureml.pipeline.core import Pipeline, PipelineData
    from azureml.pipeline.steps import PythonScriptStep

    datastore = ws.get_default_datastore()
    process_step_output = PipelineData("processed_data", datastore=datastore)
    process_step = PythonScriptStep(script_name="process.py",
                                    arguments=["--data_for_train", process_step_output],
                                    outputs=[process_step_output],
                                    compute_target=aml_compute,
                                    source_directory=process_directory)
    train_step = PythonScriptStep(script_name="train.py",
                                  arguments=["--data_for_train", process_step_output],
                                  inputs=[process_step_output],
                                  compute_target=aml_compute,
                                  source_directory=train_directory)

    pipeline = Pipeline(workspace=ws, steps=[process_step, train_step])

  20. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You have a Python script named train.py in a local folder named scripts. The script trains a regression model by using scikit-learn. The script includes code to load a training data file which is also located in the scripts folder.

    You must run the script as an Azure ML experiment on a compute cluster named aml-compute.

    You need to configure the run to ensure that the environment includes the required packages for model training. You have instantiated a variable named aml-compute that references the target compute cluster.

    Solution: Run the following code:

    DP-100 Designing and Implementing a Data Science Solution on Azure Part 06 Q20 117

    Does the solution meet the goal?

    • Yes
    • No

    Explanation:

    The scikit-learn estimator provides a simple way of launching a scikit-learn training job on a compute target. It is implemented through the SKLearn class, which can be used to support single-node CPU training.

    Example:

    from azureml.train.sklearn import SKLearn

    estimator = SKLearn(source_directory=project_folder,
                        compute_target=compute_target,
                        entry_script='train_iris.py')
