Last Updated on November 1, 2022 by InfraExam

AZ-304 : Microsoft Azure Architect Design : Part 07

  1. You have an Azure subscription.

    Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely.

    You plan to copy the files to Azure Storage.

    You need to implement a storage solution for the files that meets the following requirements:

    – The files must be available within 24 hours of being requested.
    – Storage costs must be minimized.

    Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution.

    NOTE: Each correct selection is worth one point.

    • Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
    • Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
    • Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
    • Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
    • Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.

Explanation:

The two correct solutions are B and E, the options that set each file to the Archive access tier. Archive offers the lowest storage cost per GB, and an archived blob can be rehydrated in a matter of hours, which satisfies the requirement that files be available within 24 hours of being requested. The default access tier of the account (Hot or Cool) is irrelevant here, because each blob is explicitly moved to the Archive tier after the copy.

Incorrect Answers:
A, C, D: Blob containers in general-purpose v1 accounts and Azure file shares do not support the Archive tier, so these options do not minimize storage costs.

Note: The Archive tier is optimized for storing data that is rarely accessed and stored for at least 180 days, with flexible latency requirements (on the order of hours).
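    The cost reasoning can be sketched with illustrative per-GB monthly rates. The figures below are assumptions for illustration only, not current Azure pricing:

    ```python
    # Illustrative monthly at-rest cost for 5 TB in each blob access tier.
    # Per-GB prices are ASSUMPTIONS; check the Azure pricing page for real rates.
    PRICE_PER_GB = {"Hot": 0.018, "Cool": 0.010, "Archive": 0.002}

    def monthly_cost(tier: str, size_gb: int) -> float:
        """Storage-at-rest cost only; excludes transactions and rehydration."""
        return size_gb * PRICE_PER_GB[tier]

    size_gb = 5 * 1024  # 5 TB
    costs = {tier: monthly_cost(tier, size_gb) for tier in PRICE_PER_GB}
    cheapest = min(costs, key=costs.get)
    print(cheapest)  # Archive
    ```

    Whatever the actual regional prices, the ordering Archive < Cool < Hot is what makes the Archive-tier options the cost-minimizing choice.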

  2. You use Azure Application Insights.

    You plan to use continuous export.

    You need to store Application Insights data for five years.

    Which Azure service should you use?

    • Azure SQL Database
    • Azure Monitor Logs
    • Azure Backup
    • Azure Storage
    Explanation:
    Create a Continuous Export.
    1. In the Application Insights resource for your app, under Configure on the left, open Continuous Export and choose Add.
    2. Choose the telemetry data types you want to export.
    3. Create or select an Azure storage account where you want to store the data. Click Add, Export Destination, Storage account, and then either create a new store or choose an existing store.
    4. Create or select a container in the storage.
  3. You have an Azure subscription that contains an Azure SQL database.

    You are evaluating whether to use Azure reservations on the Azure SQL database.

    Which tool should you use to estimate the potential savings?

    • The Purchase reservations blade in the Azure portal
    • The Advisor blade in the Azure portal
    • The SQL database blade in the Azure portal
    Explanation:

    Buy reserved capacity
    1. Sign in to the Azure portal.
    2. Select All services > Reservations.
    3. Select Add and then in the Purchase Reservations pane, select SQL Database to purchase a new reservation for SQL Database.
    4. Fill in the required fields. Existing databases in SQL Database and SQL Managed Instance that match the attributes you select qualify to get the reserved capacity discount. The actual number of databases or managed instances that get the discount depends on the scope and quantity selected.

    AZ-304 Microsoft Azure Architect Design Part 07 Q03 074

    5. Review the cost of the capacity reservation in the Costs section.
    6. Select Purchase.
    7. Select View this Reservation to see the status of your purchase.
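    The savings figure shown on the Purchase reservations blade is simple arithmetic; a sketch with assumed hourly vCore rates (the rates below are illustrative, not actual Azure prices):

    ```python
    # Rough annual savings estimate: reserved capacity vs pay-as-you-go.
    # Hourly per-vCore rates are ASSUMPTIONS for illustration; the portal
    # shows the actual discounted rate for your subscription.
    HOURS_PER_YEAR = 8760

    def annual_savings(payg_rate, reserved_rate, vcores):
        payg = payg_rate * vcores * HOURS_PER_YEAR
        reserved = reserved_rate * vcores * HOURS_PER_YEAR
        return payg - reserved, (payg - reserved) / payg

    savings, pct = annual_savings(0.50, 0.35, vcores=16)
    print(f"{savings:.0f} saved ({pct:.0%})")  # 21024 saved (30%)
    ```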

  4. HOTSPOT

    You have an Azure subscription that contains the storage accounts shown in the following table.

    AZ-304 Microsoft Azure Architect Design Part 07 Q04 075

    You plan to implement two new apps that have the requirements shown in the following table.

    AZ-304 Microsoft Azure Architect Design Part 07 Q04 076

    Which storage accounts should you recommend using for each app? To answer, select the appropriate options in the answer area.

    NOTE: Each correct selection is worth one point.

    AZ-304 Microsoft Azure Architect Design Part 07 Q04 077 Question
    AZ-304 Microsoft Azure Architect Design Part 07 Q04 077 Answer
    Explanation:

    Box 1: Storage1, storage2, and storage3 only
    Azure Blob Storage lifecycle management offers a rich, rule-based policy for GPv2 and Blob storage accounts. Use the policy to transition your data to the appropriate access tiers, or to expire data at the end of its lifecycle.

    Box 2: Storage1, storage2, and storage4 only
    General purpose version 2 (GPv2) storage accounts: GPv2 storage accounts allow you to deploy Azure file shares on standard/hard disk-based (HDD-based) hardware.

    FileStorage storage accounts: FileStorage storage accounts allow you to deploy Azure file shares on premium/solid-state disk-based (SSD-based) hardware.

  5. You have an Azure subscription that contains an Azure SQL database.

    You plan to use Azure reservations on the Azure SQL database.

    To which resource type will the reservation discount be applied?

    • vCore compute
    • DTU compute
    • Storage
    • License
    Explanation:
    Quantity: The amount of compute resources being purchased within the capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you run or plan to run multiple databases with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify the quantity as 16 to maximize the benefit for all the databases.
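    The quantity calculation in the explanation above is simple addition; the database sizes here are example values:

    ```python
    # Reservation quantity = total vCores to cover. Example: three databases
    # of 4, 4, and 8 Gen5 vCores in the same region and tier (assumed).
    databases_vcores = [4, 4, 8]
    quantity = sum(databases_vcores)
    print(quantity)  # 16 — reserve 16 vCores to maximize the discount
    ```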
  6. You are designing an Azure Cosmos DB solution that will host multiple writable replicas in multiple Azure regions.

    You need to recommend the strongest database consistency level for the design. The solution must meet the following requirements:

    – Provide a latency-based Service Level Agreement (SLA) for writes.
    – Support multiple regions.

    Which consistency level should you recommend?

    • bounded staleness
    • strong
    • session
    • consistent prefix
    Explanation:

    Each level provides availability and performance tradeoffs. The following image shows the different consistency levels as a spectrum.

    AZ-304 Microsoft Azure Architect Design Part 07 Q06 078

    Note: The service offers comprehensive 99.99% SLAs which cover the guarantees for throughput, consistency, availability, and latency for Azure Cosmos DB database accounts scoped to a single Azure region configured with any of the five consistency levels, or database accounts spanning multiple Azure regions configured with any of the four relaxed consistency levels.

    Incorrect Answers:
    B: Strong consistency for accounts with regions spanning more than 5000 miles (8000 kilometers) is blocked by default due to high write latency. To enable this capability please contact support.

  7. Case Study

    This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

    To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

    At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

    To start the case study
    To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

    Overview

    Contoso, Ltd, is a US-based financial services company that has a main office in New York and a branch office in San Francisco.

    Existing Environment. Payment Processing System

    Contoso hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier web API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.

    The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C# and ASP.NET. The middle-tier API uses the Entity Framework to communicate to the SQL Server database. Maintenance of the database is performed by using SQL Server Agent jobs.

    The database is currently 2 TB and is not expected to grow beyond 3 TB.

    The payment processing system has the following compliance-related requirements:

    – Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
    – Keep backups of the data in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
    – Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
    – Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
    – Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
    – Allow access to all the tiers only from the internal network of Contoso.

    Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and then shipped offsite for long term storage.

    Existing Environment. Historical Transaction Query System

    Contoso recently migrated a business-critical workload to Azure. The workload contains a .NET web service for querying the historical transaction data residing in Azure Table Storage. The .NET web service is accessible from a client app that was developed in-house and runs on the client computers in the New York office. The data in the table storage is 50 GB and is not expected to increase.

    Existing Environment. Current Issues

    The Contoso IT team discovers poor performance of the historical transaction query system, as the queries frequently cause table scans.

    Requirements. Planned Changes

    Contoso plans to implement the following changes:

    – Migrate the payment processing system to Azure.
    – Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.

    Requirements. Migration Requirements

    Contoso identifies the following general migration requirements:

    – Infrastructure services must remain available if a region or a data center fails. Failover must occur without any administrative intervention.
    – Whenever possible, Azure managed services must be used to minimize management overhead.
    – Whenever possible, costs must be minimized.

    Contoso identifies the following requirements for the payment processing system:

    – If a data center fails, ensure that the payment processing system remains available without any administrative intervention. The middle-tier and the web front end must continue to operate without any additional configurations.
    – Ensure that the number of compute nodes of the front-end and the middle tiers of the payment processing system can increase or decrease automatically based on CPU utilization.
    – Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99 percent availability.
    – Minimize the effort required to modify the middle-tier API and the back-end tier of the payment processing system.
    – The payment processing system must support grouping and joining tables on encrypted columns.
    – Generate alerts when unauthorized login attempts occur on the middle-tier virtual machines.
    – Ensure that the payment processing system preserves its current compliance status.
    – Host the middle tier of the payment processing system on a virtual machine.

    Contoso identifies the following requirements for the historical transaction query system:

    – Minimize the use of on-premises infrastructure services.
    – Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
    – Minimize the frequency of table scans.
    – If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.

    Requirements. Information Security Requirements

    The IT security team wants to ensure that identity management is performed by using Active Directory. Password hashes must be stored on-premises only.

    Access to all business-critical systems must rely on Active Directory credentials. Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.

    1. You need to recommend a solution for protecting the content of the payment processing system.

      What should you include in the recommendation?

      • Always Encrypted with deterministic encryption
      • Always Encrypted with randomized encryption
      • Transparent Data Encryption (TDE)
      • Azure Storage Service Encryption
    2. HOTSPOT

      You need to recommend a solution for the data store of the historical transaction query system.

      What should you include in the recommendation? To answer, select the appropriate options in the answer area.

      NOTE: Each correct selection is worth one point.

      AZ-304 Microsoft Azure Architect Design Part 07 Q07 079 Question

      AZ-304 Microsoft Azure Architect Design Part 07 Q07 079 Answer
  8. Case Study

    This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

    To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

    At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

    To start the case study
    To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

    Overview

    Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

    Existing Environment. Active Directory Environment

    The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests.

    Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication.

    Rd.fabrikam.com is used by the research and development (R&D) department only.

    Existing Environment. Network Infrastructure

    Each office contains at least one domain controller from the corp.fabrikam.com domain. The main office contains all the domain controllers for the rd.fabrikam.com forest.

    All the offices have a high-speed connection to the Internet.

    An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.

    The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.

    Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

    Existing Environment. Problem Statements

    The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

    Requirements. Planned Changes

    Fabrikam plans to move most of its production workloads to Azure during the next few years.

    As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft 365 deployment.

    All R&D operations will remain on-premises.

    Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

    Requirements. Technical Requirements

    Fabrikam identifies the following technical requirements:

    – Web site content must be easily updated from a single point.
    – User input must be minimized when provisioning new web app instances.
    – Whenever possible, existing on-premises licenses must be used to reduce cost.
    – Users must always authenticate by using their corp.fabrikam.com UPN identity.
    – Any new deployments to Azure must be redundant in case an Azure region fails.
    – Whenever possible, solutions must be deployed to Azure by using the Standard pricing tier of Azure App Service.
    – An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
    – Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

    Requirements. Database Requirements

    Fabrikam identifies the following database requirements:

    – Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
    – To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
    – Database backups must be retained for a minimum of seven years to meet compliance requirements.

    Requirements. Security Requirements

    Fabrikam identifies the following security requirements:

    – Company information including policies, templates, and data must be inaccessible to anyone outside the company.
    – Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
    – Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
    – All administrative access to the Azure portal must be secured by using multi-factor authentication.
    – The testing of WebApp1 updates must not be visible to anyone outside the company.

    1. You need to recommend a data storage strategy for WebApp1.

      What should you include in the recommendation?

      • a vCore-based Azure SQL database 
      • an Azure virtual machine that runs SQL Server
      • an Azure SQL Database elastic pool
      • a fixed-size DTU Azure SQL database
  9. DRAG DROP

    Your company identifies the following business continuity and disaster recovery objectives for virtual machines that host sales, finance, and reporting applications in the company’s on-premises data center:

    – The sales application must be able to fail over to a second on-premises data center.
    – The finance application requires that data be retained for seven years. In the event of a disaster, the application must be able to run from Azure. The recovery time objective (RTO) is 10 minutes.
    – The reporting application must be able to recover point-in-time data at a daily granularity. The RTO is eight hours.

    You need to recommend which Azure services meet the business continuity and disaster recovery objectives. The solution must minimize costs.

    What should you recommend for each application? To answer, drag the appropriate services to the correct applications. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

    NOTE: Each correct selection is worth one point.

    AZ-304 Microsoft Azure Architect Design Part 07 Q09 080 Question

    AZ-304 Microsoft Azure Architect Design Part 07 Q09 080 Answer
  10. You have an Azure Storage v2 account named storage1.

    You plan to archive data to storage1.

    You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.

    What should you do?

    • You create an Azure Blob storage container, and you configure a legal hold access policy.
    • You create a file share and snapshots.
    • You create a file share, and you configure an access policy.
    • You create an Azure Blob storage container, and you configure a time-based retention policy and lock the policy.
    Explanation:

    Time-based retention policy support: Users can set policies to store data for a specified interval. When a time-based retention policy is set, blobs can be created and read, but not modified or deleted. After the retention period has expired, blobs can be deleted but not overwritten.

    Note:
    Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. This state makes the data non-erasable and non-modifiable for a user-specified interval. For the duration of the retention interval, blobs can be created and read, but cannot be modified or deleted. Immutable storage is available for general-purpose v2 and Blob storage accounts in all Azure regions.
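    A minimal sketch of building such a retention payload, assuming the documented property name and the 1-146000 day range (year length simplified to 365 days):

    ```python
    # Build a locked time-based retention policy payload for N years.
    # Range limits match the immutable-storage documentation; leap days ignored.
    RETENTION_DAYS_MIN, RETENTION_DAYS_MAX = 1, 146000

    def retention_policy(years: int) -> dict:
        days = years * 365
        if not RETENTION_DAYS_MIN <= days <= RETENTION_DAYS_MAX:
            raise ValueError("retention interval out of range")
        # "Locked" prevents administrators from shortening or deleting the policy.
        return {"immutabilityPeriodSinceCreationInDays": days, "state": "Locked"}

    print(retention_policy(5))  # {'immutabilityPeriodSinceCreationInDays': 1825, 'state': 'Locked'}
    ```

    Locking the policy is the key step: an unlocked policy (or a legal hold) can still be removed by an administrator.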

  11. HOTSPOT

    You plan to deploy the backup policy shown in the following exhibit.

    AZ-304 Microsoft Azure Architect Design Part 07 Q11 081

    Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

    NOTE: Each correct selection is worth one point.

    AZ-304 Microsoft Azure Architect Design Part 07 Q11 082 Question

    AZ-304 Microsoft Azure Architect Design Part 07 Q11 082 Answer
  12. HOTSPOT

    You have an Azure web app named App1 and an Azure key vault named KV1.

    App1 stores database connection strings in KV1.

    App1 performs the following types of requests to KV1:

    – Get
    – List
    – Wrap
    – Delete
    – Unwrap
    – Backup
    – Decrypt
    – Encrypt

    You are evaluating the continuity of service for App1.

    You need to identify the following if the Azure region that hosts KV1 becomes unavailable:

    – To where will KV1 fail over?
    – During the failover, which request type will be unavailable?

    What should you identify? To answer, select the appropriate options in the answer area.

    NOTE: Each correct selection is worth one point.

    AZ-304 Microsoft Azure Architect Design Part 07 Q12 083 Question
    AZ-304 Microsoft Azure Architect Design Part 07 Q12 083 Answer
    Explanation:

    Box 1: A server in the same paired region
    The contents of your key vault are replicated within the region and to a secondary region at least 150 miles away, but within the same geography to maintain high durability of your keys and secrets.

    Box 2: Delete
    During failover, your key vault is in read-only mode. Requests that are supported in this mode are:
    – List certificates
    – Get certificates
    – List secrets
    – Get secrets
    – List keys
    – Get (properties of) keys
    – Encrypt
    – Decrypt
    – Wrap
    – Unwrap
    – Verify
    – Sign
    – Backup
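    The answer to Box 2 falls out of a simple set difference between App1's request types and the read-only list quoted above:

    ```python
    # App1's request types vs the operations Key Vault supports in
    # read-only (failover) mode, per the list quoted in the explanation.
    app1_requests = {"Get", "List", "Wrap", "Delete", "Unwrap",
                     "Backup", "Decrypt", "Encrypt"}
    read_only_supported = {"Get", "List", "Encrypt", "Decrypt", "Wrap",
                           "Unwrap", "Verify", "Sign", "Backup"}

    unavailable = app1_requests - read_only_supported
    print(unavailable)  # {'Delete'}
    ```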

  13. You have an Azure Storage account that contains the data shown in the following exhibit.

    AZ-304 Microsoft Azure Architect Design Part 07 Q13 084

    You need to identify which files can be accessed immediately from the storage account.

    Which files should you identify?

    • File1.bin only
    • File2.bin only
    • File3.bin only
    • File1.bin and File2.bin only
    • File1.bin, File2.bin, and File3.bin
    Explanation:

    Hot – Optimized for storing data that is accessed frequently.
    Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.
    Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).

    Note: Lease state of the blob. Possible values: available|leased|expired|breaking|broken
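    A small sketch of the tier check; the file-to-tier mapping below is an assumption standing in for the exhibit:

    ```python
    # Hot and Cool blobs are online; Archive blobs are offline until rehydrated.
    # The tier assignments here are assumed example values for the exhibit.
    files = {"File1.bin": "Hot", "File2.bin": "Cool", "File3.bin": "Archive"}
    ONLINE_TIERS = {"Hot", "Cool"}

    immediately_accessible = sorted(
        name for name, tier in files.items() if tier in ONLINE_TIERS
    )
    print(immediately_accessible)  # ['File1.bin', 'File2.bin']
    ```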

  14. HOTSPOT

    You have a virtual machine scale set named SS1.

    You configure autoscaling as shown in the following exhibit.

    AZ-304 Microsoft Azure Architect Design Part 07 Q14 085

    You configure the scale-out and scale-in rules to have a duration of 10 minutes and a cooldown time of 10 minutes.

    Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic.

    NOTE: Each correct selection is worth one point.

    AZ-304 Microsoft Azure Architect Design Part 07 Q14 086 Question

    AZ-304 Microsoft Azure Architect Design Part 07 Q14 086 Answer
  15. HOTSPOT

    You plan to create a storage account and to save the files as shown in the exhibit.

    AZ-304 Microsoft Azure Architect Design Part 07 Q15 087

    Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

    NOTE: Each correct selection is worth one point.

    AZ-304 Microsoft Azure Architect Design Part 07 Q15 088 Question

    AZ-304 Microsoft Azure Architect Design Part 07 Q15 088 Answer
  16. HOTSPOT

    You need to recommend an Azure Storage account configuration for two applications named Application1 and Application2. The configuration must meet the following requirements:

    – Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.
    – Storage for Application2 must provide the lowest possible storage costs per GB.
    – Storage for both applications must be optimized for uploads and downloads.
    – Storage for both applications must be available in an event of datacenter failure.

    What should you recommend? To answer, select the appropriate options in the answer area.

    NOTE: Each correct selection is worth one point.

    AZ-304 Microsoft Azure Architect Design Part 07 Q16 089 Question
    AZ-304 Microsoft Azure Architect Design Part 07 Q16 089 Answer
    Explanation:

    Box 1: BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication.
    BlockBlobStorage accounts: Storage accounts with premium performance characteristics for block blobs and append blobs. Recommended for scenarios with high transaction rates, or scenarios that use smaller objects or require consistently low storage latency.
    Premium: optimized for high transaction rates and single-digit consistent storage latency.

    Box 2: General purpose v2 with Standard performance.
    General-purpose v2 accounts: Basic storage account type for blobs, files, queues, and tables. Recommended for most scenarios using Azure Storage.

    Incorrect Answers:
    Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region
    Standard: optimized for high capacity and high throughput
    General-purpose v1 accounts: Legacy account type for blobs, files, queues, and tables. Use general-purpose v2 accounts instead when possible.
    BlobStorage accounts: Legacy Blob-only storage accounts. Use general-purpose v2 accounts instead when possible.
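    The selection logic can be summarized as a small decision helper. This is a simplification of the reasoning above with the requirements reduced to a single flag, not an official API:

    ```python
    # Toy decision helper: pick account type, performance tier, and replication.
    # Both apps need zone redundancy to survive a datacenter failure (ZRS).
    def recommend_account(high_transactions_low_latency: bool) -> tuple:
        if high_transactions_low_latency:
            # Premium block blobs: highest transaction rates, lowest latency.
            return ("BlockBlobStorage", "Premium", "ZRS")
        # GPv2 Standard: lowest cost per GB for general blob workloads.
        return ("StorageV2", "Standard", "ZRS")

    print(recommend_account(True))   # Application1
    print(recommend_account(False))  # Application2
    ```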

  17. Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.

    You plan to move all the virtual machines to Azure.

    You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.

    What should you use to make the recommendation?

    • Azure Pricing calculator
    • Azure Cost Management
    • Azure Advisor
    • Azure Migrate
  18. Your company purchases an app named App1.

    You plan to run App1 on seven Azure virtual machines in an Availability Set. The number of fault domains is set to 3. The number of update domains is set to 20.

    You need to identify how many App1 instances will remain available during a period of planned maintenance.

    How many App1 instances should you identify?

    • 1
    • 2
    • 6
    • 7
    Explanation:
    Only one update domain is rebooted at a time. Here there are 7 update domains with one VM each (and 13 update domains with no VMs), so at most one instance is offline during planned maintenance and six remain available.
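    The round-robin placement and the resulting availability can be verified with a few lines:

    ```python
    # 7 VMs distributed round-robin across 20 update domains (UDs).
    # Planned maintenance reboots one UD at a time.
    vms, update_domains = 7, 20
    per_domain = [0] * update_domains
    for i in range(vms):
        per_domain[i % update_domains] += 1

    max_down_at_once = max(per_domain)          # 1 VM per used UD
    available = vms - max_down_at_once
    print(available)  # 6
    ```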
  19. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You have an Azure Storage v2 account named storage1.

    You plan to archive data to storage1.

    You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.

    Solution: You create an Azure Blob storage container, and you configure a legal hold access policy.

    Does this meet the goal?

    • Yes
    • No
    Explanation:

    Use an Azure Blob storage container, but use a time-based retention policy instead of a legal hold.

    Note:
    Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. This state makes the data non-erasable and non-modifiable for a user-specified interval. For the duration of the retention interval, blobs can be created and read, but cannot be modified or deleted. Immutable storage is available for general-purpose v2 and Blob storage accounts in all Azure regions.

    Note: Set retention policies and legal holds
    1. Create a new container or select an existing container to store the blobs that need to be kept in the immutable state. The container must be in a general-purpose v2 or Blob storage account.
    2. Select Access policy in the container settings. Then select Add policy under Immutable blob storage.
    3. Select Add Policy, and then do either of the following:
    – To enable a legal hold, select Legal hold from the drop-down menu.
    – To enable time-based retention, select Time-based retention from the drop-down menu.
    4. For time-based retention, enter the retention interval in days (acceptable values are 1 to 146000 days).

  20. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You have an Azure Storage v2 account named storage1.

    You plan to archive data to storage1.

    You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.

    Solution: You create a file share and snapshots.

    Does this meet the goal?

    • Yes
    • No
    Explanation:
    Snapshots do not prevent administrators from deleting the data. Instead, create an Azure Blob storage container, configure a time-based retention policy, and lock the policy.