070-475 Exam Free Sample Questions: Microsoft Design and Implement Big Data Analytics Solutions Certification

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has multiple databases that contain millions of sales transactions.
You plan to implement a data mining solution to identify purchasing fraud.
You need to design a solution that mines 10 terabytes (TB) of sales data. The solution must meet the following requirements:
* Run the analysis to identify fraud once per week.
* Continue to receive new sales transactions while the analysis runs.
* Be able to stop computing services when the analysis is NOT running.
Solution: You create a Microsoft Azure HDInsight cluster.
Does this meet the goal?

Explanation: (visible to GoShiken members only)
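To make the scenario concrete, a minimal sketch of the kind of weekly batch job such a transient cluster could run is shown below, written as a Spark job in Scala. The storage paths, column names, and the fraud heuristic are all hypothetical illustrations and are not part of the question.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a weekly fraud-scoring batch job that could run on a
// transient HDInsight Spark cluster. All paths and column names are hypothetical.
object WeeklyFraudScan {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("WeeklyFraudScan").getOrCreate()

    // Read the week's sales transactions from cluster-attached storage.
    val sales = spark.read.parquet("wasb:///data/sales/")

    // Naive illustrative rule: flag customers whose weekly spend is far
    // above their average transaction amount.
    val flagged = sales
      .groupBy("customerId")
      .agg(sum("amount").as("weeklySpend"), avg("amount").as("avgTxn"))
      .where(col("weeklySpend") > col("avgTxn") * 100)

    flagged.write.mode("overwrite").parquet("wasb:///output/fraud-candidates/")
    spark.stop()
  }
}
```

Because the analysis runs only once per week, the cluster (and the job above) can be created on demand and deleted afterwards, which is what allows compute to be stopped while transactions continue to arrive in storage.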
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Microsoft Azure deployment that contains the following services:
* Azure Data Lake
* Azure Cosmos DB
* Azure Data Factory
* Azure SQL Database
You load several types of data to Azure Data Lake.
You need to load data from Azure SQL Database to Azure Data Lake.
Solution: You use the AzCopy utility.
Does this meet the goal?

Explanation: (visible to GoShiken members only)
You have a data warehouse that contains the sales data of several customers.
You plan to deploy a Microsoft Azure data factory to move additional sales data to the data warehouse.
You need to develop a data factory job that reads reference data from a table in the source data.
Which type of activity should you add to the control flow of the job?

Explanation: (visible to GoShiken members only)
Your company supports multiple Microsoft Azure subscriptions.
You plan to deploy several virtual machines to support the services in Azure.
You need to automate the management of all the subscriptions. The solution must minimize administrative effort.
Which two cmdlets should you run? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

You have four on-premises Microsoft SQL Server data sources as described in the following table.

You plan to create three Azure data factories that will interact with the data sources as described in the following table.

You need to deploy Microsoft Data Management Gateway to support the Azure Data Factory deployment. The solution must use new servers to host the instances of Data Management Gateway.
What is the minimum number of new servers and Data Management Gateway instances that you should deploy? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: 3
Box 2: 3
Considerations for using the gateway: a single Data Management Gateway instance can serve multiple on-premises data sources, but each instance is registered to exactly one data factory, and only one instance can be installed on a given server. Three data factories therefore require three gateways on three new servers.
You plan to use Microsoft Azure IoT Hub to capture data from medical devices that contain sensors.
You need to ensure that each device has its own credentials. The solution must minimize the number of required privileges.
Which policy should you apply to the devices?

Explanation: (visible to GoShiken members only)
You plan to deploy a Microsoft Azure Data Factory pipeline to run an end-to-end data processing workflow.
You need to recommend which Azure Data Factory features must be used to meet the following requirements:
* Track the run status of the historical activity.
* Enable alerts and notifications on events and metrics.
* Monitor the creation, updating, and deletion of Azure resources.
Which features should you recommend? To answer, drag the appropriate features to the correct requirements.
Each feature may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: Azure HDInsight logs
Logs contain historical activities.
Box 2: Azure Data Factory alerts
Box 3: Azure Data Factory events
You have a Microsoft Azure Stream Analytics solution.
You need to identify which types of windows must be used to group the following types of events:
* Events that have random time intervals and are captured in a single fixed-size window
* Events that have random time intervals and are captured in overlapping windows
Which window type should you identify for each event type? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation

Box 1: A sliding window
Box 2: A sliding window
With a sliding window, the system is asked to logically consider all possible windows of a given length and to output results only for those points in time when the content of the window actually changes, that is, when an event enters or exits the window.
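As an aside, the overlapping-window idea can be illustrated outside the Stream Analytics query language. The sketch below deliberately substitutes Spark Structured Streaming in Scala for Stream Analytics, since this page contains no Stream Analytics code; the socket source, host, port, and column names are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch only: overlapping (sliding) windows in Spark Structured Streaming.
// The socket source, host/port, and column names are hypothetical.
val spark = SparkSession.builder.appName("SlidingWindowDemo").getOrCreate()
import spark.implicits._

val events = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()
  .select(current_timestamp().as("eventTime"), $"value")

// A 10-minute window that slides every 5 minutes: consecutive windows
// overlap, so a single event can be counted in more than one window.
val counts = events
  .groupBy(window($"eventTime", "10 minutes", "5 minutes"))
  .count()

counts.writeStream.outputMode("complete").format("console").start().awaitTermination()
```

The overlap comes from the slide duration being shorter than the window duration, which is the defining property the question's "overlapping windows" event type describes.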
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy a Microsoft Azure SQL data warehouse and a web application.
The data warehouse will ingest 5 TB of data from an on-premises Microsoft SQL Server database daily. The web application will query the data warehouse.
You need to design a solution to ingest data into the data warehouse.
Solution: You use AzCopy to transfer the data as text files from SQL Server to Azure Blob storage, and then you use Azure Data Factory to refresh the data warehouse database.
Does this meet the goal?

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Apache Spark system that contains 5 TB of data.
You need to write queries that analyze the data in the system. The queries must meet the following requirements:
* Use static data typing.
* Execute queries as quickly as possible.
* Have access to the latest language features.
Solution: You write the queries by using Scala.
Does this meet the goal?
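A brief illustration of why Scala fits the static-typing requirement: Spark's Dataset API gives compile-time type checking, unlike untyped DataFrame or Python queries. The case class, data path, and predicate below are hypothetical and serve only as a sketch.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical schema for the transaction data; field names are illustrative only.
case class Transaction(id: Long, customerId: Long, amount: Double)

object TypedQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("TypedQuery").getOrCreate()
    import spark.implicits._

    // Dataset[Transaction] is statically typed: a misspelled field name or a
    // type mismatch in the lambda below fails at compile time, not at run time.
    val txns = spark.read.parquet("/data/transactions").as[Transaction]

    val large = txns.filter(t => t.amount > 10000.0)
    println(large.count())

    spark.stop()
  }
}
```

Scala is also the language Spark itself is written in, which is why it tends to get new Spark language features first and avoids the serialization overhead that non-JVM bindings can incur.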