070-475 Free Exam Practice Questions: Microsoft Design and Implement Big Data Analytics Solutions Certification
You work for a telecommunications company that uses Microsoft Azure Stream Analytics.
You have data related to incoming calls.
You need to group the data in the following ways:
* Group A: Every five minutes for a duration of five minutes
* Group B: Every five minutes for a duration of 10 minutes
Which type of window should you use for each group? To answer, drag the appropriate window types to the correct groups. Each window type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Correct answer:
Explanation
Group A: Tumbling
Tumbling Windows define a repeating, non-overlapping window of time.
Group B: Hopping
Like Tumbling Windows, Hopping Windows move forward in time by a fixed period, but they can overlap with one another.
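In the Stream Analytics query language, Group A corresponds to TUMBLINGWINDOW(minute, 5) and Group B to HOPPINGWINDOW(minute, 10, 5). As an analogous sketch (not Stream Analytics syntax), the same two window shapes can be expressed with Spark's window function in Scala; the input path and the eventTime column below are illustrative assumptions.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, count, window}

object CallWindows {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("CallWindows").master("local[*]").getOrCreate()

    // Hypothetical call data with an eventTime timestamp column.
    val calls = spark.read.parquet("/data/incoming-calls")

    // Group A: tumbling window -- a new 5-minute window starts every 5 minutes and
    // covers exactly those 5 minutes, so windows never overlap.
    val groupA = calls
      .groupBy(window(col("eventTime"), "5 minutes"))
      .agg(count("*").as("callCount"))

    // Group B: hopping window -- a 10-minute window starts every 5 minutes, so
    // consecutive windows overlap and an event can fall into two of them.
    val groupB = calls
      .groupBy(window(col("eventTime"), "10 minutes", "5 minutes"))
      .agg(count("*").as("callCount"))

    groupA.show()
    groupB.show()
  }
}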
You have a Microsoft Azure data factory.
You assign administrative roles to the users in the following table.
You discover that several new data factory instances were created.
You need to ensure that only User5 can create a new data factory instance.
Which two roles should you change? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct answer: A, D
You plan to deploy a Hadoop cluster that includes a Hive installation.
Your company identifies the following requirements for the planned deployment:
* During the creation of the cluster nodes, place JAR files in the clusters.
* Decouple the Hive metastore lifetime from the cluster lifetime.
* Provide anonymous access to the cluster nodes.
You need to identify which technology must be used for each requirement.
Which technology should you identify for each requirement? To answer, drag the appropriate technologies to the correct requirements. Each technology may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Correct answer:
Explanation
You need to recommend a data analysis solution for 20,000 Internet of Things (IoT) devices. The solution must meet the following requirements:
* Each device must be identified by using its own credentials.
* Each device must be able to route data to multiple endpoints.
* The solution must require the minimum amount of customized code.
What should you recommend?
Correct answer: C
You have four on-premises Microsoft SQL Server data sources as described in the following table.
You plan to create three Azure data factories that will interact with the data sources as described in the following table.
You need to deploy Microsoft Data Management Gateway to support the Azure Data Factory deployment. The solution must use new servers to host the instances of Data Management Gateway.
What is the minimum number of new servers and data management gateways you should deploy? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Correct answer:
Explanation
Box 1: 3 (new servers)
Box 2: 3 (Data Management Gateway instances)
Considerations for using the gateway: a Data Management Gateway instance is registered with a single data factory and cannot be shared across data factories, and only one gateway instance can be installed on a given machine. Because three separate data factories must be supported on new servers, three gateway instances on three new servers are required.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Apache Spark system that contains 5 TB of data.
You need to write queries that analyze the data in the system. The queries must meet the following requirements:
* Use static data typing.
* Execute queries as quickly as possible.
* Have access to the latest language features.
Solution: You write the queries by using Scala.
Correct answer: B
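For context, a minimal sketch of what "writing the queries by using Scala" can look like on Spark: the typed Dataset API gives compile-time (static) type checking. The CallRecord schema and the input path below are illustrative assumptions, not part of the question.

import org.apache.spark.sql.SparkSession

// Hypothetical record type; field access is checked by the compiler (static typing).
case class CallRecord(callId: String, durationSeconds: Long, region: String)

object CallAnalysis {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("CallAnalysis").master("local[*]").getOrCreate()
    import spark.implicits._

    // Hypothetical input path standing in for the 5 TB of data in the system.
    val calls = spark.read.parquet("/data/calls").as[CallRecord]

    // Typed query: a misspelled field name fails at compile time rather than at run time.
    val longCallsByRegion = calls
      .filter(_.durationSeconds > 300)
      .groupByKey(_.region)
      .count()

    longCallsByRegion.show()
  }
}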
You are designing an Internet of Things (IoT) solution intended to identify trends. The solution requires real-time analysis of data originating from sensors. The results of the analysis will be stored in a SQL database.
You need to recommend a data processing solution that uses the Transact-SQL language.
Which data processing solution should you recommend?
Correct answer: C