070-475 Exam Free Practice Questions: Microsoft Design and Implement Big Data Analytics Solutions Certification
You plan to deploy a Hadoop cluster that includes a Hive installation.
Your company identifies the following requirements for the planned deployment:
* During the creation of the cluster nodes, place JAR files in the clusters.
* Decouple the Hive metastore lifetime from the cluster lifetime.
* Provide anonymous access to the cluster nodes.
You need to identify which technology must be used for each requirement.
Which technology should you identify for each requirement? To answer, drag the appropriate technologies to the correct requirements. Each technology may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Correct answer:
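For context, a minimal Azure PowerShell sketch of two of the technologies involved, assuming the cluster is an Azure HDInsight cluster (all resource names, URIs, and credentials below are hypothetical): a script action can place JAR files on the nodes while the cluster is being created, and an external Azure SQL Database can host the Hive metastore so that the metastore outlives the cluster.

# Sketch only; names, URIs, and credentials are placeholders.
$config = New-AzureRmHDInsightClusterConfig

# Place JAR files in the cluster during node creation via a script action.
$config = Add-AzureRmHDInsightScriptAction -Config $config `
    -NodeType HeadNode `
    -Name "copy-jars" `
    -Uri "https://example.blob.core.windows.net/scripts/copy-jars.sh"

# Decouple the Hive metastore lifetime from the cluster lifetime by pointing
# the metastore at an external Azure SQL Database.
$metastoreCredential = Get-Credential -Message "Hive metastore SQL login"
$config = Add-AzureRmHDInsightMetastore -Config $config `
    -SqlAzureServerName "metastoresrv.database.windows.net" `
    -DatabaseName "hivemetastore" `
    -Credential $metastoreCredential `
    -MetastoreType HiveMetastore

# $config would then be passed to New-AzureRmHDInsightCluster (other required
# parameters omitted here).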
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has multiple databases that contain millions of sales transactions.
You plan to implement a data mining solution to identify purchasing fraud.
You need to design a solution that mines 10 terabytes (TB) of sales data. The solution must meet the following requirements:
* Run the analysis to identify fraud once per week.
* Continue to receive new sales transactions while the analysis runs.
* Be able to stop computing services when the analysis is NOT running.
Solution: You create a Microsoft Azure HDInsight cluster.
Does this meet the goal?
Correct answer: A
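For reference, HDInsight separates compute from storage: the cluster can be created just before the weekly run and deleted afterwards, while the data remains in the associated storage account. A minimal Azure PowerShell sketch, with hypothetical names and most required parameters simplified:

# Sketch only; names and keys are placeholders.
$httpCredential = Get-Credential -Message "Cluster login"
$storageKey = "<storage-account-key>"

# Create the cluster just before the weekly analysis.
New-AzureRmHDInsightCluster -ResourceGroupName "fraud-rg" `
    -ClusterName "fraud-mining" `
    -Location "East US" `
    -ClusterType Hadoop `
    -ClusterSizeInNodes 8 `
    -HttpCredential $httpCredential `
    -DefaultStorageAccountName "fraudstore.blob.core.windows.net" `
    -DefaultStorageAccountKey $storageKey `
    -DefaultStorageContainer "fraud-mining"

# ... submit the weekly fraud-mining jobs here ...

# Deleting the cluster stops the compute services; the data in the storage
# account is unaffected, and new sales transactions keep landing in the source systems.
Remove-AzureRmHDInsightCluster -ResourceGroupName "fraud-rg" -ClusterName "fraud-mining"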
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy a Microsoft Azure SQL data warehouse and a web application.
The data warehouse will ingest 5 TB of data from an on-premises Microsoft SQL Server database daily. The web application will query the data warehouse.
You need to design a solution to ingest data into the data warehouse.
Solution: You use AzCopy to transfer the data as text files from SQL Server to Azure Blob storage, and then you use PolyBase to run Transact-SQL statements that refresh the data warehouse database.
Does this meet the goal?
Correct answer: B
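For reference, a minimal sketch of the staging part of the approach described above, with hypothetical server, path, and account names. Exporting the data to delimited text files first (for example with bcp, which the solution text leaves implicit) is an assumption here; PolyBase external tables plus Transact-SQL (for example CREATE TABLE AS SELECT) would then refresh the data warehouse from the staged files.

# Sketch only; server, database, path, and key values are placeholders.
# Export the table to a delimited text file with bcp.
& bcp.exe "Sales.dbo.Transactions" out "C:\staging\transactions.txt" `
    -S "onprem-sql01" -T -c -t "|"

# Upload the text file to Azure Blob storage with AzCopy.
& "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe" `
    /Source:"C:\staging" `
    /Dest:"https://dwstaging.blob.core.windows.net/daily-load" `
    /DestKey:"<storage-account-key>" `
    /Pattern:"transactions.txt"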
You are using a Microsoft Azure Data Factory pipeline to copy data to an Azure SQL database.
You need to prevent the insertion of duplicate data for a given dataset slice.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct answer: B, D
You need to create a new Microsoft Azure data factory by using Azure PowerShell. The data factory will have a pipeline that copies data to and from Azure Storage.
Which four cmdlets should you use in sequence? To answer, move the appropriate cmdlets from the list of cmdlets to the answer area and arrange them in the correct order.
Correct answer:
Explanation
Perform these operations in the following order:
* Create a data factory.
* Create linked services.
* Create datasets.
* Create a pipeline.
Step 1: New-AzureRmDataFactory
Create a data factory.
The New-AzureRmDataFactory cmdlet creates a data factory with the specified resource group name and location.
Step 2: New-AzureRmDataFactoryLinkedService
Create linked services in a data factory to link your data stores and compute services to the data factory.
The New-AzureRmDataFactoryLinkedService cmdlet links a data store or a cloud service to Azure Data Factory.
Step 3: New-AzureRmDataFactoryDataset
You define a dataset that represents the data to copy from a source to a sink. It refers to the Azure Storage linked service you created in the previous step.
The New-AzureRmDataFactoryDataset cmdlet creates a dataset in Azure Data Factory.
Step 4: New-AzureRmDataFactoryPipeline
You create a pipeline.
The New-AzureRmDataFactoryPipeline cmdlet creates a pipeline in Azure Data Factory.
References:
https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-powershell
https://docs.microsoft.com/en-us/powershell/module/azurerm.datafactories/new-azurermdatafactory
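A minimal sketch of the four cmdlets in sequence, assuming hypothetical resource names and that the linked service, dataset, and pipeline JSON definition files already exist (as in the quickstart referenced above):

# Step 1: create the data factory.
New-AzureRmDataFactory -ResourceGroupName "adf-rg" -Name "SalesADF" -Location "East US"

# Step 2: create the linked service (e.g. Azure Storage) from its JSON definition.
New-AzureRmDataFactoryLinkedService -ResourceGroupName "adf-rg" -DataFactoryName "SalesADF" `
    -File ".\StorageLinkedService.json"

# Step 3: create the input and output datasets from their JSON definitions.
New-AzureRmDataFactoryDataset -ResourceGroupName "adf-rg" -DataFactoryName "SalesADF" `
    -File ".\InputDataset.json"
New-AzureRmDataFactoryDataset -ResourceGroupName "adf-rg" -DataFactoryName "SalesADF" `
    -File ".\OutputDataset.json"

# Step 4: create the copy pipeline from its JSON definition.
New-AzureRmDataFactoryPipeline -ResourceGroupName "adf-rg" -DataFactoryName "SalesADF" `
    -File ".\CopyPipeline.json"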
Your company has two Microsoft Azure SQL databases named db1 and db2.
You need to move data from a table in db1 to a table in db2 by using a pipeline in Azure Data Factory.
You create an Azure Data Factory named ADF1.
Which two types of objects should you create in ADF1 to complete the pipeline? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct answer: A, C
You have data pushed to Microsoft Azure Blob storage every few minutes.
You want to use an Azure Machine Learning web service to score the data hourly.
You plan to deploy the data factory pipeline by using a Microsoft .NET application.
You need to create an output dataset for the web service.
Which three properties should you define? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct answer: A, B, C
A company named Fabrikam, Inc. has a web app. Millions of users visit the app daily.
Fabrikam performs a daily analysis of the previous day's logs by scheduling the following Hive query.
You need to recommend a solution to gather the log collections from the web app.
What should you recommend?
Correct answer: B
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Microsoft Azure deployment that contains the following services:
* Azure Data Lake
* Azure Cosmos DB
* Azure Data Factory
* Azure SQL Database
You load several types of data to Azure Data Lake.
You need to load data from Azure SQL Database to Azure Data Lake.
Solution: You use the AzCopy utility.
Does this meet the goal?
Correct answer: A