AI-102 Free Exam Questions: Microsoft Designing and Implementing a Microsoft Azure AI Solution Certification

You plan to use containerized versions of the Anomaly Detector API on local devices for testing and in on-premises datacenters.
You need to ensure that the containerized deployments meet the following requirements:
Prevent billing and API information from being stored in the command-line histories of the devices that run the container.
Control access to the container images by using Azure role-based access control (Azure RBAC).
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. (Choose four.) NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Correct answer:

Explanation:
Step 1: Pull the Anomaly Detector container image.
Step 2: Create a custom Dockerfile.
Step 3: Build the image.
Step 4: Push the image to an Azure container registry.
Baking the billing endpoint and API key into a custom image keeps them out of the command-line history of the devices that run the container, and hosting the image in Azure Container Registry lets you control access to it with Azure RBAC.
https://docs.microsoft.com/en-us/azure/cognitive-services/containers/container-reuse-recipe
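Once the custom image is deployed and running, clients query the container's local endpoint rather than the cloud endpoint, so no billing arguments appear at request time. A minimal sketch of querying a locally running Anomaly Detector container (the host, port, and batch-detection route are assumptions based on the v1 API):

```python
import requests

# Assumed local address of the Anomaly Detector container (the image is
# typically run with its HTTP endpoint exposed on port 5000).
ENDPOINT = "http://localhost:5000/anomalydetector/v1.0/timeseries/entire/detect"

# Twelve daily points (the API's documented minimum), with a spike at the end.
body = {
    "granularity": "daily",
    "series": [
        {"timestamp": f"2024-01-{day:02d}T00:00:00Z",
         "value": 90.0 if day == 12 else 10.0 + day * 0.1}
        for day in range(1, 13)
    ],
}

response = requests.post(ENDPOINT, json=body, timeout=30)
response.raise_for_status()
result = response.json()

# isAnomaly holds one boolean per input point.
for point, flag in zip(body["series"], result["isAnomaly"]):
    print(point["timestamp"], "anomaly" if flag else "ok")
```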
You have an app that manages feedback.
You need to ensure that the app can detect negative comments by using the Sentiment Analysis API in Azure Cognitive Service for Language. The solution must ensure that the managed feedback remains on your company's internal network.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Correct answer:

Explanation:
Provision the Language service resource in Azure.
Deploy a Docker container to an on-premises server.
Run the container and query the prediction endpoint.
According to the Microsoft documentation, the Language service is a cloud-based service that provides natural language processing features such as sentiment analysis, key phrase extraction, and named entity recognition. You can provision the Language service resource in Azure by following the steps in Create a Language resource. You will need to provide a name, a subscription, a resource group, a region, and a pricing tier for your resource. You will also get a key and an endpoint for your resource, which you use to authenticate your requests to the Language service API.
According to the Microsoft documentation, you can also use the Language service as a container on your own premises or in another cloud. This option gives you more control over your data and network, and allows you to use the Language service without an internet connection. You can deploy a Docker container to an on-premises server by following the steps in Deploy Language containers. You will need to have Docker installed on your server, pull the container image from the Microsoft Container Registry, and run the container with the appropriate parameters. You will also need to activate your container with your key and endpoint from your Azure resource.
Once you have deployed and activated your container, you can run it and query the prediction endpoint to get sentiment analysis results. The prediction endpoint is a local URL that follows this format: http://<container IP address>:<port>/text/analytics/v3.1-preview.4/sentiment. You can send HTTP POST requests to this endpoint with your text input in JSON format and receive JSON responses with sentiment labels and scores for each document and sentence in your input.
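A minimal sketch of such a request (the container address is a placeholder; the body and response shapes follow the v3.1-preview sentiment API described above):

```python
import requests

# Placeholder address of the on-premises Language container.
ENDPOINT = "http://localhost:5000/text/analytics/v3.1-preview.4/sentiment"

body = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "The product arrived broken and support was unhelpful."}
    ]
}

response = requests.post(ENDPOINT, json=body, timeout=30)
response.raise_for_status()

for doc in response.json()["documents"]:
    # Overall label plus per-class confidence scores for each document.
    print(doc["sentiment"], doc["confidenceScores"])
```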
You have a custom Azure OpenAI model.
You have the files shown in the following table.

You need to prepare training data for the model by using the OpenAI CLI data preparation tool. Which files can you upload to the tool?
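For reference, the OpenAI CLI data preparation tool (openai tools fine_tunes.prepare_data) accepts CSV, TSV, XLSX, JSON, and JSONL input and converts it into the JSONL prompt/completion format used for fine-tuning. A sketch of writing such a JSONL file (file name and examples are illustrative):

```python
import json

# Hypothetical training examples in prompt/completion form.
examples = [
    {"prompt": "Classify the sentiment: great service ->", "completion": " positive"},
    {"prompt": "Classify the sentiment: slow and rude ->", "completion": " negative"},
]

# JSONL: one JSON object per line, as expected for fine-tuning data.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```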

You are examining the Text Analytics output of an application.
The text analyzed is: "Our tour guide took us up the Space Needle during our trip to Seattle last week." The response contains the data shown in the following table.

Which Text Analytics API is used to analyze the text?

You have an Azure Cognitive Search instance that indexes purchase orders by using Form Recognizer. You need to analyze the extracted information by using Microsoft Power BI. The solution must minimize development effort.
What should you add to the indexer?

You are building a bot that will use Language Understanding.
You have a LUDown file that contains the following content.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation:

Reference:
https://github.com/solliancenet/tech-immersion-data-ai/blob/master/ai-exp1/README.md
You build a conversational bot named bot1.
You need to configure the bot to use a QnA Maker application.
From the Azure Portal, where can you find the information required by bot1 to connect to the QnA Maker application?

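Whatever its location in the portal, the information bot1 needs boils down to the endpoint host, the endpoint key, and the knowledge base ID. A sketch of calling the QnA Maker runtime with those values (all placeholders):

```python
import requests

# Placeholders for the values bot1 needs to connect.
HOST = "https://contoso-qnamaker.azurewebsites.net"
KB_ID = "00000000-0000-0000-0000-000000000000"
ENDPOINT_KEY = "<endpoint-key>"

url = f"{HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
headers = {"Authorization": f"EndpointKey {ENDPOINT_KEY}"}

response = requests.post(
    url, headers=headers,
    json={"question": "How do I reset my password?"},
    timeout=30,
)
response.raise_for_status()

# Each answer comes back with a confidence score.
for answer in response.json()["answers"]:
    print(answer["score"], answer["answer"])
```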
You have a Language Understanding solution that runs in a Docker container.
You download the Language Understanding container image from the Microsoft Container Registry (MCR).
You need to deploy the container image to a host computer.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Correct answer:

Explanation:
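The three actions here are typically: export the LUIS app as a packaged file, run the container with docker run (mounting the package and supplying the billing endpoint and API key), and then query the container's prediction endpoint. A sketch of that final step, assuming the container listens on localhost:5000 and exposes the v3.0 prediction route (the app ID is a placeholder):

```python
import requests

# Placeholder values for a locally running Language Understanding container.
APP_ID = "00000000-0000-0000-0000-000000000000"
URL = f"http://localhost:5000/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"

response = requests.get(URL, params={"query": "Book a flight to Seattle"}, timeout=30)
response.raise_for_status()

# The prediction contains the top-scoring intent and any extracted entities.
prediction = response.json()["prediction"]
print(prediction["topIntent"], prediction["entities"])
```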
Your company needs to implement a relational database in Azure. The solution must minimize ongoing maintenance. Which Azure service should you use?

You are developing the smart e-commerce project.
You need to design the skillset to include the contents of PDFs in searches.
How should you complete the skillset design diagram? To answer, drag the appropriate services to the correct stages. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Correct answer:

Explanation:

Box 1: Azure Blob storage
At the start of the pipeline, you have unstructured text or non-text content (such as images, scanned documents, or JPEG files). Data must exist in an Azure data storage service that can be accessed by an indexer.
Box 2: Computer Vision API
Scenario: Provide users with the ability to search insight gained from the images, manuals, and videos associated with the products.
The Computer Vision Read API is Azure's latest OCR technology. It extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents.
Box 3: Translator API
Scenario: Product descriptions, transcripts, and all text must be available in English, Spanish, and Portuguese.
Box 4: Azure Files
Scenario: Store all raw insight data that was generated, so the data can be processed later.
Reference:
https://docs.microsoft.com/en-us/azure/search/cognitive-search-concept-intro
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr
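As a rough illustration of how these stages connect, a skillset that runs OCR over images extracted from PDFs and then translates the recognized text could be defined through the Azure Cognitive Search REST API. This is a sketch only; the service name, key, field wiring, and skill parameters are assumptions:

```python
import requests

# Placeholder Search service details.
SEARCH_SERVICE = "https://contoso-search.search.windows.net"
API_KEY = "<admin-api-key>"

skillset = {
    "name": "ecommerce-skillset",
    "skills": [
        {   # OCR over images extracted from PDFs and other documents.
            "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
            "context": "/document/normalized_images/*",
            "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
            "outputs": [{"name": "text", "targetName": "ocrText"}],
        },
        {   # Translate the recognized text to English.
            "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
            "context": "/document/normalized_images/*",
            "defaultToLanguageCode": "en",
            "inputs": [{"name": "text", "source": "/document/normalized_images/*/ocrText"}],
            "outputs": [{"name": "translatedText"}],
        },
    ],
}

response = requests.put(
    f"{SEARCH_SERVICE}/skillsets/ecommerce-skillset",
    params={"api-version": "2020-06-30"},
    headers={"api-key": API_KEY},
    json=skillset,
)
response.raise_for_status()
```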
You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.
You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.
You need to optimize the content filter configurations by running tests on sample questions.
Solution: From Content Safety Studio, you use the Monitor online activity feature to run the tests. Does this meet the requirement?
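For context on the CS1 side, screening a piece of chatbot input or output programmatically could look like the following sketch, which assumes the azure-ai-contentsafety Python SDK and placeholder resource details:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for CS1.
client = ContentSafetyClient(
    endpoint="https://cs1.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<content-safety-key>"),
)

result = client.analyze_text(AnalyzeTextOptions(text="Sample chatbot output to screen."))

# Each category (hate, sexual, violence, self-harm) gets a severity score.
for item in result.categories_analysis:
    print(item.category, item.severity)
```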

You need to recommend a non-relational data store that is optimized for storing and retrieving text files, videos, audio streams, and virtual disk images. The data store must store data, some metadata, and a unique ID for each file. Which type of data store should you recommend?
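If the recommendation lands on an object store such as Azure Blob storage, the pattern of data plus metadata plus a unique ID maps naturally onto blobs. A sketch using the azure-storage-blob Python SDK (connection string, container name, and metadata are placeholders):

```python
import uuid

from azure.storage.blob import BlobServiceClient

# Placeholder connection details.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")

# The blob name acts as the unique ID for the stored file.
blob_name = f"videos/{uuid.uuid4()}.mp4"
blob_client = service.get_blob_client(container="media", blob=blob_name)

# Upload the file along with user-defined metadata.
with open("clip.mp4", "rb") as data:
    blob_client.upload_blob(data, metadata={"source": "camera-01", "durationSeconds": "42"})

print("Stored as", blob_name)
```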