1z0-1067-24 Free Exam Questions: Oracle Cloud Infrastructure 2024 Cloud Operations Professional Certification
SIMULATION
Scenario 3: Use the OCI CLI to Work with Object Storage from a Compute Instance
Scenario Description (Hands-On Performance Exam Certification):
Your company runs a web application in OCI that generates log files. You want to upload these files to OCI Object Storage to meet data retention requirements. Some files need to be retained indefinitely, whereas others can be deleted after 30 days. Use the OCI CLI to create a bucket, upload the log directory, and create a lifecycle policy rule that deletes temporary files after 30 days.
Pre-Configuration:
To fulfill this requirement, you are provided with the following:
Access to an OCI tenancy, an assigned compartment, and OCI credentials
A compute instance with the OCI CLI installed and a set of files in ~/dir_to_upload to use
Access to the OCI Console
Required IAM policies
Assumptions:
Perform the tasks by using the OCI CLI on the compute instance.
Use instance principal authentication for all CLI commands; the instance has been granted the necessary policies.
Connect to the compute instance using Cloud Shell's private networking and the provided SSH key.
An SSH key pair has been provided to you for the compute instance.
Private Key: https://objectstorage.us-ashburn-1.oraclecloud.com/n/tenancyname/b/PBT_Storage/o/PKey.key
Note: Throughout your exam, be sure to use the assigned compartment, user name, and region.
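If you need to pull the provided private key into Cloud Shell first, a minimal sketch is shown below (this assumes the object at the URL above is reachable without extra authentication, for example via a pre-authenticated request):
curl -o PKey.key "https://objectstorage.us-ashburn-1.oraclecloud.com/n/tenancyname/b/PBT_Storage/o/PKey.key"
chmod 400 PKey.key
SSH requires the key file to be readable only by its owner, hence the chmod 400.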
Complete the following tasks in the provisioned OCI environment:
Task 1: Create a Bucket in Object Storage
Task 2: Upload a Directory's Contents to Object Storage
Task 3: Add a Lifecycle Policy to the Bucket
Correct answer:
See the solution below with a step-by-step explanation.
Explanation:
Task 1: Create a Bucket in Object Storage
Create a bucket named CloudOpsBucket_<user id> with the following properties:
Storage tier: Standard
Auto-tiering: Disabled
Object versioning: Enabled
Emit events: Disabled
Keys: Oracle-managed
Visibility: Private
Task 2: Upload a Directory's Contents to Object Storage
Upload the contents of the directory ~/dir_to_upload and its subdirectories to the bucket CloudOpsBucket.
Task 3: Add a Lifecycle Policy to the Bucket
Create a lifecycle policy rule that deletes all files from ~/dir_to_upload/temp after 30 days.
Task 1: Create a bucket in Object Storage
1. Open Cloud Shell in the console. Under Network along the top, select Ephemeral Private Network Setup.
2. Select the subnet of the compute instance.
3. SSH into the compute instance using the provided SSH key:
ssh -i /path/to/key opc@<private_ip>
4. In the compute instance, create the bucket with the following command (note that it's one long line):
oci os bucket create -c "<compartment_id>" --name "CloudOpsBucket" --auth instance_principal --versioning 'Enabled'
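To confirm that the bucket exists (an optional check, not part of the graded steps), you can fetch it back; this sketch assumes the same bucket name used above:
oci os bucket get --bucket-name "CloudOpsBucket" --auth instance_principal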
Task 2: Upload a directory's contents to Object Storage
1. Upload the contents of the specified directory and its subdirectories with the following command (note that it's one long line):
oci os object bulk-upload -bn "CloudOpsBucket" --src-dir "~/dir_to_upload" --auth instance_principal
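To verify the upload (another optional check, assuming the same bucket name), list the objects now in the bucket:
oci os object list --bucket-name "CloudOpsBucket" --auth instance_principal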
Task 3: Add a lifecycle policy to the bucket
1. Create a file named rule.json.
2. Add the following content to rule.json:
{
  "items": [
    {
      "action": "DELETE",
      "is-enabled": true,
      "name": "Delete-Rule",
      "object-name-filter": {
        "exclusion-patterns": null,
        "inclusion-patterns": null,
        "inclusion-prefixes": ["temp/"]
      },
      "target": "objects",
      "time-amount": 30,
      "time-unit": "DAYS"
    }
  ]
}
3. Add the lifecycle policy rule with the following command:
oci os object-lifecycle-policy put -bn "CloudOpsBucket" --from-json file://rule.json --auth instance_principal
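To confirm the rule was applied (optional), read the policy back:
oci os object-lifecycle-policy get --bucket-name "CloudOpsBucket" --auth instance_principal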
Your deployment platform within Oracle Cloud Infrastructure (OCI) leverages a compute instance with multiple block volumes attached. Multiple teams use the same compute instance and have access to these block volumes. You want to ensure that no one accidentally deletes any of these block volumes. You have started to construct the following IAM policy but need to determine which permissions should be used.
allow group DeploymentUsers to manage volume-family where ANY { request.permission != <???>, request.permission != <???>, request.permission != <???> }
Which permissions can you use in place of <???> in this policy? (Choose the best answer.)
Correct answer: C
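The answer options are not reproduced on this page. For illustration only, the delete-related permissions documented for the volume-family include VOLUME_DELETE, VOLUME_BACKUP_DELETE, and VOLUME_ATTACHMENT_DELETE, which would complete the statement along these lines (a sketch following the question's own syntax, not a confirmed answer key):
allow group DeploymentUsers to manage volume-family where ANY { request.permission != 'VOLUME_DELETE', request.permission != 'VOLUME_BACKUP_DELETE', request.permission != 'VOLUME_ATTACHMENT_DELETE' }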
You are using Oracle Cloud Infrastructure (OCI) services across several regions: us-phoenix-1, us-ashburn-1, uk-london-1, and ap-tokyo-1. You have created a separate administrator group for each region: PHX-Admins, ASH-Admins, LHR-Admins, and NRT-Admins, respectively. You want to restrict admin access to a specific region. For example, PHX-Admins should be able to manage all resources in the us-phoenix-1 region only and not in any other OCI region. What IAM policy syntax is required to restrict PHX-Admins to manage OCI resources in the us-phoenix-1 region only? (Choose the best answer.)
Correct answer: D
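The options are not shown here. For illustration, OCI policy conditions can test the request.region variable against a three-letter region key, so a region-scoped admin policy could look like this (a sketch using the documented condition syntax; the verb and scope are assumptions):
Allow group PHX-Admins to manage all-resources in tenancy where request.region = 'phx'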
You are asked to implement the disaster recovery (DR) and business continuity requirements for Oracle Cloud Infrastructure (OCI) Block Volumes. Two OCI regions are being used: a primary/source region and a DR/destination region. The requirements are:
* There should be a copy of the data in the destination region to use if a region-wide disaster occurs in the source region
* Minimize costs
Which design will help you meet these requirements? (Choose the best answer.)
Correct answer: A
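The answer choices are not reproduced here. A common low-cost pattern is to take volume backups in the source region and copy them to the DR region, rather than keeping live replicas. For illustration only, the OCI CLI supports cross-region backup copy (the backup OCID and destination region below are placeholders):
oci bv backup copy --volume-backup-id <source_backup_ocid> --destination-region us-ashburn-1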
You set up a bastion host in your Virtual Cloud Network (VCN) to allow only your IP address (140.19.2.140) to establish SSH connections with your compute instances that are deployed in a private subnet. The compute instances have an attached Network Security Group (NSG) with a Source Type: Network Security Group (NSG), Source NSG: NSG-050504. To secure the bastion host, you add the following ingress rules to its NSG:
Type: All TCP Protocol: TCP Port Range: 22 Source: 140.19.2.140/32
Type: All TCP Protocol: TCP Port Range: 22 Source: NSG-050504
However, when you check the bastion host logs, you discover that IP addresses other than your own can access your bastion host. What is the root cause of this issue?
Correct answer: D
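The options are not shown here. For illustration, an SSH-only ingress rule scoped to a single CIDR could be added to the bastion's NSG with the OCI CLI, as sketched below (the NSG OCID is a placeholder, and the JSON keys are assumed to follow the CLI's camelCase model format):
oci network nsg rules add --nsg-id <bastion_nsg_ocid> --security-rules '[{"direction": "INGRESS", "protocol": "6", "source": "140.19.2.140/32", "sourceType": "CIDR_BLOCK", "tcpOptions": {"destinationPortRange": {"min": 22, "max": 22}}}]'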
You launched a Linux compute instance to host the new version of your company website via an Apache HTTP Server on HTTPS (port 443). The instance is created in a public subnet along with other instances. The default security list associated with the subnet is:
Correct answer: C
Explanation: (visible to GoShiken members only)
You have created an Autonomous Data Warehouse (ADW) service in your company's Oracle Cloud Infrastructure (OCI) tenancy, and you now have to load historical data into it. You have already extracted this historical data from multiple data marts and data warehouses. This data is stored in multiple CSV text files, which range in size from 25 MB to 20 GB. Which is the most efficient and error-tolerant method for loading data into ADW? (Choose the best answer.)
Correct answer: A