Tuesday, December 21, 2021

Backing up Oracle Key Vault from your datacenter to OCI

  This is the second of a multi-part blog series walking through how to copy your TDE-encrypted on-premises Oracle Database to an OCI VM in the Oracle Cloud. This blog post focuses on how to leverage OKV (Oracle Key Vault) to store, back up, and migrate encryption keys. In this post I will walk through backing up OKV to both a local ZFSSA and an OCI bucket.

The first part of this series went through how to migrate a database from on premises to the OCI cloud using the Oracle Database Cloud Backup Module. You can find it here.

I will build on that first post by walking through how to migrate my OKV (Oracle Key Vault) environment to OCI, so that I can restore my encrypted database in OCI using the keys stored in OKV.

I am going to skip over how to migrate my database to using OKV. If you are starting at the beginning (a small database with no encryption), the steps to get to this blog post are:

  1. Create a new database for testing.
  2. Implement Advanced Security (TDE) which is covered in my post here.
  3. Migrating from a local wallet to OKV which is covered in my post here.
At this point my database (ocitest) is using my OKV environment on premises, and I am ready to back up and restore my OKV host along with my database.

Backup the database to an OCI bucket


First I am going to back my database up to an OCI bucket.

I am going to set my channels and perform a level 0 backup with archive logs.

NOTE: It is encrypted using the encryption key from OKV, rather than a password.


### Default device is tape
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';


### Ensure autobackups are sent to my bucket
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default


### Backup set is a compressed backupset (this is included for free with the backup module)
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

### Channel configuration
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';

### Encryption is on
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default

### Compression is medium
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
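With the channels and encryption configured, the backup itself is a single RMAN command. A minimal sketch (no password is set here, since the backup is encrypted with the master key from OKV):

### Level 0 backup of the database plus archive logs
backup incremental level 0 database plus archivelog not backed up;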


Configure ZFSSA as a destination

Step # 1 Add a dedicated user on the ZFSSA to own the backups.

Log onto the ZFSSA console and go to "Configuration" => "Users". Add a new user to be the owner of the OKV backups on the ZFSSA.  Click add when completed.



Step # 2 Retrieve the public SSH key from OKV.

Log onto the OKV console and go to "System" => "Settings" => "Backup and Restore". Click on "Backup and Restore" and then "Manage Backup Destinations".  Once there, click on "Create" to add a new backup destination.

On the screen below you want to capture the "Public Key", which is the string AFTER the "ssh-rsa" prefix.  You can save this in Notepad or some sort of scratch pad, without the leading "ssh-rsa" text.


Step # 3 Add the user to the ZFSSA with the public key

Now go back to the ZFSSA console and go to the configuration for SFTP. This can be found under "SERVICES" => "SFTP". Click on "SFTP", and you will see the screen in the background below.  Click on the "+" to the left of "Keys".  In the window that pops up, enter the "Public Key" characters you previously saved and the "user" that you created as the owner of the OKV backups. Once you complete this, click on "ADD" to add the OKV public key to the ZFSSA.



Step # 4 Add a new Project/Share to hold the OKV backups

Add a new project to hold the backups.  Within the project, navigate to the "General" tab, go to the bottom of the window, and change the "Default Settings".  For this project, the "user" should be the user you created on the ZFSSA. This ensures that the OKV backups are separate from other backups on the ZFSSA and are only accessible by the new user.



Then set the protocol for the project to be SFTP as the only read/write protocol on the "Protocols" tab.


Navigate to the "Snapshots" tab and we will now create 3 immutable snapshots taken every day.
Ensure you click on "Enable retention policy for Scheduled Snapshots"
Under the Snapshots section, click on the “Schedules” tab and click on the “+” next to it.
Change the desired frequency of the snapshot to daily for a daily backup that matches the OKV backup.
Change the “scheduled time” to a time of day following the daily backup.
Decide how many backups in total you wish to keep. This is the “KEEP AT MOST”.
Change the “RETENTION” to “Locked” with the drop down to ensure the backups will be immutable:
Decide how many backups you wish to keep as immutable. This is the “RETENTION”.
Click on “Apply”.


Then add a new share to the project to hold the OKV backups.

Step # 5 Add the ZFSSA as a destination.

Go back to OKV and navigate back to "Create Backup Destination" under "System".
On the "Create Backup Destination" page:
  • Give the "Destination Name" the name you want to use for the ZFSSA.
  • Change "Transfer Method" to "sftp" using the radio button.
  • Enter the "Hostname" for the ZFSSA. This can be either the IP or the DNS name.
  • Under "Port", ensure the port matches the ZFSSA port used for "SFTP", which defaults to 218.
  • Enter the "Destination Path", which is "/export/" followed by the share name created in step 4.
  • Enter the "User Name", which is the user created in step 1 and the owner of the share from step 4.
  • Click on "Save".


Backup OKV to ZFSSA

On the "Backup" screen:
  • Give your backup a descriptive name.
  • Leave the start time (or change it to the time you want the backup to run).
  • Choose the destination created in step 5.
  • Change the dial to "PERIODIC" to schedule a regular backup.
  • Choose the frequency for the backup.
  • Click on "Schedule".


Once the first backup completes you will see it on this "Backup and Restore" window.


Backup ZFSSA to OCI

 

Now that we have our backup sent to the ZFSSA, we need to configure the ZFSSA to send the backup to an OCI bucket.  Navigate to "SERVICES" => "Cloud" on the ZFSSA, and click on the "+" sign to the left of "Targets" to add a new cloud target.  In the window that pops up, enter the authentication information for your cloud bucket in OCI (the bucket should be set as immutable).

In the "Add Cloud Target" window enter:
  • The name of the cloud target. If you are setting up multiple targets to different buckets, including the bucket name in the target name is the most descriptive choice.
  • The location, which is https://objectstorage.{cloud location for your tenancy and bucket}.oraclecloud.com
  • The bucket name from the previous step.
  • The "User", which is the user OCID from the previous steps.
  • The "Tenancy", which is the tenancy OCID from the previous steps.
  • The "Private key" associated with the public key assigned in OCI.
  • Any proxy and bandwidth information, if needed.
  • Click on "ADD".


Navigate to your project and go to the "Snapshots" tab. You should see the snapshots that have been created; click on the symbol under "Clones" that looks like a globe.


Once there, choose the target you previously created, set the format to "tar", and click on "APPLY". This will send a copy of your OKV backup (which is encrypted) to your bucket in OCI as an offsite backup.




Monday, December 20, 2021

Backing up your database to a bucket in OCI and restoring it in OCI

 This is the first of a multi-part blog series walking through how to copy your TDE-encrypted on-premises Oracle Database to an OCI VM in the Oracle Cloud using the Oracle Database Backup Cloud Service.


I am going to start with a simple test case of a small database which doesn't contain any TDE encryption or wallet, and back it up to an OCI bucket.

As far as where to start, below are some documentation links that will help.


NOTE: You will be downloading and installing the library files on both the source database host and the destination database host.

Install Database backup module

The first thing I am going to do is unzip the Cloud Backup Module (opc_installer.zip).  This can be downloaded using the link above, but it can also be found within the $ORACLE_HOME/lib directory.  As always, I would recommend downloading the current copy to be sure it is the latest release.   Once unzipped you will find that the module contains a directory "opc_installer".  Within "opc_installer" there are 2 subdirectories, each containing a ".jar" file to install the library and a readme file.

    oci_installer/          ---> OCI (Oracle Cloud Native) library install
        oci_install.jar
        oci_readme.txt
    opc_installer/          ---> OPC (Oracle Cloud Gen 1/Swift) library install
        opc_install.jar
        opc_readme.txt

I am going to use "oci_install.jar" file and access the bucket using the Oracle Cloud Native API.

If I look in the "readme" file, I can see that I install the library using the following parameters.


I am going to install my files within a new directory for my Database host.

/home/oracle/ocicloud/
    config/
    lib/
    wallet/

To install and configure my library I am going to execute:

java -jar oci_install.jar \
    -host https://objectstorage.us-ashburn-1.oraclecloud.com \
    -pvtkeyFile /home/oracle/ocicloud/myprivatekey.ppk \
    -pubFingerPrint 6d:f9:57:d5:ff:b1:c0:98:81:90:1e:6e:08:0f:d0:69 \
    -tOCID ocid1.tenancy.oc1..aaaxxxnoq \
    -uOCID ocid1.user.oc1..aaaaaaaae2mlwyke4gvxxsaouxq \
    -bucket migest_backups \
    -walletDir /home/oracle/ocicloud/wallet \
    -configFile /home/oracle/ocicloud/config/migtestbackup.ora \
    -cOCID ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igzd3tyq \
    -libDir /home/oracle/ocicloud/lib

Oracle Database Cloud Backup Module Install Tool, build MAIN_2021-08-31
Oracle Database Cloud Backup Module credentials are valid.
Backups would be sent to bucket migest_backups.
Oracle Database Cloud Backup Module wallet created in directory /home/oracle/ocicloud/wallet.
Oracle Database Cloud Backup Module initialization file /home/oracle/ocicloud/config/migtestbackup.ora created.
Downloading Oracle Database Cloud Backup Module Software Library from Oracle Cloud Infrastructure.
Download complete.

Now that it is successfully installed, we can move on to configuring the module.

Configure Database backup module

Running the command below, let's see what is in my directory now.

find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"

.
 |-lib
 | |-bulkimport.pl
 | |-libopc.so
 | |-metadata.xml
 | |-odbsrmt.py
 | |-perl_readme.txt
 | |-python_readme.txt
 |-config
 | |-migtestbackup.ora
 |-wallet
 | |-cwallet.sso.lck
 | |-cwallet.sso
 |-oci_install.jar
 |-myprivatekey.ppk

Looking at the configuration file created you can see the information used to connect to the bucket in the OCI Object store.

OPC_HOST=https://objectstorage.us-ashburn-1.oraclecloud.com/n/id20avsofo
OPC_WALLET='LOCATION=file:/home/oracle/ocicloud/wallet CREDENTIAL_ALIAS=alias_oci'
OPC_CONTAINER=migest_backups
OPC_COMPARTMENT_ID=ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7xuiijael2fwcpqyvzzb4ykd3tyq
OPC_AUTH_SCHEME=BMC


Now we can create the channel configuration to send backups to the OCI bucket.
The CONFIGURE CHANNEL command is executed like the template below, filling in the library location and configuration file:

CONFIGURE CHANNEL DEVICE TYPE 
        'SBT_TAPE' PARMS 
         'SBT_LIBRARY={library name and location},
                SBT_PARMS=(OPC_PFILE=/{configuration file})';

Below are the commands I am going to execute in RMAN to configure my channel and settings to backup my database.




## Default device type is tape
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE'; 

## Backup using the library and config file we just installed
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';

## Backup with 4 channels to a compressed backupset
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

## Use medium compression since this is included in the license for the module.
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';

## Encrypt the backup being sent, this is mandatory for writing to the cloud.
CONFIGURE ENCRYPTION FOR DATABASE ON;



Backup Database to an OCI bucket

Set a password to encrypt the backup (it must be encrypted to send to a bucket) and perform a full backup.



set encryption identified by oracle only;
backup incremental level 0 database plus archivelog not backed up;




This will send the backup to the object store.

Configure Database backup module in OCI.

I am going to go through the same series of steps to install the Oracle Database Cloud Backup Module on my OCI instance.

Oracle Database Cloud Backup Module Install Tool, build MAIN_2021-08-31
Oracle Database Cloud Backup Module credentials are valid.
Backups would be sent to bucket migest_backups.
Oracle Database Cloud Backup Module wallet created in directory /home/oracle/ocicloud/wallet.
Oracle Database Cloud Backup Module initialization file /home/oracle/ocicloud/config/migtestbackup.ora created.
Downloading Oracle Database Cloud Backup Module Software Library from Oracle Cloud Infrastructure.
Download complete.

Configure pfile for database in OCI.


I now need to configure my database pfile in OCI. I just need a few basic parameters:

*.audit_file_dest='/u01/app/oracle/admin/migtest/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.control_files='/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile1.ctl','/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile2.ctl'
*.db_block_size=8192
*.db_create_file_dest='/u01/app/oracle/oradata'
*.db_name='migtest'
*.processes=300
*.sga_target=4638m


Restore pfile and controlfile for database in OCI

There are a few steps to get ready to restore the spfile and controlfile (sketched below):
  • I add my database to "/etc/oratab" to ensure I can connect to it, and use ". oraenv" to set the environment.
  • I then start up the database nomount.
  • I go back to the original database to retrieve the DBID.
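A minimal sketch of those preparation steps, assuming ORACLE_SID is migtest and a hypothetical ORACLE_HOME and pfile location (adjust to your environment):

### Add the database to /etc/oratab and set the environment
### (the ORACLE_HOME path below is hypothetical - adjust to your environment)
echo "migtest:/u01/app/oracle/product/19.0.0/dbhome_1:N" >> /etc/oratab
. oraenv

### Start the instance nomount using the pfile created above (hypothetical location)
sqlplus / as sysdba
SQL> startup nomount pfile='/home/oracle/initmigtest.ora'

### On the source (on-premises) database, retrieve the DBID
SQL> select dbid from v$database;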

Now I am ready to restore the spfile (note that I am setting the password to decrypt the backups).


In RMAN I restore the spfile
set decryption identified by oracle;

 run {
 allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';
 restore spfile from autobackup ;
 release channel c1;
 }
rman target /
RMAN> set decryption identified by oracle;

executing command: SET decryption
 run {
 allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';
 restore spfile from autobackup ;
 release channel c1;
 }


RMAN> 2> 3> 4> 5>
allocated channel: c1
channel c1: SID=20 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=21.0.0.1

Starting restore at 20-DEC-21

channel c1: looking for AUTOBACKUP on day: 20211220
channel c1: AUTOBACKUP found: c-286701374-20211220-00
channel c1: restoring spfile from AUTOBACKUP c-286701374-20211220-00
channel c1: SPFILE restore from AUTOBACKUP complete
Finished restore at 20-DEC-21


Then I restore the controlfile.


set decryption identified by oracle;

 run {
 allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';
 restore controlfile from autobackup ;
 release channel c1;
 }

RMAN>
executing command: SET decryption
using target database control file instead of recovery catalog


RMAN> 2> 3> 4> 5>
allocated channel: c1
channel c1: SID=17 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=21.0.0.1

Starting restore at 20-DEC-21

channel c1: looking for AUTOBACKUP on day: 20211220
channel c1: AUTOBACKUP found: c-286701374-20211220-00
channel c1: restoring control file from AUTOBACKUP c-286701374-20211220-00
channel c1: control file restore from AUTOBACKUP complete
output file name=/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile1.ctl
output file name=/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile2.ctl
Finished restore at 20-DEC-21

released channel: c1



Now I can mount the database.
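As a one-line sketch, from the same RMAN session:

RMAN> alter database mount;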


Restore the datafiles for the database in OCI


The datafile locations in OCI are different from on premises:

My on-premises database:  "/home/oracle/app/oracle/oradata/"
My OCI database:  "/u01/app/oracle/oradata/"

I am going to create a script that generates the SET NEWNAME commands for the datafiles I am restoring.




set linesize 160
set pagesize 0

SELECT REPLACE(file_name,'/home/oracle/app/oracle/oradata/','/u01/app/oracle/oradata/') "Changes"
     FROM (select
           'set newname for datafile ' || file# || ' to ' || '''' || name || '''' || ';' file_name
           from v$datafile
           )
;

This will create the script that sets the "new name" for my datafiles.
Since SET NEWNAME only takes effect within a run block, I will include the generated commands in the same RMAN run block as the restore below.
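For illustration, a generated SET NEWNAME command for a hypothetical SYSTEM datafile would look something like this (the file number and name are illustrative):

set newname for datafile 1 to '/u01/app/oracle/oradata/MIGTEST/datafile/system01.dbf';   # hypothetical datafile name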

Now I configure the channels just like I did for my on-premises database (unless they are already set from the restored controlfile).



## Default device type is tape
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE'; 

## Backup using the library and config file we just installed
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';

## Backup with 4 channels to a compressed backupset
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

## Use medium compression since this is included in the license for the module.
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';

## Encrypt the backup being sent, this is mandatory for writing to the cloud.
CONFIGURE ENCRYPTION FOR DATABASE ON;


Now we can restore the database, switch to the new copy of the datafiles, and recover it. The generated SET NEWNAME commands go at the top of this same run block.


run {
  set newname ....
  restore database;
  switch datafile all;
  recover database;
    }


And finally (if we want to start it up) open it with resetlogs.

RMAN> alter database open resetlogs;

Statement processed

RMAN>


That's all there is to it.

Tuesday, November 16, 2021

ZFSSA now offers immutable snapshots

 The latest ZFSSA software release (as of this post) is OS8.8.39

 This release contains the ability to make both scheduled snapshots and manual snapshots immutable, and I will go through how this works in this post.



New Authorizations

By default, non-root users are not authorized to create scheduled locked snapshots or manual locked snapshots, and you will see the message below.




There are 3 new authorizations added to support snapshot immutability.  The authorizations are:

  • releaseSnapRetention - This allows the role to release a snapshot from its retention hold
  • scheduleLockedSnap - This allows the role to schedule a locked snapshot
  • retainSnap - This allows the role to create a manual locked snapshot

In order to show how this works I created a new role "Security_Admin" and granted this role the new authorizations.

You can see that the "Security_Admin" role has releaseSnapRetention, scheduleLockedSnap and retainSnap authorizations which reside under the "Projects and shares" scope.





I then added the new role "Security_Admin" to my administration user.  This limits who has the authority to create and change the status on the immutable snapshots.


Create a Manual Locked Snapshot (BUI) 

First I am going to create a manual locked snapshot.  Below is the window that appears when I click on the "+" to create the snapshot.
Notice below the name there is a new field "Retention policy". This can be either:
  • Off - There is no retention hold on this snapshot (normal behavior)
  • Unlocked - There is a retention hold on this snapshot, but an authorized user can still release it
I am going to create my manual snapshot with an "Unlocked" retention policy.



Change the retention setting of a Snapshot (BUI) 

Once I create the manual snapshot, I can see that it has an "Unlocked" retention when I click on the edit button.  Here I can update the snapshot and turn the retention policy to "Off" to unlock the snapshot when I am ready to delete it. I can also change a snapshot that has no retention policy so that it has one.




Create a Manual Locked Snapshot (CLI) 

1) Navigate to the share or project you want to snapshot.

zfssim:shares NFSbackups> select NFS_immutable
zfssim:shares NFSbackups/NFS_immutable>

 
 2) Enter snapshots
zfssim:shares NFSbackups/NFS_immutable> snapshots
zfssim:shares NFSbackups/NFS_immutable snapshots>


3) Use the snapshot command followed by a "-r" to set the retention lock, and set the new snapshot name

zfssim:shares NFSbackups/NFS_immutable snapshots> snapshot -r Save_until_Jan_1_2022
zfssim:shares NFSbackups/NFS_immutable snapshots>

4) You can use the list command to see the snapshot, and then select the snapshot
zfssim:shares NFSbackups/NFS_immutable snapshots> select Save_until_Jan_1_2022
zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022>

5) The "show" command will display the settings for the snapshot, and you will see that has a retentionpolicy of "unlocked"

zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> show
Properties:
                      creation = Tue Nov 16 2021 20:35:25 GMT+0000 (UTC)
                     numclones = 0
                        isauto = false
               retentionpolicy = unlocked
                          pool = generalpool1
                canonical_name = generalpool1/local/NFSbackups/NFS_immutable@Save_until_Jan_1_2022
                    shadowsnap = false
                  space_unique = 0
                    space_data = 31K




Change the retention setting of a Snapshot (CLI) 

Continuing from the previous set of commands, the "show" command lets me see the status of the retention lock.
Using "set retentionpolicy={off | unlocked}" you can change the status of a snapshot.

Below is the example where I turned the retention policy off for the snapshot I took in the prior example.


zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> get retentionpolicy
               retentionpolicy = unlocked
zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> set retentionpolicy=off
               retentionpolicy = off (uncommitted)
zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> get retentionpolicy
               retentionpolicy = off (uncommitted)
zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> commit
zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> show
Properties:
                      creation = Tue Nov 16 2021 20:35:25 GMT+0000 (UTC)
                     numclones = 0
                        isauto = false
               retentionpolicy = off
                          pool = generalpool1
                canonical_name = generalpool1/local/NFSbackups/NFS_immutable@Save_until_Jan_1_2022
                    shadowsnap = false
                  space_unique = 0
                    space_data = 31K

Children:
                          backups => Configure Cloud Backups
                          targets => List snapshot parents per target

zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022>



Deleting a Manual Locked Snapshot

BUI 

When you delete a manual snapshot that has a retention policy, you will receive an error screen once you click through the "are you sure" message. Below is the message that will appear if the snapshot still has a retention lock.



In order to allow the snapshot to be deleted, you need to edit the snapshot, and set the retention to "Off".  Once you remove the retention lock the snapshot can be deleted.

CLI

You will receive an error when trying to delete the snapshot. You must release the lock or (in the case of a scheduled snapshot) wait for it to roll off.
zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> destroy
This will destroy all data in "Save_until_Jan_1_2022"! Are you sure? (Y/N) y
error: The action could not be completed because the target 'NFSbackups/NFS_immutable@Save_until_Jan_1_2022' is in use. It cannot be modified while it, or its children, are actively changing. Make sure no other users are editing the
       share configuration and try again. If this problem persists, contact your service provider.




Enable Scheduled Locked Snapshots (BUI)

The next step is to enable scheduled locked snapshots. You will notice (highlighted below) that there is a new option to enable the retention policy for locked scheduled snapshots under the project and share.

shares --> share/project --> snapshots




Create Scheduled Locked Snapshots (BUI)

To create a scheduled snapshot that is locked, you will notice there are additional fields on the scheduling popup.  You have the ability to schedule a snapshot with retention either "Off" or "Locked".  When scheduled with "Locked", you must also decide how many of the "kept" snapshots will be locked. Below I am scheduling snapshots every half hour.  5 snapshots will be kept, and the most recent 3 snapshots will be locked (since I chose locked).


Viewing retention status of scheduled snapshots (BUI)


Using the schedule from above (5 snapshots, 3 of which are locked), below is what I am seeing after it has been executing for a while.  I chose one of the 3 most recent snapshots, and I can see that it has a status of "locked" and I am unable to change that status.






Deleting locked scheduled snapshots (BUI) - Not allowed

Now I am going to try to delete the schedule that contains locked snapshots.  I click on the delete button and hit apply, and I get a message saying the snapshots will be converted to manual snapshots.


I click on "CONVERT" but it won't let me convert them to manual snapshots.


If I try to change the schedule to have the snapshots no longer be "Locked", I get the same message.



Enable Scheduled Locked Snapshots (CLI)

I navigated through the CLI and got to the share that I wanted to create a scheduled, locked snapshot for.  I first need to make sure the property "snapret_enabled" is set. In my case it wasn't, so I set the value and committed the change.


zfssim:shares NFSbackups/nfstest> get snapret_enabled
               snapret_enabled = false (inherited)
zfssim:shares NFSbackups/nfstest> set snapret_enabled=true
               snapret_enabled = true (uncommitted)
zfssim:shares NFSbackups/nfstest> commit
zfssim:shares NFSbackups/nfstest> get snapret_enabled
               snapret_enabled = true
zfssim:shares NFSbackups/nfstest>



Create Scheduled Locked Snapshots (CLI)

Navigate to the share --> snapshots --> automatic

Once there create a new snapshot, and set the properties for the snapshot.
In order to make this a locked snapshot, you need to set the property "retentionpolicy" to "locked".

Below are the steps I followed to create a daily snapshot, kept for 35 days, and immutable for 30 days.



 zfssim:shares NFSbackups/nfstest>
zfssim:shares NFSbackups/nfstest> snapshots
zfssim:shares NFSbackups/nfstest snapshots> automatic
zfssim:shares NFSbackups/nfstest snapshots automatic> create
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> show
Properties:
                     frequency = (unset)
                           day = (unset)
                          hour = (unset)
                        minute = (unset)
                          keep = 0
                 retentionhold = 0
               retentionpolicy = off

zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set frequency=day
                     frequency = day (uncommitted)
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set hour=06
                          hour = 06 (uncommitted)
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set minute=00
                        minute = 00 (uncommitted)
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set keep=35
                          keep = 35 (uncommitted)
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set retentionhold=30
                 retentionhold = 30 (uncommitted)
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set retentionpolicy=locked
               retentionpolicy = locked (uncommitted)
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> show
Properties:
                     frequency = day (uncommitted)
                           day = (unset)
                          hour = 06 (uncommitted)
                        minute = 00 (uncommitted)
                          keep = 35 (uncommitted)
                 retentionhold = 30 (uncommitted)
               retentionpolicy = locked (uncommitted)

zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)>
zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> commit
zfssim:shares NFSbackups/nfstest snapshots automatic> show
Properties:
                       convert = false

Automatics:

NAME           FREQUENCY      DAY             HH:MM KEEP
automatic-000  day            -               06:00   35


 


Viewing retention status of scheduled snapshots (CLI)


Below I listed the snapshots that were automatically created. I can see that the snapshot I chose has a "retentionpolicy" of "locked", and this lock will be released according to the schedule.


zfssim:shares NFSbackups/zfsshare> snapshots
zfssim:shares NFSbackups/zfsshare snapshots> list
.auto-Bihourly_snapshots-20211116T193000UTC
.auto-Bihourly_snapshots-20211116T200000UTC
.auto-Bihourly_snapshots-20211116T203000UTC
.auto-Bihourly_snapshots-20211116T210000UTC
.auto-Bihourly_snapshots-20211116T213000UTC
zfssim:shares NFSbackups/zfsshare snapshots> select .auto-Bihourly_snapshots-20211116T203000UTC
zfssim:shares NFSbackups/zfsshare@.auto-Bihourly_snapshots-20211116T203000UTC> show
Properties:
                      creation = Tue Nov 16 2021 20:30:00 GMT+0000 (UTC)
                     numclones = 0
                        isauto = true
               retentionpolicy = locked
                          pool = generalpool1
                canonical_name = generalpool1/local/NFSbackups/zfsshare@.auto-Bihourly_snapshots-20211116T203000UTC
                    shadowsnap = false
                  space_unique = 0
                    space_data = 1.22G

Children:
                          backups => Configure Cloud Backups
                          targets => List snapshot parents per target

zfssim:shares NFSbackups/zfsshare@.auto-Bihourly_snapshots-20211116T203000UTC>




BONUS:


In the audit logs you can see the changes occur, and who made them. I highlighted where I changed the status of one of the snapshots from Unlocked to Off and from Off to Unlocked.





Sunday, October 17, 2021

ZFS now supports Object Store Pre-Authenticated Requests

ZFS now supports Pre-Authenticated requests which can be useful for loading data into your data warehouse.


Version OS8.8.36 of the ZFS software was recently released, and one of the most interesting features is the support for Pre-Authenticated Requests. 

Here is the note "On-premise Object Storage Best Practices and Recommended Use Cases". This Document outlines how to use the new Object Store features, some of which I will cover in future posts.

Here is my post on configuring ZFS as an object store. It is necessary to configure ZFS as an object store if you want to do the same testing.

Here is my post on configuring your database to access ZFS as an object store.  This document walks through how to configure DBMS_CLOUD in an 19c+ database.

By going through these notes you can reach the same point that I am with my sample database and ZFS.  Below is the environment that I will be using for this demo.

Environment:

  • Oracle Database 21c (though 19c would work just as well)
    • I have updated the DATABASE_PROPERTIES to point to my wallet for SSL certificates.
    • I added the unsigned SSL certificate for my ZFS simulator HTTPS server to the wallet.
    • I have updated the C##CLOUD$SERVICE.dbms_cloud_store table to point to my ZFS appliance using the ORACLE_BMC authentication method.
    • I installed the sales schema into my account.
  • I am running the ZFS simulator
    • I updated the release of the simulator to OS8.8.36 (2013.06.05.8.36) of the ZFS software.
    • I created a user "salesdata" as a local user
    • I created a share named "salesupload" that is owned by salesdata.
    • The share "salesupload" is configured read/write using the OCI API within HTTP
    • I added the user "salesdata" and it's public SSH keys for authentication with OCI Protocol within HTTP.
  • I have the OCI Client tool installed
    • I installed the OCI client tool
    • I configured an entry for my object store in the ~/.oci/config file called salesdata

Pre-Authenticated Requests for uploading files


The first part of this post will go through creating a request for uploading files to an object store.
In my example, I will be uploading the sales.dat file which comes with the sample sales history schema.

Step 1: Create the bucket to store the files.


The first step is to create a bucket that I am going to use to store the files being uploaded to the object store.  The beauty of using an object store is that I can have many buckets on the same share that are isolated from each other.

I am going to call my bucket "salesdrop".

Below is the OCI client call I am going to use to create my bucket "salesdrop".


[oracle@oracle-server]$ oci os bucket create --config-file ~/.oci/config --profile SALESDATA --compartment-id salesupload --namespace-name salesupload --endpoint http://10.0.0.231/oci   --name salesdrop
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "compartment-id": "salesupload",
    "created-by": "salesdata",
    "defined-tags": null,
    "etag": "b7ced3b97859a3cc22a23670fc59a535",
    "freeform-tags": null,
    "id": null,
    "is-read-only": null,
    "kms-key-id": null,
    "metadata": null,
    "name": "salesdrop",
    "namespace": "salesupload",
    "object-events-enabled": null,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": null,
    "storage-tier": "Standard",
    "time-created": "2021-10-17T19:06:47+00:00",
    "versioning": "Disabled"
  },
  "etag": "b7ced3b97859a3cc22a23670fc59a535"
}


Step 2: Create a Pre-Authenticated URL for this bucket


Below is my OCI client call, and what the parameters mean.


oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop --name upload_sales_data --access-type AnyObjectWrite --time-expires="2022-11-21T23:00:00+00:00"

To go through the parameters used:
  • config-file: Location of the configuration file
  • profile: Entry to use within the configuration file (if not the default)
  • namespace-name: For ZFS this is the share name
  • endpoint: This is the URL for the ZFS http server + "/oci" to use the OCI API
  • bucket-name: Bucket to create the Pre-Authenticated Request for.
  • name: Identifying name given to this request
  • access-type: What type of Pre-Authenticated request to create
  • time-expires: When will this URL expire? This is mandatory.
Now to execute my request and create the URL.

[oracle@oracle-server ~]$ oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop  --name upload_sales_data --access-type AnyObjectWrite --time-expires="2022-11-21T23:00:00+00:00"
{
  "data": {
    "access-type": "AnyObjectWrite",
    "access-uri": "/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/",
    "id": "11c01b5c-92d8-4c2d-8cba-d9ec4e2649c5",
    "name": "upload_sales_data",
    "object-name": null,
    "time-created": "2021-10-17T19:15:32+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
  }
}


My request was successful, and I can see the URL that was created.  I just need to add the access-uri to the end of the HTTP host URL.

http://10.0.0.231/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/


Step 3: Upload my file

Now I am going to upload the file from my Windows PC using curl.
The file "sh_sales.dat" is on my d: drive.

d:\> curl -X PUT --data-binary '@d:\sh_sales.dat' http://10.0.0.231/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat
d:\> 

No errors. Let's check and make sure the file got uploaded, using the OCI client tool.

[oracle@oracle-server ~]$ oci os object list --endpoint http://10.0.0.231/oci --namespace-name salesupload   --config-file ~/.oci/config --profile SALESDATA     --bucket-name salesdrop --fields name,size,timeCreated
{
  "data": [
    {
      "etag": null,
      "md5": null,
      "name": "loadfiles/sales_history_05012021.dat",
      "size": 55180902,
      "time-created": "2021-10-17T19:35:34+00:00",
      "time-modified": null
    }
  ],
  "prefixes": []
}

I can see the file is there, and the size is 55MB.

Now where can you go with this? Below is a diagram of how the Oracle IoT cloud can be used as a hub for data files from IoT devices. You can do the same thing by having all your IoT devices "drop" their data onto a central object store (hosted on ZFS), then filter and load the data into a database.


Pre-Authenticated Requests for loading files

This part of the post is going to show you how to use Pre-Authenticated Requests to load data into your database.

First I want to give a shout out to @thatjeffsmith. Jeff Smith is the product manager for SQL Developer, and he has a blog at http://www.thatjeffsmith.com where he constantly blogs about SQL Developer and all the great work his team is doing.
I saw one of his posts on using Pre-Authenticated Requests to load data (which you can find here), and I realized that you can do almost the same thing on any 19c+ database with the object store on ZFS.

I am going to go through most of the same steps Jeff did in his post.

Step 1: Create the Pre-Authenticated Request to read the object.

Jeff does this in the Console, but I am going to do it with the OCI Client tool.

The command is similar to the command I used to create the "upload" request.
I am going to use a different access-type, "ObjectRead", and create a request that points to the object that was uploaded.

[oracle@oracle-server]$ oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop  --name upload_sales_data --access-type ObjectRead --time-expires="2022-11-21T23:00:00+00:00" --object-name loadfiles/sales_history_05012021.dat
{
  "data": {
    "access-type": "ObjectRead",
    "access-uri": "/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat",
    "id": "547227b4-73b0-4980-bb94-ab5ee87d4c81",
    "name": "upload_sales_data",
    "object-name": "loadfiles/sales_history_05012021.dat",
    "time-created": "2021-10-17T19:56:45+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
  }
}


Now I have my request URL.

http://10.0.0.231/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat


Step 2: Load the data

Now back to Jeff's example, I am going to log onto my database and load the data.

First I am going to count the rows in my table, then check again after.

SQL> select count(1) from sales;

  COUNT(1)
----------
         0

 SQL> BEGIN
  DBMS_CLOUD.COPY_DATA(
     table_name =>'SALES',
     file_uri_list =>'https://10.0.0.231/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat',
     format => json_object('delimiter' VALUE '|') );
END;
/


PL/SQL procedure successfully completed.


SQL> select count(1) from sales;

  COUNT(1)
----------
   1016271

SQL>


I can see that 1 Million rows were successfully loaded into the table.


Step 3: Verify through USER_LOAD_OPERATIONS

Now, like Jeff did with his example, I am going to look at the view USER_LOAD_OPERATIONS to see the information about my load job.


col id format 999
col type format a8
col status format a10
col start_time format a15
col owner_name format a10
col table_name format a20
col file_uri_list format a70
set linesize 160


select
        id,
        type,
        to_char(update_time,'mm/dd/yy hh24:mi:ss') update_time,
        status,
        owner_name,
        table_name,
        substr(file_uri_list,60,160) File_uri_list,
        rows_loaded
from
        user_load_operations
where status='COMPLETED';

SQL> 

  ID TYPE     UPDATE_TIME       STATUS     OWNER_NAME TABLE_NAME           FILE_URI_LIST                                                          ROWS_LOADED
---- -------- ----------------- ---------- ---------- -------------------- ---------------------------------------------------------------------- -----------
   3 COPY     10/17/21 16:13:21 COMPLETED  BGRENN     SALES                salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat             1016271


Other Pre-Authenticated Requests 


There are 2 other "access-types" for Pre-Authenticated requests.

  • ObjectReadWrite: This will give both read and write to a specific object. 
  • ObjectWrite: This give write only access to a specific object (rather than having full access to the bucket).
If you try to use the Pre-Authenticated Requested to anything other than object granted you get an error message.

{"code": "NotAuthenticated", "message": "The required information to complete authentication was not provided"}

List Pre-Authenticated Requests 

You can list all of your Pre-Authenticated Requests to keep a handle on them.


[oracle@oracle-server ~]$ oci os preauth-request list --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop
{
  "data": [
    {
      "access-type": "AnyObjectWrite",
      "id": "11c01b5c-92d8-4c2d-8cba-d9ec4e2649c5",
      "name": "upload_sales_data",
      "object-name": null,
      "time-created": "2021-10-17T19:15:32+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
      "access-type": "ObjectRead",
      "id": "547227b4-73b0-4980-bb94-ab5ee87d4c81",
      "name": "load_sales_data",
      "object-name": "loadfiles/sales_history_05012021.dat",
      "time-created": "2021-10-17T19:56:45+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
      "access-type": "ObjectReadWrite",
      "id": "87b4fe97-3e2e-4b22-96aa-a7e3b566dc59",
      "name": "read_write_sales_data",
      "object-name": "loadfiles/sales_history_06012021.dat",
      "time-created": "2021-10-17T20:37:23+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
      "access-type": "ObjectWrite",
      "id": "828a0651-60f7-4d2a-998c-b3518e1bfa92",
      "name": "write_sales_data",
      "object-name": "loadfiles/sales_history_07012021.dat",
      "time-created": "2021-10-17T20:40:08+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    }
  ]
}


Get Detail on a Pre-Authenticated Requests 


If you want the detail of a specific Pre-Authenticated Request you can use the "get" option and include the --par-id (which is the ID from the list request command).

[oracle@oracle-server ~]$ oci os preauth-request get --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop   --par-id 828a0651-60f7-4d2a-998c-b3518e1bfa92
{
  "data": {
    "access-type": "ObjectWrite",
    "id": "828a0651-60f7-4d2a-998c-b3518e1bfa92",
    "name": "upload_sales_data",
    "object-name": "loadfiles/sales_history_07012021.dat",
    "time-created": "2021-10-17T20:40:08+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
  }
}


NOTE: this does not give you the URL.

Delete a Pre-Authenticated Requests 


Finally you can delete a Pre-Authenticated Request if it is no longer needed using the par-id of the request.


[oracle@oracle-server ~]$ oci os preauth-request delete --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop   --par-id 828a0651-60f7-4d2a-998c-b3518e1bfa92
Are you sure you want to delete this resource? [y/N]: y





Hopefully this gives you an idea of all the things you can do with Pre-Authenticated URLs.




Tuesday, August 31, 2021

TDE–How to implement TDE in your database and what to think about (part 4)

 In this post, I am going to include some lessons learned from implementing "Restore as encrypted" of a large database with over 500,000 objects.

 


The error we were receiving when trying to open our database was:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00604: error occurred at recursive SQL level 1
ORA-25153: Temporary Tablespace is Empty
Process ID: 133196
Session ID: 1769 Serial number: 6805

    

And in the alert log we saw.

      Parallel first-pass transaction recovery timed out. Switching to serial recovery.
Undo initialization recovery: Parallel FPTR failed: start:685625075 end:685692452 diff:67377 ms (67.4 seconds)
2021-08-27T10:02:39.567998-04:00
Undo initialization recovery: err:0 start: 685625075 end: 685693406 diff: 68331 ms (68.3 seconds)
2021-08-27T10:02:43.015891-04:00
[339055] Successfully onlined Undo Tablespace 17.
Undo initialization online undo segments: err:0 start: 685693406 end: 685696854 diff: 3448 ms (3.4 seconds)
Undo initialization finished serial:0 start:685625075 end:685697235 diff:72160 ms (72.2 seconds)
Dictionary check beginning
2021-08-27T10:02:44.819881-04:00
TT03 (PID:360221): Sleep 80 seconds and then try to clear SRLs in 6 time(s)
2021-08-27T10:02:54.759120-04:00
Tablespace 'PSAPTEMP' #3 found in data dictionary,
but not in the controlfile. Adding to controlfile.
2021-08-27T10:02:55.826700-04:00
Errors in file /u02/app/oracle/diag/rdbms/bsg/BSG1/trace/BSG1_ora_339055.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-25153: Temporary Tablespace is Empty
2021-08-27T10:02:55.826827-04:00
Errors in file /u02/app/oracle/diag/rdbms/bsg/BSG1/trace/BSG1_ora_339055.trc:
ORA-00604: error occurred at recursive SQL level 1

    


What we found is that there is some work the database has to do when opening for the first time after encrypting tablespaces offline.

Background:

Any data written to disk for an object that resides in an encrypted tablespace is also encrypted. This means that if an object resides in an encrypted tablespace, the following data is encrypted as well:

  • TEMP - If an object resides in an encrypted tablespace, any sort information in the TEMP tablespace is encrypted. This includes joins to other tables.  Any piece of data in a sort operation on disk causes the whole set of data to be encrypted.
  • UNDO - If an object resides in an encrypted tablespace, the blocks stored in UNDO are encrypted.
  • REDO/Archive - If an object resides in an encrypted tablespace, the changes to that object are encrypted in the redo stream (including redo sent through the network to a standby database).

How this happens:


The way the database manages encryption is to internally mark an object as an encrypted object, which ensures the object's data stays encrypted on disk.
Now back to "restore as encrypted".  Since we restored the database and then encrypted the tablespaces, the database needs to mark all the objects in the "newly encrypted" tablespaces as encrypted.
This is part of the database open operation.  The open operation sorts through the internal object metadata to determine which objects now reside in "newly encrypted" tablespaces.
There are a few things to be aware of around this process:
  1. It requires sorting the objects.  Because of this you may need a much larger sort_area_size or PGA_TARGET.  This is only needed to open the database after encrypting, but it was the cause of the issue I was seeing.
  2. It may take some time - a lot of time, depending on the number of objects.

How to mitigate it:


Since we know this is going to happen, there are a few ways to mitigate it.

  1. Empty out your recycle bin to limit the number of objects to update.
  2. Proactively increase your PGA (or sort_area_size) before opening the database for the first time after encrypting.
  3. Encrypt the database in sections. Do not encrypt every tablespace at once, to decrease the number of objects that will be marked encrypted. After encrypting a tablespace, open the database, shut it down, and do the next tablespace. NOTE: this may not be practical.
  4. Encrypt the tablespaces online, as this marks the objects as the processing of each tablespace completes.
  5. Check the number of objects that will need to be updated. This can be done by looking at the TAB$ internal table, matching the TS# to the tablespaces that will be encrypted (a sample query is sketched after this list).
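A minimal sketch of that check, along with the PGA bump from item 2 (the tablespace names and the 16G value are hypothetical - substitute your own):

-- Count the objects per tablespace that will need to be marked encrypted
-- (tablespace names here are hypothetical - use the ones you plan to encrypt)
select ts.name tablespace_name, count(*) object_count
from   sys.tab$ t, sys.ts$ ts
where  t.ts# = ts.ts#
and    ts.name in ('USERS','SOEDATA')
group  by ts.name;

-- Temporarily bump the PGA for the first open after encrypting (value is hypothetical)
alter system set pga_aggregate_target=16G scope=memory sid='*';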

NOTE:


Remember that a standby database may also hit this same issue when opened read only.
Also, it is possible to have the primary database encrypted and the standby database unencrypted, or the opposite if you encrypt your standby database first.  Restoring from encrypted --> unencrypted or unencrypted --> encrypted and then opening the database will cause this update of metadata to occur.

Saturday, August 14, 2021

Using rclone to download Objects from OCI

 I previously created a post that walked through how to configure rclone to easily access objects within the Oracle Cloud Object Store.


Object Store access with rclone


This post is going to go a little deeper on how to quickly download objects from the OCI object store onto your host.

In my example, I needed to download RMAN disk backup files that were copied to the Object Store in OCI.

I have over 10 TB of RMAN backup pieces, so I am going to create an ACFS mount point to store them on.


1) Create ACFS mount point

Creating the mount point is made up of multiple small steps that are documented here. This is a link to the 19c documentation so note it is subject to change over time.

  • Use ASMCMD to create a volume on the data disk group (20 GB in this example; size it to hold your backups)
    - Start ASMCMD connected to the Oracle ASM instance. You must be a user in the OSASM operating system group.
    - Create the volume "volume1" on the "data" disk group:

                    ASMCMD [+] > volcreate -G data -s 20G volume1

  • Use ASMCMD to list the volume information.  NOTE: my volume device is /dev/asm/volume1-123

ASMCMD [+] > volinfo -G data volume1
Diskgroup Name: DATA

         Volume Name: VOLUME1
         Volume Device: /dev/asm/volume1-123
         State: ENABLED
     ... 

SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME 
     WHERE volume_name ='VOLUME1';

VOLUME_NAME        VOLUME_DEVICE
-----------------  --------------------------------------
VOLUME1            /dev/asm/volume1-123


  • Create the file system with mkfs from the volume "/dev/asm/volume1-123"
$ /sbin/mkfs -t acfs /dev/asm/volume1-123
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/volume1-123
mkfs.acfs: volume size               = 21474836480  (   20.00 GB )
mkfs.acfs: Format complete.
  • Register the file system with srvctl
# srvctl add filesystem -device /dev/asm/volume1-123 -path /acfsmounts/acfs2
       -user oracle -mountowner oracle -mountgroup dba -mountperm 755
NOTE: This will mount the filesystem on /acfsmounts/acfs2

  • Start the filesystem with srvctl
$ srvctl start filesystem -device /dev/asm/volume1-123

  • Change the ownership to oracle

chown -R oracle:dba /acfsmounts/acfs2

2) Use rclone to view objects

The next step is to look at the objects I want to copy to my new ACFS file system. The format for accessing the object store in the commands is:
 "rclone {command} [connection name]:{bucket/partial object name - optional}"


NOTE: For all examples my connection name is oci_s3 
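For reference, a minimal ~/.config/rclone/rclone.conf entry for an OCI S3-compatible remote looks something like the sketch below. The endpoint, region, and credentials are placeholders (the access key pair is an OCI "Customer Secret Key"); see my earlier post for the full walkthrough.

# Placeholders below - substitute your own namespace, region, and customer secret key
[oci_s3]
type = s3
provider = Other
access_key_id = <customer secret key id>
secret_access_key = <customer secret key>
endpoint = https://<namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com
region = us-ashburn-1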

I am going to start with the simplest command list buckets (lsd).

NOTE: We are using the S3 interface to view the objects in the namespace.  There is a single namespace for the entire tenancy.  With OCI there is the concept of "compartments", which can be used to separate applications and users.  The S3 interface does not have this concept, which means that all buckets are visible.
  • rclone lsd - This is the simplest command to list the buckets, and as I noted previously, it lists all buckets, not just my bucket.
       ./rclone lsd oci_s3:
          -1 2021-02-22 15:33:06        -1 Backups
          -1 2021-02-16 21:31:05        -1 MyCloudBucket
          -1 2020-09-23 22:21:36        -1 Test-20200923-1719
          -1 2021-07-20 20:03:27        -1 ZDM_bucket
          -1 2020-11-23 23:47:03        -1 archive
          -1 2021-01-21 13:03:33        -1 bsgbucket
          -1 2021-02-02 15:35:18        -1 bsgbuckets3
          -1 2021-03-03 11:42:13        -1 osctransfer
          -1 2021-03-19 19:57:16        -1 repo
          -1 2021-01-21 19:35:24        -1 short_retention
          -1 2020-11-12 13:41:48        -1 jsmithPublicBucket
          -1 2020-11-04 14:10:33        -1 jsmith_top_bucket
          -1 2020-11-04 11:43:55        -1 zfsrepl
          -1 2020-09-25 16:56:01        -1 zs-oci-bucket

If I want to list what is within my bucket (bsgbucket) I can list that bucket. In this case it treats the flat structure of the object name as if it is a file system, and lists only the top "directories" within my bucket.

./rclone lsd oci_s3:bsgbucket
           0 2021-08-14 23:58:02        -1 file_chunk
           0 2021-08-14 23:58:02        -1 sbt_catalog


  • rclone tree - This command will list what is within my bucket as a tree structure.
[opc@rlcone-test rclone]$ ./rclone tree oci_s3:bsgbucket
/
├── expdat.dmp
├── file_chunk
│   └── 2985366474
│       └── MYDB
│           └── backuppiece
│               └── 2021-06-14
│                   ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
│                   │   └── yHqtjSE51L3B
│                   │       ├── 0000000001
│                   │       └── metadata.xml
│                   └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
│                       └── lS9Sdnka2nD0
│                           ├── 0000000001
│                           └── metadata.xml
└── sbt_catalog
    ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
    │   └── metadata.xml
    └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
        └── metadata.xml


  • rclone lsl - This command will list what is within my bucket as a long listing with more detail.
[opc@rlcone-test rclone]$ ./rclone lsl oci_s3:bsgbucket
   311296 2021-01-21 13:04:05.000000000 expdat.dmp
337379328 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001
     1841 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml
 36175872 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001
     1840 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml
     1841 2021-06-14 19:48:46.000000000 sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml
     1840 2021-06-14 19:49:10.000000000 sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml


3) Use rclone to copy the objects to my local file system.


There are 2 commands you can use to copy the files from the object store to the local file system.
  • copy - This is as you would expect. It copies the files to the local file system and overwrites the local copy.
  • sync - This synchronizes the local file system with the objects in the object store, and will not copy down an object if it already has a local copy.

In my case I am going to use the sync command. This will allow me to restart copying the objects, and it will ignore any objects that were previously copied successfully.

Below is the command I am using to copy (synchronize) the objects from my bucket in the object store (oci_s3:bsgbucket) to the local filesystem (/home/opc/acfs).

./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs

To break down the command:

  • -vv  This option gives me "verbose" output so I can see more of what is being copied as the command executes.
  • -P  This option gives me feedback on how much of each object has downloaded so far, to help me monitor the transfer.
  • --multi-thread-streams 12  This option breaks larger objects into chunks to increase concurrency.
  • --transfers 64  This option allows 64 concurrent transfers, which increases the download throughput.
  • oci_s3:bsgbucket - This is the source to copy/sync from.
  • /home/opc/acfs - This is the destination to copy/sync to.

Finally, this is what the command looks like when it is executing.

opc@rlcone-test rclone]$  ./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs
2021/08/15 00:15:32 DEBUG : rclone: Version "v1.56.0" starting with parameters ["./rclone" "-vv" "sync" "-P" "--multi-thread-streams" "12" "--transfers" "64" "oci_s3:bsgbucket" "/home/opc/acfs"]
2021/08/15 00:15:32 DEBUG : Creating backend with remote "oci_s3:bsgbucket"
2021/08/15 00:15:32 DEBUG : Using config file from "/home/opc/.config/rclone/rclone.conf"
2021/08/15 00:15:32 DEBUG : Creating backend with remote "/home/opc/acfs"
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO  : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : expdat.dmp: md5 = f97060f5cebcbcea3ad6fadbda136f4e OK
2021-08-15 00:15:33 INFO  : expdat.dmp: Copied (new)
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for checks to finish
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for transfers to finish
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: Starting multi-thread copy with 2 parts of size 160.875Mi
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 2/2 (168689664-337379328) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO  : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001: Copied (new)
2021-08-15 00:15:34 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi finished
Transferred:      333.398Mi / 356.554 MiByte, 94%, 194.424 MiByte/s, ETA 0s
Transferred:            6 / 7, 86%
Elapsed time:         2.0s
Transferring:

NOTE: it broke up the larger object into chunks, and you can see that it downloaded 2 chunks simultaneously.  At the end you can see the file that it was in the middle of transferring.

Conclusion

rclone is a great alternative to the OCI CLI for managing your objects and downloading them.  It has more intuitive commands (like "rclone ls").  And the best part is that it doesn't require Python and special privileges to install.