
Wednesday, February 14, 2024

DB Script management through pre-authenticated URLs

 Pre-authenticated URLs in OCI are fast becoming one of my favorite features of object storage.  In this blog post I will explain how I am using them for both:

  • Storing the current copy of my backup scripts and dynamically pulling them from my central repository
  • Uploading all my log files to a central location
Pre-authenticated URL creation

PROBLEM:


The problem I was trying to solve is that I wanted a script I could run on all of my database nodes to create a weekly archival backup.
Since I have both Base DB and ExaCS databases, I found that I was continuously making changes to my backup script.  Sometimes it was checking something different in my environment, and sometimes it was improving the error checking.
Each time I made a change, I had to go out to every DB host and copy over the new version of the script.
Eventually I realized that Pre-authenticated URLs could not only help me ensure all my DB hosts are running the current copy of my backup script, they could also serve as the central log location.

Solution:


Solution #1 - Script repository


The first problem I wanted to solve was configuring a script repository that I could dynamically pull the most current copy of my scripts from. Since I am running in OCI, I was looking for a "Cloud Native" solution rather than using NFS mounts that are shared across all my DB hosts.
To complicate things, I have databases that are running in different tenancies.

Step #1 - Store scripts in a bucket

The first step was to create a bucket in OCI to store both the scripts and logs.  Within that bucket, under "More Actions" I chose "Create New Folder" and I created 2 new folders, "logs" and "scripts".
Then within the scripts folder I uploaded my current scripts:
  • rman.sh - My executable script that will set the environment and call RMAN
  • backup.rman - My RMAN script that contains the RMAN command to back up my database.

Step #2 - Create a Pre-Authenticated Request

The next step was to create a Pre-Authenticated request on the "scripts" folder.  Next to the scripts folder I clicked on the 3 dots and chose "Create Pre-Authenticated Request".
On the window that came up, I changed the expiration to be 1 year in the future (the default is 7 days).  I chose the "Objects with prefix" box so that I could download any scripts that I put in this folder to the DB hosts.  I also made sure the "Access Type" is "Permit object reads on those with specified prefix".
I did not choose "Enable Object Listing".
These settings allow the scripts to be downloaded from this bucket using only the Pre-Authenticated URL.  The URL cannot be used to list the objects or upload any changes.


Step #3 - Create wrapper script to download current scripts

Then, using the Pre-Authenticated URL in a wrapper script, I download the current copies of the scripts to the host and then execute my script (rman.sh) with a parameter.

Below you can see that I am using curl to download my script (rman.sh) and storing it in my local script directory (/home/oracle/archive_backups/scripts).  I am doing the same thing for the RMAN command file.
Once I download the current scripts, I execute the shell script (rman.sh).


curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/rman.sh --output /home/oracle/archive_backups/scripts/rman.sh
curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/backup.rman --output /home/oracle/archive_backups/scripts/backup.rman


/home/oracle/archive_backups/scripts/rman.sh $1
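
Putting those pieces together, a minimal version of the wrapper looks something like the sketch below. The PAR URL is a placeholder (use your own Pre-Authenticated Request URL), and the error checking and chmod are my additions rather than part of the original script.

#!/bin/bash
# Minimal wrapper sketch: refresh the scripts from the Pre-Authenticated URL, then run the backup.
# PAR_BASE is a placeholder -- substitute your own Pre-Authenticated Request URL for the "scripts" folder.
PAR_BASE="https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{PAR}/n/id20skavsofo/b/bgrenn/o/scripts"
SCRIPT_DIR=/home/oracle/archive_backups/scripts

for f in rman.sh backup.rman; do
  # --fail makes curl return a non-zero exit code on an HTTP error so we never run a stale or empty file
  curl --fail -sS -X GET "${PAR_BASE}/${f}" --output "${SCRIPT_DIR}/${f}" || { echo "download of ${f} failed"; exit 1; }
done

chmod +x ${SCRIPT_DIR}/rman.sh
${SCRIPT_DIR}/rman.sh "$1"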


Solution #2 - Log repository

The second problem I wanted to solve was to make it easy to review the execution of my scripts.  I don't want to go to each DB host and look at the log file.  I want to have the logs stored in a central location that I can check.  Again, Pre-Authenticated URLs to the rescue!

Step #1 - Create the Pre-Authenticated URL

In the previous steps I already created a "logs" folder within the bucket. In this step I want to create a Pre-Authenticated URL like I did for the scripts, but in this case I want to use it to store the logs.
Like before, I chose "Create Pre-Authenticated Request" for the "logs" folder.
This time, I am choosing "Permit object writes to those with the specified prefix". This will allow me to write my log files to this folder in the bucket, but not list or download any logs.


Step #2 - Upload the log output from my script

The nice thing was that once I implemented Solution #1, all of my DB nodes were already downloading the current script.  I just updated the script to upload the log file to object storage, and they all picked up the new version.
In my script I already had 2 variables set
  • NOW - The current date in "yyyymmdd" format
  • LOGFILE - The name of the output log file from my RMAN backup.
Now all I had to do was to add a curl command to upload my log file to the bucket.

Note: I am using the NOW variable to create a new folder under "logs" with the date so that my script executions are organized by date.

curl --request PUT --upload-file /home/oracle/archive_backups/logs/${LOGFILE} https://{My tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{URL removed}/n/id20skavsofo/b/bgrenn/o/logs/${NOW}/${LOGFILE}
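
For context, here is roughly where that upload sits at the tail end of rman.sh. Only the NOW/LOGFILE variables and the curl PUT come from the post; the exact log file name shown is illustrative.

# Tail end of rman.sh (sketch). NOW and LOGFILE were already set earlier in the script.
NOW=$(date +%Y%m%d)                                   # e.g. 20240214
LOGFILE=archive_backup_${ORACLE_SID}_${NOW}.log       # illustrative name only

# ... the RMAN backup runs here, writing its output to ${LOGFILE} ...

# Upload the log to the "logs" Pre-Authenticated URL, nested under a per-date folder.
curl --fail -sS --request PUT \
     --upload-file /home/oracle/archive_backups/logs/${LOGFILE} \
     "https://{My tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{URL removed}/n/id20skavsofo/b/bgrenn/o/logs/${NOW}/${LOGFILE}"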

BONUS


If I wanted to get fancy I could have put my logs in a separate bucket and configured a lifecycle management rule to automatically delete logs from the bucket after a period of time.
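
If you go that route, the lifecycle rule can be created in the OCI console or with the OCI CLI. The sketch below is my own example with made-up bucket and retention values; verify the options against "oci os object-lifecycle-policy put --help" in your CLI version.

# Sketch: delete anything under the logs/ prefix after 90 days (bucket name and values are examples only).
oci os object-lifecycle-policy put --bucket-name backup_logs --items '[
  {
    "name": "purge-old-logs",
    "action": "DELETE",
    "timeAmount": 90,
    "timeUnit": "DAYS",
    "isEnabled": true,
    "objectNameFilter": {"inclusionPrefixes": ["logs/"]}
  }
]'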

Tuesday, December 19, 2023

ZFSSA can be used to share data from your Oracle Database

 Data Sharing has become a big topic recently, and Oracle Cloud has added some new services to allow you to share data from an Autonomous Database.  But how do you do this with your on-premises database? In this post I show you how to use ZFS as your data sharing platform.


Data Sharing

Being able to securely share data between applications is a critical feature in today's world.  The Oracle Database is often used to consolidate and summarize collected data, but it is not always the platform for doing analysis.  The Oracle Database does have the capability to analyze data, but tools such as Jupyter Notebooks, Tableau, Power BI, etc. are typically the favorites of data scientists and data analysts.

The challenge is how to give access to specific pieces of data in the database without providing access to the database itself.  The most common solution is to use object storage and pre-authenticated URLs.  Data is extracted from the database based on the user and stored in object storage in a sharable format (JSON for example).  With this paradigm, pictured above, you can create multiple datasets that contain a subset of the data specific to the user's needs and authorization.  The second part is the use of a pre-authenticated URL.  This is a dynamically created URL that allows access to the object without authentication. Because it contains a long string of random characters, and is only valid for a specified amount of time, it can be securely shared with the user.

My Environment

For this post, I started with an environment I had previously configured to use DBMS_CLOUD.  My database is a 19.20 database.  In that database I used the steps specified in the MOS note and my blog (information can be found here) to use DBMS_CLOUD.

My ZFSSA environment is using 8.8.63, and I did all of my testing in OCI using compute instances.

In preparation for testing I had:

  • Installed DBMS_CLOUD packages into my database using MOS note #2748362.1
  • Downloaded the certificate for my ZFS appliance using my blog post and added it to my wallet.
  • Added the DNS/IP addresses to the DBMS_CLOUD_STORE table in the CDB.
  • Created a user in my PDB with authority to use DBMS_CLOUD
  • Created a user on ZFS to use for my object storage authentication (Oracle).
  • Configured the HTTP service for OCI
  • Added my public RSA key from my key pair to the OCI service for authentication.
  • Created a bucket
NOTE:  In order to complete the full test, there were 2 other items I needed to do.

1) Update the ACL to also access port 80.  The DBMS_CLOUD configuration sets ACLs to access websites using port 443.  During my testing I used port 80 (http vs https).
2) I granted execute on DBMS_CRYPTO to my database user for the testing.
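
For anyone recreating this environment, those two items looked roughly like the sketch below. The host name comes from the object storage URL used later in this post; the PDB and user names are placeholders for your own.

sqlplus / as sysdba <<'EOF'
-- 1) Allow the PDB user to reach the ZFS HTTP endpoint on port 80 (PDB and user names are placeholders)
ALTER SESSION SET CONTAINER = MYPDB;
BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host       => 'zfs-s3.zfsadmin.vcn.oraclevcn.com',
    lower_port => 80,
    upper_port => 80,
    ace        => xs$ace_type(privilege_list => xs$name_list('http'),
                              principal_name => 'BGRENN',
                              principal_type => xs_acl.ptype_db));
END;
/
-- 2) Needed later to build the sha256 digest and signature
GRANT EXECUTE ON DBMS_CRYPTO TO BGRENN;
EOF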

Step #1 Create Object

The first step was to create an object from a query in the database.  This simulated pulling a subset of data (based on the user) and writing it to an object so that it could be shared.  To create the object I used the DBMS_CLOUD.EXPORT_DATA package.  Below is the statement I executed.

BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'ZFS_OCI2',
    file_uri_list   => 'https://zfs-s3.zfsadmin.vcn.oraclevcn.com/n/zfs_oci/b/db_back/o/shareddata/customer_sales.json',
    format          => '{"type" : "JSON"}',
    query           => 'SELECT OBJECT_NAME,OBJECT_TYPE,STATUS FROM user_objects');
END;
/

In this example:
  • CREDENTIAL_NAME - Refers to my authentication credentials I had previously created in my database.
  • FILE_URI_LIST - The name and location of the object I want to create on the ZFS object storage.
  • FORMAT - The output is written in JSON format
  • QUERY - This is the query you want to execute; its results are what get stored in the object storage.
As you can see, it would be easy to create multiple objects that contain specific data by customizing the query, and naming the object appropriately.

In order to get the proper name of the object, I then selected the list of objects from object storage.

set pagesize 0
SET UNDERLINE =
col object_name format  a25
col created format  a20
select object_name,to_char(created,'MM/DD/YY hh24:mi:ss') created,bytes/1024  bytes_KB 
       from dbms_cloud.list_objects('ZFS_OCI2', 'https://zfs-s3.zfsadmin.vcn.oraclevcn.com/n/zfs_oci/b/db_back/o/shareddata/');


 customer_sales.json_1_1_1.json 12/19/23 01:19:51    3.17382813


From the output, I can see that my JSON file was named 'customer_sales.json_1_1_1.json'.


Step #2 Create Pre-authenticated URL

The package I ran to do this is below, and I am going to break down the pieces into multiple sections.



Step #2a Declare variables

The first part of the pl/sql package declares the variables that will be used in the package. Most of the variables are normal VARCHAR variables, but there are a few other variable types that are specific to the packages used to encrypt and send the URL request.

  • sType,kType - These are constants used to sign the URL request with RSA SHA-256
  • utl_http.req,utl_http.resp - These are request and response types used when accessing the object storage
  • json_obj - This type is used to extract the URL from the resulting JSON returned from the object storage call.

Step #2b Set variables

In this section of code I set the authentication information along with the host, and the private key part of my RSA public/private key pair. 
I also set a variable with the current date time, in the correct GMT format.

NOTE: This date time stamp is compared with the date time on the ZFSSA. It must be within 5 minutes of the ZFSSA date/time or the request will be rejected.

Step #2c Set JSON body

In this section of code, I build the actual request for the pre-authenticated URL.  The parameters for this are...
  • accessType - I set this to "ObjectRead" which allows me to create a URL that points to a specific object.  Other options are Write, and ReadWrite.
  • bucketListingAction - I set this to "Deny". This disallows the listing of objects.
  • bucketName - Name of the bucket
  • name - A name you give the request so that it can be identified later
  • namespace/namespaceName - This is the ZFS share
  • objectName - This is the object name on the share that I want the request to refer to. 
  • timeExpires - This is when the request expires.
NOTE: I did not spend the time to do a lot of customization to this section.  You could easily make the object name a parameter that is passed to the package along with the bucketname etc. You could also dynamically set the expiration time based on sysdate.  For example you could have the request only be valid for 24 hours by dynamically setting the timeExpires.

The last step in this section of code is to create a sha256 digest of the JSON "body" that I am sending with the request.  I created it using the dbms_crypto.hash.
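
To make the body and its digest concrete, here is the same idea sketched with shell tools instead of PL/SQL. The field names match the list above (the values are examples from this post's bucket and object), and the digest is the same SHA-256/Base64 value that dbms_crypto.hash produces for the x-content-sha256 header.

# Build the JSON body for the pre-authenticated request (values are examples only).
cat > body.json <<'EOF'
{
  "accessType": "ObjectRead",
  "bucketListingAction": "Deny",
  "bucketName": "db_back",
  "name": "customer_sales_share",
  "namespaceName": "zfs_oci",
  "objectName": "shareddata/customer_sales.json_1_1_1.json",
  "timeExpires": "2024-01-31T00:00:00Z"
}
EOF

# SHA-256 digest of the body, Base64 encoded -- this becomes the x-content-sha256 header.
BODY_SHA256=$(openssl dgst -sha256 -binary body.json | base64)
echo "x-content-sha256: ${BODY_SHA256}"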

Step #2d Create the signing string

This section builds the signing string that will be signed with the private key.  This string must be set in a very specific format.  The string that is built contains:

(request-target): post /oci/n/{share name}/b/{bucket name}/p?compartment={compartment} 
date: {date in GMT}
host: {ZFS host}
x-content-sha256: {sha256 digest of the JSON body parameters}
content-type: application/json
content-length: {length of the JSON body parameters}

NOTE: This signing string has to be created with the line feeds.

The final step in this section is to sign the signing string with the private key.
In order to sign the string the DBMS_CRYPTO.SIGN package is used.
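
As a cross-check of the same logic (this is not the original PL/SQL), the signing string and signature can also be produced with openssl, continuing the shell sketch from Step #2c. The key file path and host are placeholders.

HOST=zfs-s3.zfsadmin.vcn.oraclevcn.com
DATE=$(date -u +"%a, %d %b %Y %H:%M:%S GMT")      # must be within 5 minutes of the ZFSSA clock
CONTENT_LEN=$(wc -c < body.json)

# The signing string must contain these headers, in this order, separated by real line feeds.
SIGNING_STRING="(request-target): post /oci/n/zfs_oci/b/db_back/p?compartment=zfs_oci
date: ${DATE}
host: ${HOST}
x-content-sha256: ${BODY_SHA256}
content-type: application/json
content-length: ${CONTENT_LEN}"

# Sign with the RSA private key (SHA-256), then Base64 encode.
# 'base64 -w0' keeps it on one line, avoiding the 64-character chunking utl_encode produces.
SIGNATURE=$(printf '%s' "${SIGNING_STRING}" | \
            openssl dgst -sha256 -sign /home/oracle/oci/oci_api_key.pem | base64 -w0)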


Step #2e Build the signature from the signing string


This section takes the signed string that was built in the prior step and encodes it in Base64.  This section uses the utl_encode.base64_encode package to encode the raw string, and it is then converted to a varchar.

Note: The resulting base64 encoded string is broken into 64 character sections.  After creating the encoded string, I loop through the string, and combine the 64 character sections into a single string.
This took the most time to figure out.

Step #2f Create the authorization header

This section dynamically builds the authorization header that is sent with the call.  This section includes the authentication parameters (tenancy OCID, User OCID, fingerprint), the headers (these must be in the order they are sent), and the signature that was created in the last 2 steps.

Step #2g Send a post request and header information


The next section sends the POST call to the ZFS object storage, followed by each piece of header information.  After the header parameters are sent, the JSON body is sent using the utl_http.write_text call.
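
Continuing the shell sketch, the authorization header and POST would look like the following. The keyId uses the standard OCI format of tenancy OCID / user OCID / key fingerprint (placeholders below), and the header list must match the order used in the signing string.

KEY_ID="ocid1.tenancy.oc1..aaaa.../ocid1.user.oc1..bbbb.../12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef"

AUTH_HEADER="Signature version=\"1\",keyId=\"${KEY_ID}\",algorithm=\"rsa-sha256\",\
headers=\"(request-target) date host x-content-sha256 content-type content-length\",\
signature=\"${SIGNATURE}\""

# Send the request over port 80 (http), matching the ACL change noted earlier in this post.
curl -s -X POST "http://${HOST}/oci/n/zfs_oci/b/db_back/p?compartment=zfs_oci" \
     -H "date: ${DATE}" \
     -H "host: ${HOST}" \
     -H "x-content-sha256: ${BODY_SHA256}" \
     -H "content-type: application/json" \
     -H "content-length: ${CONTENT_LEN}" \
     -H "authorization: ${AUTH_HEADER}" \
     --data-binary @body.json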

Step #2h Loop through the response

This section gets the response from the POST call, and loops through the response.  I am using the json_object_t.parse call to create a JSON type, and then use the json_obj.get to retrieve the unique request URI that is created.
Finally I display the resulting URI that can be used to retrieve the object itself.


Documentation

There were a few documents that I found very useful to give me the correct calls in order to build this package.

Signing request documentation : This document gave detail on the parameters needed to send get or post requests to object storage.  This document was very helpful to ensure that I had created the signature properly.

Http message signature format : This document gives detail on the signature itself and the format.

OCI rest call walk through : This post was the most helpful as it gave an example of a GET call and a PUT call. I was first able to create a GET call using this post, and then I built on it to create the POST call.


Wednesday, November 22, 2023

Oracle Database Backup Cloud Service Primer

 One topic that has been coming up a lot as customers look at options for offsite protected backups is the use of the Oracle Database Backup Cloud Service.  This service can be used either directly from the database itself, leveraging an RMAN tape library, or by performing a copy-to-cloud from the ZDLRA.  In this post I will try to consolidate all the information I can find on this topic to get you started.


Overview

The best place to start is by downloading and reading through this technical brief.

This document walks you through what the service is and how to implement it. Before you go forward with the Backup Cloud Service I suggest you download the install package and go through how to install it.

The key points I saw in this document are

  • RMAN encryption is mandatory - In this brief you will see that the backups being sent to OCI MUST be encrypted, and the brief explains how to create an encrypted backup.  Included in the Backup Cloud Service is the use of encryption and compression (beyond basic compression) without requiring the ASO or ACO license.
  • How to install the client files - The brief explains the parameters that are needed to install the client files, and what the client files are that get installed. I will go into more detail later on explaining additional features that have been added recently.
  • Config file settings including host - The document explains the contents of the configuration file used by the Backup Cloud Service library. It also explains how to determine the name of the host (OCI endpoint) based on the region you are sending the backups to.
  • Channel configuration example - There is an example channel configuration to show you how to connect to the service.
  • Best practices - The document includes sample scripts and best practices to use when using the Backup Cloud Service.
  • Lifecycle policies and storage tiers - This is an important feature of using the Backup Cloud Service, especially for long term archival backups.  You most likely want to have backups automatically moved to low cost archival storage after uploading to OCI.
NOTE: When using lifecycle policies to manage the storage tiers, it is best to set the "-enableArchiving" and "-archiveAfterBackup" parameters when installing the backup module for a new bucket.  There are small metadata files that MUST remain in standard storage, and the installation module creates a lifecycle rule within the bucket that properly archives backup pieces, leaving the metadata in standard storage.


Download

The version of the library on OTN (at the time I am writing this) is NOT the current release of the library, and that version does not support retention lock of objects.

Please download the library from this location.

Documentation on the newer features can be found here, using retention lock can be found here, and there is an oci_readme.txt file that contains all the parameters available.


Updates

There were a few updates since the tech brief was written, and I will summarize the important ones here.  I also spoke with the PM, who is working on an updated brief that will contain this new information.

  • newRSAKeyPair - The installer is now able to generate the key pair for you, making it much easier to generate a new key pair. In order to have the installer ONLY create a new key pair, just pass the installer the "walletDir" parameter.  The installer will generate both a public and private key, and place them in the walletDir (see below).

 /u01/app/oracle/product/19c/dbhome_1/jdk/bin/java -jar oci_install.jar -newRSAKeyPair -walletDir /home/oracle/oci/wallet 
Oracle Database Cloud Backup Module Install Tool, build 19.18.0.0.0DBBKPCSBP_2023-09-21
OCI API signing keys are created:
  PRIVATE KEY --> /home/oracle/oci/wallet/oci_pvt
  PUBLIC  KEY --> /home/oracle/oci/wallet/oci_pub
Please upload the public key in the OCI console.

Once you generate the public/private key, you can upload the public key to the OCI console. This will show you the fingerprint, and you can execute the installer using the private key file.

  • "immutable-bucket" and "temp-metadata-bucket" - The biggest addition to library is the ability to support the use of retention rules on buckets containing backups.  The uploading of backups is monitored by using a "heartbeat" file, and this file is deleted when the upload is successful.  Because all objects in a bucket are locked, the "heartbeat" object must be managed from a second bucket without retention rules.  This is the temp-metadata-bucket.  When using retention rules you MUST have both buckets set in the config file.

NOTE

I ran into 2 issues when executing this script.

1) When trying to execute the jar file, I used the default java version in my OCI tenancy, which is located in "/usr/bin". The installer received a java error:

"java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter"

In order to properly execute the installer, I used the java executable located in $ORACLE_HOME/jdk/bin

2) When executing the jar file with my own RSA key that I had previously used with OCI object storage, I received a java error.

Exception in thread "main" java.lang.RuntimeException: Could not produce a private key
at oracle.backup.util.FileDownload.encode(FileDownload.java:823)
at oracle.backup.util.FileDownload.addBmcAuthHeader(FileDownload.java:647)
at oracle.backup.util.FileDownload.addHttpAuthHeader(FileDownload.java:169)
at oracle.backup.util.FileDownload.addHttpAuthHeader(FileDownload.java:151)
at oracle.backup.opc.install.BmcConfig.initBmcConnection(BmcConfig.java:437)
at oracle.backup.opc.install.BmcConfig.initBmcConnection(BmcConfig.java:428)
at oracle.backup.opc.install.BmcConfig.testConnection(BmcConfig.java:393)
at oracle.backup.opc.install.BmcConfig.doBmcConfig(BmcConfig.java:250)
at oracle.backup.opc.install.BmcConfig.main(BmcConfig.java:242)
Caused by: java.security.spec.InvalidKeySpecException: java.security.InvalidKeyException: IOException : algid parse error, not a sequence

I found that this was caused by the PKCS format. I was using a PKCS1 key, and the java installer was looking for a PKCS8 key.  The header in my private key file contained "BEGIN RSA PRIVATE KEY".
In order to convert my private PKCS1 key "oci_api_key.pem" to a PKCS8 key "pkcs8.key" I ran:

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in oci_api_key.pem -out pkcs8.key
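
A quick way to tell which format a key file is in is to look at the first line of the PEM file (this check is mine, not from the install documentation):

head -1 oci_api_key.pem   # "-----BEGIN RSA PRIVATE KEY-----"  => PKCS#1, needs converting
head -1 pkcs8.key         # "-----BEGIN PRIVATE KEY-----"      => PKCS#8, what the installer expects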

Executing the install

The next step is to execute the install. For my install I also wanted to configure a lifecycle rule that would archive backups after 14 days.  In order to implement this, I had the script create a new bucket "bsgtest".  Below are the parameters I used (note I used "..." to obfuscate the OCIDs).

$ORACLE_HOME/jdk/bin/java -jar oci_install.jar -pvtKeyFile /home/oracle/oci/wallet/pkcs8.key -pubFingerPrint .... -tOCID  ocid1.tenancy.oc1... -host https://objectstorage.us-ashburn-1.oraclecloud.com -uOCID ocid1.user.oc1.... -bucket bsgtest -cOCID ocid1.compartment.oc1... -walletDir /home/oracle/oci/wallet -libDir /home/oracle/oci/lib -configFile /home/oracle/oci/config/backupconfig.ora -enableArchiving TRUE -archiveAfterBackup "14 days"

This created a new bucket "bsgtest" containing a lifecycle rule.

I then added a 14 day retention rule to this bucket, and created a second bucket "bsgtest_meta" for the temporary metadata. If you want to make this rule permanent, you enable the retention rule lock, which I highlighted on the screenshot below.




I then updated the config file to use the metadata bucket because I set a retention rule on the main bucket. Note that there is also a parameter that determines how long archival objects are cached in standard storage before they are returned to archival storage.


OPC_CONTAINER=bsgtest
OPC_TEMP_CONTAINER=bsgtest_meta
OPC_AUTH_SCHEME=BMC
retainAfterRestore=48 HOURS


Testing

Once you execute the installer you will be able to begin backing up to OCI object storage.  Don't forget that you need to:
  • Change the default device type to SBT_TAPE
  • Change the compression algorithm. I recommend "medium" compression.
  • Configure encryption for database ON.
  • Configure the device type SBT_TAPE to send COMPRESSED BACKUPSET to optimize throughput and storage in OCI.
  • Create a default channel configuration for SBT_TAPE (or allocate channels manually) that use the library that was downloaded, and point to the configuration file for the database.
  • If you do not use ACO and don't have a wallet, manually set an encryption password in your session.
I recommend sending a "small" backup piece first to ensure that everything is properly configured.  My favorite command is

RMAN>backup incremental level 0 datafile 1;

Datafile 1 is always the system tablespace.

Below is what my configuration looks like for RMAN specifically for what I changed to use the Backup Cloud Service.

CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/oci/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/oci/config/backupconfig.ora)';
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES256'; # default
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;

Network Performance

One of the big areas that comes up with using the Backup Cloud Service is understanding the network capabilities.
The best place to start is with this MOS note

RMAN> run {
2> allocate channel foo device type sbt  PARMS  'SBT_LIBRARY=/home/oracle/oci/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/oci/config/backupconfig.ora)';
3>  send channel foo 'NETTEST 1000M';
4> }

allocated channel: foo
channel foo: SID=431 device type=SBT_TAPE
channel foo: Oracle Database Backup Service Library VER=19.0.0.1

released channel: foo
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of send command at 11/22/2023 14:12:04
ORA-19559: error sending device command: NETTEST 1000M
ORA-19557: device error, device type: SBT_TAPE, device name:
ORA-27194: skgfdvcmd: sbtcommand returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   KBHS-00402: NETTEST sucessfully completed
KBHS-00401: NETTEST RESTORE: 1048576000 bytes received in 15068283 microseconds
KBHS-00400: NETTEST BACKUP: 1048576000 bytes sent


Executing Backups

Now to put it all together I am going to execute a backup of datafile 1.  My database is encrypted, so I am going to set a password along with the encryption key.



 set encryption on identified by oracle;

executing command: SET encryption

RMAN>  backup incremental level 0 datafile 1;

Starting backup at 22-NOV-23
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=404 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=494 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=599 device type=SBT_TAPE
channel ORA_SBT_TAPE_3: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_4
channel ORA_SBT_TAPE_4: SID=691 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=19.0.0.1
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ACMEDBP/system01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 22-NOV-23
channel ORA_SBT_TAPE_1: finished piece 1 at 22-NOV-23
piece handle=8t2c4fmi_1309_1_1 tag=TAG20231122T150554 comment=API Version 2.0,MMS Version 19.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:35
Finished backup at 22-NOV-23

Starting Control File and SPFILE Autobackup at 22-NOV-23
piece handle=c-1654679317-20231122-01 comment=API Version 2.0,MMS Version 19.0.0.1
Finished Control File and SPFILE Autobackup at 22-NOV-23


Restoring

Restoring is very easy as long as you have the entries in your controlfile. If you don't, there is a script included in the installation that can catalog the backup pieces, and I go through that process here. This also allows you to display what's in the bucket.

Buckets 1 vs many

If you look at what is created when executing a backup, you will see that there is a set format for the backup pieces. Below are the 2 backup pieces that I created:

  • 8t2c4fmi_1309_1_1 - This is the backup of datafile 1 for my database ACMEDBP
  • c-1654679317-20231122-01 - This is the controlfile backup for this database
Notice that the DB name is not in the name of the backup pieces, or in the visible nesting.
If you think about a medium sized database (let's say 100 datafiles), that has 2 weeks of backups (14 days), you would have 1,400 different backup pieces for the datafiles within the "sbt_catalog" directory.

My recommendation is to group small databases together in the same bucket (keeping the number of backup pieces to a manageable level).
For a large database (1,000+ datafiles), you can see where a 30 day retention could become 30,000+ backup pieces.

Having a large number of objects within a bucket increases the time to report the available backup pieces.  There is no way to determine which database the object is a member of without looking at the metadata.

Keep this in mind when considering how many buckets to create.





Monday, October 23, 2023

Oracle Recovery Service now offers retention lock

 Oracle DB Recovery Service recently added a new feature to protect backups from being prematurely deleted, even by a tenancy administrator.  This new feature adds a retention lock to the Backup Retention Period at the policy level. The image below shows the new settings that you see within the protection policy.

Enabling retention lock

The recovery service comes with some default policies that appear as "oracle defined" policy types

Name            Backup retention period
Platinum            46 days
Gold                   65 days
Silver                 35 days
Bronze               14 days

These policies can't be changed, and they do not enable retention lock.

In order to implement a retention lock you need to create a new protection policy or update an existing user defined protection policy.

Step #1 Set/Adjust "Backup retention period"

If you are creating a new "user defined" protection policy, you need to set the backup retention to a number of days between 14 and 95.  You should also take this opportunity to adjust the backup retention of an existing policy, if appropriate, before it is locked.

NOTE: Once a retention lock on the protection policy is activated (discussed in step #3), the backup retention period cannot be decreased, it can only be increased.

Step #2 Click on "enable retention lock"

This step is pretty straightforward. But the most important item to know is that the retention lock is not immediately in effect.  Much like the "retention lock" that is set on object storage, there is a minimum period of at least 14 days before the lock is "active".

 Note: Once the grace period has expired for the policy (explained later in this blog post) the  "retention lock"  is permanent and cannot be removed.


Step #3 Set "Scheduled lock time"

As I said in the previous step, the lock isn't immediately active. In this step you set the future date/time that the lock becomes active, and this date/time must be at least 14 days in the future.  This provides a grace period that delays when the lock on the policy becomes active. You have up until the lock activation date/time to adjust the scheduled lock time further into the future if it becomes necessary to further delay lock activation.

Grace Period 

I wanted to make sure I explain what happens with this grace period so that you can plan accordingly.

  • If you change an existing "user defined" policy to enable the retention lock, any databases that are a member of this policy will not have locked backups until the scheduled lock date/time activates the lock.  
  • If you add databases to a protection policy that has a retention lock enabled, the backups will not be locked until whichever of the following times is farther in the future:
    • Scheduled lock time for the policy if the retention lock has not yet activated.
    • 14 days after the database is added to the protection policy.
  • Databases can be removed from a retention locked protection policy during this grace period.
  • If the policy itself is still within its grace period before activating, the backup retention period can be adjusted down for the protection policy.
NOTE: This 14 day grace period allows you to review the estimated space needed.  On the protected database summary page, for each database, you can see the "projected space for policy"  in the Space Usage section.  This value can be used to estimate the "locked backup" utilization.


What happens with a retention lock ?

Once the grace period expires the backups for the protected database are time locked and can't be prematurely deleted.  

The backups are protected by the following rules.

1. The database cannot be moved to another policy. No user within the tenancy, including an administrator, can remove a database from its retention enabled policy.  If it becomes necessary to move a database to another policy, an SR needs to be raised, and security policies are followed to ensure that this is an approved change.


2.  There is always a 14 day grace period in which changes can be made before the backups become locked. This is your window to verify the backup storage usage required before the lock activates.

3. Even if you check the "72 hour termination option" on the database, backups are locked throughout the retention window.


Comments:

This is a great new feature that protects backups from being deleted by anyone in the tenancy, including tenancy administrators.  This provides an extra layer of security from an attack with compromised credentials.  Because the lock is permanent, always use the 14 day grace period to ensure the usage and duration are appropriate for your database.






Tuesday, September 5, 2023

Creating dynamic KEEP archival backups from ZDLRA

 This post covers how to utilize the new package DBMS_RA.CREATE_ARCHIVAL_BACKUP to dynamically create KEEP archival backups from a ZDLRA.

When using this package to schedule KEEP backups, I recommend creating restore points with every incremental backup.  Read this blog post to find out why.

PROCEDURE CREATE_ARCHIVAL_BACKUP(
   db_unique_name IN VARCHAR2,
   from_tag IN VARCHAR2 DEFAULT NULL,
   compression_algorithm IN VARCHAR2 DEFAULT NULL,
   encryption_algorithm IN VARCHAR2 DEFAULT NULL,
   restore_point IN VARCHAR2 DEFAULT NULL,
   restore_until_scn      IN VARCHAR2 DEFAULT NULL,
   restore_until_time     IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   attribute_set_name     IN VARCHAR2,
   format                 IN VARCHAR2 DEFAULT NULL,
   autobackup_prefix      IN VARCHAR2 DEFAULT NULL,
   restore_tag            IN VARCHAR2 DEFAULT NULL,
   keep_until_time        IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   max_redo_to_apply      IN INTEGER DEFAULT 14,                  --> Added in 21.1 June PSU
   comments IN VARCHAR2 DEFAULT NULL);

NOTE: This blog post was updated to include the MAX_REDO_TO_APPLY parameter which is not documented as of writing this post.

 The documentation can be found here.  

These archival KEEP backups can be sent to either

  • TAPE - Using the copy-to-tape process you can send archival backups to physical tape, virtual tape, or any media manager that uses a "TAPE" backup type.
  • CLOUD - Using the copy-to-cloud process you can send archival backups to an OCI object store bucket which can be either on a local ZFSSA (using the OCI API protocol), or to the Oracle Cloud directly.



NOTE: When sending backups to a cloud location, retention rules can be set on the bucket LOCKING the cloud backups to ensure that they are immutable.  This is integrated with the new compliance settings on the RA21.



How to use this package

1. Identify the Database

Because this is more of an on demand process, you have to execute the package for each database separately (rather than by using a protection policy), and identify for each database the point-in-time you want to use for recovery.

2. Set Archival Restore Point

Because the archival backup is dynamically created using existing backups the restore point works differently than if you create the KEEP backup on demand from the protected database. 


When you create a KEEP backup from the protected database, the backup contains 

    • Full backup of all datafiles
    • Backup of spfile and controlfile
    • Backup of archive logs created during the backup starting with a log switch at the beginning of the backup.
    • Final archive logs created by performing a log switch at the end of the backup.

 When you create an Archival backup from the ZDLRA , the backup contains

    • Most current virtual full backup of each datafile prior to the point in time for recovery that you choose. 
    • Backup of spfile and controlfile 
    • Backup of the active archive logs generated when the oldest virtual full datafile backup started, up to the archive logs needed to recover until the point in time chosen for recovery.

As you can see, a normal KEEP backup generated by the protected database is a "self-contained" backup that can be recovered only to the point in time that the backup completed.  You can increase the recovery point by adding additional KEEP archival log backups after the backup.

The dynamically created KEEP backup generated by the ZDLRA is also a "self-contained" backup, but it can be recovered to any point in time after the last datafile backup completed, up to the restore point identified.

Choices for a dynamic restore point 

 There are 3 options to choose a specific restore point. If you do not set one of these options, the KEEP backup will be created using the current restore point of the database.  

  •  RESTORE_POINT - If you set a unique restore point in the database immediately following an incremental backup (or at a later point in time), you can create a KEEP backup that will recover to that point-in-time.  When using this process, after creating the restore point you should ensure that you also perform a log switch, and a log sweep to back up the archive logs (a shell sketch of this appears just before the RECOMMENDATIONS section below).  This restore point name is used as the default RESTORE_TAG, and should be unique.  The recommended name (because it is the default KEEP restore tag) is "<KEEP_BACKUP_><yyyyMMddHH24miSS>".  BUT - in order to better identify the restore point, I would use a shorter name that just contains the date (assuming you are only performing a single daily incremental backup), for example "KEEP_BACKUP_MMDDYY".  By using a restore point, you can better control the amount of archive logs necessary to recover the database.

 

    • Incremental forever backups ensure that the duration of the backup is much shorter than a typical full KEEP backup, limiting the amount of archive logs necessary to have a recovery point.
    • Setting a restore point immediately following the backup ensures that the recovery window following the last datafile backup piece is short, also limiting the amount of archive logs necessary.

  • RESTORE_UNTIL_SCN or RESTORE_UNTIL_TIME - I am grouping these 2 choices together because they are so similar.  Unlike using a restore point that is preset, using either of these options will create the KEEP archival backup with a recovery point at the SCN number given or the UNTIL TIME given (using the database's timezone).


FROM_TAG - The documentation states that only backups containing the FROM_TAG will be considered if a FROM_TAG is set. I am thinking this would make sense if you let the restore point default to the current time, and you want to choose which backup pieces to include.  I am not sure of the full use of this option however.


WARNING: This process only looks back 14 days for a full backup to start the KEEP backupset with.  If you do not have a full backup within the 14 day window, this can be overridden with the MAX_REDO_TO_APPLY parameter in the package call. This was added in the 21.1 June PSU to allow customers to set a window farther back than 14 days.
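
Here is the shell sketch referenced in the RESTORE_POINT bullet above: what I would run on the protected database right after the nightly incremental backup completes. The restore point name format and the log switch come from the recommendations in this post; everything else is a placeholder.

sqlplus / as sysdba <<EOF
-- Normal (non-KEEP) restore point named for the date, e.g. KEEP_BACKUP_021424
CREATE RESTORE POINT KEEP_BACKUP_$(date +%m%d%y);
-- Force a log switch so the archive logs needed for this recovery point exist
ALTER SYSTEM ARCHIVE LOG CURRENT;
EOF
# ...then run (or wait for) the regular archive log sweep so those logs are backed up to the ZDLRA.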

 RECOMMENDATIONS 

  •  Because you can create up to 2048 RESTORE_POINTs in a database, and normal restore points are automatically dropped when necessary, I would recommend creating a restore point following each incremental backup with the format mentioned above. This will allow you to create a self-contained FULL KEEP backup from any incremental backup as needed. This can be used to easily create an end-of-month KEEP backup (for example).

 

  • I would use the RESTORE_UNTIL options when it is necessary to create a KEEP backup as of a specific point-in-time regardless of when the backup completed. This would be used if the recovery point is critical.

WARNING

Before creating the archival backup, ensure you have the archive logs backed up that are needed to support the recovery point, and ensure there is enough time for the incremental backups to virtualize.  You may need to perform a log switch and execute an additional log sweep prior to scheduling the archival backup.

3. Set Archival Options


COMPRESSION_ALGORITHM - The default is no compression, and if the backup piece is already compressed, it will not try to compress the backup again.  The documentation does a good job of going through the options, and why you would choose one or the other.  Keep in mind that if your database uses TDE for all the datafiles, there will be no gain with compression, and the extra resources required for compression may slow down the restore.  Also, the compression is performed by the ZDLRA (RMAN compression), but the de-compression is performed by the protected database during restore.

 ENCRYPTION_ALGORITHM - The default is no encryption, but it is important to understand that any copy-to-cloud processing MUST have encryption set.  It is also important to understand that the ZDLRA must be using OKV (Oracle Key Vault) to store the encryption keys when encryption is set. The list of algorithms can be found in the documentation.

 

4. Set Archival Location and Name

ATTRIBUTE_SET_NAME - This must be specified, and this identifies the backup location to send the archival backups.

FORMAT - By default the backup pieces are given handles automatically generated by the ZDLRA; this setting allows you to change the default backup piece format using normal RMAN formatting options.

AUTOBACKUP_PREFIX - By default the autobackup pieces will retain their original names, but you can add a prefix to the original autobackup names.

 

5. Set Restore TAG

 The RESTORE_TAG defaults to "<KEEP_BACKUP_><yyyyMMddHH24miSS>". This can be overridden to give the backup a more meaningful tag. For example, the end-of-month backup could be tagged as "MONTHLY_12_2023", making it easier to automate finding specific KEEP backups.

 RECOMMENDATIONS 

I would set the Restore Tag to a set format that makes the KEEP backups easy to find. You can see the example above. 

6. Set KEEP_UNTIL time

The default KEEP_UNTIL time is "FOREVER". In most cases you want to set an end date for the backup, allowing the ZDLRA to automatically remove the backup when it expires.  This date-time is based on the timezone of the protected database. 
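
Pulling the options together, a call could look like the sketch below, run against the Recovery Appliance catalog (typically as RASYS). The connection details, database name, attribute set, and dates are placeholders; only parameters from the specification shown earlier are used.

# Connect to the Recovery Appliance catalog (connection details and password variable are placeholders).
sqlplus rasys/"${RASYS_PWD}"@zdlra-scan:1521/zdlra <<'EOF'
BEGIN
  DBMS_RA.CREATE_ARCHIVAL_BACKUP(
    db_unique_name     => 'ACMEDBP',
    restore_point      => 'KEEP_BACKUP_123123',
    attribute_set_name => 'BSGTEST_BGRENN_1',
    restore_tag        => 'MONTHLY_12_2023',
    keep_until_time    => TO_TIMESTAMP_TZ('2025-01-01 00:00:00 US/Eastern',
                                          'YYYY-MM-DD HH24:MI:SS TZR'),
    comments           => 'End of month KEEP backup for December 2023');
END;
/
EOF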



 SUMMARY 

 If using this functionality to dynamically create Archival KEEP backups...

  • I would set a Restore Point in each database immediately following every incremental backup.  
  • I would schedule this procedure to create the archival backup with a formatted restore tag to make the backup easy to find.
  • If backing up to a CLOUD location, I would use retention rules to ensure the backups are immutable until they expire.

 

 

Wednesday, August 2, 2023

ZDLRA - Copy-to-cloud steps by step explained

 One of the best features of the ZDLRA is the ability to dynamically create a full Keep backup and send it to Cloud (ZFSSA or OCI) for archival storage.

Here is a great article by Oracle Product Manager Marco Calmasini that explains how to use this feature.



In this blog post, I will go through the RACLI steps that you execute, and explain what is happening with each step.

The documentation I started with is the 21.1 Administrator's Guide, which can be found here.  If you are on a more current release, then you can find the steps in the chapter named "Archiving Backups to Cloud".


Deploying the OKV Client Software

To ensure that all the backup pieces are encrypted, you must use OKV (Oracle Key Vault) to manage the encryption keys that are being used by the ZDLRA.  Even if you are using TDE for the datafiles, the copy-to-cloud process encrypts ALL backup pieces, including the backups of the controlfile and spfile, which aren't already encrypted.

I am not going to go through the detailed steps that are in the documentation to configure OKV, but I will just go through the high level processes.

The most important items to note in this section are:

  • Both nodes of the ZDLRA are added as endpoints, and they should have a descriptive name that identifies them, and ties them together.
  • A new endpoint group should be created with a descriptive name, and both nodes should be added to the new endpoint group.
  • A new virtual wallet is created with a descriptive name, and this needs to be both associated with the 2 endpoints and set as the default wallet for the endpoints.
  • Both endpoints of the ZDLRA are enrolled through OKV and during the enrollment process a unique enrollment token file is created for each node. It is best to immediately rename the files to identify the endpoint it is associated with using the format <myhost>-okvclient.jar.
  • Copy the enrollment token files to the /radump directory on the appropriate host.
NOTE: It is critical that you follow these directions exactly, and that each node has the appropriate enrollment token with the appropriate name before continuing.

#1 Add credential_wallet

racli add credential_wallet


Fri Jan 1 08:56:27 2018: Start: Add Credential Wallet
Enter New Keystore Password: <OKV_endpoint_password>
Confirm New Keystore Password:
Enter New Wallet Password: <ZDLRA_credential_wallet_password> 
Confirm New Wallet Password:
Re-Enter New Wallet Password:
Fri Jan 1 08:56:40 2018: End: Add Credential Wallet

The first step to configure the ZDLRA to talk to OKV is to have the ZDLRA create a password protected SEPS wallet file that contains the OKV password.
This step asks for 2 new passwords when executing
  1. New Keystore Password - This password is the OKV endpoint password.  This password is used by the database to communicate with OKV, and can be used with okvutil to interact with OKV directly.
  2. New Wallet Password - This password is used to protect the wallet file itself that will contain the OKV keystore password.
This password file is shared across both nodes.

Update contents      -  "racli add credential"
Change password    - "racli alter credential_wallet"

#2 Add keystore

racli add keystore --type hsm --restart_db

RecoveryAppliance/log/racli.log
Fri Jan 1 08:57:03 2018: Start: Configure Wallets
Fri Jan 1 08:57:04 2018: End: Configure Wallets
Fri Jan 1 08:57:04 2018: Start: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: End: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: Start: Start Listeners, and Database
Fri Jan 1 09:02:16 2018: End: Start Listeners, and Database

The second step to configure the ZDLRA to talk to OKV is to configure the ZDLRA database to communicate with OKV. The database on the ZDLRA will be configured to use the OKV wallet for encryption keys, which requires a bounce of the database.


Backout         - "racli remove keystore" 
Status            - "racli status keystore"
Update          - "racli alter keystore"
Disable          - "racli disable keystore"
Enable            - "racli enable keystore"

#3 Install okv_endpoint (OKV client software)

racli install okv_endpoint

Wed August 23 20:14:40 2018: Start: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: End: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: Start: Install OKV End Point [node02]
Wed August 23 20:14:45 2018: End: Install OKV End Point [node02]

The third step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes (OKV endpoints) enrolled in OKV.  This step will install the OKV software on both nodes of the ZDLRA, and complete the enrollment of the 2 ZDLRA nodes with OKV.  The password that was entered in step #1 for OKV is used during the enrollment process.

Status            - "racli status okv_endpoint"

NOTE: At the end of this step, the status command should return a status of online from both nodes.

Node: node02
Endpoint: Online
Node: node01
Endpoint: Online

#4 Open the Keystore

racli enable keystore

The fourth step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes open the encryption wallet in the database. This step will use the saved passwords from step #1 and open up the encryption wallet.

NOTE: This will need to be executed after any restarts of the database on the ZDLRA.

#5 Create a TDE master key for the ZDLRA in the Keystore

racli alter keystore --initialize_key

The final step to configure the ZDLRA to talk to OKV is to have the ZDLRA create the master encryption key for the ZDLRA in the wallet.

Creating Cloud Objects for Copy-to-Cloud

These steps create the cloud objects necessary to send backups to a cloud location.

NOTE: If you are configuring multiple cloud locations, you may go through these steps for each location.

Configure public/private key credentials

Authentication with the object storage is done using an X.509 certificate.  The ZDLRA steps outlined in the documentation will generate a new pair of API signing keys and register the new set of keys.
You can also use any set of API keys that you previously generated by putting your private key in the shared location on the ZDLRA nodes.
In OCI each user can only have 3 sets of API keys, but the ZFSSA has no restrictions on the number of API signing keys that can be created.
Each "cloud_key" represents an API signing key pair, and each cloud_key contains 
  1. pvt_key_path - Shared location on the ZDLRA where the private key is located
  2. fingerprint      - fingerprint associated with the private key to identify which key to use.
You can use the same "cloud_key" to authenticate to multiple buckets, and even different cloud locations.

Documentation steps to create new key pair

#1 Add Cloud_key


racli add cloud_key --key_name=sample_key

Tue Jun 18 13:22:07 2019: Using log file /opt/oracle.RecoveryAppliance/log/racli.log
Tue Jun 18 13:22:07 2019: Start: Add Cloud Key sample_key
Tue Jun 18 13:22:08 2019: Start: Creating New Keys
Tue Jun 18 13:22:08 2019: Oracle Database Cloud Backup Module Install Tool, build 19.3.0.0.0DBBKPCSBP_2019-06-13
Tue Jun 18 13:22:08 2019: OCI API signing keys are created:
Tue Jun 18 13:22:08 2019:   PRIVATE KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pvt
Tue Jun 18 13:22:08 2019:   PUBLIC  KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pub
Tue Jun 18 13:22:08 2019: Please upload the public key in the OCI console.
Tue Jun 18 13:22:08 2019: End: Creating New Keys
Tue Jun 18 13:22:09 2019: End: Add Cloud Key sample_key

This step is used to generate a new set of API signing keys.
The output of this step is a shared set of files on the ZDLRA, which are stored in:
/raacfs/raadmin/cloud/key/{key_name)/

In order to complete the cloud_key information, you need to add the public key to OCI, or to the ZFS and save the fingerprint that is associated with the public key. The fingerprint is used in the next step.

#2 racli alter cloud_key


racli alter cloud_key --key_name=sample_key --fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef

The fingerprint that is associated with the public key (from the previous step) is added to the ZDLRA cloud_key information so that it can be used for authentication.  
Both the private key and the fingerprint are needed to use the API signing key for credentials.

Using your own API signing key pair

#1 Add cloud_key

racli add cloud_key --key_name=KEY_NAME [--fingerprint=PUBFINGERPRINT --pvt_key_path=PVTKEYFILE]

You can add your own API signing keys to the ZDLRA by  using the "add cloud_key" command identifying both the private key file location (it is best to follow the format and location in the automated steps) and the fingerprint associated with the API signing keys.
It is assumed that the public key has already been added to OCI, or to the ZFSSA.

Status        - racli list cloud_key
Delete        - racli remove cloud_key
Update       - racli alter cloud_key

Documentation steps to create a new cloud_user 

This step is used to create the wallet entry on the ZDLRA that is used for authenticating to the object store.
This step combines the "cloud_key", which contains the API signing keys, the user login information and the compartment (on ZFSSA the compartment is the share ).
The cloud_user can be used for authentication with multiple buckets/locations that are identified as cloud_locations as long as they are within the same compartment (share on ZFSSA).

The format of the command to create a new cloud_user is below

racli add cloud_user 
--user_name=sample_user
--key_name=sample_key
--user_ocid=ocid1.user.oc1..abcedfghijklmnopqrstuvwxyz0124567901
--tenancy_ocid=ocid1.tenancy.oc1..abcedfghijklmnopqrstuvwxyz0124567902
--compartment_ocid=ocid1.compartment.oc1..abcedfghijklmnopqrstuvwxyz0124567903

The parameters for this command are

  • user_name        - This is the username that is associated with the cloud_user to unique identify it.
  • key_name         - This is name of the "cloud_key" identifying the API signing keys to be used.
  • user_ocid          - This is the username for authentication. In OCI this is the user's OCID; on ZFS, this combines the OCID format with the username on the ZFSSA that owns the share.
  • tenancy_ocid    - This is the tenancy OCID in OCI; on ZFSSA it is ignored.
  • compartment_ocid - This is the compartment OCID in OCI; on ZFSSA it is the share.
For more information on configuring the ZFSSA see
How to configure Zero Data Loss Recovery Appliance to use ZFS OCI Object Storage as a cloud repository (Doc ID 2761114.1)


List                - racli list  cloud_user
Delete            - racli remove  cloud_user
Update           - racli alter cloud_user

Documentation steps to create a new cloud_location 

This step is used to associate the cloud_user (used for authentication) with both the location and the bucket that is going to be used for backups.

racli add cloud_location
--cloud_user=<CLOUD_USER_NAME>
--host=https://<OPC_STORAGE_LOCATION>
--bucket=<OCI_BUCKET_NAME>
--proxy_port=<HOST_PORT>
--proxy_host=<PROXY_URL>
--proxy_id=<PROXY_ID>
--proxy_pass=<PROXY_PASS>
--streams=<NUM_STREAMS>
[--enable_archive=TRUE]
--archive_after_backup=<number>:[YEARS | DAYS]
[--retain_after_restore=<number_hours>:HOURS]
--import_all_trustcert=<X509_CERT_PATH>
--immutable
--temp_metadata_bucket=<metadata_bucket>  


 

I am going to go through the key items that need to be entered here.  I am going to skip over the PROXY information and certificate.

  • cloud_user - This is the object store authentication information that was created in the previous steps.
  • host - This is the URL for the object storage location. On ZFS the namespace in the URL is the "share".
  • bucket - This is the bucket where the backups will be sent. The bucket will be created if it doesn't exist.
  • streams - The maximum number of channels to use when sending backups to the cloud
  • enable_archive - Not used with ZFS. With OCI the default TRUE allows you to set an archival strategy, FALSE will automatically put backups in archival storage.
  • archive_after_backup - Not used with ZFS. Automatically configures an archival strategy in OCI.
  • retain_after_restore - Not used with ZFS. Sets the period of time that backups will remain in standard storage before returning to archival storage.
  • immutable - This allows you to set retention rules on the bucket by using the <metadata_bucket> for temporary files that need to be deleted after the backup. When using immutable you must also have a temp_metadata_bucket
  • temp_metadata_bucket - This is used with immutable to configure backups to go to 2 buckets, and this bucket will only contain a temporary object that gets deleted after the backup completes.
This command will create multiple attribute sets (between 1 and the number of streams) for the cloud_location that can be used for sending archival backups to the cloud with different numbers of channels.
The format of <copy_cloud_name> is a combination of  <bucket name> and <cloud_user>.
The format of the attributes used for the copy jobs is <Cloud_location_name>_<stream number>


Update          - racli alter cloud_location
Disable          - racli disable cloud_location  - This will pause all backups going to this location
Enable           - racli enable cloud_location  - This unpauses all backups going to this location
List                - racli list  cloud_location
Delete            - racli remove cloud_location

NOTE: There are quite a few items to note in this section.
  • When configuring backups to go to ZFSSA use the documentation previously mentioned to ensure the parameters are correct.
  • When executing this step with ZFSSA, make sure that the default OCI location on the ZFSSA is set to the share that you are currently configuring. If you are using multiple shares for buckets, then you will have to change the ZFSSA settings as you add cloud locations.
  • When using OCI for archival ensure that you configure the archival rules using this command. This ensures that the metadata objects, which can't be archived are excluded as part of the lifecycle management rules created during this step.


Create the job template using the documentation.