
Tuesday, December 19, 2023

ZFSSA can be used to share data from your Oracle Database

 Data sharing has become a big topic recently, and Oracle Cloud has added new services to allow you to share data from an Autonomous Database.  But how do you do this with your on-premises database?  In this post I show you how to use ZFS as your data sharing platform.


Data Sharing

Being able to securely share data between applications is a critical feature in today's world.  The Oracle Database is often used to consolidate and summarize collected data, but it is not always the platform for doing analysis.  The Oracle Database does have the capability to analyze data, but tools such as Jupyter Notebooks, Tableau, Power BI, etc. are typically the favorites of data scientists and data analysts.

The challenge is how to give access to specific pieces of data in the database without providing access to the database itself.  The most common solution is to use object storage and pre-authenticated URLs.  Data is extracted from the database based on the user and stored in object storage in a sharable format (JSON, for example).  With this paradigm (pictured above), you can create multiple datasets that contain a subset of the data specific to each user's needs and authorization.  The second part is the use of a pre-authenticated URL.  This is a dynamically created URL that allows access to the object without authentication. Because it contains a long string of random characters, and is only valid for a specified amount of time, it can be securely shared with the user.

My Environment

For this post, I started with an environment I had previously configured to use DBMS_CLOUD.  My database is a 19.20 database.  In that database I used the steps specified in the MOS note and my blog (information can be found here) to use DBMS_CLOUD.

My ZFSSA environment is using 8.8.63, and I did all of my testing in OCI using compute instances.

In preparation for testing I had:

  • Installed DBMS_CLOUD packages into my database using MOS note #2748362.1
  • Downloaded the certificates for my ZFS appliance using my blog post and added them to my wallet.
  • Added the DNS/IP addresses to the DBMS_CLOUD_STORE table in the CDB.
  • Created a user in my PDB with authority to use DBMS_CLOUD
  • Created a user on ZFS to use for my object storage authentication (Oracle).
  • Configured the HTTP service for OCI
  • Added my public RSA key from my key pair to the OCI service for authentication.
  • Created a bucket
NOTE:  In order to complete the full test, there were 2 other items I needed to do.

1) Update the ACL to also allow access to port 80.  The DBMS_CLOUD configuration sets ACLs to access websites using port 443.  During my testing I used port 80 (http vs https).  A sketch of adding such an ACL entry is shown after this list.
2) I granted execute on DBMS_CRYPTO to my database user for the testing.
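
As a rough sketch, appending an ACE for port 80 can look like the block below. The host is the one from my environment, but the database user SHARE_USER is a placeholder; adjust the host, port, and principal for your own setup.

BEGIN
  -- allow the database user to make http calls to the ZFS appliance on port 80
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host       => 'zfs-s3.zfsadmin.vcn.oraclevcn.com',
    lower_port => 80,
    upper_port => 80,
    ace        => xs$ace_type(privilege_list => xs$name_list('http'),
                              principal_name => 'SHARE_USER',      -- placeholder user
                              principal_type => xs_acl.ptype_db));
END;
/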

Step #1 Create Object

The first step was to create an object from a query in the database.  This simulates pulling a subset of data (based on the user) and writing it to an object so that it can be shared.  To create the object I used the DBMS_CLOUD.EXPORT_DATA procedure.  Below is the statement I executed.

BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'ZFS_OCI2',
    file_uri_list   => 'https://zfs-s3.zfsadmin.vcn.oraclevcn.com/n/zfs_oci/b/db_back/o/shareddata/customer_sales.json',
    format          => '{"type" : "JSON"}',
    query           => 'SELECT OBJECT_NAME,OBJECT_TYPE,STATUS FROM user_objects');
END;
/

In this example:
  • CREDENTIAL_NAME - Refers to the authentication credentials I had previously created in my database.
  • FILE_URI_LIST - The name and location of the object I want to create on the ZFS object storage.
  • FORMAT - The output is written in JSON format.
  • QUERY - This is the query you want to execute, whose results are stored in the object storage.
As you can see, it would be easy to create multiple objects that contain specific data by customizing the query, and naming the object appropriately.

In order to get the proper name of the object, I then listed the objects in object storage.

set pagesize 0
SET UNDERLINE =
col object_name format  a25
col created format  a20
select object_name,to_char(created,'MM/DD/YY hh24:mi:ss') created,bytes/1024  bytes_KB 
       from dbms_cloud.list_objects('ZFS_OCI2', 'https://zfs-s3.zfsadmin.vcn.oraclevcn.com/n/zfs_oci/b/db_back/o/shareddata/');


 customer_sales.json_1_1_1.json 12/19/23 01:19:51    3.17382813


From the output, I can see that my JSON file was named 'customer_sales.json_1_1_1.json'.


Step #2 Create Pre-authenticated URL

The PL/SQL package I ran to do this is broken down into multiple sections below.



Step #2a Declare variables

The first part of the PL/SQL package declares the variables that will be used. Most of the variables are normal VARCHAR2 variables, but there are a few other variable types that are specific to the packages used to encrypt and send the URL request.  A sketch of these declarations follows the list.

  • sType, kType - These are constants used to sign the URL request with RSA SHA-256 encryption
  • utl_http.req, utl_http.resp - These are request and response types used when accessing the object storage
  • json_obj - This type is used to extract the URL from the resulting JSON returned from the object storage call.
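
Below is a rough sketch of what that declaration section could look like. The variable names are mine, not necessarily the ones used in the original package.

DECLARE
  -- constants used to sign the request: RSA key type, SHA-256 RSA signature
  kType        CONSTANT PLS_INTEGER := DBMS_CRYPTO.KEY_TYPE_RSA;
  sType        CONSTANT PLS_INTEGER := DBMS_CRYPTO.SIGN_SHA256_RSA;
  -- request/response handles used when calling the object storage endpoint
  v_req        UTL_HTTP.REQ;
  v_resp       UTL_HTTP.RESP;
  -- JSON document used to pull the pre-authenticated URL out of the response
  json_obj     JSON_OBJECT_T;
  -- ordinary VARCHAR2 working variables
  v_host       VARCHAR2(200);
  v_date_gmt   VARCHAR2(100);
  v_body       VARCHAR2(4000);
  v_signature  VARCHAR2(4000);
BEGIN
  NULL;  -- declarations only; the following steps fill in the logic
END;
/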

Step #2b Set variables

In this section of code I set the authentication information, along with the host and the private key part of my RSA public/private key pair.
I also set a variable with the current date/time in the correct GMT format; a sketch of building this timestamp follows the note below.

NOTE: This date time stamp is compared with the date time on the ZFSSA. It must be within 5 minutes of the ZFSSA date/time or the request will be rejected.
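
As a sketch, the GMT timestamp can be built like this (the format shown is the standard HTTP date format that OCI-style signed requests expect):

DECLARE
  v_date_gmt VARCHAR2(100);
BEGIN
  -- produces a value such as 'Tue, 19 Dec 2023 01:30:00 GMT'
  v_date_gmt := TO_CHAR(SYSTIMESTAMP AT TIME ZONE 'GMT',
                        'Dy, DD Mon YYYY HH24:MI:SS "GMT"',
                        'NLS_DATE_LANGUAGE=ENGLISH');
  DBMS_OUTPUT.PUT_LINE(v_date_gmt);
END;
/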

Step #2c Set JSON body

In this section of code, I build the actual request for the pre-authenticated URL.  The parameters for this are...
  • accessType - I set this to "ObjectRead" which allows me to create a URL that points to a specific object.  Other options are Write, and ReadWrite.
  • bucketListingAction - I set this to "Deny".  This disallows the listing of objects.
  • bucketName - Name of the bucket
  • name - A name you give the request so that it can be identified later
  • namespace/namespaceName - This is the ZFS share
  • objectName - This is the object name on the share that I want the request to refer to. 
  • timeExpires - This is when the request expires.
NOTE: I did not spend the time to do a lot of customization to this section.  You could easily make the object name a parameter that is passed to the package along with the bucketname etc. You could also dynamically set the expiration time based on sysdate.  For example you could have the request only be valid for 24 hours by dynamically setting the timeExpires.

The last step in this section of code is to create a SHA-256 digest of the JSON "body" that I am sending with the request.  I created it using DBMS_CRYPTO.HASH; a sketch of this step follows.
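
Here is a sketch of building the body and its digest. The parameter values are placeholders based on my environment, and I am assuming the digest format is the base64-encoded SHA-256 of the body, as in the OCI signing scheme.

DECLARE
  v_body   VARCHAR2(4000);
  v_digest VARCHAR2(200);
BEGIN
  -- JSON body for the pre-authenticated request (values are placeholders)
  v_body := '{"accessType"          : "ObjectRead",'
         || ' "bucketListingAction" : "Deny",'
         || ' "bucketName"          : "db_back",'
         || ' "name"                : "customer_sales_share",'
         || ' "namespace"           : "zfs_oci",'
         || ' "objectName"          : "shareddata/customer_sales.json_1_1_1.json",'
         || ' "timeExpires"         : "2024-01-31T00:00:00Z"}';

  -- sha256 digest of the body, base64 encoded, used later in the x-content-sha256 header
  v_digest := UTL_RAW.CAST_TO_VARCHAR2(
                UTL_ENCODE.BASE64_ENCODE(
                  DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(v_body),
                                   DBMS_CRYPTO.HASH_SH256)));
  DBMS_OUTPUT.PUT_LINE(v_digest);
END;
/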

Step #2d Create the signing string

This section builds the signing string that is encrypted.  This string is set in a very specific format.  The string that is built contains:

(request-target): post /oci/n/{share name}/b/{bucket name}/p?compartment={compartment} 
date: {date in GMT}
host: {ZFS host}
x-content-sha256: {sha256 digest of the JSON body parameters}
content-type: application/json
content-length: {length of the JSON body parameters}

NOTE: This signing string has to be created with the line feeds.

The final step in this section is to sign the signing string with the private key.
In order to sign the string, the DBMS_CRYPTO.SIGN function is used; a sketch of this step is below.
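
Below is a sketch of building and signing the string. The host, namespace, and bucket come from my environment; the compartment, date, digest, content length, and private key are placeholders (the key would be the PEM text of the key pair registered with the ZFS OCI service).

DECLARE
  v_signing_string VARCHAR2(4000);
  v_private_key    VARCHAR2(4000) := '<PEM private key text>';   -- placeholder
  v_signed_raw     RAW(2000);
BEGIN
  -- each header is on its own line, separated by a line feed (CHR(10))
  v_signing_string :=
       '(request-target): post /oci/n/zfs_oci/b/db_back/p?compartment=<compartment>' || CHR(10)
    || 'date: <GMT date from step #2b>'                                              || CHR(10)
    || 'host: zfs-s3.zfsadmin.vcn.oraclevcn.com'                                     || CHR(10)
    || 'x-content-sha256: <digest from step #2c>'                                    || CHR(10)
    || 'content-type: application/json'                                              || CHR(10)
    || 'content-length: <length of the JSON body>';

  -- sign with the RSA private key using SHA-256
  v_signed_raw := DBMS_CRYPTO.SIGN(UTL_RAW.CAST_TO_RAW(v_signing_string),
                                   UTL_RAW.CAST_TO_RAW(v_private_key),
                                   DBMS_CRYPTO.KEY_TYPE_RSA,
                                   DBMS_CRYPTO.SIGN_SHA256_RSA);
END;
/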


Step #2e Build the signature from the signing string


This section takes the raw signature that was created in the prior step and encodes it in Base64.  It uses the UTL_ENCODE.BASE64_ENCODE function to encode the raw signature, and the result is then converted to a VARCHAR2.

Note: The resulting base64 encoded string is broken into 64 character sections.  After creating the encoded string, I loop through the string and combine the 64 character sections into a single string.
This took the most time to figure out; a sketch is below.
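
A sketch of this step is below. Random bytes stand in for the real signature here, and stripping the embedded CR/LF characters that UTL_ENCODE inserts every 64 characters is equivalent to looping through and concatenating the 64 character sections.

DECLARE
  v_signed_raw RAW(2000);
  v_sig_b64    VARCHAR2(4000);
BEGIN
  -- in the real package this is the output of DBMS_CRYPTO.SIGN
  v_signed_raw := DBMS_CRYPTO.RANDOMBYTES(256);

  -- base64 encode, then remove the line breaks so the 64 character sections become one string
  v_sig_b64 := UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(v_signed_raw));
  v_sig_b64 := REPLACE(REPLACE(v_sig_b64, CHR(13), ''), CHR(10), '');

  DBMS_OUTPUT.PUT_LINE(v_sig_b64);
END;
/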

Step #2f Create the authorization header

This section dynamically builds the authorization header that is sent with the call.  It includes the authentication parameters (tenancy OCID, user OCID, and key fingerprint), the headers (these must be listed in the order they are sent), and the signature that was created in the last two steps; a sketch follows.
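
A sketch of the header is below. The OCIDs and fingerprint are placeholders, the keyId is tenancy/user/fingerprint, and the headers list must match the order used in the signing string.

DECLARE
  v_tenancy_ocid VARCHAR2(200)  := 'ocid1.tenancy.oc1..aaaa...';   -- placeholder
  v_user_ocid    VARCHAR2(200)  := 'ocid1.user.oc1..aaaa...';      -- placeholder
  v_fingerprint  VARCHAR2(200)  := 'a1:b2:c3:d4:...';              -- placeholder
  v_signature    VARCHAR2(4000) := '<base64 signature from step #2e>';
  v_auth_header  VARCHAR2(4000);
BEGIN
  v_auth_header := 'Signature version="1",'
    || 'keyId="' || v_tenancy_ocid || '/' || v_user_ocid || '/' || v_fingerprint || '",'
    || 'algorithm="rsa-sha256",'
    || 'headers="(request-target) date host x-content-sha256 content-type content-length",'
    || 'signature="' || v_signature || '"';
  DBMS_OUTPUT.PUT_LINE(v_auth_header);
END;
/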

Step #2g Send a post request and header information


The next section sends the POST call to the ZFS object storage, followed by each piece of header information.  After the header parameters are sent, the JSON body is sent using the UTL_HTTP.WRITE_TEXT call (see the combined sketch after step #2h).

Step #2h Loop through the response

This section gets the response from the POST call and loops through the response.  I am using the JSON_OBJECT_T.PARSE call to create a JSON type, and then use json_obj.get to retrieve the unique request URI that is created.
Finally I display the resulting URI that can be used to retrieve the object itself.  A combined sketch of steps #2g and #2h is below.
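
Below is a combined sketch of steps #2g and #2h. It assumes the date, digest, authorization header, and body were built as in the earlier steps, and it assumes the response field holding the URI is named accessUri, as it is in the OCI pre-authenticated request API.

DECLARE
  v_url    VARCHAR2(500)  := 'http://zfs-s3.zfsadmin.vcn.oraclevcn.com/oci/n/zfs_oci/b/db_back/p';
  v_body   VARCHAR2(4000) := '<JSON body from step #2c>';
  v_req    UTL_HTTP.REQ;
  v_resp   UTL_HTTP.RESP;
  v_buffer VARCHAR2(32767);
  v_doc    VARCHAR2(32767);
  json_obj JSON_OBJECT_T;
BEGIN
  v_req := UTL_HTTP.BEGIN_REQUEST(v_url, 'POST', 'HTTP/1.1');

  -- headers are sent in the same order they appear in the signing string
  UTL_HTTP.SET_HEADER(v_req, 'date',             '<GMT date from step #2b>');
  UTL_HTTP.SET_HEADER(v_req, 'host',             'zfs-s3.zfsadmin.vcn.oraclevcn.com');
  UTL_HTTP.SET_HEADER(v_req, 'x-content-sha256', '<digest from step #2c>');
  UTL_HTTP.SET_HEADER(v_req, 'content-type',     'application/json');
  UTL_HTTP.SET_HEADER(v_req, 'content-length',   TO_CHAR(LENGTH(v_body)));
  UTL_HTTP.SET_HEADER(v_req, 'Authorization',    '<authorization header from step #2f>');

  -- the JSON body follows the headers
  UTL_HTTP.WRITE_TEXT(v_req, v_body);

  -- read the whole response, then pull the pre-authenticated URI out of the JSON
  v_resp := UTL_HTTP.GET_RESPONSE(v_req);
  BEGIN
    LOOP
      UTL_HTTP.READ_TEXT(v_resp, v_buffer);
      v_doc := v_doc || v_buffer;
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.END_OF_BODY THEN
      UTL_HTTP.END_RESPONSE(v_resp);
  END;

  json_obj := JSON_OBJECT_T.PARSE(v_doc);
  DBMS_OUTPUT.PUT_LINE('Pre-authenticated URI: ' || json_obj.get_string('accessUri'));
END;
/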


Documentation

There were a few documents that I found very useful to give me the correct calls in order to build this package.

Signing request documentation : This document gave detail on the parameters needed to send get or post requests to object storage.  This document was very helpful to ensure that I had created the signature properly.

Http message signature format : This document gives detail on the signature itself and the format.

OCI rest call walk through : This post was the most helpful as it gave an example of a GET call and a PUT call. I was first able to create a GET call using this post, and then I built on it to create my POST call.


Monday, July 24, 2023

RMAN - Create weekly archival backup from weekly full backups

 This blog post demonstrates a process to create KEEP archival backups dynamically by using backup pieces within a weekly full/daily incremental backup strategy.

Thanks to Battula Surya Shiva Prasad and Kameswara RaoIndrakanti for coming up with the process of doing this.


KEEP backups

First let's go through what a KEEP backup is and how it affects your backup strategy.

  1. A KEEP backup is a self-contained backupset.  The archive logs needed to de-fuzzy the database files are automatically included in the backupset.  
  2. The archive logs included in the backup are only the archive logs needed to de-fuzzy.
  3. The backup pieces in the KEEP backup (both datafile backups and included archive log pieces) are ignored in the normal incremental backup strategy, and in any log sweeps.
  4. When a recovery window is set in RMAN, KEEP backup pieces are ignored in any "Delete Obsolete" processing.
  5. KEEP backup pieces, once past the "until time" are removed using the "Delete expired" command.

Normal process to create an archival KEEP backup

  • Perform a weekly full backup and a daily incremental backup that are deleted using an RMAN recovery window.
  • Perform archive log backups with the full/incremental backups along with log sweeps. These are also deleted using an RMAN recovery window.
  • One of these processes is used to create an archival KEEP backup:
    • A separate full KEEP backup is performed along with the normal weekly full backup
    • The weekly full backup (and archive logs based on tag) are copied to tape with "backup as backupset" and marked as "KEEP" backup pieces.

Issues with this process

  • The process of copying the full backup to tape using "backup as backupset" requires 2 copies of the same backup for a period of time.  You don't want to wait until the end of retention to copy it to tape.
  • If the KEEP full backups are stored on disk along with the weekly full backups, you cannot use "backup as backupset"; you must perform a second, separate backup.

Proposal to create a weekly KEEP backup

Problems with simple solution

The basic idea is that you perform a weekly full backup, along with daily incremental backups, that are kept for 30 days. After the 30 day retention, just the full backups (along with the archive logs needed to de-fuzzy them) are kept for an additional 30 days.

The most obvious way to do this is to:

  • Set the RMAN retention to 30 days
  • Create a weekly full backup that is a KEEP backup with an until time of 60 days in the future.
  • Create a daily incremental backup that is NOT a KEEP backup.
  • Create archive log backups as normal.
  • Allow delete obsolete to remove the "non-KEEP" backups after 30 days.

Unfortunately, when you create an incremental backup and there are only KEEP backups preceding it, the incremental level 1 backup is forced into an incremental level 0 backup.  And with delete obsolete, if you look through MOS note "RMAN Archival (KEEP) backups and Retention Policy (Doc ID 986382.1)" you find that the incremental backups and archive logs are kept for 60 days because there is no preceding non-KEEP backup.


Solution

The solution is to use tags, mark the weekly full as a KEEP backup after a week, and use the "delete backup completed before ... tag 'xx'" command.

Weekly full backup scripts

run
{
   backup archivelog all filesperset=20  tag ARCHIVE_ONLY delete input;
   change backup tag='INC_LEVEL_0'  keep until time 'sysdate+53';
   backup incremental level 0 database tag='INC_LEVEL_0' filesperset=20  plus archivelog filesperset=20 tag='INC_LEVEL_0';

  delete backup completed before 'sysdate-61' tag= 'INC_LEVEL_0';
  delete backup completed before 'sysdate-31' tag= 'INC_LEVEL_1';
  delete backup completed before 'sysdate-31' tag= 'ARCHIVE_ONLY';
}

Daily Incremental backup scripts

run
{
  backup incremental level 1 database tag='INC_LEVEL_1'  filesperset=20 plus archivelog filesperset=20 tag='INC_LEVEL_1';
}

Archive log sweep backup scripts

run
{
  backup archivelog all tag='ARCHIVE_ONLY' delete input;
}


Example

I then took these scripts, and built an example using a 7 day recovery window.  My full backup commands are below.
run
{
   backup archivelog all filesperset=20  tag ARCHIVE_ONLY delete input;
   change backup tag='INC_LEVEL_0'  keep until time 'sysdate+30';
   backup incremental level 0 database tag='INC_LEVEL_0' filesperset=20  plus archivelog filesperset=20 tag='INC_LEVEL_0';

  delete backup completed before 'sysdate-30' tag= 'INC_LEVEL_0';
  delete backup completed before 'sysdate-8' tag= 'INC_LEVEL_1';
  delete backup completed before 'sysdate-8' tag= 'ARCHIVE_ONLY';
}


First I am going to perform a weekly backup and incremental backups for 7 days to see how the settings affect the backup pieces in RMAN.

For Datafile #1:


 File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            NO                              INC_LEVEL_0
     3 06-02-23 00:00:03          1   3334337   3334513 INCR1           NO                              INC_LEVEL_1
     3 06-03-23 00:00:03          1   3334513   3334665 INCR1           NO                              INC_LEVEL_1
     3 06-04-23 00:00:03          1   3334665   3334805 INCR1           NO                              INC_LEVEL_1
     3 06-05-23 00:00:03          1   3334805   3334949 INCR1           NO                              INC_LEVEL_1
     3 06-06-23 00:00:03          1   3334949   3335094 INCR1           NO                              INC_LEVEL_1
     3 06-07-23 00:00:03          1   3335094   3335234 INCR1           NO                              INC_LEVEL_1

For archive logs:

Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      625    3333260   3334274 15-JUN-23         NO                              ARCHIVE_ONLY
      626    3334274   3334321 01-JUN-23         NO                              INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         NO                              INC_LEVEL_0
      628    3334375   3334440 01-JUN-23         NO                              ARCHIVE_ONLY
      629    3334440   3334490 01-JUN-23         NO                              INC_LEVEL_1
      630    3334490   3334545 02-JUN-23         NO                              INC_LEVEL_1
      631    3334545   3334584 02-JUN-23         NO                              ARCHIVE_ONLY
      632    3334584   3334633 02-JUN-23         NO                              INC_LEVEL_1
      633    3334633   3334695 03-JUN-23         NO                              INC_LEVEL_1
      634    3334695   3334733 03-JUN-23         NO                              ARCHIVE_ONLY
      635    3334733   3334782 03-JUN-23         NO                              INC_LEVEL_1
      636    3334782   3334839 04-JUN-23         NO                              INC_LEVEL_1
      637    3334839   3334876 04-JUN-23         NO                              ARCHIVE_ONLY
      638    3334876   3334926 04-JUN-23         NO                              INC_LEVEL_1
      639    3334926   3334984 05-JUN-23         NO                              INC_LEVEL_1
      640    3334984   3335023 05-JUN-23         NO                              ARCHIVE_ONLY
      641    3335023   3335072 05-JUN-23         NO                              INC_LEVEL_1
      642    3335072   3335124 06-JUN-23         NO                              INC_LEVEL_1
      643    3335124   3335162 06-JUN-23         NO                              ARCHIVE_ONLY
      644    3335162   3335211 06-JUN-23         NO                              INC_LEVEL_1
      645    3335211   3335273 07-JUN-23         NO                              INC_LEVEL_1
      646    3335273   3335311 07-JUN-23         NO                              ARCHIVE_ONLY


Next I'm going to execute the weekly full backup script that changes the last backup to a keep backup to see how the settings affect the backup pieces in RMAN.

For Datafile #1:
 File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-02-23 00:00:03          1   3334337   3334513 INCR1           NO                              INC_LEVEL_1
     3 06-03-23 00:00:03          1   3334513   3334665 INCR1           NO                              INC_LEVEL_1
     3 06-04-23 00:00:03          1   3334665   3334805 INCR1           NO                              INC_LEVEL_1
     3 06-05-23 00:00:03          1   3334805   3334949 INCR1           NO                              INC_LEVEL_1
     3 06-06-23 00:00:03          1   3334949   3335094 INCR1           NO                              INC_LEVEL_1
     3 06-07-23 00:00:03          1   3335094   3335234 INCR1           NO                              INC_LEVEL_1
     3 06-08-23 00:00:07          0         0   3335715 FULL            NO                              INC_LEVEL_0


For archive logs:


Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      625    3333260   3334274 15-JUN-23         NO                              ARCHIVE_ONLY
      626    3334274   3334321 01-JUN-23         YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      628    3334375   3334440 01-JUN-23         NO                              ARCHIVE_ONLY
      629    3334440   3334490 01-JUN-23         NO                              INC_LEVEL_1
      630    3334490   3334545 02-JUN-23         NO                              INC_LEVEL_1
      631    3334545   3334584 02-JUN-23         NO                              ARCHIVE_ONLY
      632    3334584   3334633 02-JUN-23         NO                              INC_LEVEL_1
      633    3334633   3334695 03-JUN-23         NO                              INC_LEVEL_1
      634    3334695   3334733 03-JUN-23         NO                              ARCHIVE_ONLY
      635    3334733   3334782 03-JUN-23         NO                              INC_LEVEL_1
      636    3334782   3334839 04-JUN-23         NO                              INC_LEVEL_1
      637    3334839   3334876 04-JUN-23         NO                              ARCHIVE_ONLY
      638    3334876   3334926 04-JUN-23         NO                              INC_LEVEL_1
      639    3334926   3334984 05-JUN-23         NO                              INC_LEVEL_1
      640    3334984   3335023 05-JUN-23         NO                              ARCHIVE_ONLY
      641    3335023   3335072 05-JUN-23         NO                              INC_LEVEL_1
      642    3335072   3335124 06-JUN-23         NO                              INC_LEVEL_1
      643    3335124   3335162 06-JUN-23         NO                              ARCHIVE_ONLY
      644    3335162   3335211 06-JUN-23         NO                              INC_LEVEL_1
      645    3335211   3335273 07-JUN-23         NO                              INC_LEVEL_1
      646    3335273   3335311 07-JUN-23         NO                              ARCHIVE_ONLY
      647    3335311   3335652 07-JUN-23         NO                              ARCHIVE_ONLY
      648    3335652   3335699 08-JUN-23         NO                              INC_LEVEL_0
      649    3335699   3335760 08-JUN-23         NO                              INC_LEVEL_0
      650    3335760   3335833 08-JUN-23         NO                              ARCHIVE_ONLY


Finally I'm going to execute the weekly full backup script that changes the last backup to a keep backup and this time it will delete the older backup pieces to see how the settings affect the backup pieces in RMAN.

For Datafile #1:

File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-08-23 00:00:07          0         0   3335715 FULL            YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-09-23 00:00:03          1   3335715   3336009 INCR1           NO                              INC_LEVEL_1
     3 06-10-23 00:00:03          1   3336009   3336183 INCR1           NO                              INC_LEVEL_1
     3 06-11-23 00:00:03          1   3336183   3336330 INCR1           NO                              INC_LEVEL_1
     3 06-12-23 00:00:03          1   3336330   3336470 INCR1           NO                              INC_LEVEL_1
     3 06-13-23 00:00:03          1   3336470   3336617 INCR1           NO                              INC_LEVEL_1
     3 06-14-23 00:00:04          1   3336617   3336757 INCR1           NO                              INC_LEVEL_1
     3 06-15-23 00:00:07          0         0   3336969 FULL            NO                              INC_LEVEL_0



For archive logs:

Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      626    3334274   3334321 01-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      647    3335311   3335652 07-JUN-23         NO                              ARCHIVE_ONLY
      648    3335652   3335699 08-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      649    3335699   3335760 08-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      650    3335760   3335833 08-JUN-23         NO                              ARCHIVE_ONLY
      651    3335833   3335986 08-JUN-23         NO                              INC_LEVEL_1
      652    3335986   3336065 09-JUN-23         NO                              INC_LEVEL_1
      653    3336065   3336111 09-JUN-23         NO                              ARCHIVE_ONLY
      654    3336111   3336160 09-JUN-23         NO                              INC_LEVEL_1
      655    3336160   3336219 10-JUN-23         NO                              INC_LEVEL_1
      656    3336219   3336258 10-JUN-23         NO                              ARCHIVE_ONLY
      657    3336258   3336307 10-JUN-23         NO                              INC_LEVEL_1
      658    3336307   3336359 11-JUN-23         NO                              INC_LEVEL_1
      659    3336359   3336397 11-JUN-23         NO                              ARCHIVE_ONLY
      660    3336397   3336447 11-JUN-23         NO                              INC_LEVEL_1
      661    3336447   3336506 12-JUN-23         NO                              INC_LEVEL_1
      662    3336506   3336544 12-JUN-23         NO                              ARCHIVE_ONLY
      663    3336544   3336594 12-JUN-23         NO                              INC_LEVEL_1
      664    3336594   3336639 13-JUN-23         NO                              INC_LEVEL_1
      665    3336639   3336677 13-JUN-23         NO                              ARCHIVE_ONLY
      666    3336677   3336734 13-JUN-23         NO                              INC_LEVEL_1
      667    3336734   3336819 14-JUN-23         NO                              INC_LEVEL_1
      668    3336819   3336857 14-JUN-23         NO                              ARCHIVE_ONLY
      669    3336857   3336906 14-JUN-23         NO                              ARCHIVE_ONLY
      670    3336906   3336953 15-JUN-23         NO                              INC_LEVEL_0
      671    3336953   3337041 15-JUN-23         NO                              INC_LEVEL_0
      672    3337041   3337113 15-JUN-23         NO                              ARCHIVE_ONLY


Result

For my datafiles, I still have the weekly full backup, and it is a keep backup. For my archive logs, I still have the archive logs that were part of the full backup which are needed to de-fuzzy my backup.


Restore Test


Now for the final test, using the next chg# of the June 1st archive logs (3334375):


RMAN> restore database until scn=3334375;

Starting restore at 15-JUN-23
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=259 device type=DISK
...
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/da1tiok6_1450_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1
...
channel ORA_DISK_1: reading from backup piece /u01/ocidb/backups/db1tiola_1451_1_1
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/db1tiola_1451_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1

RMAN> recover database until scn=3334375;
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=627
channel ORA_DISK_1: reading from backup piece /u01/ocidb/backups/dd1tiom8_1453_1_1
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/dd1tiom8_1453_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/product/19c/dbhome_1/dbs/arch1_627_1142178912.dbf thread=1 sequence=627
media recovery complete, elapsed time: 00:00:00
Finished recover at 15-JUN-23
RMAN> alter database open resetlogs;

Statement processed



Success!

Wednesday, May 10, 2023

ZDLRA real-time redo demonstrated

 One of the key features of the ZDLRA is the ability to capture changes from the database "real-time" just like a standby database does. In this blog post I am going to demonstrate what is happening during this process so that you can get a better understanding of how it works.

ZDLRA Real-time Redo


In the GIF above you can see the flow of real-time redo; I will explain what is happening and then show a demo of the process.

The ZDLRA uses the same process as a standby database.  In fact, if you look at the flow of the real-time redo you will notice the redo blocks are sent to BOTH the local redo log files AND to the staging area on the ZDLRA.  The staging area on the ZDLRA acts just like a standby redo log does on a standby database.

As the ZDLRA receives the REDO blocks from the protected database they are validated to ensure that they are valid Oracle Redo block information.  This ensures that a man-in-the-middle attack does not change any of the backup information.  The validation process also assures that if the database is attacked by ransomware (changing blocks), the redo received is not tainted.


The next thing that happens during the process is the logic when a LOG SWITCH occurs.  As we all know, when a log switch occurs on a database instance, the contents of the redo log are written to an archive log.  With real-time redo, this causes the contents of the redo staging area on the ZDLRA (picture a standby redo log) to become a backup set of an archive log.  The RMAN catalog on the ZDLRA is then updated with the internal location of the backup set.


Log switch operation

I am going to go through a demo of what you see happen when this process occurs.

ZDLRA is configured as a redo destination

Below you can see that my database has a "Log archive destination" 3 configured.  The destination itself is the database on the ZDLRA (zdl9), and also notice that the log information will be sent for ALL_ROLES, which sends the log information regardless of whether it is a primary or a standby database.
Archive Dest
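
The screenshot is not reproduced here, but a destination like this is typically set with something along the lines of the statements below. Treat this as an illustrative sketch only; the exact attributes come from the ZDLRA-generated configuration for your environment.

-- redo transport destination pointing at the ZDLRA database (zdl9)
ALTER SYSTEM SET log_archive_dest_3 =
  'SERVICE=zdl9 ASYNC NOREGISTER VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=zdl9'
  SCOPE=BOTH;

ALTER SYSTEM SET log_archive_dest_state_3 = ENABLE SCOPE=BOTH;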


List backup of recent archive logs from RMAN catalog


Before I demonstrate what happens with the RMAN catalog, I am going to list the current archive log backups. Below you see that the most recent archive log backed up to the ZDLRA has sequence #10.

archive log backups prior

Perform a log switch

As you see in the animation at the top of the post, when a log switch occurs, the contents of the redo log in the "redo staging area" are used to create an archive log backup that is stored and cataloged.  I am going to perform a log switch to force this process.

Log switch
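
The log switch itself is just a standard command issued on the protected database; either of these will force the current redo log to be archived:

ALTER SYSTEM SWITCH LOGFILE;
-- or, to switch and wait for the archiving to complete
ALTER SYSTEM ARCHIVE LOG CURRENT;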


List backup of archive logs from RMAN catalog

Now that the log switch occurred, you can see below that there is a new backup set created from the redo staging area.
There are a couple of interesting items to note when you look at the backup set created.

archive logs after


  1. The backup of the archive log is compressed.  As part of the policy on the ZDLRA you have the option to have the backup of the archive log compressed when it is created from the "staged redo". This does NOT require the ACO (Advanced Compression) license. The compressed archive log will be sent back to the DB compressed during a restore operation, and the DB host will uncompress it.  Standard compression is the default option, and I recommend changing it; if you decide to compress, then MEDIUM or LOW is recommended. Keep in mind that this may put more workload on the client to uncompress the backup sets, which may affect recovery times.  NOTE: When using TDE, there will be little to no compression possible.
  2. The TAG is automatically generated. By looking at the timestamp in the RMAN catalog information, you can see that the TAG is automatically generated using the timestamp to make it unique.
  3. The handle begins with "$RSCN_"; this is because the backup piece was generated by the ZDLRA itself, and archive log backup sets will begin with these characters.

Restore and Recovery using partial log information


Now I am going to demonstrate what happens when the database crashes, and there is no time for the database to perform a log switch.

List the active redo log and current SCN

Below you can see that my currently active redo log is sequence # 12.  This is where I am going to begin my test.

begin test
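
The screenshot output is not reproduced, but the information shown came from queries along these lines:

-- show the active (current) redo log sequence
SELECT group#, thread#, sequence#, status FROM v$log ORDER BY sequence#;

-- show the current SCN of the database
SELECT current_scn FROM v$database;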


Create a table 

To demonstrate what happens when the database crashes I am going to create a new table. In the table I am going to store the current date, and the current SCN. Using the current SCN we will be able to determine the redo log that contains the table creation.

table create
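
As a sketch, the marker table can be created with something like this (the table name is mine):

-- store the current date and SCN so we can tell which redo log holds this change
CREATE TABLE rtr_test AS
  SELECT sysdate create_date,
         current_scn
    FROM v$database;

SELECT * FROM rtr_test;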


Abort the database


As you probably know, if I shut down the database gracefully, the DB will automatically clean out the redo logs and archive their contents. Because I want to demonstrate what happens with a crash, I am going to shut the database down with an ABORT to ensure the log switch doesn't occur.  Then I start up the database in mount mode so I can look at the current redo log information.

abort


Verify that the log switch did not occur


Next I am going to look at the REDO Log information and verify that my table creation (SCN 32908369) is still in the active redo log and did not get archived during the shutdown.

Log switch doesn't occur

Restore the database


Next I am going to restore the database from backup.


restore

Recover the database


This is where the magic occurs, so I am going to show what happens step by step.

Recover using archive logs on disk


The first step the database performs is to use the archive logs on disk to recover the database. You can see in the screenshot below that the database recovers using archive logs on disk up to sequence #11 for thread 1.  This contains all the changes for this thread up to that point, but does not include what is in redo log sequence #12.  Sequence #12 contains the create table we are interested in.

archives on disk

Recover using partial redo log


This step is where the magic of the ZDLRA occurs.  You can see from the screenshot below that the RMAN catalog on the ZDLRA returns the redo log information for sequence #12 even though it was never archived. The ZDLRA was able to create an archive log backup from the partial contents it had in the redo staging area.

rtr recovery

Open the database and display table contents.


This is where it all comes together.  Using the partial redo log information from redo log sequence #12, you can see that when the database is opened, the table creation transaction is indeed in the database even though the redo never became an archive log.


Conclusion: I am hoping this post gives you a better idea of how real-time redo works on the ZDLRA, and how it handles recovering transactions after a database crash.

Friday, July 15, 2022

File Retention Lock on ZFSSA

File Retention Lock was recently released on ZFSSA, and I wanted to take the time to explain how to set the retention time and view the retention of locked files. Below is an example of what happens: you can see that the files are locked until January 1st, 2025.

ZFS Retention Lock


The best place to start for information on how this works is by looking at my last blog post on authorizations.

First I will go through the settings that are available at the share/project level.


Grace period

The grace period is used to automatically lock a file when there have been no updates to the file for this period of time.
If the automatic file retention grace period is "0" seconds, then the default retention is NOT in effect.




NOTE: Even with a grace period of "0" seconds, files can be locked by manually setting a retention period.
 Also, once a grace period is set (> "0") it cannot be increased or disabled.
Finally, if you set the grace period to a long period (to ensure all writes to a file are completed), you can lock the file by removing the write bit. This does the same thing as expiring the grace period.

Below is an example

chmod ugo-w *

Running the "chmod" will remove the write bit, and immediately cause all files to lock.

Default retention

The most common method to implement file retention is by using the default retention period. This causes the file to be locked for the default retention when the grace period expires for a file.
Note that the file is locked as of the time the grace period expires. For example, if I have a grace period of 1 day (because I want the ability to clean up a failed backup) and a default file retention period of 14 days, the file will be locked for 14 days AFTER the 1 day grace period. The lock on the file will expire 15 days after the file was last written to.

zfs file retention lock


In the example above you can see that all files created on this share are created with a default retention of 1 day (24 hours).

NOTE: If the grace period is not > "0", these settings will be ignored and files will not be locked by default.

Minimum/Maximum File retention

The second settings you see on the image above are the "minimum file retention period" and the "maximum file retention period".

These control the retention settings on files, which follow the rules below.

  • The default retention period for files MUST be at least the minimum file retention period, and not greater than the maximum file retention period.

  • If the retention date is set manually on a file, the retention period must fall within the minimum and maximum retention period.

Display current Lock Expirations.

In order to display the lock expiration on Linux, the first thing you need to do is change the share/project setting "Update access time on read" to off. Through the CLI this is "set atime=false" on the share.


zfssa file retention lock

Once this setting is made, the client will then display the lock time as the "atime". In my example at the top of the blog, you can see by executing "ls -lu" that the file lock time is displayed.

NOTE: You can also use the find command to search for files using the "atime". This will allow you to find all the locked files.

Below is an example of using the find command to list files that have an lock expiration time in the future.


export CURRENT_DATE=`date +"%y-%m-%d %H:%M:%S"`
find . -type f -newerat "$CURRENT_DATE" -printf '%h\t%AD%AH:%AM:%AS\t%s \n'



Manually setting a retention date


It is possible to set a specific date/time that a file is locked until. You can even set the retention date on a file that is currently locked (it must be a date beyond the current lock date).

NOTE: If you try to change the retention date on a specific file, the new retention date has to be greater than current retention date (and less than or equal to the maximum file retention period). This makes sense.  You cannot lower the retention period for a locked file.

Now how do you manually set the retention date ?  Below is an example of how it is set for a file.

Setting File retention lock

There are 3 steps that are needed to lock the file with a specific lock expiration date.

1. Touch the file and set the access date. This can be done with
    • "-a" to change the access date/time
    • "-d" or "-t" to specify the date format
2. Remove the write bit with chmod ugo-w
3. Execute a chmod to make the file read only (chmod a=r).

Below is an example where I am taking a file that does not contain retention, and setting the date to January 1, 2025.


First I am going to create a file and touch it, setting the atime to a future date.

$ echo 'xxxx' > myfile3.txt

$ ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$ rm myfile3.txt
$ ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


You can see that I set the "atime" and it displays a future date, but I was still able to delete the file.

Now I am going to also remove the write bit before deleting.

$ echo 'xxxx' > myfile3.txt

$ ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$ chmod ugo-w  myfile3.txt
$ rm myfile3.txt
$ ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


Still, I am able to delete the file. Finally, I am going to do all three steps.

$ echo 'xxxx' > myfile3.txt

$ ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$ chmod ugo-w  myfile3.txt
$ chmod a=r  myfile3.txt
$ rm myfile3.txt
rm: remove write-protected regular file ‘myfile3.txt’? y
rm: cannot remove ‘myfile3.txt’: Operation not permitted


Summary to manually set the lock on a file

If the file is NOT currently locked (the grace period is "0" or the grace period has not expired):


The commands below will lock the file "myfile.txt" until 01/01/25 12:00.

touch -a -t "2501011200" myfile.txt
chmod ugo-w  myfile.txt
chmod a=r  myfile.txt


If the file is already locked 

The commands below will adjust the lock on the file "myfile.txt" until 01/01/25 12:00.


touch -a -t "2501011200" myfile.txt


Tuesday, June 21, 2022

Migrate a large oracle database to OCI from disk backup

 Migrating an Oracle database from on-premises to OCI is especially challenging when the database is quite large.  In this blog post I will walk through the steps to migrate to OCI, leveraging an on-disk local backup copied to object storage.

migrate Oracle database to OCI


The basic steps to perform this task are in the image above.

Step #1 - Upload backup pieces to object storage.

The first step to migrate my database (acmedb) is to copy the RMAN backup pieces to the OCI object storage using the OCI Client tool.

In order to make this easier, I am breaking this step into a few smaller steps.

Step #1A - Take a full backup to a separate location on disk 


This can also be done by moving the backup pieces, or creating them with a different backup format.  By creating the backup pieces in a separate directory, I am able to take advantage of the bulk upload feature of the OCI client tool. The alternative is to create an upload statement for each backup piece.

For my RMAN backup example (acmedb) I am going to change the location of the disk backup and perform a disk backup.  I am also going to compress my backup using medium compression (this requires the ACO license).  Compressing the backup sets allows me to make the backup pieces as small as possible when transferring to the OCI object store.

Below is the output from my RMAN configuration that I am using for the backup.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ACMEDBP are:


CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/acmedb/ocimigrate/backup_%d_%U';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;

I created a new level 0 backup including archive logs and below is the "list backup summary" output showing the backup pieces.

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4125    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141019
4151    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141201
4167    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4168    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4169    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4170    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4171    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4172    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4173    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4174    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4175    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4176    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4208    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141309
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310



From the output you can see that there are a total of 14 backup pieces
  • 3 Archive log backup sets (two created before the backup of datafiles, and one after).
    • TAG20220621T141019
    • TAG20220621T141201
    • TAG20220621T141309
  • 10 Level 0 datafile backups
    • TAG20220621T141202
  • 1 controlfile backup 
    • TAG20220621T141310

Step #1B - Create the bucket in OCI and configure OCI Client

Now we need a bucket to upload the 14 RMAN backup pieces to. 

Before I can upload the objects, I need to download and configure the OCI Client tool. You can find the instructions to do this here.

Once the client tool is installed I can create the bucket and verify that the OCI Client tool is configured correctly.

The command to create the bucket is "oci os bucket create".  Below is the command and the output when I ran it for my compartment to create the bucket "acmedb_migrate".

 oci os bucket create --namespace id2avsofo --name acmedb_migrate --compartment-id ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "auto-tiering": null,
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "created-by": "ocid1.user.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "defined-tags": {
      "Oracle-Tags": {
        "CreatedBy": "oracleidentitycloudservice/john.smith@oracle.com",
        "CreatedOn": "2022-06-21T14:36:19.680Z"
      }
    },
    "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893",
    "freeform-tags": {},
    "id": "ocid1.bucket.oc1.iad.aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "is-read-only": false,
    "kms-key-id": null,
    "metadata": {},
    "name": "acmedb_migrate",
    "namespace": "id2avsofo",
    "object-events-enabled": false,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": false,
    "storage-tier": "Standard",
    "time-created": "2022-06-21T14:36:19.763000+00:00",
    "versioning": "Disabled"
  },
  "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893"
}

Step #1C - Upload the backup pieces to Object Storage in OCI


The next step is to upload all the backup pieces that are in the directory "/acmedb/ocimigrate" to OCI using the bulk upload feature.



Below is the command and the output of the upload. Notice I used the --parallel-upload-count option to ensure a quick upload.

 oci os object bulk-upload --namespace-name id20skavsofo    --bucket-name acmedb_migrate --src-dir /acmedb/ocimigrate/ --parallel-upload-count 10

Uploaded backup_RADB_3u10k6hj_126_1_1  [####################################]  100%
Uploaded backup_RADB_4710k6jl_135_1_1  [####################################]  100%
Uploaded backup_RADB_4610k6jh_134_1_1  [####################################]  100%
Uploaded backup_RADB_3n10k6b0_119_1_1  [####################################]  100%
Uploaded backup_RADB_3m10k6b0_118_1_1  [####################################]  100%
Uploaded backup_RADB_3r10k6ec_123_1_1  [####################################]  100%
Uploaded backup_RADB_4510k6jh_133_1_1  [####################################]  100%
Uploaded backup_RADB_4010k6hj_128_1_1  [####################################]  100%
Uploaded backup_RADB_3v10k6hj_127_1_1  [####################################]  100%
Uploaded backup_RADB_4110k6hk_129_1_1  [####################################]  100%
Uploaded backup_RADB_4210k6id_130_1_1  [####################################]  100%
Uploaded backup_RADB_4310k6ie_131_1_1  [####################################]  100%
Uploaded backup_RADB_3l10k6b0_117_1_1  [####################################]  100%
Uploaded backup_RADB_4410k6ie_132_1_1  [####################################]  100%
Uploaded backup_RADB_3k10k6b0_116_1_1  [####################################]  100%
Uploaded backup_RADB_3t10k6hj_125_1_1  [####################################]  100%

{
  "skipped-objects": [],
  "upload-failures": {},
  "uploaded-objects": {
    "backup_RADB_3k10k6b0_116_1_1": {
      "etag": "ab4a1017-3ba7-46e2-a2ee-3f4cd9a82ad3",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "W0hYIzfAWUVzACWNudcQDg==-3"
    },
    "backup_RADB_3l10k6b0_117_1_1": {
      "etag": "a620076e-975f-4d8c-87e8-394c4cf966cd",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-multipart-md5": "zapGBx8Imcdk91JM2+gORQ==-3"
    },
    "backup_RADB_3m10k6b0_118_1_1": {
      "etag": "a96c35c0-4c0b-4646-ae38-723f92c8496e",
      "last-modified": "Tue, 21 Jun 2022 14:57:32 GMT",
      "opc-content-md5": "vNAsU3vLcjzp6OwEeLXGgA=="
    },
    "backup_RADB_3n10k6b0_119_1_1": {
      "etag": "8f565894-5097-4ebb-9569-fdd31cc0c22d",
      "last-modified": "Tue, 21 Jun 2022 14:57:31 GMT",
      "opc-content-md5": "aSUSQWv5b+EfoLy9L9UBYQ=="
    },
    "backup_RADB_3r10k6ec_123_1_1": {
      "etag": "120dead4-c8ae-44de-9d27-39e1c28a2c48",
      "last-modified": "Tue, 21 Jun 2022 14:57:33 GMT",
      "opc-content-md5": "4wHBrgZXuIMlYWriBbs1ng=="
    },
    "backup_RADB_3s10k6hh_124_1_1": {
      "etag": "07d74b7f-68d6-4a77-9c4d-42f78c51c692",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "uzRd51bAKvFjhbbsfL1YAg=="
    },
    "backup_RADB_3t10k6hj_125_1_1": {
      "etag": "e5d3225b-a687-47e1-ad31-f4270ce31ddd",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "aZIirf98ZNqwBAlIeWzuhQ==-3"
    },
    "backup_RADB_3u10k6hj_126_1_1": {
      "etag": "5f5cc5ad-4aa3-4c3a-8848-16b3442a1e2c",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "dT6EYLv1yzf6LZCn1/Dsvw=="
    },
    "backup_RADB_3v10k6hj_127_1_1": {
      "etag": "297daece-be72-475f-b40d-982fb7115cd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "Zt3h5YfHU6F771ahltYhDQ=="
    },
    "backup_RADB_4010k6hj_128_1_1": {
      "etag": "9d723f2a-962e-4d03-9283-fc8a68f53af8",
      "last-modified": "Tue, 21 Jun 2022 14:57:35 GMT",
      "opc-content-md5": "KuNzVyUQrrSsA/kgioq9oA=="
    },
    "backup_RADB_4110k6hk_129_1_1": {
      "etag": "16f7f02a-e5ae-48a2-a7d2-b6d1dedc82ad",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "24SzzZwg7iu7PV8TBpMXEg=="
    },
    "backup_RADB_4210k6id_130_1_1": {
      "etag": "0584e14f-53dc-4251-8bad-907f357a283e",
      "last-modified": "Tue, 21 Jun 2022 14:57:37 GMT",
      "opc-content-md5": "sjPsmoeFsMhZISAmaVN0vQ=="
    },
    "backup_RADB_4310k6ie_131_1_1": {
      "etag": "176aea41-dd31-4404-99f4-ffd59c521fd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:40 GMT",
      "opc-content-md5": "2ksAQ2UuU/75YyRKujlLXg=="
    },
    "backup_RADB_4410k6ie_132_1_1": {
      "etag": "766c7585-3837-490b-8563-f3be3d24c98e",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-content-md5": "sh4CFUC/vnxjmMZ5mfgT3Q=="
    },
    "backup_RADB_4510k6jh_133_1_1": {
      "etag": "2de62d73-e44c-4f25-a41d-d45c556054dd",
      "last-modified": "Tue, 21 Jun 2022 14:57:34 GMT",
      "opc-content-md5": "4tVrHqwYG57STn9W6c2Mqw=="
    },
    "backup_RADB_4610k6jh_134_1_1": {
      "etag": "4667419d-9555-4edb-bd6d-749a1ee7660b",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "/MVdDn/vA2IXUcCmtdgKnw=="
    },
    "backup_RADB_4710k6jl_135_1_1": {
      "etag": "d467810a-d62e-42b3-bf7b-019913707312",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "hq8PEQ3PUwyTMWyUBfW4ew=="
    }
  }
}


Step #2 - Create the manifest for the backup pieces.


The next step covers creating the "metadata.xml" for each object, which is the manifest that the RMAN library uses to read the backup pieces.

Again this is broken down into a few different steps.

Step #2A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

I executed the jar file, which downloads/creates the following files.
  • libopc.so - This is the library used by the Cloud Backup module, and I downloaded it into  "/home/oracle/ociconfig/lib/" on my host
  • acmedb.ora - This is the configuration file for my database backup. This was created in "/home/oracle/ociconfig/config/" on my host
This information is used to allocate the channel in RMAN for the manifest.

Step #2b - Generate the manifest create for each backup piece.

The next step is to dynamically create the script to build the manifest for each backup piece. This needs to be done for each backup piece, and the command for each piece is

send channel t1 'export backuppiece <object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.  A sketch of such a script is below.
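
The original script is not included here, but the idea can be sketched as a SQL*Plus script that generates one "send" command per piece from the controlfile view v$backup_piece, assuming the backup pieces were written to /acmedb/ocimigrate/ and uploaded with their file names as the object names. The generated commands are then placed inside the run block with the allocated channel shown in step #2c.

set pagesize 0 linesize 200 feedback off trimspool on
spool export_manifest.rman

-- one send command per backup piece in the migration directory
select 'send channel t1 ''export backuppiece '
       || substr(handle, instr(handle, '/', -1) + 1)
       || ''';'
  from v$backup_piece
 where handle like '/acmedb/ocimigrate/%'
 order by completion_time;

spool off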



Step #2c - Execute the script with an allocated channel.

The next step is to execute the script in RMAN within a run block after allocating a channel to the bucket in object storage. This needs to be done for each backup piece. You create a run block with one channel allocation followed by "send" commands.

NOTE: This does not have to be executed on the host that generated the backups.  In the example below, I set my ORACLE_SID to "dummy" and performed the create manifest with the "dummy" instance started up nomount.


Below is an example of allocating a channel to the object storage and creating the manifest for one of the backup pieces.



export ORACLE_SID=dummy
 rman target /
RMAN> startup nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/dbhome_1/dbs/initdummy.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1073737792 bytes

Fixed Size                     8904768 bytes
Variable Size                276824064 bytes
Database Buffers             780140544 bytes
Redo Buffers                   7868416 bytes

RMAN> run {
          allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
       send channel t1 'export backuppiece backup_RADB_3r10k6ec_123_1_1';
        }
2> 3> 4>
allocated channel: t1
channel t1: SID=19 device type=SBT_TAPE
channel t1: Oracle Database Backup Service Library VER=23.0.0.1

sent command to channel: t1
released channel: t1


Step #2d - Validate the manifest is created.

I logged into the OCI console, and I can see that there is a directory called "sbt_catalog". This is the directory containing the manifest files. Within this directory you will find a subdirectory for each backup piece. And within those subdirectories you will find a "metadata.xml" object containing the manifest.

Step #3 - Catalog the backup pieces.


The next step covers cataloging the backup pieces in OCI. You need to download the controlfile backup from OCI and start up the database in mount mode.

Again this is broken down into a few different steps.

Step #3A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

Again, you need to configure the backup module (or you can copy the files from your on-premises host).

Step #3b - Catalog each backup piece.

The next step is to dynamically create the script to catalog each backup piece. This needs to be done for each backup piece, and the command for each piece is

catalog device type 'sbt_tape' backuppiece '<object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.  A sketch of such a script is below.
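
Again the original script is not included, but a sketch that generates one catalog command per uploaded piece from v$backup_piece (same naming assumption as in step #2b) looks like this:

set pagesize 0 linesize 200 feedback off trimspool on
spool catalog_pieces.rman

-- one catalog command per original disk backup piece in the migration directory
select 'catalog device type ''sbt_tape'' backuppiece '''
       || substr(handle, instr(handle, '/', -1) + 1)
       || ''';'
  from v$backup_piece
 where handle like '/acmedb/ocimigrate/%'
   and device_type = 'DISK'
 order by completion_time;

spool off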



Step #3c - Execute the script with a configured channel.

I created a configure channel command, and cataloged the backup pieces that are in the object store.


RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';


  run {
           catalog device type 'sbt_tape' backuppiece 'backup_RADB_3r10k6ec_123_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3s10k6hh_124_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3t10k6hj_125_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3u10k6hj_126_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3v10k6hj_127_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4010k6hj_128_1_1';
          catalog device type 'sbt_tape' backuppiece ' backup_RADB_4110k6hk_129_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4210k6id_130_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4310k6ie_131_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4410k6ie_132_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4510k6jh_133_1_1';
        }

old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN>
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=406 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=22 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=407 device type=SBT_TAPE
...
...
...
channel ORA_SBT_TAPE_4: SID=23 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: cataloged backup piece
backup piece handle=backup_RADB_4510k6jh_133_1_1 RECID=212 STAMP=1107964867

RMAN>


Step #3d - List the backups pieces cataloged

I performed a list backup summary to view the newly cataloged tape backup pieces.


RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310
4258    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141019
4270    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141201
4282    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4292    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4303    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4315    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4446    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4468    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4490    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4514    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4539    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202

RMAN>


Step #4 - Restore the database.


The last step is to restore the database from the cataloged backup pieces. Remember you might have to change the location of the datafiles.



The process above can be used to upload and catalog both additional archive logs (to bring the files forward) and incremental backups to bring the database forward.