Wednesday, November 16, 2022

ZDLRA - Quick Start Guide

 This post is intended to be a Quick Start Guide for those who are new to ZDLRA (RA for short).  I spend part of my time working with customers who are new to the RA, and the same topics/questions often come up.  I wanted to put together a "Quick Start" guide that they can use to learn more about these common topics.


ZDLRA Quick Start


The steps I would follow for anyone new to the RA are:


  1. Read through the section on configuring users and security settings for the RA. Decide which compliance settings make sense for the RA and come up with a plan to implement them.
  2. Identify the users: both OS users (if you are disabling direct root access) and users within the databases that will manage and/or monitor the RA. OS users can be added with "racli add admin_user", and database users can be added with "racli add db_user".
  3. Create protection policies that contain the recovery window(s) that you want to set for the databases. You will also set compliance windows when creating policies. This can be done manually using the package DBMS_RA.CREATE_PROTECTION_POLICY (see the sketch after this list).
  4. Identify the VPC user(s) needed to manage the databases. Is it a single DBA team, or do different teams require multiple VPC users? Create each VPC user with "racli add vpc_user".
  5. Add the databases to be backed up to the RA, and associate each database with both a protection policy and the VPC user who will be managing it. NOTE that you should look at the Reserved Space and adjust it as needed.  Databases can be added manually using two PL/SQL calls: DBMS_RA.ADD_DB adds the database to the RA, and DBMS_RA.GRANT_DB_ACCESS allows the VPC user to manage it.
  6. Configure the database to be backed up to the RA either by using OEM or manually. The manual steps (also sketched below) are:
    • Create a wallet on the DB client that contains the VPC credentials to connect to the RA.
    • Update the sqlnet.ora file to point to this wallet
    • Connect to the RMAN catalog on the RA from the DB client
    • Register the database to the RA
    • Configure the channel configuration to point to the RA
    • Configure Block change tracking (if it is not configured).
    • Configure the redo destination to point to the RA if you want to configure real-time redo.
    • Change the RMAN archive log deletion policy to "applied on all standby" if using real-time redo, or "backed up 1 time" if not.
    • Update OEM to have the database point to the RMAN catalog on the ZDLRA.
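
To make steps 3, 5, and 6 more concrete, below is a minimal sketch of the manual enrollment flow. All of the names here (policy, storage location, database, VPC user, wallet path, and connect strings) are made-up placeholders, and the DBMS_RA parameter lists vary by RA software version, so verify against the documentation for your release before using anything like this.

-- On the RA, as RASYS: create a policy, add the database, and grant the VPC user access
BEGIN
   DBMS_RA.CREATE_PROTECTION_POLICY(
      protection_policy_name => 'GOLD',
      description            => 'Tier 1 databases - 14 day recovery window',
      storage_location_name  => 'DELTA',
      recovery_window_goal   => INTERVAL '14' DAY);
   DBMS_RA.ADD_DB(
      db_unique_name         => 'ACMEDB',
      protection_policy_name => 'GOLD',
      reserved_space         => '1200G');
   DBMS_RA.GRANT_DB_ACCESS(
      username               => 'VPC_DBA1',
      db_unique_name         => 'ACMEDB');
END;
/

# On the protected database host: wallet, catalog registration, and channel configuration
mkstore -wrl /u01/app/oracle/wallet -create
mkstore -wrl /u01/app/oracle/wallet -createCredential ra-scan:1521/ra_service vpc_dba1

# sqlnet.ora additions pointing to the wallet
SQLNET.WALLET_OVERRIDE = true
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/wallet)))

# Register the database and configure channels in RMAN
rman target / catalog /@ra-scan:1521/ra_service
RMAN> register database;
RMAN> CONFIGURE CHANNEL DEVICE TYPE SBT PARMS
        "SBT_LIBRARY=/u01/app/oracle/lib/libra.so,
         ENV=(RA_WALLET='location=file:/u01/app/oracle/wallet credential_alias=ra-scan:1521/ra_service')";
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE SBT;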

Documentation

The documentation can be found here. Within the documentation there are several sections that will help you manage the RA.

Get Started 

The Get Started section contains several subtopics to delve into.

Install and configure the Recovery Appliance

The links in this section cover all the details about the installation and configuration of the RA.  I won't be talking about those sections in this post, but be aware this is where to look for general maintenance/patching/expanding information.

Learn about the Recovery Appliance

This section gives an overview of the RA and is mostly marketing material. If you are not familiar with the RA, or want an overview, this is the place to turn.

Administer the Recovery Appliance


This section is going to be a lot more helpful in getting you started. It covers:

Managing Protection Policies - Protection policies are the place to start when configuring an RA. They group databases together, and it is critical to have the correct protection policies in place before adding databases to be backed up.

Copying Backups to Tape - This section is useful if you plan on creating backups (either point in time or archival) that will be sent externally from the RA. This can be either to physical/virtual tape, or to an external media manager.

Archiving Backups to the Cloud - This section covers how to configure the RA to send backups to an OCI compatible object storage.  This can either be OCI, or it can be an on-premises ZFS that has a project configured as OCI object storage.

Accessing Recovery Appliance Reports - This section covers how to access all the reports available to you.  You will find these reports priceless for managing the RA over time. Some examples of the areas these reports cover are:
  • Storage Capacity Planning reports with future usage projections
  • Recovery Window Summary reports to validate backups are available
  • Active incident reports to manage any alerts
  • API History Report to audit any changes to the RA
NOTE : If you are using the RA in a chargeback model with your internal business units, there is specific reporting that can be used for this. Talk to your Oracle team to find out more.

Monitoring the Recovery Appliance - This section covers how to monitor the RA and set up alerts. This will allow you to identify any issues that would affect the recovery of the backups, including space issues and missing backups.


Administer the Recovery Appliance

Configure Protected Databases - This section goes through how to configure databases to be backed up to the recovery appliance and includes instructions for both using OEM, and adding databases using the command line.

Backup Protected Databases - This section covers how to back up a database from either OEM or the traditional RMAN command line. I would also recommend looking at the MOS note to ensure that you are using the current best practices for backups: "RMAN best practice recommendations for backing up to the Recovery Appliance (Doc ID 2176686.1)".

Recover Databases - This section covers how to recover databases from the RA. This section also covers information about cloning databases. Cloning copies of production is a common use case for the RA, and this section is very useful to help you with this process.


Books

This section contains the documentation you will look at regularly to manage the RA and answer questions that you may have on managing it.  I am only going to point out the sections that you will find most useful.


Deployment

The one important section under deployment is the Zero Data Loss Recovery Appliance Owner's Guide.

Zero Data Loss Recovery Appliance Owner's Guide - This guide contains information on configuring users on the RA, and the most critical sections to look at are:

  •  "Part III Security and Maintenance of Recovery Appliance".   If you are using the RA to manage immutable backups, it is important to go through this section to understand how users will be managed for maximum protection of your backups.
  • Part IV Command Reference - This section covers the CLI commands you will use the manage the RA.

Administration

This is probably the most important guide in the documentation. It covers many of the areas you will be managing as you configure databases to be backed up.  The most critical sections are:

Part I Managing Recovery Appliance - This section covers
  • Implementing Immutable Backups
  • Securing the Recovery Appliance operations
  • Managing Protection Policies
  • Configuring replication and replication concepts
  • Additional High Availability strategies
Part III Recovery Appliance Reference - This section covers
  • DBMS_RA packages to manage the RA through commands
  • Recovery Appliance View Reference to see what views are available

MOS Notes

There are a number of useful MOS notes that you will want to bookmark.

  • Zero Data Loss Recovery Appliance (ZDLRA) Information Center (Doc ID 2673011.2)
  • How to Backup and Recover the Zero Data Loss Recovery Appliance (Doc ID 2048074.1)
  • Zero Data Loss Recovery Appliance Supported Versions (Doc ID 1927416.1)
  • Zero Data Loss Recovery Appliance Software Updates Guide (Doc ID 2028931.1)
  • Cross Platform Database Migration using ZDLRA (Doc ID 2460552.1)
  • How to Move RMAN Catalog To A Different Database (Doc ID 351918.1)

Helpful Blogs

Fernando Simon

Fernando has a number of helpful blog entries. Be aware that he has been blogging on the RA for a long time, and some of the management processes have changed; for example, RACLI is now used to create VPC users. Some of the blogs to note are

Bryan Grenn


I have a number of blog posts on features of the ZDLRA.

Thursday, October 6, 2022

Estimated space for Compliance Window on RA

 In this post  I will go through how to estimate how much space you need to store backups on the Recovery Appliance to meet your Compliance Window.

This is critical to understand, since compliance-protected backups cannot be removed from the RA, and if all space is utilized to meet Compliance Windows, new backups will be refused.


First, a bit about the Compliance Window.


COMPLIANCE WINDOW

The Compliance Window is set at the Policy level.  All databases within that policy will inherit the Compliance Window going forward.  Below is some more detail you need to know about the Compliance Window.

  • The Compliance Window cannot be greater than the Recovery Window Goal
  • You cannot set the Policy to "Auto Tune" reserve space when setting a Compliance Window. You must manage the reserve space as you did in the past.
  • The Compliance Window can be adjusted up or down once set, but this will not affect any previous backups. Backups previously created observe the Compliance Window in effect when the backup was created.
  • The RA does not have to be in Compliance Mode (disabled direct root access) in order to set a Compliance Window.

Space management for Compliance Window

Reserved Space

If you are familiar with reserved space, then you understand how that can help.  Reserved space is set for each database and is the estimate of how much space is needed to meet the Recovery Window Goal.  The major points to understand with reserved space are:
  • The sum of all reserved space cannot be greater than the usable space on the RA.
  • Reserved space is used during space pressure to determine which databases will not be able to keep their recovery window goal. Databases with reserved space less than what is needed will have their older backups purged first.
  • Reserved space should be either
    • About 10% greater than the space needed to meet the recovery window goal
    • The high water mark of space needed during large volume updates (Black Friday sales for example).
By setting the reserved space for each database to be 10% larger than the space needed to meet the recovery window goal, you can alert when the Recovery Appliance cannot accept new databases to be backed up.  If all reserved space is allocated, then the Recovery Appliance is 90% full.
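
As a quick check of how much of the Recovery Appliance is already spoken for, the reserved space is visible in the RASYS catalog. This is a sketch only; it assumes the RA_DATABASE view and that its RESERVED_SPACE column reports bytes, both of which are worth verifying on your release.

select db_unique_name,
       reserved_space/1024/1024/1024 as reserved_gb   -- assuming the column is in bytes
  from ra_database
 order by reserved_gb desc;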

Recovery Window Goal

Within each policy you set a recovery window goal. This is a "goal" and if you run into space pressure, backups can be deleted from databases with insufficient reserved space (noted in the previous section).
The recommendation is to set the Compliance Window smaller than the Recovery Window Goal if all databases are being protected.
By keeping the Compliance Window smaller, you can alert when the space required to meet the recovery window goal is not available on the Recovery Appliance. This gives you time to diagnose the space issue and take corrective action before compliant backups are affected.


Compliance Window


Within each policy you can set a Compliance Window. This will lock any backups for the protected databases from being deleted, and will prevent the database itself from being removed from the Recovery Appliance as long as it has backups that fall under compliance.  Since these backups cannot be removed, and the database cannot be removed, it is critical that you do not reach the point where all storage is utilized by compliant backups.

ESTIMATING COMPLIANCE SPACE

As you can tell by reading through how this works, it is critical to understand the space needed for compliant backups. 
The recommendation is to estimate the space needed using DBMS_RA.ESTIMATE_SPACE.
Unfortunately, as of release 21.1 you cannot call this function from within a SQL statement; you will receive the following error:

Select dbms_ra.estimate_space ('TIMSP' , numtodsinterval(45,'day')) from dual
       *
ERROR at line 1:
ORA-14551: cannot perform a DML operation inside a query
ORA-06512: at "RASYS.DBMS_RA_MISC", line 5092
ORA-06512: at "RASYS.DBMS_RA", line 1204
ORA-06512: at line 1


In order to help everyone calculate the space needed, I came up with a code snippet that can give you the data you need.
Using the snippet below, and setting the variable for the compliance window, you can create an HTML report that shows you the estimated space needed.
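
A minimal sketch of the snippet's core logic is below. It assumes the RASYS RA_DATABASE view for the list of protected databases, calls DBMS_RA.ESTIMATE_SPACE from PL/SQL (which avoids the ORA-14551 above), and prints with DBMS_OUTPUT instead of the SQL*Plus HTML markup the full report uses. The 45-day window is a placeholder for your own compliance window, and the units of the returned estimate should be verified on your release.

set serveroutput on
declare
   -- compliance window to test (placeholder - set this to your own window)
   l_window   interval day(3) to second := numtodsinterval(45, 'day');
   l_estimate number;
begin
   for db in (select db_unique_name from ra_database order by db_unique_name) loop
      -- estimate the space needed to keep this window for this database
      l_estimate := dbms_ra.estimate_space(db.db_unique_name, l_window);
      dbms_output.put_line(rpad(db.db_unique_name, 30) || ' estimated space: ' || l_estimate);
   end loop;
end;
/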




What the output looks like is below.  Note you can adjust the compliance window you want to look at.






This should allow you to look at the effect of setting a compliance window and compare it to the reserved space and the recovery window goal, database by database, policy by policy, and as a whole.




Thursday, September 29, 2022

ZFSSA File Retention and Snapshot Retention provide protection for RMAN incremental merge backups.

File Retention Lock and Snapshot Retention Lock are great new features on ZFSSA that can help protect your backups from deletion and help you meet regulatory requirements. Whether it is an accidental deletion or a bad actor attempting to corrupt your backups, they are protected.

In this post I am going to walk through how to implement File Retention and Snapshot Retention together to protect an RMAN incremental merge backup from being deleted.

 Why do I need both? 

The first question you might have is: why do I need both File Retention and Snapshot Retention to protect my backups? An RMAN incremental merge backup consists of 3 types of backup pieces.

 FILE IMAGE COPIES - Each day when the backup job is executed, the image copy of each datafile is updated by recovering the datafile with an incremental backup. This moves the image copy of each datafile forward one day using the changed blocks from the incremental backup. The backup files containing the image copies of the datafiles need to be updatable by RMAN.

INCREMENTAL BACKUP - Each day a new incremental backup (differential) is taken. This incremental backup contains the blocks that changed in the database files since the previous incremental backup. Once created this file does not change. 

 ARCHIVE LOG BACKUPS - Multiple times a day, archive log backups (also known as log sweeps) are taken. These backup files contain the change records for the database and do not change once written. 


 How to leverage both retention types 


 SNAPSHOT RETENTION can be used to create a periodic restorable copy of a share/project by saving the unique blocks as of the time each snapshot is taken. These periodic snapshots can be scheduled on a regular basis. With snapshot retention, the snapshots are locked from being deleted, and the schedule itself is locked to prevent tampering with the snapshots. This is perfect for ensuring we have a restorable copy of the datafile images each time they are updated by RMAN.

FILE RETENTION can be used to lock both the incremental backups and the archive log backups. Both types of backup files do not change once created and should be locked to prevent removal or tampering with for the retention period. 


 How do I implement this ? 

First I am going to create a new project for my backups named "DBBACKUPS". Of course you could create 2 different projects. Within this project I am going to create 2 shares with different retention settings.

 FULLBACKUP - Snapshot retention share 

 My image copy backups are going to a share that is protected with snapshot retention. The documentation on where to start with snapshot retention can be found here. In the example below I am keeping 5 days of snapshots, and I am locking the most recent 3 days of snapshots. This configuration will ensure that I have locked image copies of my database files for the last 3 days. 

 NOTE: Snapshots only contain the unique blocks since the last snapshot, but still provide a FULL copy of each datafile. The storage used to keep each snapshot is similar to the storage needed for each incremental backup.

ZFSSA snapshot retention settings for /fullbackup




 DAILYBACKUPS - File Retention share 

My incremental backups and archivelog backups are going to a share with File Retention. The files (backup pieces) stored on this share will be locked from being modified or deleted. The documentation on where to start with File Retention can be found here.

 NOTE: I chose the "Privileged override" file retention policy. I could have chosen "Mandatory" file retention policy if I wanted to lock down the backup pieces even further. 

 In the example below I am retaining all files for 6 days. 

ZFSSA file retention settings for /dailybackups



DAILY BACKUP SCRIPT 


Below is the daily backup script I am using to perform the incremental backup, and the recovery of the image copy datafiles with the changed blocks. You can see that I am allocating channels to "/fullbackup" which is the share configured with Snapshot Retention, and the image copy backups are going to this share. The incremental backups are going to "/dailybackups" which is protected with File Retention. 

run {
  ALLOCATE CHANNEL Z1 TYPE DISK  format '/fullbackup/radb/DATA_%N_%f.dbf';
  ALLOCATE CHANNEL Z2 TYPE DISK  format '/fullbackup/radb/DATA_%N_%f.dbf';
  ALLOCATE CHANNEL Z3 TYPE DISK  format '/fullbackup/radb/DATA_%N_%f.dbf';
  ALLOCATE CHANNEL Z4 TYPE DISK  format '/fullbackup/radb/DATA_%N_%f.dbf';
  ALLOCATE CHANNEL Z5 TYPE DISK  format '/fullbackup/radb/DATA_%N_%f.dbf';
  ALLOCATE CHANNEL Z6 TYPE DISK  format '/fullbackup/radb/DATA_%N_%f.dbf';
  
  backup
    section size 32G
    incremental level 1
    for recover of copy with tag 'DEMODBTEST' database FORMAT='/dailybackups/radb/FRA_%d_%T_%U.bkp';
  recover copy of database with tag 'DEMODBTEST' ;
  RELEASE CHANNEL Z1;
  RELEASE CHANNEL Z2;
  RELEASE CHANNEL Z3;
  RELEASE CHANNEL Z4;
  RELEASE CHANNEL Z5;
  RELEASE CHANNEL Z6;
}


 ARCHIVELOG BACKUP SCRIPT 

Below is the log sweep script that will perform the periodic backup of archive logs and send them to the "/dailybackups" share which has File Retention configured. 

run {
  ALLOCATE CHANNEL Z1 TYPE DISK  format '/dailybackups/radb/ARCH_%U.bkup';
  ALLOCATE CHANNEL Z2 TYPE DISK  format '/dailybackups/radb/ARCH_%U.bkup';
  ALLOCATE CHANNEL Z3 TYPE DISK  format '/dailybackups/radb/ARCH_%U.bkup';
  ALLOCATE CHANNEL Z4 TYPE DISK  format '/dailybackups/radb/ARCH_%U.bkup';
  ALLOCATE CHANNEL Z5 TYPE DISK  format '/dailybackups/radb/ARCH_%U.bkup';
  ALLOCATE CHANNEL Z6 TYPE DISK  format '/dailybackups/radb/ARCH_%U.bkup';

  
  backup
    section size 32G
    filesperset 32
    archivelog all;
  RELEASE CHANNEL Z1;
  RELEASE CHANNEL Z2;
  RELEASE CHANNEL Z3;
  RELEASE CHANNEL Z4;
  RELEASE CHANNEL Z5;
  RELEASE CHANNEL Z6;
}




 RESULT: 

This strategy will ensure that I have 5 days of untouched full backups available for recovery. It also ensures that I have 6 days of untouched archive logs and incremental backups that can be applied if necessary. This protects my RMAN incremental merge backups using a combination of Snapshot Retention for backup pieces that need to be updated, and File Retention for backup pieces that will not change.

Friday, July 29, 2022

OCI Database backups with retention lock

 OCI Object Storage provides both lifecycle rules and retention lock.  How to take advantage of both these features isn't always as easy as it looks.

 In this post I will go through an example customer request and how to implement a backup strategy to accomplish the requirements.

OCI Buckets

The image above gives you an idea of what they are looking to accomplish.

Requirements

  • RMAN retention is to keep a 14 day point in time recovery window
  • All long term backups beyond 14 days are cataloged as KEEP backups
  • All buckets are protected with a retention rule to prevent backups from being deleted before they become obsolete
  • Backups are moved to lower tier storage when appropriate to save costs.

Backup strategy

  • A full backup is taken every Sunday at 5:30 PM and this backup is kept for 6 weeks.
  • Incremental backups are taken Monday through Saturday at 5:30 PM and are kept for 14 days
  • Archive log sweeps are taken 4 times a day and are kept for 14 days
  • A backup is taken the 1st day of the month at 5:30 PM and this backup is kept for 13 months.
  • A full backup is taken following the Tuesday morning bi-weekly payroll run and is kept for 7 years
This sounds easy enough.  If you look at the image above you can see what this strategy looks like in general. I took this strategy and mapped it to the 4 buckets, how they would be configured, and what they would contain. This is shown in the image below.

OCI Object rules


Challenges


As I walked through this strategy I found that it involved some challenges. My goal was to limit the number of full backups by taking advantage of current backups.  Below are the challenges I found with this schedule:
  • The weekly full backup taken every Sunday is kept for longer than the incremental backups and archive logs. This causes 2 problems:
    1. I wanted to make this backup a KEEP backup that is kept for 6 weeks before becoming obsolete.  Unfortunately KEEP backups are ignored as part of an incremental backup strategy; I could not create a weekly full backup that was both a KEEP backup and also part of an incremental backup strategy.
    2. Since the weekly full backup is kept longer than the archive logs, I need to ensure that this backup contains the archive logs needed to defuzzy the backup without containing too many unneeded archive logs.
  • The weekly full backup could fall on the 1st of the month. If this is the case it needs to be kept for 13 months otherwise it needs to be kept for 6 weeks.
  • I want the payrun backups to be immediately placed in archival storage to save costs.  When doing a restore I want to ignore these backups as they will take longer to restore.
  • When restoring and recovering the database within the 14 day window I need to include channels allocated to all the buckets that could contain those backups: 14_DAY, 6_WEEK, and 13_MONTH.

Solutions

I then worked through how I would solve each issue.

  1. Weekly full backup must be both a normal incremental backup and KEEP backup - After doing some digging I found the best way to handle this issue was to CHANGE the backup from the normal NOKEEP type to a KEEP backup with either a 6 week or a 13 month retention. By using tags I can identify the backup I want to change once it is no longer needed as part of the 14 day strategy.
  2. Weekly full backup contains only archive logs needed to defuzzy - The best way to accomplish this task is to perform an archive log backup to the 14_DAY bucket immediately before taking the weekly full backup
  3. Weekly full backup requires a longer retention - This can be accomplished by checking whether the full backup is being executed on the 1st of the month. If it is the 1st, the full backup will be placed in the 13_MONTH bucket.  If it is not the 1st, this backup will be placed in the 6_WEEK bucket.  This backup will be created with a TAG with a format that can be used to identify it later.
  4. Ignore bi-weekly payrun backups that are in archival storage - I found that if I execute a recovery and do not have any channels allocated to the 7_YEAR bucket, it may try to restore this backup, but it will not find it and will fail over to the previous backup. Using tags helps identify that a restore from the payrun backup was attempted and ultimately bypassed.
  5. Include all possible buckets during restore - By using a run block within RMAN I can allocate channels to different buckets and ultimately include channels from all 3 appropriate buckets.
Then as a check I drew out a calendar to walk through what this strategy would look like.

OCI backup schedule


Backup examples

Finally I am including examples of what this would look like.

Mon-Sat 5:30 backup job



dg=$(date +%Y%m%d)
rman <<EOD
run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
backup incremental level 1 database tag="incr_backup_${dg}" plus archivelog tag="arch_backup_${dg}";
   }
exit
EOD

Sun 5:30 backup job

1) Clean up archive logs first



dg=$(date +%Y%m%d:%H)
rman <<EOD
run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
backup archivelog tag="arch_backup_${dg}";
   }
exit
EOD

2a) If this is the 1st of the month, execute this script to send the full backup to the 13_MONTH bucket


dg=$(date +%Y%m%d)
rman <<EOD
run {
ALLOCATE CHANNEL monthly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
ALLOCATE CHANNEL monthly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
backup incremental level 1 database tag="full_backup_${dg}" plus archivelog tag="full_backup_${dg}";
   }
exit
EOD


2b) If this is NOT the 1st of the month, execute this script to send the full backup to the 6_WEEK bucket

dg=$(date +%Y%m%d)
rman <<EOD
run {
ALLOCATE CHANNEL weekly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
ALLOCATE CHANNEL weekly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
backup incremental level 1 database tag="full_backup_${dg}" plus archivelog tag="full_backup_${dg}";
   }
exit
EOD


3a) If today is the 15th, change the full backup taken 14 days ago to a 13 month retention


dg=$(date --date "-14 days" +%Y%m%d)
rman <<EOD
CHANGE BACKUPSET TAG="full_backup_${dg}" keep until time 'sysdate + 390';
EOD

3b) If today is NOT the 15th, change the full backup taken 14 days ago to a 6 week retention


dg=$(date --date "-14 days" +%Y%m%d)
rman <<EOD
CHANGE BACKUPSET TAG="full_backup_${dg}" keep until time 'sysdate + 28';
EOD
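
To tie steps 2 and 3 together, the branching can be driven by a small wrapper around the four blocks above. The script names below are hypothetical placeholders for blocks 2a/2b and 3a/3b; only the date tests are the point of this sketch.

#!/bin/bash
# Weekly full: the 1st of the month goes to 13_MONTH, any other day to 6_WEEK (2a/2b)
if [ "$(date +%d)" = "01" ]; then
   ./full_backup_13_month.sh     # block 2a (placeholder name)
else
   ./full_backup_6_week.sh       # block 2b (placeholder name)
fi

# 14 days later, re-tag that full backup as a KEEP backup (3a/3b).
# On the 15th, the backup taken 14 days ago was the 1st-of-month backup.
if [ "$(date +%d)" = "15" ]; then
   ./change_keep_13_month.sh     # block 3a (placeholder name)
else
   ./change_keep_6_week.sh       # block 3b (placeholder name)
fi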

Tuesday after payrun backup job 

1) Clean up archive logs first


dg=$(date +%Y%m%d:%H)
rman <<EOD
run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
backup archivelog tag="arch_backup_${dg}";
   }
exit
EOD

2) Execute the keep backup


dg=$(date +%Y%m%d)
rman <<EOD
run {
ALLOCATE CHANNEL yearly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/7_YEAR.ora)';
ALLOCATE CHANNEL yearly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/7_YEAR.ora)';
backup database tag="payrun_backup_${dg}" plus archivelog tag="payrun_backup_${dg}" keep until time 'sysdate + 2555';
   }
exit
EOD


Restore example

Now in order to restore, I need to allocate channels to all the possible buckets. Below is the script I used  to validate this with a "restore database validate" command.


run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL weekly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
ALLOCATE CHANNEL weekly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
ALLOCATE CHANNEL monthly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
ALLOCATE CHANNEL monthly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
restore database validate;
    }


Below is what I am seeing in the RMAN log because I picked a point in time where I want it to ignore the 7_YEAR backups.

In this case you can see that it tried to retrieve the payrun backup but failed over to the previous backup with tag "FULL_073122". This is the backup I want.


channel daily1: starting validation of datafile backup set
channel daily1: reading from backup piece h613o4a4_550_1_1
channel daily1: ORA-19870: error while restoring backup piece h613o4a4_550_1_1
ORA-19507: failed to retrieve sequential file, handle="h613o4a4_550_1_1", parms=""
ORA-27029: skgfrtrv: sbtrestore returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   KBHS-07502: File not found
KBHS-01404: See trace file /u01/app/oracle/diag/rdbms/acmedbp/acmedbp/trace/sbtio_4819_140461854265664.log for det
failover to previous backup

channel daily1: starting validation of datafile backup set
channel daily1: reading from backup piece gq13o3rm_538_1_1
channel daily1: piece handle=gq13o3rm_538_1_1 tag=FULL_073122
channel daily1: restored backup piece 1
channel daily1: validation complete, elapsed time: 00:00:08


That's all there is to it. Tags are very helpful for identifying the correct backups.



Thursday, July 28, 2022

ZFSSA replicating locked snapshots to OCI for offsite backup

ZFSSA replication can be used to create locked offsite backups. In this post I will show you how to take advantage of the new "Locked Snapshot" feature of ZFSSA and the ZFS Image in OCI to create an offsite backup strategy to OCI.

ZFSSA Snapshot Replication
If you haven't heard of the locked snapshot feature of ZFSSA, I blogged about it here.  In this post I am going to take advantage of this feature and show you how you can leverage it to provide a locked backup in the Oracle Cloud using the ZFS image available in OCI.

In order to demonstrate this I will start by following the documentation to create a ZFS image in OCI as my destination.  Here is a great place to start with creating the virtual ZFS appliance in OCI.

Step 1 - Configure remote replication from source ZFSSA to ZFS appliance in OCI. 


By enabling the "Remote Replication" service with a named destination, "downstream_zfs" in my example, I can now replicate to my ZFS appliance in OCI.

zfssa remote replication


Step 2 -  Ensure the source project/share has "Enable retention policy for Scheduled Snapshots" turned on


For my example I created a new project "Blogtest".  On the "snapshots" tab I put a checkmark next to "Enable retention policy for Scheduled Snapshots".  By checking this, the project will prevent the deletion of any locked snapshots.  This property is replicated to the downstream and will cause the replicated project shares to also adhere to locking snapshots.  This can also be set at the individual share level if you wish to control the configuration of locked snapshots for individual shares.

Below you can see where this is enabled for snapshots created within the project.

ZFSSA Enable Snapshot Retention


Step 3 -  Create a snapshot schedule with "locked" snapshots


The next step is to create locked snapshots. This can be done at the project level (affecting all shares) or at the share level. In my example below I gave the scheduled snapshots a label "daily_snaps".  Notice that I am keeping only 1 snapshot and I am locking the snapshot at the source. In order for the snapshot to be locked at the destination:
  • Retention Policy MUST be enabled for the share (or inherited from the project).
  • The source snapshot MUST be locked when it is created
zfssa create snapshots

Step 4 -  Add replication to downstream ZFS in OCI

The next step is to add replication to the project  configuration to replicate the shares to my ZFS in OCI. Below you can see the target is my "downstream_zfs" that I configured in the "Remote Replication" service.
You can also see that I am telling the replication to "include snapshots", which are my locked snapshots, and also to "Retain user snapshots on target".  Under "Disaster Recovery" you can see that I am telling the downstream to keep a 30 day recovery point.  Even though I am only keeping 1 locked snapshot on the source, I want to keep 30 days of recovery on the downstream in OCI.

ZFSSA add replication

Step 5 -  Configure snapshots to replicate

In this step I am updating the replication action to replicate the locked scheduled snapshots to the downstream.  Notice that I changed the number of snapshots from 1 (on the source) to 30 on the destination, and I am keeping the snapshot retention locked. This will ensure that the daily locked snapshot taken on the source will replicate to the destination as a locked snapshot, and 30 snapshots on the destination will remain locked.  The 31st (oldest) snapshot is no longer needed.

ZFSSA Autosnap replication


Step 6 -  Configure the replication schedule

The last step is to configure the replication schedule. This ensures that on a daily basis the snapshots that are configured to be replicated will be replicated regularly to the downstream. You can make this more aggressive than daily if you wish the downstream to be more in sync with the primary.  In my example below I configured the replication to occur every 10 minutes. This means that the downstream should have all updates as of 10 minutes ago or less. If I need to go back in time, I will have daily snapshots for the last 30 days that are locked and cannot be removed.

ZFSSA Replication Schedule

Step 7 -  Validate the replication


Now that I have everything configured I am going to take a look at the replicated snapshots on my destination.  I navigate to "shares", look under "Replica", and find my share. By clicking on the pencil and looking at the "snapshots" tab I can see my snapshot replicated over.

zfssa downstream copy

And when I click on the pencil next to the snapshot I can see that the snapshot is locked and I can't unlock it.

zfssa downstream locked



From there I can clone the snap and create a local snapshot, back it up to object storage, or reverse the replication if needed.



Friday, July 15, 2022

File Retention Lock on ZFSSA

File Retention Lock was recently released on ZFSSA, and I wanted to take the time to explain how to set the retention time and view the retention of locked files. Below is an example of what happens. You can see that the files are locked until January 1st, 2025.

ZFS Retention Lock


The best place to start for information on how this works is by looking at my last blog post on authorizations.

First I will go through the settings that are available at the share/project level.


Grace period

The grace period is used to automatically lock a file when there have been no updates to the file for this period of time.
If the automatic file retention grace period is "0" seconds, then the default retention is NOT in effect.




NOTE: even with a grace period of "0" seconds, files can be locked by manually setting a retention period.
 Also, once a grace period is set (> "0") it cannot be increased or disabled.
Finally, if you set the grace period to a long period (to ensure all writes to a file are completed), you can lock the file by removing the write bit. This does the same thing as expiring the grace period.

Below is an example

chmod ugo-w *

Running the "chmod" will  remove the write bit, and immediate cause all files to lock.

Default retention

The most common method to implement file retention is by using the default retention period. This causes the file to be locked for the default retention when the grace period expires for a file.
Note that the file is locked as of the time the grace period expires. For example, if I have a grace period of 1 day (because I want the ability to clean up a failed backup) and a default file retention period of 14 days, the file will be locked for 14 days AFTER the 1 day grace period. The lock on the file will expire 15 days after the file was last written to.

zfs file retention lock


In the example above you can see that all files created on this share are created with a default retention of 1 day (24 hours).

NOTE: If the grace period is not > "0" these settings will be ignored and files will not be locked by default.

Minimum/Maximum File retention

The next settings you see in the image above are the "minimum file retention period" and the "maximum file retention period".

These control the retention settings on files, which follow the rules below.

  • The default retention period for files MUST be at least the minimum file retention period, and not greater than the maximum file retention period.

  • If the retention date is set manually on a file, the retention period must fall within the minimum and maximum retention period.

Display current Lock Expirations.

In order to display the lock expiration on Linux, the first thing you need to do is change the share/project setting "Update access time on read" to off. Through the CLI this is "set atime=false" on the share.


zfssa file retention lock

Once this setting is made, the client will then display the lock time as the "atime". In my example at the top of the blog, you can see by executing "ls -lu" the file lock time is displayed.

NOTE: you can also use the find command to search for files using the "atime". This will allow you to find all the locked files.

Below is an example of using the find command to list files that have an lock expiration time in the future.


export CURRENT_DATE=`date +"%Y-%m-%d %H:%M:%S"`
# -newerat compares the access time (here, the lock expiration) against the timestamp;
# a four-digit year avoids ambiguous parsing of the reference date
find . -type f -newerat "$CURRENT_DATE" -printf '%h\t%AD%AH:%AM:%AS\t%s \n'



Manually setting a retention date


It is possible to set a specific date/time that a file is locked until. You can even set the retention date on a file that is currently locked (it must be a date beyond the current lock date).

NOTE: If you try to change the retention date on a specific file, the new retention date has to be greater than the current retention date (and less than or equal to the maximum file retention period). This makes sense: you cannot lower the retention period for a locked file.

Now how do you manually set the retention date? Below is an example of how it is set for a file.

Setting File retention lock

There are 3 steps that are needed to lock the file with a specific lock expiration date.

1. Touch the file and set the access date. This can be done with
    • "-a" to change the access date/time
    • "-d" or "-t" to specify the date format
 2. Remove the write bit with chmod ugo-w

3. Execute a chmod to make the file read-only (chmod a=r).

Below is an example where I am taking a file that does not contain retention, and setting the date to January 1, 2025.


First I am going to create a file and touch it, setting the atime to a future date.

$echo 'xxxx' > myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$rm myfile3.txt
$ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


You can see that I set the "atime" and it display a future date, but I was still able to delete the file.

Now I am also going to remove the write bit before deleting.

$echo 'xxxx' > myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$chmod ugo-w  myfile3.txt
$rm myfile3.txt
$ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


Still, I am able to delete the file. Finally, I am going to do all three steps:

$echo 'xxxx' > myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$chmod ugo-w  myfile3.txt
$chmod a=r  myfile3.txt
$rm myfile3.txt
rm: remove write-protected regular file ‘myfile3.txt’? y
rm: cannot remove ‘myfile3.txt’: Operation not permitted


Summary to manually set the lock on a file

If the file is NOT currently locked (the grace period is "0" or the grace period has not expired):


The commands below will lock the file "myfile.txt" until 01/01/25 12:00.

touch -a -t "2501011200" myfile.txt
chmod ugo-w  myfile.txt
chmod a=r  myfile.txt


If the file is already locked 

The commands below will adjust the lock on the file "myfile.txt" until 01/01/25 12:00.


touch -a -t "2501011200" myfile.txt


Tuesday, July 5, 2022

ZFSSA File Retention Authorizations

ZFS File Retention authorizations are important to understand if you plan on implementing retention lock on ZFS. This feature was added in release OS8.8.46, and there is a MOS note explaining how it works (2867335.1 - Understanding ZFS Appliance File Retention Policy).
In order to start using the new features, you need to grant some new authorizations that manage who can administer the new settings.  Be aware that these new authorizations are NOT granted to the administrator role.  You must add them to the administrator role or create an additional role.



ZFS file retention authorizations

The image above shows the File Retention Policies that can be set and which authorization is needed to administer each setting.

NOTE: The share must be created with file retention in order to have these settings take effect.  You cannot add file retention to an existing Project/Share.


Now let's go through the 3 Authorizations and what they allow the administrator to do.

retentionPeriods



When an administrator is granted the "retentionPeriods" authorization they are given the authority to administer 3 of the settings for file retention:

  • "Minimum file retention period" - This is the minimum amount of time in the future that you can set a file retention to be. If you set the file retention date manually the retention time must be at least this far if not longer in the future. If you set the "Default file retention period", it must be at least the "Minimum file retention period" if not longer.  The default value for this setting is "0 seconds".
  • "Maximum file retention period"- This is the maximum amount of time in the future that you can set a file retention to be. If you set the file retention date manually the retention time must at most this far if not shorter in the future. If you set the "Default file retention period", it must be at most  the "maximum file retention period" if not shorter. The default value for this setting is "5 years".
  • "Default file retention period"- This is the default amount of time in the future that you can set a file retention to be.  This value has to fall within the minimum and maximum file retention period.  Unless this value is set to a value greater than "0 seconds" no files are locked by default.

NOTE : The most common method used to lock files is to set the "Default file retention period" to a value greater than "0 seconds". When this is set (and file retention is turned on) any files created will be locked for this period of time.

retentionAuto



When an administrator is granted the "retentionAuto" authorization they are given the authority to set the Automatic file retention grace period.
This value controls how long after the last update the ZFS waits to lock the file.  The default setting is "0 seconds".  Until this value is set to a value greater than "0 seconds", no files are automatically locked (using the Default file retention period).  The only method to lock files when this value is left at "0", the default, is to manually lock files.

NOTE: A very important item to understand is that the ZFS locks the file once it has not been updated for this period of time. If you have a process that holds a file open without writing to it, for example an RMAN channel, it may lock the file before it is closed.
Be sure to set the grace period to be longer than the amount of time a process may pause writing to a file.  DO NOT set it too short.  If you wish to lock a file immediately after you have finished writing to it (because you have a long grace period) you can remove the "w" bit from the files using chmod. This will bypass the grace period.
If the share is configured with mandatory retention, the automatic grace period cannot be increased, it can only be lowered.

retentionMandatory



When an administrator is granted the "retentionMandatory" authorization they are given the authority to create a share with a "mandatory (no override)" file retention.  This authorization is not necessary to create a "privileged override" file system.
Be aware that in order to create a file system with "mandatory" file retention, the "file retention" service must be running, the file system needs to be a mirrored configuration, and the ZFS must be configured with the following settings:

  • Remote root user login via the BUI/REST needs to be turned off in the HTTPS service
  • Remote root login via SSH needs to be turned off in the SSH service
  • NTP sync needs to be configured in the NTP service
  • NTP service needs to be on-line.

NOTE : You must ensure that the ZFS administrator is granted these authorizations before attempting to configure file retention. If the administration user is not granted the proper authorization, you will get permission errors like the one below.



"You are not authorized to perform this action. If you wish to proceed, contact an administrator to obtain the proper credentials.






Tuesday, June 21, 2022

Migrate a large oracle database to OCI from disk backup

 Migrating an Oracle database from on-premises to OCI is especially challenging when the database is quite large.  In this blog post I will walk through the steps to migrate to OCI leveraging an on-disk local backup copied to object storage.

migrate Oracle database to OCI


The basic steps to perform this task are in the image above.

Step #1 - Upload backup pieces to object storage.

The first step to migrate my database (acmedb) is to copy the RMAN backup pieces to the OCI object storage using the OCI Client tool.

In order to make this easier, I am breaking this step into a few smaller steps.

Step #1A - Take a full backup to a separate location on disk 


This can also be done by moving the backup pieces, or creating them with a different backup format.  By creating the backup pieces in a separate directory, I am able to take advantage of the bulk upload feature of the OCI client tool. The alternative is to create an upload statement for each backup piece.

For my RMAN backup example (acmedb) I am going to change the location of the disk backup and perform a disk backup.  I am also going to compress my backup using medium compression (this requires the ACO license).  Compressing the backup sets allows me to make the backup pieces as small as possible when transferring to the OCI object store.

Below is the output from my RMAN configuration that I am using for the backup.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ACMEDBP are:


CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/acmedb/ocimigrate/backup_%d_%U';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;

I created a new level 0 backup including archive logs and below is the "list backup summary" output showing the backup pieces.

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4125    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141019
4151    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141201
4167    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4168    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4169    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4170    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4171    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4172    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4173    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4174    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4175    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4176    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4208    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141309
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310



From the output you can see that there are a total of 14 backup pieces
  • 3 Archive log backup sets (two created before the backup of datafiles, and one after).
    • TAG20220621T141019
    • TAG20220621T141201
    • TAG20220621T141309
  • 10 Level 0 datafile backups
    • TAG20220621T141202
  • 1 controlfile backup 
    • TAG20220621T141310

Step #1B - Create the bucket in OCI and configure OCI Client

Now we need a bucket to upload the 14 RMAN backup pieces to. 

Before I can upload the objects, I need to download and configure the OCI Client tool. You can find the instructions to do this here.

Once the client tool is installed I can create the bucket and verify that the OCI Client tool is configured correctly.
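
A quick way to verify the client configuration is to ask for the object storage namespace; this returns immediately if authentication is working. This is a sketch of the call, with the namespace value matching the one used in the bucket create command below.

oci os ns get
{
  "data": "id2avsofo"
}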

The command to create the bucket is "oci os bucket create".

Below is the output when I ran it for my compartment and created the bucket "acmedb_migrate".

 oci os bucket create --namespace id2avsofo --name acmedb_migrate --compartment-id ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "auto-tiering": null,
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "created-by": "ocid1.user.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "defined-tags": {
      "Oracle-Tags": {
        "CreatedBy": "oracleidentitycloudservice/john.smith@oracle.com",
        "CreatedOn": "2022-06-21T14:36:19.680Z"
      }
    },
    "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893",
    "freeform-tags": {},
    "id": "ocid1.bucket.oc1.iad.aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "is-read-only": false,
    "kms-key-id": null,
    "metadata": {},
    "name": "acmedb_migrate",
    "namespace": "id2avsofo",
    "object-events-enabled": false,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": false,
    "storage-tier": "Standard",
    "time-created": "2022-06-21T14:36:19.763000+00:00",
    "versioning": "Disabled"
  },
  "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893"
}

Step #1C - Upload the backup pieces to Object Storage in OCI


The next step is to upload all the backup pieces that are in the directory "/acmedb/ocimigrate" to OCI using the bulk upload feature.



Below is the output of the upload. Notice I used the --parallel-upload-count option to ensure a quick upload.

 oci os object bulk-upload --namespace-name id20skavsofo    --bucket-name acmedb_migrate --src-dir /acmedb/ocimigrate/ --parallel-upload-count 10

Uploaded backup_RADB_3u10k6hj_126_1_1  [####################################]  100%
Uploaded backup_RADB_4710k6jl_135_1_1  [####################################]  100%
Uploaded backup_RADB_4610k6jh_134_1_1  [####################################]  100%
Uploaded backup_RADB_3n10k6b0_119_1_1  [####################################]  100%
Uploaded backup_RADB_3m10k6b0_118_1_1  [####################################]  100%
Uploaded backup_RADB_3r10k6ec_123_1_1  [####################################]  100%
Uploaded backup_RADB_4510k6jh_133_1_1  [####################################]  100%
Uploaded backup_RADB_4010k6hj_128_1_1  [####################################]  100%
Uploaded backup_RADB_3v10k6hj_127_1_1  [####################################]  100%
Uploaded backup_RADB_4110k6hk_129_1_1  [####################################]  100%
Uploaded backup_RADB_4210k6id_130_1_1  [####################################]  100%
Uploaded backup_RADB_4310k6ie_131_1_1  [####################################]  100%
Uploaded backup_RADB_3l10k6b0_117_1_1  [####################################]  100%
Uploaded backup_RADB_4410k6ie_132_1_1  [####################################]  100%
Uploaded backup_RADB_3k10k6b0_116_1_1  [####################################]  100%
Uploaded backup_RADB_3t10k6hj_125_1_1  [####################################]  100%

{
  "skipped-objects": [],
  "upload-failures": {},
  "uploaded-objects": {
    "backup_RADB_3k10k6b0_116_1_1": {
      "etag": "ab4a1017-3ba7-46e2-a2ee-3f4cd9a82ad3",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "W0hYIzfAWUVzACWNudcQDg==-3"
    },
    "backup_RADB_3l10k6b0_117_1_1": {
      "etag": "a620076e-975f-4d8c-87e8-394c4cf966cd",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-multipart-md5": "zapGBx8Imcdk91JM2+gORQ==-3"
    },
    "backup_RADB_3m10k6b0_118_1_1": {
      "etag": "a96c35c0-4c0b-4646-ae38-723f92c8496e",
      "last-modified": "Tue, 21 Jun 2022 14:57:32 GMT",
      "opc-content-md5": "vNAsU3vLcjzp6OwEeLXGgA=="
    },
    "backup_RADB_3n10k6b0_119_1_1": {
      "etag": "8f565894-5097-4ebb-9569-fdd31cc0c22d",
      "last-modified": "Tue, 21 Jun 2022 14:57:31 GMT",
      "opc-content-md5": "aSUSQWv5b+EfoLy9L9UBYQ=="
    },
    "backup_RADB_3r10k6ec_123_1_1": {
      "etag": "120dead4-c8ae-44de-9d27-39e1c28a2c48",
      "last-modified": "Tue, 21 Jun 2022 14:57:33 GMT",
      "opc-content-md5": "4wHBrgZXuIMlYWriBbs1ng=="
    },
    "backup_RADB_3s10k6hh_124_1_1": {
      "etag": "07d74b7f-68d6-4a77-9c4d-42f78c51c692",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "uzRd51bAKvFjhbbsfL1YAg=="
    },
    "backup_RADB_3t10k6hj_125_1_1": {
      "etag": "e5d3225b-a687-47e1-ad31-f4270ce31ddd",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "aZIirf98ZNqwBAlIeWzuhQ==-3"
    },
    "backup_RADB_3u10k6hj_126_1_1": {
      "etag": "5f5cc5ad-4aa3-4c3a-8848-16b3442a1e2c",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "dT6EYLv1yzf6LZCn1/Dsvw=="
    },
    "backup_RADB_3v10k6hj_127_1_1": {
      "etag": "297daece-be72-475f-b40d-982fb7115cd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "Zt3h5YfHU6F771ahltYhDQ=="
    },
    "backup_RADB_4010k6hj_128_1_1": {
      "etag": "9d723f2a-962e-4d03-9283-fc8a68f53af8",
      "last-modified": "Tue, 21 Jun 2022 14:57:35 GMT",
      "opc-content-md5": "KuNzVyUQrrSsA/kgioq9oA=="
    },
    "backup_RADB_4110k6hk_129_1_1": {
      "etag": "16f7f02a-e5ae-48a2-a7d2-b6d1dedc82ad",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "24SzzZwg7iu7PV8TBpMXEg=="
    },
    "backup_RADB_4210k6id_130_1_1": {
      "etag": "0584e14f-53dc-4251-8bad-907f357a283e",
      "last-modified": "Tue, 21 Jun 2022 14:57:37 GMT",
      "opc-content-md5": "sjPsmoeFsMhZISAmaVN0vQ=="
    },
    "backup_RADB_4310k6ie_131_1_1": {
      "etag": "176aea41-dd31-4404-99f4-ffd59c521fd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:40 GMT",
      "opc-content-md5": "2ksAQ2UuU/75YyRKujlLXg=="
    },
    "backup_RADB_4410k6ie_132_1_1": {
      "etag": "766c7585-3837-490b-8563-f3be3d24c98e",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-content-md5": "sh4CFUC/vnxjmMZ5mfgT3Q=="
    },
    "backup_RADB_4510k6jh_133_1_1": {
      "etag": "2de62d73-e44c-4f25-a41d-d45c556054dd",
      "last-modified": "Tue, 21 Jun 2022 14:57:34 GMT",
      "opc-content-md5": "4tVrHqwYG57STn9W6c2Mqw=="
    },
    "backup_RADB_4610k6jh_134_1_1": {
      "etag": "4667419d-9555-4edb-bd6d-749a1ee7660b",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "/MVdDn/vA2IXUcCmtdgKnw=="
    },
    "backup_RADB_4710k6jl_135_1_1": {
      "etag": "d467810a-d62e-42b3-bf7b-019913707312",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "hq8PEQ3PUwyTMWyUBfW4ew=="
    }
  }
}


Step #2 - Create the manifest for the backup pieces.


The next step covers creating the "metadata.xml" for each object, which is the manifest that the RMAN library uses to read the backup pieces.

Again this is broken down into a few different steps.

Step #2A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

I executed the jar file, which downloaded and created the following files:
  • libopc.so - This is the library used by the Cloud Backup Module. I downloaded it into "/home/oracle/ociconfig/lib/" on my host.
  • acmedb.ora - This is the configuration file for my database backup. It was created in "/home/oracle/ociconfig/config/" on my host.
This information is used to allocate the channel in RMAN for the manifest.
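
For reference, the installer itself is just a jar file executed with your OCI credentials. Below is a sketch of the invocation; the flag names come from the module's install instructions, but the endpoint, OCIDs, key file, fingerprint, bucket name, and directories are all placeholders for your own values.

java -jar oci_install.jar \
     -host https://objectstorage.us-ashburn-1.oraclecloud.com \
     -uOCID ocid1.user.oc1..placeholder \
     -tOCID ocid1.tenancy.oc1..placeholder \
     -pvtKeyFile /home/oracle/.oci/oci_api_key.pem \
     -pubFingerPrint xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx \
     -bucket acmedb_backups \
     -walletDir /home/oracle/ociconfig/wallet \
     -libDir /home/oracle/ociconfig/lib \
     -configFile /home/oracle/ociconfig/config/acmedb.ora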

Step #2b - Generate the manifest creation command for each backup piece.

The next step is to dynamically create a script that builds the manifest for each backup piece. The command, which must be issued once per backup piece, is

send channel t1 'export backuppiece <object name>';

The script I am using pulls the backup information from the controlfile of the database and narrows the backup pieces down to just the pieces in the directory I created for this backup.
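
I won't reproduce my full script here, but a minimal SQL*Plus sketch of the idea is below. It assumes the handles for this backup all share the "backup_RADB_" prefix; adjust the filter to match your own naming.

set pagesize 0 linesize 200 feedback off trimspool on
spool send_commands.rman
select 'send channel t1 ''export backuppiece ' || handle || ''';'
  from v$backup_piece
 where handle like 'backup_RADB_%'
   and status = 'A';
spool off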



Step #2c - Execute the script with an allocated channel.

The next step is to execute the script in RMAN within a run block, after allocating a channel that points to the bucket in object storage. You create one run block with a single channel allocation, followed by a "send" command for each backup piece.

NOTE: This does not have to be executed on the host that generated the backups.  In the example below, I set my ORACLE_SID to "dummy" and created the manifest with the "dummy" instance started nomount.


Below is an example of allocating a channel to the object storage and creating the manifest for one of the backup pieces.



export ORACLE_SID=dummy
rman target /
RMAN> startup nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/dbhome_1/dbs/initdummy.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1073737792 bytes

Fixed Size                     8904768 bytes
Variable Size                276824064 bytes
Database Buffers             780140544 bytes
Redo Buffers                   7868416 bytes

RMAN> run {
          allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
       send channel t1 'export backuppiece backup_RADB_3r10k6ec_123_1_1';
        }
2> 3> 4>
allocated channel: t1
channel t1: SID=19 device type=SBT_TAPE
channel t1: Oracle Database Backup Service Library VER=23.0.0.1

sent command to channel: t1
released channel: t1


Step #2d - Validate that the manifest was created.

I logged into the OCI console and verified that there is a directory called "sbt_catalog". This is the directory containing the manifest files. Within it you will find a subdirectory for each backup piece, and within those subdirectories a "metadata.xml" object containing the manifest.
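
You can also verify this from the command line. Assuming the OCI CLI is configured, and using a placeholder bucket name, listing the objects under the "sbt_catalog" prefix shows the manifests:

oci os object list --bucket-name acmedb_backups --prefix sbt_catalog/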

Step #3 - Catalog the backup pieces.


The next step covers cataloging the backup pieces in OCI. You need to download the controlfile backup from OCI and start up the database in mount mode.
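
As a sketch (the controlfile backup piece handle below is a placeholder for your own), restoring the controlfile and mounting could look like this:

RMAN> startup nomount;
RMAN> run {
        allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
        restore controlfile from 'backup_RADB_controlfile_placeholder';
      }
RMAN> alter database mount;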

Again this is broken down into a few different steps.

Step #3A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

Again, you need to configure the backup module (or you can copy the files from your on-premises host).

Step #3b - Catalog each backup piece.

The next step is to dynamically create a script that catalogs each backup piece. The command, which must be issued once per backup piece, is

catalog device type 'sbt_tape' backuppiece '<object name>';

Again, the script I am using pulls the backup information from the controlfile and narrows the list to just the pieces in the directory I created for this backup.
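
As with the manifest step, a minimal SQL*Plus sketch of the generation query is below, again assuming the "backup_RADB_" handle prefix:

set pagesize 0 linesize 200 feedback off trimspool on
spool catalog_commands.rman
select 'catalog device type ''sbt_tape'' backuppiece ''' || handle || ''';'
  from v$backup_piece
 where handle like 'backup_RADB_%'
   and status = 'A';
spool off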



Step #3c - Execute the script with a configured channel.

I created a configure channel command, and then cataloged the backup pieces that are in the object store.


RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';


run {
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3r10k6ec_123_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3s10k6hh_124_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3t10k6hj_125_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3u10k6hj_126_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3v10k6hj_127_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4010k6hj_128_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4110k6hk_129_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4210k6id_130_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4310k6ie_131_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4410k6ie_132_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4510k6jh_133_1_1';
}

old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN>
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=406 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=22 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=407 device type=SBT_TAPE
...
...
...
channel ORA_SBT_TAPE_4: SID=23 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: cataloged backup piece
backup piece handle=backup_RADB_4510k6jh_133_1_1 RECID=212 STAMP=1107964867

RMAN>


Step #3d - List the backup pieces cataloged

I performed a "list backup summary" to view the newly cataloged tape backup pieces.


RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310
4258    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141019
4270    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141201
4282    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4292    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4303    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4315    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4446    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4468    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4490    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4514    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4539    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202

RMAN>


Step #4 - Restore the database.


The last step is to restore the database from the cataloged backup pieces. Remember that you might have to change the location of the datafiles.
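
A minimal sketch of that restore is below. The datafile destination is a placeholder, and you would add a "set until" clause if you need point-in-time recovery.

RMAN> run {
        set newname for database to '/u01/app/oracle/oradata/RADB/%b';
        restore database;
        switch datafile all;
        recover database;
      }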



The process above can be used to upload and catalog both additional archive logs and incremental backups, allowing you to roll the database forward.
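
For example, after uploading a newer archive log piece and creating its manifest (the handle below is a placeholder), another catalog and recover pass rolls the database forward:

RMAN> catalog device type 'sbt_tape' backuppiece 'backup_RADB_arch_placeholder';
RMAN> recover database;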