
Wednesday, August 2, 2023

ZDLRA - Copy-to-cloud step by step explained

 One of the best features of the ZDLRA is the ability to dynamically create a full Keep backup and send it to Cloud (ZFSSA or OCI) for archival storage.

Here is a great article by Oracle Product Manager Marco Calmasini that explains how to use this feature.



In this blog post, I will go through the RACLI steps that you execute and explain what is happening with each step.

The documentation I started with is the 21.1 Administrator's Guide, which can be found here.  If you are on a more current release, you can find the steps in the chapter named "Archiving Backups to Cloud".


Deploying the OKV Client Software

To ensure that all the backup pieces are encrypted, you must use OKV (Oracle Key Vault) to manage the encryption keys that are being used by the ZDLRA.  Even if you are using TDE for the datafiles, the copy-to-cloud process encrypts ALL backup pieces, including the backups of the controlfile and spfile, which aren't otherwise encrypted.

I am not going to go through the detailed steps that are in the documentation to configure OKV; I will just go through the high-level process.

The most important items to note in this section are

  • Both nodes of the ZDLRA are added as endpoints, and they should have a descriptive name that identifies them, and ties them together.
  • A new endpoint group should be created with a descriptive name, and both nodes should be added to the new endpoint group.
  • A new virtual wallet is created with a descriptive name, and this needs to be both associated with the 2 endpoints and set as the default wallet for the endpoints.
  • Both endpoints of the ZDLRA are enrolled through OKV and during the enrollment process a unique enrollment token file is created for each node. It is best to immediately rename the files to identify the endpoint it is associated with using the format <myhost>-okvclient.jar.
  • Copy the enrollment token files to the /radump directory on the appropriate host (a copy example follows the note below).
NOTE: It is critical that you follow these directions exactly, and that each node has the appropriate enrollment token with the appropriate name before continuing.
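For example, once the enrollment tokens are downloaded you might rename and stage them like this. This is only a sketch; the host names are assumptions for illustration:

# rename each token immediately after download so it is tied to its endpoint
mv okvclient.jar zdlra-node01-okvclient.jar
# copy each token to /radump on the node it belongs to
scp zdlra-node01-okvclient.jar oracle@zdlra-node01:/radump/zdlra-node01-okvclient.jar
scp zdlra-node02-okvclient.jar oracle@zdlra-node02:/radump/zdlra-node02-okvclient.jar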

#1 Add credential_wallet

racli add credential_wallet


Fri Jan 1 08:56:27 2018: Start: Add Credential Wallet
Enter New Keystore Password: <OKV_endpoint_password>
Confirm New Keystore Password:
Enter New Wallet Password: <ZDLRA_credential_wallet_password> 
Confirm New Wallet Password:
Re-Enter New Wallet Password:
Fri Jan 1 08:56:40 2018: End: Add Credential Wallet

The first step to configure the ZDLRA to talk to OKV is to have the ZDLRA create a password protected SEPS wallet file that contains the OKV password.
This step asks for 2 new passwords when executing
  1. New Keystore Password - This password is the OKV endpoint password.  This password is used by the database to communicate with OKV, and can be used with okvutil to interact with OKV directly.
  2. New Wallet Password - This password is used to protect the wallet file itself that will contain the OKV keystore password.
This password file is shared across both nodes.

Update contents  - "racli add credential"
Change password  - "racli alter credential_wallet"

#2 Add keystore

racli add keystore --type hsm --restart_db

Using log file /opt/oracle.RecoveryAppliance/log/racli.log
Fri Jan 1 08:57:03 2018: Start: Configure Wallets
Fri Jan 1 08:57:04 2018: End: Configure Wallets
Fri Jan 1 08:57:04 2018: Start: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: End: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: Start: Start Listeners, and Database
Fri Jan 1 09:02:16 2018: End: Start Listeners, and Database

The second step to configure the ZDLRA to talk to OKV is to configure the ZDLRA database to communicate with OKV. The database on the ZDLRA will be configured to use the OKV wallet for encryption keys, which requires a bounce of the database.


Backout  - "racli remove keystore"
Status   - "racli status keystore"
Update   - "racli alter keystore"
Disable  - "racli disable keystore"
Enable   - "racli enable keystore"

#3 Install okv_endpoint (OKV client software)

racli install okv_endpoint

Wed August 23 20:14:40 2018: Start: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: End: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: Start: Install OKV End Point [node02]
Wed August 23 20:14:45 2018: End: Install OKV End Point [node02]

The third step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes (OKV endpoints) enrolled in OKV.  This step will install the OKV client software on both nodes of the ZDLRA, and complete the enrollment of the 2 ZDLRA nodes with OKV.  The password that was entered in step #1 for OKV is used during the enrollment process.

Status            - "racli status okv_endpoint"

NOTE: At the end of this step, the status command should return a status of online from both nodes.

Node: node02
Endpoint: Online
Node: node01
Endpoint: Online

#4 Open the Keystore

racli enable keystore

The fourth step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes open the encryption wallet in the database. This step will use the saved passwords from step #1 and open up the encryption wallet.

NOTE: This will need to be executed after any restarts of the database on the ZDLRA.

#5 Create a TDE master key for the ZDLRA in the Keystore

racli alter keystore --initialize_key

The final step to configure the ZDLRA to talk to OKV is to have the ZDLRA create its master encryption key in the wallet.

Creating Cloud Objects for Copy-to-Cloud

These steps create the cloud objects necessary to send backups to a cloud location.

NOTE: If you are configuring multiple cloud locations, you will go through these steps for each location.

Configure public/private key credentials

Authentication with the object storage is done using an X.509 certificate.  The ZDLRA steps outlined in the documentation will generate a new pair of API signing keys and register the new set of keys.
You can also use any set of API keys that you previously generated by putting your private key in the shared location on the ZDLRA nodes.
In OCI each user can only have 3 sets of API keys, but the ZFSSA has no restrictions on the number of API signing keys that can be created.
Each "cloud_key" represents an API signing key pair, and each cloud_key contains 
  1. pvt_key_path - Shared location on the ZDLRA where the private key is located
  2. fingerprint      - fingerprint associated with the private key to identify which key to use.
You can use the same "cloud_key" to authenticate to multiple buckets, and even different cloud locations.

Documentation steps to create new key pair

#1 Add Cloud_key


racli add cloud_key --key_name=sample_key

Tue Jun 18 13:22:07 2019: Using log file /opt/oracle.RecoveryAppliance/log/racli.log
Tue Jun 18 13:22:07 2019: Start: Add Cloud Key sample_key
Tue Jun 18 13:22:08 2019: Start: Creating New Keys
Tue Jun 18 13:22:08 2019: Oracle Database Cloud Backup Module Install Tool, build 19.3.0.0.0DBBKPCSBP_2019-06-13
Tue Jun 18 13:22:08 2019: OCI API signing keys are created:
Tue Jun 18 13:22:08 2019:   PRIVATE KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pvt
Tue Jun 18 13:22:08 2019:   PUBLIC  KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pub
Tue Jun 18 13:22:08 2019: Please upload the public key in the OCI console.
Tue Jun 18 13:22:08 2019: End: Creating New Keys
Tue Jun 18 13:22:09 2019: End: Add Cloud Key sample_key

This step is used to generate a new set of API signing keys.
The output of this step is a shared set of files on the ZDLRA which are stored in:
/raacfs/raadmin/cloud/key/{key_name}/

In order to complete the cloud_key information, you need to add the public key to OCI, or to the ZFSSA, and save the fingerprint that is associated with the public key. The fingerprint is used in the next step.
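If you want to see what is happening under the covers, an equivalent key pair and fingerprint can be produced manually with standard OpenSSL commands. This is only a sketch (file names are assumptions), not the racli-generated output:

# generate a 2048-bit private key and its matching public key
openssl genrsa -out oci_pvt 2048
openssl rsa -pubout -in oci_pvt -out oci_pub
# the fingerprint OCI displays for the uploaded public key is the colon-separated MD5 of its DER form
openssl rsa -pubout -outform DER -in oci_pvt | openssl md5 -c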

#2 racli alter cloud_key


racli alter cloud_key --key_name=sample_key --fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef

The fingerprint that is associated with the public key (from the previous step) is added to the ZDLRA cloud_key information so that it can be used for authentication.
Both the private key and the fingerprint are needed to use the API signing key for credentials.

Using your own API signing key pair

#1 Add cloud_key

racli add cloud_key --key_name=KEY_NAME [--fingerprint=PUBFINGERPRINT --pvt_key_path=PVTKEYFILE]

You can add your own API signing keys to the ZDLRA by using the "add cloud_key" command, identifying both the private key file location (it is best to follow the format and location used in the automated steps) and the fingerprint associated with the API signing keys.
It is assumed that the public key has already been added to OCI, or to the ZFSSA.

Status  - racli list cloud_key
Delete  - racli remove cloud_key
Update  - racli alter cloud_key

Documentation steps to create a new cloud_user 

This step is used to create the wallet entry on the ZDLRA that is used for authenticating to the object store.
This step combines the "cloud_key", which contains the API signing keys, with the user login information and the compartment (on ZFSSA the compartment is the share).
The cloud_user can be used for authentication with multiple buckets/locations that are identified as cloud_locations as long as they are within the same compartment (share on ZFSSA).

The format of the command to create a new cloud_user is below

racli add cloud_user 
--user_name=sample_user
--key_name=sample_key
--user_ocid=ocid1.user.oc1..abcedfghijklmnopqrstuvwxyz0124567901
--tenancy_ocid=ocid1.tenancy.oc1..abcedfghijklmnopqrstuvwxyz0124567902
--compartment_ocid=ocid1.compartment.oc1..abcedfghijklmnopqrstuvwxyz0124567903

The parameters for this command are

  • user_name        - This is the username associated with the cloud_user to uniquely identify it.
  • key_name         - This is the name of the "cloud_key" identifying the API signing keys to be used.
  • user_ocid          - This is the username for authentication. In OCI this is the user's OCID; on ZFSSA it combines the OCID format with the username on the ZFSSA that owns the share.
  • tenancy_ocid    - This is the tenancy OCID in OCI; on ZFSSA it is ignored.
  • compartment_ocid - This is the compartment OCID in OCI; on ZFSSA it is the share.
For more information on configuring the ZFSSA see
How to configure Zero Data Loss Recovery Appliance to use ZFS OCI Object Storage as a cloud repository (Doc ID 2761114.1)


List    - racli list cloud_user
Delete  - racli remove cloud_user
Update  - racli alter cloud_user

Documentation steps to create a new cloud_location 

This step is used to associate the cloud_user (used for authentication) with both the location and the bucket that is going to be used for backups.

racli add cloud_location
--cloud_user=<CLOUD_USER_NAME>
--host=https://<OPC_STORAGE_LOCATION>
--bucket=<OCI_BUCKET_NAME>
--proxy_port=<HOST_PORT>
--proxy_host=<PROXY_URL>
--proxy_id=<PROXY_ID>
--proxy_pass=<PROXY_PASS>
--streams=<NUM_STREAMS>
[--enable_archive=TRUE]
--archive_after_backup=<number>:[YEARS | DAYS]
[--retain_after_restore=<number_hours>:HOURS]
--import_all_trustcert=<X509_CERT_PATH>
--immutable
--temp_metadata_bucket=<metadata_bucket>  


 

I am going to go through the key items that need to be entered here, skipping over the PROXY information and the certificate.

  • cloud_user - This is the object store authentication information that was created in the previous steps.
  • host - This is the URL for the object storage location. On ZFS the namespace in the URL is the "share".
  • bucket - This is the bucket where the backups will be sent. The bucket will be created if it doesn't exist.
  • streams - The maximum number of channels to use when sending backups to the cloud.
  • enable_archive - Not used with ZFS. With OCI the default TRUE allows you to set an archival strategy; FALSE will automatically put backups in archival storage.
  • archive_after_backup - Not used with ZFS. Automatically configures an archival strategy in OCI.
  • retain_after_restore - Not used with ZFS. Sets the period of time that backups will remain in standard storage after a restore before returning to archival storage.
  • immutable - This allows you to set retention rules on the bucket, using the <metadata_bucket> for temporary files that need to be deleted after the backup. When using immutable you must also specify a temp_metadata_bucket.
  • temp_metadata_bucket - This is used with immutable to configure backups to go to 2 buckets; this bucket will only contain a temporary object that gets deleted after the backup completes.
This command will create multiple attribute sets (between 1 and the number of streams) for the cloud_location that can be used for sending archival backups to the cloud with different numbers of channels.
The cloud_location name (<copy_cloud_name>) is a combination of the <bucket name> and the <cloud_user>.
The attribute sets used for the copy jobs are named <cloud_location_name>_<stream number>.


Update   - racli alter cloud_location
Disable  - racli disable cloud_location - This will pause all backups going to this location
Enable   - racli enable cloud_location - This unpauses all backups going to this location
List     - racli list cloud_location
Delete   - racli remove cloud_location

NOTE: There are quite a few items to note in this section.
  • When configuring backups to go to ZFSSA use the documentation previously mentioned to ensure the parameters are correct.
  • When executing this step with ZFSSA, make sure that the default OCI location on the ZFSSA is set to the share that you are currently configuring. If you are using multiple shares for buckets, then you will have to change the ZFSSA settings as you add cloud locations.
  • When using OCI for archival ensure that you configure the archival rules using this command. This ensures that the metadata objects, which can't be archived, are excluded as part of the lifecycle management rules created during this step.
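Putting this together, a hypothetical call for an OCI bucket might look like the sketch below. The bucket name, region URL, and archival settings are assumptions for illustration, not values from my environment:

racli add cloud_location
--cloud_user=sample_user
--host=https://objectstorage.us-ashburn-1.oraclecloud.com
--bucket=zdlra_copy_to_cloud
--streams=4
--enable_archive=TRUE
--archive_after_backup=7:DAYS
--retain_after_restore=48:HOURS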


Create the job template using the documentation.


Thursday, March 23, 2023

Why DBCS (Oracle Base Database Service) in OCI can make a DBA's life much easier (even with BYOL)

DBCS (now named Oracle Base Database service, but I will call it DBCS throughout this post) in OCI  can help make a DBA's life easier.  When I was testing the new Autonomous Recovery Service for Oracle Database in OCI, I created a LOT of different DBCS systems to test backup and recovery.  Along the way I learned a lot about the workings of DBCS, and I came to appreciate how it makes sense, even if you are a BYOL (bring your own license) customer.




I'm more of an "old school" DBA, preferring the command line and scripting processes myself.  I am typically not a fan of automation.  When using DBCS I was surprised by all the things it would do for me that I would otherwise have to do manually.

Install Oracle software and create a database

Having installed Oracle software hundreds of times, and having created test databases, I didn't think I would care much about automation that did this for me.

Central Software image management

What I found in OCI is that you can create your own software images that can be used to ensure each new database environment is consistent.  OCI gives you the ability to create your own set of release images (which can include patches).  This ensures that each time I create a new DBCS environment and choose my custom image, it's running the same version in all environments. No more installing the base release, then patches, and then any possible one-off patches.  This makes the installation of the database software much, much easier, and ensures consistency.


Easy Database creation

Recently I've gotten familiar with performing silent database creations, since dbca isn't always easy to configure.  The tooling provided by DBCS will not only create a database for you, but will also configure TDE encryption (with a local wallet, or using OCI Vault).  It can even create a RAC database across 2 nodes.  And don't forget, it can create the standby for me also.


Configure ASM storage

Now this is the most interesting piece I found when using DBCS.  Not only does the DBCS service create a disk group, but it automatically stripes multiple block volumes together, maximizing performance.  This is a HUGE help in ensuring I am getting the best performance.
When I was going through what the configuration did, I built tables showing how the different storage sizes translate to the storage configurations.
There were 2 sets of configurations and DB data storage sizes: one for Flex shapes, and one for Standard shapes.

Flex


First I looked at flex, and regardless of the performance level these were the sizes.


Then within Flex, I looked at the "Balanced performance" configuration.

Balanced Performance configuration





You can see that as the DB storage available goes up, the number of disks goes up as well, allowing for higher possible IOPS than you would get from a single Block Storage device.

Below is the chart for "High Performance"

High Performance configuration



You can see that the IOPS is even higher, and it is using even more disks to get that performance.

Standard


Next I looked at Standard shapes, and regardless of the performance level these were the sizes. Note that with Standard shapes, there were many more configuration options.


Balanced Performance configuration





High Performance configuration






Benefits of DBCS

I also went through what some of the other benefits of DBCS are, and below is the list I came up with.

  • When using the DBCS service,  the storage cost is based on the Block Storage cost. This is the same cost as you would pay in an IaaS service.  Having the storage striped and configured for maximum IOPS makes this a huge plus.

  • DBCS allows you to purchase licenses if you don't have enough licenses to use the BYOL option.

  • The DBCS service price is based on OCPU and is the same regardless of the shape. Memory is included in the OCPU cost.

  • DBCS automatically configures RAC if you choose it.

  • DBCS provides tooling that automatically configures backups, can apply patches, and rotate encryption keys.

  • DBCS allows you to automate the cloning of your database, and automate any restores.

  • DBCS includes TDE, and relieves you of having to own the ASO license.  

Conclusion:

DBCS offers a lot more than you might realize. Take a deep dive into what it can do for you to save time as a DBA, and you might also realize that sometimes tooling along with automation has its benefits.


Friday, July 29, 2022

OCI Database backups with retention lock

 OCI Object Storage provides both lifecycle rules and retention lock.  How to take advantage of both these features isn't always as easy as it looks.

 In this post I will go through an example customer request and how to implement a backup strategy to accomplish the requirements.

OCI Buckets

The image above gives you an idea of what they are looking to accomplish.

Requirements

  • RMAN retention is to keep a 14 day point in time recovery window
  • All long term backups beyond 14 days are cataloged as KEEP backups
  • All buckets are protected with a retention rule to prevent backups from being deleted before they become obsolete
  • Backups are moved to lower tier storage when appropriate to save costs.

Backup strategy

  • A full backup is taken every Sunday at 5:30 PM and this backup is kept for 6 weeks.
  • Incremental backups are taken Monday through Saturday at 5:30 PM and are kept for 14 days
  • Archive log sweeps are taken 4 times a day and are kept for 14 days
  • A backup is taken the 1st day of the month at 5:30 PM and this backup is kept for 13 months.
  • A full backup is taken following the Tuesday morning bi-weekly payroll run and is kept for 7 years
This sounds easy enough.  If you look at the image above you can see what this strategy looks like in general. I took this strategy and mapped it to the 4 buckets, how they would be configured, and what they would contain. This is the image below.

OCI Object rules


Challenges


As I walked through this strategy I found that it involved some challenges. My goal was to limit the number of full backups by taking advantage of current backups.  Below are the challenges I found with this schedule:
  • The weekly full backup taken every Sunday is kept for longer than the incremental backups and archive logs. This caused 2 problems
    1. I wanted to make this backup a KEEP backup that is kept for 6 weeks before becoming obsolete.  Unfortunately KEEP backups are ignored as part of an incremental backup strategy, so I could not create a weekly full backup that was both a KEEP backup and also part of the incremental backup strategy.
    2. Since the weekly full backup is kept longer than the archive logs, I need to ensure that this backup contains the archive logs needed to defuzzy the backup without containing too many unneeded archive logs
  • The weekly full backup could fall on the 1st of the month. If this is the case it needs to be kept for 13 months otherwise it needs to be kept for 6 weeks.
  • I want the payrun backups to be immediately placed in archival storage to save costs.  When doing a restore I want to ignore these backups as they will take longer to restore.
  • When restoring and recovering the database within the 14 day window I need to include channels allocated to all the buckets that could contain those backups: 14_DAY, 6_WEEK, and 13_MONTH.

Solutions

I then worked through how I would solve each issue.

  1. Weekly full backup must be both a normal incremental backup and KEEP backup - After doing some digging I found the best way to handle this issue was to CHANGE the backup to be a KEEP backup with either a 6 week retention or a 13 month retention from the normal NOKEEP type. By using tags I can identify the backup I want to change after it is no longer needed as part of the 14 day strategy.
  2. Weekly full backup contains only archive logs needed to defuzzy - The best way to accomplish this task is to perform an archive log backup to the 14_DAY bucket immediately before taking the weekly full backup
  3. Weekly full backup requires a longer retention - This can be accomplished by checking if the full backup is being executed on the 1st of the month. If it is the 1st, the full backup will be placed in the 13_MONTH bucket.  If it is not the 1st, this backup will be placed in the 6_WEEK bucket.  This backup will be created with a TAG with a format that can be used to identify it later.
  4. Ignore bi-weekly payrun backups that are in archival storage - I found that if I execute a recovery and do not have any channels allocated to the 7_YEAR bucket, it may try to restore this backup, but it will not find it and will move to the next previous backup. Using tags will help identify that a restore from the payrun backup was attempted and ultimately bypassed.
  5. Include all possible buckets during restore - By using a run block within RMAN I can allocate channels to different buckets and ultimately include channels from all 3 appropriate buckets.
Then as a check I drew out a calendar to walk through what this strategy would look like.

OCI backup schedule


Backup examples

Finally I am including examples of what this would look like.
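To tie the jobs together, the schedule could be driven from cron. The sketch below is an assumption for illustration: the script names are hypothetical, and the bi-weekly payrun check would live inside the payrun script itself.

# min hour dom mon dow   command
30 17 * * 1-6        /home/oracle/scripts/daily_incr_backup.sh    # Mon-Sat incremental + archivelog to 14_DAY
30 17 * * 0          /home/oracle/scripts/weekly_full_backup.sh   # Sunday full to 6_WEEK or 13_MONTH
30 5,11,17,23 * * *  /home/oracle/scripts/archivelog_sweep.sh     # archive log sweep 4 times a day to 14_DAY
30 9 * * 2           /home/oracle/scripts/payrun_keep_backup.sh   # Tuesday KEEP backup to 7_YEAR (script checks the bi-weekly cycle)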

Mon-Sat 5:30 backup job



dg=$(date +%Y%m%d)
rman target / <<EOD
run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
backup incremental level 1 database tag="incr_backup_${dg}" plus archivelog tag="arch_backup_${dg}";
   }
exit
EOD

Sun 5:30 backup job

1) Clean up archive logs first



dg=$(date +%Y%m%d:%H)
rman target / <<EOD
run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
backup archivelog tag="arch_backup_${dg}";
   }
exit
EOD

2a) If this is the 1st of the month then execute this script to send the full backup to the 13_MONTH bucket


dg=$(date +%Y%m%d)
rman target / <<EOD
run {
ALLOCATE CHANNEL monthly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
ALLOCATE CHANNEL monthly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
backup incremental level 1 database tag="full_backup_${dg}" plus archivelog tag="full_backup_${dg}";
   }
exit
EOD


2b) If this is NOT the 1st of the month execute this script and send the full backup to the 6_WEEK bucket

dg=$(date +%Y%m%d)
rman target / <<EOD
run {
ALLOCATE CHANNEL weekly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
ALLOCATE CHANNEL weekly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
backup incremental level 1 database tag="full_backup_${dg}" plus archivelog tag="full_backup_${dg}";
   }
exit
EOD


3a) If today is the 15th then change the  full backup to a 13 month retention


dg=$(date --date "-14 days" +%Y%m%d)
rman target / <<EOD
CHANGE BACKUPSET TAG="full_backup_${dg}" keep until time 'sysdate + 390';
EOD

3b) If today is NOT the 15th then change the full backup to a 6 week retention


dg=$(date --date "-14 days" +%Y%m%d)
rman target / <<EOD
CHANGE BACKUPSET TAG="full_backup_${dg}" keep until time 'sysdate + 28';
EOD
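A sketch of a wrapper for the weekly job that ties steps 1, 2a/2b, and 3a/3b together based on the day of the month; the script names and paths are assumptions for illustration:

#!/bin/bash
dom=$(date +%d)
/home/oracle/scripts/archivelog_sweep.sh                # step 1: archive log backup to 14_DAY
if [ "$dom" = "01" ]; then
   /home/oracle/scripts/full_backup_13_month.sh         # step 2a: full backup to the 13_MONTH bucket
else
   /home/oracle/scripts/full_backup_6_week.sh           # step 2b: full backup to the 6_WEEK bucket
fi
if [ "$dom" = "15" ]; then
   /home/oracle/scripts/change_keep_13_month.sh         # step 3a: CHANGE the backup from 14 days ago to a 13 month KEEP
else
   /home/oracle/scripts/change_keep_6_week.sh           # step 3b: CHANGE the backup from 14 days ago to a 6 week KEEP
fi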

Tuesday after payrun backup job 

1) Clean up archive logs first


dg=$(date +%Y%m%d:%H)
rman target / <<EOD
run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
backup archivelog tag="arch_backup_${dg}";
   }
exit
EOD

2) Execute the keep backup


dg=$(date +%Y%m%d)
rman target / <<EOD
run {
ALLOCATE CHANNEL yearly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/7_YEAR.ora)';
ALLOCATE CHANNEL yearly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/7_YEAR.ora)';
backup database tag="payrun_backup_${dg}" plus archivelog tag="full_backup_${dg}" keep until time 'sysdate + 2555';
   }
exit
EOD


Restore example

Now in order to restore, I need to allocate channels to all the possible buckets. Below is the script I used  to validate this with a "restore database validate" command.


run {
ALLOCATE CHANNEL daily1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL daily2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL weekly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
ALLOCATE CHANNEL weekly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
ALLOCATE CHANNEL monthly1 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
ALLOCATE CHANNEL monthly2 DEVICE TYPE     'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
restore database validate;
    }
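To test a specific point in time (which is what I did for the run shown below), a SET UNTIL can be added inside the same run block. This is just a sketch with a hypothetical timestamp:

run {
ALLOCATE CHANNEL daily1 DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
ALLOCATE CHANNEL weekly1 DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
ALLOCATE CHANNEL monthly1 DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
set until time "to_date('01-AUG-22 12:00:00','DD-MON-RR HH24:MI:SS')";
restore database validate;
    }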


Below is what I am seeing in the RMAN log because I picked a point in time where I want it to ignore the 7_YEAR backups.

In this case you can see that it tried to retrieve the payrun backup but failed over to the previous backup with tag "FULL_073122". This is the backup I want.


channel daily1: starting validation of datafile backup set
channel daily1: reading from backup piece h613o4a4_550_1_1
channel daily1: ORA-19870: error while restoring backup piece h613o4a4_550_1_1
ORA-19507: failed to retrieve sequential file, handle="h613o4a4_550_1_1", parms=""
ORA-27029: skgfrtrv: sbtrestore returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   KBHS-07502: File not found
KBHS-01404: See trace file /u01/app/oracle/diag/rdbms/acmedbp/acmedbp/trace/sbtio_4819_140461854265664.log for det
failover to previous backup

channel daily1: starting validation of datafile backup set
channel daily1: reading from backup piece gq13o3rm_538_1_1
channel daily1: piece handle=gq13o3rm_538_1_1 tag=FULL_073122
channel daily1: restored backup piece 1
channel daily1: validation complete, elapsed time: 00:00:08


That's all there is to it. Tags are very helpful for identifying the correct backups.



Thursday, July 28, 2022

ZFSSA replicating locked snapshots to OCI for offsite backup

ZFSSA replication can be used to create locked offsite backups. In this post I will show you how to take advantage of the new "Locked Snapshot" feature of ZFSSA and the ZFS Image in OCI to create an offsite backup strategy to OCI.

ZFSSA Snapshot Replication
If you haven't heard of the locked snapshot feature of ZFSSA, I blogged about it here.  In this post I am going to take advantage of this feature and show you how you can leverage it to provide a locked backup in the Oracle Cloud using the ZFS image available in OCI.

In order to demonstrate this I will start by following the documentation to create a ZFS image in OCI as my destination.  Here is a great place to start with creating the virtual ZFS appliance in OCI.

Step 1 - Configure remote replication from source ZFSSA to ZFS appliance in OCI. 


By enabling the "Remote Replication" service with a named destination, "downstream_zfs" in my example, I can now replicate to my ZFS appliance in OCI.

zfssa remote replication


Step 2 -  Ensure the source project/share has "Enable retention policy for Scheduled Snapshots" turned on


For my example I created a new project "Blogtest".  On the "snapshots" tab I put a checkmark next to "Enable retention policy for Scheduled Snapshots".  By checking this, the project will prevent the deletion of any locked snapshots.  This property is replicated to the downstream and will cause the replicated project shares to also honor locked snapshots.  This can also be set at the individual share level if you wish to control the configuration of locked snapshots for individual shares.

Below you can see where this is enabled for snapshots created within the project.

ZFSSA Enable Snapshot Retention


Step 3 -  Create a snapshot schedule with "locked" snapshots


The next step is to create locked snapshots. This can be done at the project level (affecting all shares) or at the share level. In my example below I gave the scheduled snapshots the label "daily_snaps".  Notice that for my example I am keeping only 1 snapshot and I am locking the snapshot at the source. In order for the snapshot to be locked at the destination:
  • Retention Policy MUST be enabled for the share (or inherited from the project).
  • The source snapshot MUST be locked when it is created
zfssa create snapshots

Step 4 -  Add replication to downstream ZFS in OCI

The next step is to add replication to the project  configuration to replicate the shares to my ZFS in OCI. Below you can see the target is my "downstream_zfs" that I configured in the "Remote Replication" service.
You can also see that I am telling the replication to "include snapshots", which are my locked snapshots, and also to "Retain user snapshots on target".  Under "Disaster Recovery" you can see that I am telling the downstream to keep a 30 day recovery point.  Even though I am only keeping 1 locked snapshot on the source, I want to keep 30 days of recovery on the downstream in OCI.

ZFSSA add replication

Step 5 -  Configure snapshots to replicate

In this step I am updating the replication action to replicate the locked scheduled snapshot to the downstream.  Notice that I changed the number of snapshots from 1 (on the source) to 30 on the destination, and I am keeping the snapshot retention locked. This will ensure that the daily locked snapshot taken on the source will replicate to the destination as a locked snapshot, and the 30 snapshots on the destination will remain locked.  The 31st (oldest) snapshot is no longer needed.

ZFSSA Autosnap replication


Step 6 -  Configure the replication schedule

The last step is to configure the replication schedule. This ensures that on a daily basis the snapshots that are configured to be replicated will be replicated regularly to the downstream. You can make this more aggressive than daily if you wish the downstream to be more in sync with the primary.  In my example below I configured the replication to occur every 10 minutes. This means that the downstream should have all updates as of 10 minutes ago or less. If I need to go back in time, I will have daily snapshots for the last 30 days that are locked and cannot be removed.

ZFSSA Replication Schedule

Step 7 -  Validate the replication


Now that I have everything configured I am going to take a look at the replicated snapshots on my destination.  I navigate to "shares", look under "replica", and find my share. By clicking on the pencil and looking at the "snapshots" tab I can see my snapshot replicated over.

zfssa downstream copy

And when I click on the pencil next to the snapshot I can see that the snapshot is locked and I can't unlock it.

zfssa downstream locked



From there I can clone the snap and create a local snapshot, back it up to object storage, or reverse the replication if needed.



Tuesday, June 21, 2022

Migrate a large oracle database to OCI from disk backup

 Migrating an Oracle database from on-premise to OCI is especially challenging when the database is quite large.  In this blog post I will walk through the steps to migrate to OCI leveraging an on-disk local backup copied to object storage.

migrate Oracle database to OCI


The basic steps to perform this task are shown in the image above.

Step #1 - Upload backup pieces to object storage.

The first step to migrate my database (acmedb) is to copy the RMAN backup pieces to the OCI object storage using the OCI Client tool.

In order to make this easier, I am breaking this step into a few smaller steps.

Step #1A - Take a full backup to a separate location on disk 


This can also be done by moving the backup pieces, or creating them with a different backup format.  By creating the backup pieces in a separate directory, I am able to take advantage of the bulk upload feature of the OCI client tool. The alternative is to create an upload statement for each backup piece.

For my RMAN backup example (acmedb) I am going to change the location of the disk backup and perform a disk backup.  I am also going to compress my backup using medium compression (this requires the ACO license).  Compressing the backup sets allows me to make the backup pieces as small as possible when transferring to the OCI object store.

Below is the output from my RMAN configuration that I am using for the backup.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ACMEDBP are:


CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/acmedb/ocimigrate/backup_%d_%U';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
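With that configuration in place, the level 0 backup taken in the next step is essentially a one-liner; a minimal sketch:

RMAN> backup incremental level 0 database plus archivelog;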

I created a new level 0 backup including archive logs and below is the "list backup summary" output showing the backup pieces.

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4125    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141019
4151    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141201
4167    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4168    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4169    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4170    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4171    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4172    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4173    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4174    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4175    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4176    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4208    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141309
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310



From the output you can see that there are a total of 14 backup pieces
  • 3 Archive log backup sets (two created before the backup of datafiles, and one after).
    • TAG20220621T141019
    • TAG20220621T141201
    • TAG20220621T141309
  • 10 Level 0 datafile backups
    • TAG20220621T141202
  • 1 controlfile backup 
    • TAG20220621T141310

Step #1B - Create the bucket in OCI and configure OCI Client

Now we need a bucket to upload the 14 RMAN backup pieces to. 

Before I can upload the objects, I need to download and configure the OCI Client tool. You can find the instructions to do this here.

Once the client tool is installed I can create the bucket and verify that the OCI Client tool is configured correctly.
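If the client tool is freshly installed, it can be configured interactively, and a quick namespace lookup confirms that authentication works. A short sketch:

oci setup config      # interactive: records the tenancy OCID, user OCID, region, and API signing key
oci os ns get         # returns your object storage namespace if the configuration is valid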

The command to create the bucket follows this general format:

oci os bucket create --namespace <namespace> --name <bucket_name> --compartment-id <compartment_ocid>

Below is the output when I ran it for my compartment and created the bucket "acmedb_migrate"

 oci os bucket create --namespace id2avsofo --name acmedb_migrate --compartment-id ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "auto-tiering": null,
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "created-by": "ocid1.user.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "defined-tags": {
      "Oracle-Tags": {
        "CreatedBy": "oracleidentitycloudservice/john.smith@oracle.com",
        "CreatedOn": "2022-06-21T14:36:19.680Z"
      }
    },
    "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893",
    "freeform-tags": {},
    "id": "ocid1.bucket.oc1.iad.aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "is-read-only": false,
    "kms-key-id": null,
    "metadata": {},
    "name": "acmedb_migrate",
    "namespace": "id2avsofo",
    "object-events-enabled": false,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": false,
    "storage-tier": "Standard",
    "time-created": "2022-06-21T14:36:19.763000+00:00",
    "versioning": "Disabled"
  },
  "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893"
}

Step #1C - Upload the backup pieces to Object Storage in OCI


The next step is to upload all the backup pieces that are in the directory "/acmedb/ocimigrate" to OCI using the bulk upload feature.



Below is the output of the upload. Notice I used the --parallel-upload-count option to ensure a quick upload.

 oci os object bulk-upload --namespace-name id20skavsofo    --bucket-name acmedb_migrate --src-dir /acmedb/ocimigrate/ --parallel-upload-count 10

Uploaded backup_RADB_3u10k6hj_126_1_1  [####################################]  100%
Uploaded backup_RADB_4710k6jl_135_1_1  [####################################]  100%
Uploaded backup_RADB_4610k6jh_134_1_1  [####################################]  100%
Uploaded backup_RADB_3n10k6b0_119_1_1  [####################################]  100%
Uploaded backup_RADB_3m10k6b0_118_1_1  [####################################]  100%
Uploaded backup_RADB_3r10k6ec_123_1_1  [####################################]  100%
Uploaded backup_RADB_4510k6jh_133_1_1  [####################################]  100%
Uploaded backup_RADB_4010k6hj_128_1_1  [####################################]  100%
Uploaded backup_RADB_3v10k6hj_127_1_1  [####################################]  100%
Uploaded backup_RADB_4110k6hk_129_1_1  [####################################]  100%
Uploaded backup_RADB_4210k6id_130_1_1  [####################################]  100%
Uploaded backup_RADB_4310k6ie_131_1_1  [####################################]  100%
Uploaded backup_RADB_3l10k6b0_117_1_1  [####################################]  100%
Uploaded backup_RADB_4410k6ie_132_1_1  [####################################]  100%
Uploaded backup_RADB_3k10k6b0_116_1_1  [####################################]  100%
Uploaded backup_RADB_3t10k6hj_125_1_1  [####################################]  100%

{
  "skipped-objects": [],
  "upload-failures": {},
  "uploaded-objects": {
    "backup_RADB_3k10k6b0_116_1_1": {
      "etag": "ab4a1017-3ba7-46e2-a2ee-3f4cd9a82ad3",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "W0hYIzfAWUVzACWNudcQDg==-3"
    },
    "backup_RADB_3l10k6b0_117_1_1": {
      "etag": "a620076e-975f-4d8c-87e8-394c4cf966cd",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-multipart-md5": "zapGBx8Imcdk91JM2+gORQ==-3"
    },
    "backup_RADB_3m10k6b0_118_1_1": {
      "etag": "a96c35c0-4c0b-4646-ae38-723f92c8496e",
      "last-modified": "Tue, 21 Jun 2022 14:57:32 GMT",
      "opc-content-md5": "vNAsU3vLcjzp6OwEeLXGgA=="
    },
    "backup_RADB_3n10k6b0_119_1_1": {
      "etag": "8f565894-5097-4ebb-9569-fdd31cc0c22d",
      "last-modified": "Tue, 21 Jun 2022 14:57:31 GMT",
      "opc-content-md5": "aSUSQWv5b+EfoLy9L9UBYQ=="
    },
    "backup_RADB_3r10k6ec_123_1_1": {
      "etag": "120dead4-c8ae-44de-9d27-39e1c28a2c48",
      "last-modified": "Tue, 21 Jun 2022 14:57:33 GMT",
      "opc-content-md5": "4wHBrgZXuIMlYWriBbs1ng=="
    },
    "backup_RADB_3s10k6hh_124_1_1": {
      "etag": "07d74b7f-68d6-4a77-9c4d-42f78c51c692",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "uzRd51bAKvFjhbbsfL1YAg=="
    },
    "backup_RADB_3t10k6hj_125_1_1": {
      "etag": "e5d3225b-a687-47e1-ad31-f4270ce31ddd",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "aZIirf98ZNqwBAlIeWzuhQ==-3"
    },
    "backup_RADB_3u10k6hj_126_1_1": {
      "etag": "5f5cc5ad-4aa3-4c3a-8848-16b3442a1e2c",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "dT6EYLv1yzf6LZCn1/Dsvw=="
    },
    "backup_RADB_3v10k6hj_127_1_1": {
      "etag": "297daece-be72-475f-b40d-982fb7115cd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "Zt3h5YfHU6F771ahltYhDQ=="
    },
    "backup_RADB_4010k6hj_128_1_1": {
      "etag": "9d723f2a-962e-4d03-9283-fc8a68f53af8",
      "last-modified": "Tue, 21 Jun 2022 14:57:35 GMT",
      "opc-content-md5": "KuNzVyUQrrSsA/kgioq9oA=="
    },
    "backup_RADB_4110k6hk_129_1_1": {
      "etag": "16f7f02a-e5ae-48a2-a7d2-b6d1dedc82ad",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "24SzzZwg7iu7PV8TBpMXEg=="
    },
    "backup_RADB_4210k6id_130_1_1": {
      "etag": "0584e14f-53dc-4251-8bad-907f357a283e",
      "last-modified": "Tue, 21 Jun 2022 14:57:37 GMT",
      "opc-content-md5": "sjPsmoeFsMhZISAmaVN0vQ=="
    },
    "backup_RADB_4310k6ie_131_1_1": {
      "etag": "176aea41-dd31-4404-99f4-ffd59c521fd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:40 GMT",
      "opc-content-md5": "2ksAQ2UuU/75YyRKujlLXg=="
    },
    "backup_RADB_4410k6ie_132_1_1": {
      "etag": "766c7585-3837-490b-8563-f3be3d24c98e",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-content-md5": "sh4CFUC/vnxjmMZ5mfgT3Q=="
    },
    "backup_RADB_4510k6jh_133_1_1": {
      "etag": "2de62d73-e44c-4f25-a41d-d45c556054dd",
      "last-modified": "Tue, 21 Jun 2022 14:57:34 GMT",
      "opc-content-md5": "4tVrHqwYG57STn9W6c2Mqw=="
    },
    "backup_RADB_4610k6jh_134_1_1": {
      "etag": "4667419d-9555-4edb-bd6d-749a1ee7660b",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "/MVdDn/vA2IXUcCmtdgKnw=="
    },
    "backup_RADB_4710k6jl_135_1_1": {
      "etag": "d467810a-d62e-42b3-bf7b-019913707312",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "hq8PEQ3PUwyTMWyUBfW4ew=="
    }
  }
}


Step #2 - Create the manifest for the backup pieces.


The next step covers creating the "metadata.xml" for each object, which is the manifest that the RMAN library uses to read the backup pieces.

Again this is broken down into a few different steps.

Step #2A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

I executed the jar file, which downloads/creates the following files.
  • libopc.so - This is the library used by the Cloud Backup module, and I downloaded it into  "/home/oracle/ociconfig/lib/" on my host
  • acmedb.ora - This is the configuration file for my database backup. This was created in "/home/oracle/ociconfig/config/" on my host
This information is used to allocate the channel in RMAN for the manifest.

Step #2b - Generate the manifest creation command for each backup piece.

The next step is to dynamically create the script that builds the manifest for each backup piece. This needs to be done for each backup piece, and the command is

send channel t1 'export backuppiece <object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
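I am not including my full script, but a sketch of the idea is below. It pulls the piece names from v$backup_piece, strips the directory, and writes out one send command per piece, assuming the pieces were created under /acmedb/ocimigrate as configured earlier:

sqlplus -s / as sysdba <<'EOF' > export_pieces.cmd
set heading off feedback off pagesize 0 linesize 200
select 'send channel t1 ''export backuppiece '||
       substr(handle, instr(handle,'/',-1)+1)||''';'
from v$backup_piece
where handle like '/acmedb/ocimigrate/%';
EOF

The generated lines can then be pasted into the run block shown in the next step.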



Step #2c - Execute the script with an allocated channel.

The next step is to execute the script in RMAN within a run block after allocating a channel to the bucket in object storage. This needs to be done for each backup piece. You create a run block with one channel allocation followed by "send" commands.

NOTE: This does not have to be executed on the host that generated the backups.  In the example below, I set my ORACLE_SID to "dummy" and performed the manifest creation with the "dummy" instance started nomount.


Below is an example of allocating a channel to the object storage and creating the manifest for one of the backup pieces.



export ORACLE_SID=dummy
 rman target /
RMAN> startup nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/dbhome_1/dbs/initdummy.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1073737792 bytes

Fixed Size                     8904768 bytes
Variable Size                276824064 bytes
Database Buffers             780140544 bytes
Redo Buffers                   7868416 bytes

RMAN> run {
          allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
       send channel t1 'export backuppiece backup_RADB_3r10k6ec_123_1_1';
        }
2> 3> 4>
allocated channel: t1
channel t1: SID=19 device type=SBT_TAPE
channel t1: Oracle Database Backup Service Library VER=23.0.0.1

sent command to channel: t1
released channel: t1


Step #2d - Validate the manifest is created.

I logged into the OCI console, and I can see that there is a directory called "sbt_catalog". This is the directory containing the manifest files. Within this directory you will find a subdirectory for each backup piece. And within those subdirectories you will find a "metadata.xml" object containing the manifest.

Step #3 - Catalog the backup pieces.


The next step covers cataloging the backup pieces in OCI. You need to download the controlfile backup from OCI and start up the database in mount mode.

Again this is broken down into a few different steps.

Step #3A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

Again, you need to configure the backup module (or you can copy the files from your on-premise host).

Step #3b - Catalog each backup piece.

The next step is to dynamically create the script to catalog each backup piece. This needs to be done for each backup piece, and the command is

catalog device type 'sbt_tape' backuppiece '<object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
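As in step #2, I am only sketching the idea here: the catalog commands can be generated from v$backup_piece in the same way, assuming the restored controlfile still records the original on-premise piece handles under /acmedb/ocimigrate:

sqlplus -s / as sysdba <<'EOF' > catalog_pieces.cmd
set heading off feedback off pagesize 0 linesize 200
select 'catalog device type ''sbt_tape'' backuppiece '''||
       substr(handle, instr(handle,'/',-1)+1)||''';'
from v$backup_piece
where handle like '/acmedb/ocimigrate/%';
EOF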



Step #3c - Execute the script with a configured channel.

I created a CONFIGURE CHANNEL command, and cataloged the backup pieces that are in the object store.


RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';


  run {
           catalog device type 'sbt_tape' backuppiece 'backup_RADB_3r10k6ec_123_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3s10k6hh_124_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3t10k6hj_125_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3u10k6hj_126_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3v10k6hj_127_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4010k6hj_128_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4110k6hk_129_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4210k6id_130_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4310k6ie_131_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4410k6ie_132_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4510k6jh_133_1_1';
        }

old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN>
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=406 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=22 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=407 device type=SBT_TAPE
...
...
...
channel ORA_SBT_TAPE_4: SID=23 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: cataloged backup piece
backup piece handle=backup_RADB_4510k6jh_133_1_1 RECID=212 STAMP=1107964867

RMAN>


Step #3d - List the backup pieces cataloged

I performed a list backup summary to view the newly cataloged tape backup pieces.


RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310
4258    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141019
4270    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141201
4282    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4292    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4303    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4315    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4446    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4468    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4490    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4514    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4539    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202

RMAN>


Step #4 - Restore the database.


The last step is to restore the cataloged backup pieces. Remember you might have to change the location of the datafiles.
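A minimal sketch of what that restore could look like; the channel parms match the catalog step, while the until time and the new datafile location are assumptions for illustration:

run {
  allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
  allocate channel t2 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
  set until time "to_date('21-JUN-22 14:15:00','DD-MON-RR HH24:MI:SS')";
  set newname for database to '/u02/oradata/ACMEDB/%b';
  restore database;
  switch datafile all;
  recover database;
}
alter database open resetlogs;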



The process above can be used to upload and catalog both additional archive logs (to bring the files forward) and incremental backups to bring the database forward.