Saturday, August 14, 2021

Using rclone to download Objects from OCI

 I previously created a post that walked through how to configure rclone to easily access objects within the Oracle Cloud Object Store.


Object Store access with rclone


This post is going to go a little deeper on how to quickly download objects from the OCI object store onto your host.

In my example, I needed to download RMAN disk backup files that were copied to the Object Store in OCI.

I have over 10 TB of RMAN backup pieces, so I am going to create an ACFS mount point to store them on.


1) Create ACFS mount point

Creating the mount point is made up of several small steps, which are documented here. The link points to the 19c documentation, so note that it is subject to change over time.

  • Use ASMCMD to create a volume on the data disk group. I need roughly 20 TB for my backup pieces; the example commands below use a 20G volume for illustration.
                    - Start ASMCMD connected to the Oracle ASM instance. You must be a user in the OSASM operating system group.

                    - Create the volume "volume1" on the "data" disk group

                    ASMCMD [+] > volcreate -G data -s 20G volume1

  • Use ASMCMD to list the volume information.  NOTE: my volume device is /dev/asm/volume1-123
                             
ASMCMD [+] > volinfo -G data volume1
Diskgroup Name: DATA

         Volume Name: VOLUME1
         Volume Device: /dev/asm/volume1-123
         State: ENABLED
     ... 

SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME 
     WHERE volume_name ='VOLUME1';

VOLUME_NAME        VOLUME_DEVICE
-----------------  --------------------------------------
VOLUME1            /dev/asm/volume1-123


  • Create the file system with mkfs from the volume "/dev/asm/volume1-123"
$ /sbin/mkfs -t acfs /dev/asm/volume1-123
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/volume1-123
mkfs.acfs: volume size               = 21474836480  (   20.00 GB )
mkfs.acfs: Format complete.
  • Register the file system with srvctl
# srvctl add filesystem -device /dev/asm/volume1-123 -path /acfsmounts/acfs2 \
       -user oracle -mountowner oracle -mountgroup dba -mountperm 755
NOTE: This will mount the filesystem on /acfsmounts/acfs2

  • Start the filesystem with srvctl
$ srvctl start filesystem -device /dev/asm/volume1-123

  • Change the ownership to oracle

chown -R oracle:dba /acfsmounts/acfs2
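
A quick sanity check before starting the download is to confirm the new file system is mounted and sized as expected:

$ srvctl status filesystem -device /dev/asm/volume1-123
$ df -h /acfsmounts/acfs2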

2) Use rclone to view objects

The next step is to look at the objects I want to copy to my new ACFS file system. The format for accessing the object store in the commands is
 "rclone {command} {connection name}:{bucket/optional object prefix}".


NOTE: For all examples my connection name is oci_s3 
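
For reference, a minimal rclone.conf entry for an OCI S3-compatible remote looks roughly like the one below (see my earlier configuration post for the full walk-through). The region, namespace, and keys are placeholders; the access key and secret come from a "Customer Secret Key" generated for your OCI user.

[oci_s3]
type = s3
provider = Other
access_key_id = <customer secret key id>
secret_access_key = <customer secret key>
region = us-ashburn-1
endpoint = https://<namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com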

I am going to start with the simplest command, listing buckets (lsd).

NOTE: We are using the s3 interface to view the objects in the namespace.  There is a single namespace for the entire tenancy.  With OCI there is the concept of "compartments", which can be used to separate applications and users.  The S3 interface does not have this concept, which means that all buckets in the tenancy are visible.
  • rclone lsd - This is the simplest command to list the buckets, and as I noted previously, it lists all buckets, not just my bucket.
       ./rclone lsd oci_s3:
          -1 2021-02-22 15:33:06        -1 Backups
          -1 2021-02-16 21:31:05        -1 MyCloudBucket
          -1 2020-09-23 22:21:36        -1 Test-20200923-1719
          -1 2021-07-20 20:03:27        -1 ZDM_bucket
          -1 2020-11-23 23:47:03        -1 archive
          -1 2021-01-21 13:03:33        -1 bsgbucket
          -1 2021-02-02 15:35:18        -1 bsgbuckets3
          -1 2021-03-03 11:42:13        -1 osctransfer
          -1 2021-03-19 19:57:16        -1 repo
          -1 2021-01-21 19:35:24        -1 short_retention
          -1 2020-11-12 13:41:48        -1 jsmithPublicBucket
          -1 2020-11-04 14:10:33        -1 jsmith_top_bucket
          -1 2020-11-04 11:43:55        -1 zfsrepl
          -1 2020-09-25 16:56:01        -1 zs-oci-bucket

If I want to list what is within my bucket (bsgbucket), I can list that bucket. In this case rclone treats the flat structure of the object names as if it were a file system, and lists only the top-level "directories" within my bucket.

./rclone lsd oci_s3:bsgbucket
           0 2021-08-14 23:58:02        -1 file_chunk
           0 2021-08-14 23:58:02        -1 sbt_catalog


  • rclone tree - This command lists the contents of my bucket as a tree structure.
[opc@rlcone-test rclone]$ ./rclone tree oci_s3:bsgbucket
/
├── expdat.dmp
├── file_chunk
│   └── 2985366474
│       └── MYDB
│           └── backuppiece
│               └── 2021-06-14
│                   ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
│                   │   └── yHqtjSE51L3B
│                   │       ├── 0000000001
│                   │       └── metadata.xml
│                   └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
│                       └── lS9Sdnka2nD0
│                           ├── 0000000001
│                           └── metadata.xml
└── sbt_catalog
    ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
    │   └── metadata.xml
    └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
        └── metadata.xml


  • rclone lsl - This command lists the contents of my bucket as a long listing with more detail.
[opc@rlcone-test rclone]$ ./rclone lsl oci_s3:bsgbucket
   311296 2021-01-21 13:04:05.000000000 expdat.dmp
337379328 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001
     1841 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml
 36175872 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001
     1840 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml
     1841 2021-06-14 19:48:46.000000000 sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml
     1840 2021-06-14 19:49:10.000000000 sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml


3) Use rclone to copy the objects to my local file system.


There are 2 commands you can use to copy the files from the object store to the local file system.
  • copy - This is as you expect. It copies the files to the local file system and overwrites the local copy
  • sync - This synchronizes the local file system with the objects in the object store, and will not copy down an object if a local copy already exists.

In my case I am going to use the sync command. This will allow me to restart the copy, and it will skip any objects that were previously copied successfully.

Below is the command I am using to copy (synchronize) the objects from my bucket in the object store (oci_s3:bsgbucket) to the local filesystem (/home/opc/acfs).

./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs

To break down the command:

  • -vv  This option to rclone gives me "verbose" output so I can see more of what is being copied as the command is executed.
  • -P  This option to rclone gives me feedback on how much of the object has downloaded so far to help me monitor it.
  • --multi-thread-streams 12  This option to rclone breaks larger objects into chunks to increase the concurrency.
  • --transfers 64  This option to rclone allows for 64 concurrent transfers to occur. This increases the download throughput.
  • oci_s3:bsgbucket - This is the source to copy/sync from.
  • /home/opc/acfs - This is the destination to copy/sync to.
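
Because the object names behave like a directory tree, you can also narrow the source to a prefix and pull down just one database's backup pieces. For example, using the paths shown earlier:

./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket/file_chunk/2985366474/MYDB   /home/opc/acfs/MYDB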

Finally, this is what the command looks like when it is executing.

opc@rlcone-test rclone]$  ./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs
2021/08/15 00:15:32 DEBUG : rclone: Version "v1.56.0" starting with parameters ["./rclone" "-vv" "sync" "-P" "--multi-thread-streams" "12" "--transfers" "64" "oci_s3:bsgbucket" "/home/opc/acfs"]
2021/08/15 00:15:32 DEBUG : Creating backend with remote "oci_s3:bsgbucket"
2021/08/15 00:15:32 DEBUG : Using config file from "/home/opc/.config/rclone/rclone.conf"
2021/08/15 00:15:32 DEBUG : Creating backend with remote "/home/opc/acfs"
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO  : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : expdat.dmp: md5 = f97060f5cebcbcea3ad6fadbda136f4e OK
2021-08-15 00:15:33 INFO  : expdat.dmp: Copied (new)
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for checks to finish
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for transfers to finish
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: Starting multi-thread copy with 2 parts of size 160.875Mi
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 2/2 (168689664-337379328) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO  : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001: Copied (new)
2021-08-15 00:15:34 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi finished
Transferred:      333.398Mi / 356.554 MiByte, 94%, 194.424 MiByte/s, ETA 0s
Transferred:            6 / 7, 86%
Elapsed time:         2.0s
Transferring:

NOTE: it broke up the larger object into chunks, and you can see that it downloaded 2 chunks simultaneously.  At the end you can see the file that it was in the middle of transferring.

Conclusion.

rclone is a great alternative to the OCI CLI for managing and downloading your objects.  It has more intuitive commands (like "rclone ls"), and the best part is that it doesn't require Python or special privileges to install.

Thursday, August 5, 2021

Adding immutability to buckets in the Oracle Cloud Object Store

 I am going to demonstrate a new feature of the object store that you might not have known about.  The feature is "Retention Lock" and is used to protect the objects in a bucket.



Let me first start with a few links to get you going, and then I will demonstrate how to use this feature.


To add a retention lock to a bucket, you create a rule on the individual bucket.

Below is a screen shot of where you will find the retention rules, and the "Create Rule" button. Also note that I highlighted the "Object Versioning" attribute of the bucket.

NOTE: You cannot add a retention lock to a bucket that has "Object Versioning" enabled. Once enabled, "Object Versioning" cannot be disabled, only suspended, so you MUST suspend "Object Versioning" before adding any retention rules to your bucket.



There are 3 types of retention locks; below I describe them and show you how to implement them, listed from least restrictive to most restrictive.


DATA GOVERNANCE

Data Governance is a time-based lock, driven by the modified time of EACH OBJECT in the bucket.

The Retention can be set in "days" or "years".

Below is what the settings look like for data governance. You choose "Time-Bound" for the rule type and ensure that you do not "enable retention rule lock".



With Data Governance you can both increase and decrease the duration of the retention lock.
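
The console is the easiest way to create these rules, but the same thing can be done from the OCI CLI. The flag names below are from memory and may not match your CLI version exactly, so treat this as a sketch and confirm the syntax with "oci os retention-rule create --help":

oci os retention-rule create --bucket-name bsgbucket --display-name datagovernance-30day --time-amount 30 --time-unit DAYS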

Below you can see that, after the rule was created, the rule is not locked.



REGULATORY COMPLIANCE

Regulatory Compliance is similar to Data Governance, with the exception that the duration can only be increased.
The retention lock on the individual objects is, just like Data Governance, based on when each object was last modified.
Another key difference is that when you "enable retention rule lock", you also set when this rule is locked. The default is 14 days, and it cannot be set to less than 14 days.
The 14-day delay is a "cooling-off period" that gives you time to test before the rule takes effect, because once the cooling-off period ends, the retention time cannot be shortened.


Below is a screen shot of creating a retention rule for Regulatory Compliance; note that the retention rule lock MUST be enabled to ensure the duration cannot be shortened.


It also asked me to confirm the "lock date" before the rule was created.




Below are the rules that are set after both of these steps.


NOTE: I now have 2 rules: the original Data Governance rule that locks the objects for 30 days (this can be changed as needed), and a Regulatory Compliance rule that locks the objects for 1 day. The Regulatory Compliance rule will not take effect until 14 days from today.


LEGAL HOLD

The final type of retention is a legal hold.  A legal hold will put a retention lock on the WHOLE bucket. All objects in the bucket are locked and cannot be modified/deleted until the hold is removed. There is no ending time period for a legal hold.

Below is how you create a legal hold.



SUMMARY

You can create the 3 types of retention locks, and you can even layer them. Below you can see that I have 3 locks. The Legal Hold rule will lock everything, but that can be removed, leaving the 2 remaining rules.  I can remove the Data Governance rule, but the Regulatory Compliance rule is the most restrictive. Once the 14-day cooling-off period (or whatever you set) has passed, this rule cannot be changed.


Now when I go to delete an object that is protected by a retention rule, I get an error. Below is an example of what you will see.




Wednesday, July 28, 2021

A New ZDLRA feature can help you migrate to a new ZDLRA

 A new feature was included in the 19.2.1.1.2 ZDLRA software release to help you migrate your backup strategy when moving to a new ZDLRA.


This feature allows you to continue to access your older backups during the cutover period directly from the new ZDLRA.  You point your database restore to the new ZDLRA, and it will automagically access the older backups if necessary. Once the cutover period has passed, the old ZDLRA can be retired.

I am going to walk through the steps.

1. Configure new ZDLRA

  • Add the new ZDLRA to OEM - The first step is to ensure that the new ZDLRA has been registered within your OEM environment. This will allow it to be managed, and of course monitored.
  • Add a replication VPC user to the new ZDLRA. This will be used to connect from the old ZDLRA.
  • Add the VPC users on the new ZDLRA that match the old ZDLRA
  • Configure policies on the new ZDLRA to match the old ZDLRA.
        This can be done by dynamically executing DBMS_RA.CREATE_PROTECTION_POLICY.
        Current protection policy information can be read from the RA_PROTECTION_POLICY view.
  • Add databases to proper protection policies on new ZDLRA.
        This can be done by dynamically executing DBMS_RA.ADD_DB. 
        Current database information can be read from the RA_DATABASE view.

  • Grant the replication VPC user access to all databases for replication (see the sketch after this list).
        This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS.
        The current list of databases can be read from the RA_DATABASE view.

  • Grant the VPC users access to the databases for backups/restores.
        This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS.
        The current list of grants can be read from the RA_DB_ACCESS view.
  • Create a replication server on the old ZDLRA that points to the new ZDLRA
  • Add the protection policies on the old ZDLRA to the replication server created previously.
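
As mentioned in the steps above, the grants can be scripted from the catalog views rather than typed one at a time. Below is a minimal sketch that generates the GRANT_DB_ACCESS calls for the replication user; REPL_USER is a placeholder, and the RA_DATABASE column and DBMS_RA parameter names should be verified against the DBMS_RA documentation for your ZDLRA release. The same spool-and-review pattern works for DBMS_RA.ADD_DB and DBMS_RA.CREATE_PROTECTION_POLICY.

-- Run against the new ZDLRA's recovery catalog after the databases have been added.
set pagesize 0 linesize 200 feedback off
spool grant_repl_user.sql
select 'exec dbms_ra.grant_db_access(username => ''REPL_USER'', db_unique_name => '''
       || db_unique_name || ''');'
  from ra_database
 order by db_unique_name;
spool off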

NOTE: When these steps are completed, the old ZDLRA will replicate the most recent L0 to the new ZDLRA, and will then replicate all new incremental backups and archive logs.




2. Switch to new ZDLRA for backups

  • Update the wallet on all clients to include the VPC user/Scan listener of the new ZDLRA.
  • Update the real-time redo configuration (if using real-time redo) to point to the new ZDLRA.
  • Update backup jobs to open channels to the new ZDLRA
  • Remove the VPC replication user from the new ZDLRA  
  • Drop the replication server on the old ZDLRA
NOTE: The backups will begin with an incremental backup based on the contents of the new ZDLRA and will properly create a "virtual full". Archive logs will automatically pick up with the sequence number following the last log replicated from the old ZDLRA.



3. Configure "Read-Only Mode" replication to old ZDLRA

  • Add a replication VPC user on the old ZDLRA. This will be used to connect from the new ZDLRA.
  • Create a replication server from new ZDLRA to the old ZDLRA
  • Grant the replication VPC user on the old ZDLRA access to all databases for replication.
        This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS
        The current list of databases can be read from the RA_DATABASE view.
  • Add a replication server for each policy that includes the "Read-Only" flag set to "YES".
NOTE: this will allow the new ZDLRA to pull backups from the old ZDLRA that only exist on the old ZDLRA.


4. Retire old ZDLRA after cutover period

  • Remove replication server from new ZDLRA that points to old ZDLRA
NOTE: The old ZDLRA can now be decommissioned.



That's all there is to it. This will allow you to restore from the new ZDLRA, and not have to keep track of which backups are on which appliance during the cutover window !

Tuesday, May 18, 2021

TDE queries to view your configuration

 This post contains some of the scripts I have been using on my TDE-encrypted database to see the big picture of what is being encrypted by which key.



1) Wallet information


 The first script I put together lists the status of the wallets for all tenants on all nodes. This will give you the wallet location, the type of wallet, whether the keystore is united, etc.
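
A close approximation of that script is below; it joins GV$ENCRYPTION_WALLET to GV$CONTAINERS, using the 19c column names KEYSTORE_MODE and FULLY_BACKED_UP (adjust for your release):

set linesize 200 pagesize 100
column pdb_name format a10
column wrl_parameter format a50
break on inst_id
select w.inst_id,
       c.name            pdb_name,
       w.wrl_type,
       w.wrl_parameter,
       w.status,
       w.wallet_type,
       w.keystore_mode   keystore,
       w.fully_backed_up backed_up
  from gv$encryption_wallet w
  join gv$containers c
    on c.con_id  = w.con_id
   and c.inst_id = w.inst_id
 order by w.inst_id, c.name;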



Below is the output of this script for my single node, local wallet database.

   INST_ID PDB Name   Type   WRL_PARAMETER                                     Status   WALLET_TYPE  KEYSTORE  Backed Up
---------- ---------- ------ ------------------------------------------------- -------- ------------ --------- ---------
         1 CDB$ROOT   FILE   /home/oracle/app/oracle/admin/tdecdb/wallet/tde/  OPEN     AUTOLOGIN    NONE      NO
           PDB$SEED   FILE                                                     OPEN     AUTOLOGIN    UNITED    NO
           PDBTDE1    FILE                                                     OPEN     AUTOLOGIN    UNITED    NO
           PDBTDE2    FILE                                                     OPEN     AUTOLOGIN    UNITED    NO
           PDBTDE3    FILE                                                     OPEN     AUTOLOGIN    UNITED    NO



Below is the output from a 4-node cluster with OKV configured.



INST_ID PDB Name   Type       WRL_PARAMETER                                Status              WALLET_TYPE          KEYSTORE Backed Up
------- ---------- ---------- -------------------------------------------- ------------------- -------------------- -------- ----------
     1 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED

     2 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED

     3 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED

     4 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED





2) Tablespace information

This script lists each tablespace, whether it is encrypted, and which key it is encrypted with.
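
A close approximation of that script is below. It joins CDB_TABLESPACES to V$TABLESPACE and V$ENCRYPTED_TABLESPACES; note that, unlike my output below, this version only reports a master key for tablespaces that are actually encrypted, and the truncated tablespace key column is omitted to keep the sketch short:

set linesize 200 pagesize 100
column pdb_name format a10
column tablespace_name format a15
column master_key_id format a34
break on pdb_name
select c.name                  pdb_name,
       ts.tablespace_name,
       ts.encrypted            enc,
       rawtohex(e.masterkeyid) master_key_id
  from cdb_tablespaces ts
  join v$containers c
    on c.con_id = ts.con_id
  join v$tablespace t
    on t.con_id = ts.con_id
   and t.name   = ts.tablespace_name
  left join v$encrypted_tablespaces e
    on e.con_id = t.con_id
   and e.ts#    = t.ts#
 order by c.name, ts.tablespace_name;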


Below is the output from my database.

PDB Name   Tablespace Name Enc.          Master Key ID              Key ID                             tablespace Encryt key (trunc)
---------- --------------- -----         ------------------------- ----------------------------------- ------------------------------
CDB$ROOT   SYSAUX	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   SYSTEM	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   TEMP 	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   UNDOTBS1	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   USERS	       YES	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D

PDBTDE1    SYSAUX	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   SYSTEM	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   TEMP 	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   UNDOTBS1	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   USERS	       YES	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A

PDBTDE2    SYSAUX	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   SYSTEM	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   TEMP 	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   UNDOTBS1	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   USERS	       YES	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8

PDBTDE3    SYSAUX	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
	   SYSTEM	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
    	   TEMP 	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
    	   UNDOTBS1	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
    	   USERS	       YES	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6




3) Wallet Contents

Now let's take a look at what's in my wallet.
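
The query behind this output is roughly the following, run from the root container against V$ENCRYPTION_KEYS (a sketch using the 19c view columns; use GV$ENCRYPTION_KEYS if you want every instance):

set linesize 200 pagesize 100
column key_id format a52
column tag format a20
column pdb_name format a10
select k.key_id,
       k.tag,
       c.name           pdb_name,
       k.keystore_type,
       k.origin,
       to_char(k.creation_time,   'MM/DD/YYYY HH24:MI') key_creation_time,
       to_char(k.activation_time, 'MM/DD/YYYY HH24:MI') key_activation_time
  from v$encryption_keys k
  join v$containers c
    on c.con_id = k.con_id
 order by c.name, k.creation_time;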



Below you can see the master key ID for each CDB/PDB and information about when it was created.

Master Key ID                                           Tag                  PDB Name        KEYSTORE_TYPE     Origin     Key Creation Time  Key Act. Time
------------------------------------------------------- -------------------- --------------- ----------------- ---------- ------------------ ------------------
ASd1jY/loU8Bv6HuSfZZFqAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	cdbroot_first_key    CDB$ROOT	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:46   06/28/2021 17:46
AQbOELhZAk9Dv8A2mADBKQQAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	cdbroot_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:46   06/28/2021 18:46

AfhjvV/z/U9ev5bICBLYV1MAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde1_firstkey     PDBTDE1	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:53   06/28/2021 17:53
AYQysCoXXk+Nv/Q//9sUAV4AAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde1_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:50   06/28/2021 18:50

AVXCNjl3f0+Av+/osXobX2sAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde2_firstkey     PDBTDE2	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:54   06/28/2021 17:54
AegHs2QPk09xv0HVO3B1alQAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde2_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:50   06/28/2021 18:50

Ab1/+jaPck+Ev6rhmBKtxXEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde3_firstkey     PDBTDE3	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:54   06/28/2021 17:54
AW5TJ43d8E+ZvxD8A1YhdcMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde1_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:50   06/28/2021 18:50


NOTE: I rotated my master key, and you can see both keys. Adding a tag to a key also helps identify it.


4) Control file Contents

This query looks at the x$jcbdbk table to determine the master key(s) currently in use.



PDB Name        Key ID                              Master Key ID
--------------- ----------------------------------- -------------------------
CDB$ROOT        06CE10B859024F43BFC0369800C12904    AQbOELhZAk9Dv8A2mADBKQQ=

PDB$SEED        00000000000000000000000000000000    AQAAAAAAAAAAAAAAAAAAAAA=

PDBTDE1         8432B02A175E4F8DBFF43FFFDB14015E    AYQysCoXXk+Nv/Q//9sUAV4=

PDBTDE2         E807B3640F934F71BF41D53B70756A54    AegHs2QPk09xv0HVO3B1alQ=

PDBTDE3         6E53278DDDF04F99BF10FC03562175C3    AW5TJ43d8E+ZvxD8A1YhdcM=



Conclusion:

 By looking at the queries above, you should have a better idea of how the master encryption key ties to the tablespace encryption.

 You can also see what happens when you rotate the master key, and how it affects the tablespaces.



Monday, May 10, 2021

Configuring ExaCC backups of an Oracle Database

This post covers how to configure your backups of an ExaCC database beyond the web interface. 


First off, the documentation can be found below; you can also use the "--help" option at the command line with "bkup_api".

Configuration - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/customize-backup-configuration-using-bkup_api.html

Backup execution - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/create-demand-backup.html#GUID-2370EA04-3141-4D02-B328-5EE9A10F66F2



    Step #1 - Configure backup settings in ExaCC

    The first step is to configure my database to be backed up using the tooling. This is pretty straightforward: I click on the "edit backup" button, fill in the information for my database, and save it.  In my case I am backing up to ZFS, and I need to make sure that I change my container to the container where the ZFS is configured.

    NOTE: The backup strategy is a weekly L0 (full) backup every Sunday, and a daily L1 (differential incremental) backup on all other days. The time the backup is scheduled can be found either in the backup settings or by looking at the crontab file.



    Then I just wait until I see it complete. If I click on the work requests, I can see the progress until it's finished.



    Step #2 - Update the settings to use my RMAN catalog.

    First I need to get what the current settings are for my database (dbsg2) and save them in a config file so I can update them.

    I log into the first node, and su to root.
    Once there I execute "get config --all" and save all the settings to a file that I can update.

    NOTE: I am creating a new file under the bkup_api/cfg directory to make it easy to find.

    $ sudo su -
    Last login: Thu May  6 11:43:46 PDT 2021 on pts/0
    [root@ecc ~]## /var/opt/oracle/bkup_api/bkup_api get config --all --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : get_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_92303612_20210506125612.006275.log
    File /var/opt/oracle/bkup_api/cfg/dbsg2.cfg created
    
    

    Now I am going to edit it and make some changes.

    I changed the RMAN catalog settings to use my catalog.
    NOTE: The entry has to be the connect string, not a tnsnames.ora entry.

    #### This section is applicable when using a rman catalog ####
    # Enables RMAN catalog. Can be set to yes or no.
    bkup_use_rcat=yes
    
    ## Below parameters are required if rman catalog is enabled
    # RMAN catalog user
    bkup_rcat_user=rco
    
    
    # RMAN catalog password
    #bkup_rcat_passwd=RMan19c#_
    
    # RMAN catalog conn string
    bkup_rcat_conn=ecc-scan.bgrenn.com:1521:rmanpdb.bgrenn.com
    
    
    

    Now I am going to commit (set) the changes using the "set config" command
    # /var/opt/oracle/bkup_api/bkup_api set config --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2 
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : set_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b800281f_20210506130824.084259.log
    cfgfile : /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    Using configuration file: /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    API::Parameters validated.
    UUID d0845ea0aea611eb98fb52540068a695 for this set_config(configure-backup)
    ** process started with PID: 86143
    ** see log file for monitor progress
    -------------------------------------
    
    


    And after a few minutes, I am going to check and make sure it was successful by using the configure_status command

    
    /var/opt/oracle/bkup_api/bkup_api configure_status --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : configure_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_fa81558e_20210507060019.504831.log
    * Last registered operation: 2021-05-07 12:58:41 UTC 
    * Configure backup status: finished
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 120531
    *
    * RETURN CODE:0
    ##################################################
    
    
    Everything looks good!  It removed my configuration file (which is good, because it had the password in it).  
    I found that 2 things happened as part of adding an RMAN catalog:
    1. The password  for the RMAN catalog user is now stored in the wallet file.
    2. There is an entry in my tnsnames file on all nodes for "CATALOG" which points to the rman catalog.

    NOTE: Part of this process registers the database with the RMAN catalog, so you do not have to manually register the database in the catalog.
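
    If you want to double-check that from the catalog side, a quick query as the catalog owner (rco in my case) against the RC_DATABASE view shows the registration:

    SQL> select name, dbid from rc_database where name = 'DBSG2';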

    Step #3 - Take a manual backup

    Now, logged in as opc and becoming root, I can run a special backup using bkup_api.


    # /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_9458c30f_20210510084341.430481.log
    UUID 7f6622f8b1a611eb865552540068a695 for this backup
    ** process started with PID: 336757
    ** see log file for monitor progress
    -------------------------------------
    
    

    I can see the status while it's running

    /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_46545e6f_20210510084812.014419.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 336757
      API:: Starting dbaas backup process
    *
    * RETURN CODE:0
    ##################################################
    
    

    After waiting a few minutes, I can see it was successful.

    # /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_8acd03e3_20210510085129.207757.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 336757
      API:: Starting dbaas backup process
    *************************************************
    * Backup steps
     -> 2021-05-10 08:44:20.651787 - API:: invoked with args : -dbname=dbsg2 -uuid=7f6622f8b1a611eb865552540068a695 -level1 
     -> 2021-05-10 08:44:23.458698 - API:: Wallet is in open AUTOLOGIN state
     -> 2021-05-10 08:44:24.204793 - API:: Oracle database state is up and running
     -> 2021-05-10 08:44:25.686134 - API:: CATALOG SETTINGS 
     -> 2021-05-10 08:45:19.767284 - API:: DB instance: dbsg2
     -> 2021-05-10 08:45:19.767424 - API:: Validating the backup repository ...... 
     -> 2021-05-10 08:46:38.263401 - API::      All backup pieces are ok
     -> 2021-05-10 08:46:38.263584 - API:: Validating the TDE wallet ...... 
     -> 2021-05-10 08:46:41.842706 - API:: TDE check successful.
     -> 2021-05-10 08:46:42.446560 - API:: Performing incremental backup to shared storage
     -> 2021-05-10 08:46:42.448228 - API:: Executing rman instructions
     -> 2021-05-10 08:49:21.161884 - API:: ....... OK
     -> 2021-05-10 08:49:21.162089 - API:: Incremental backup to shared storage is Completed
     -> 2021-05-10 08:49:21.163822 - API:: Starting backup of config files
     -> 2021-05-10 08:49:21.699197 - API:: Determining the oracle database id
     -> 2021-05-10 08:49:21.726308 - API::  DBID: 2005517379
     -> 2021-05-10 08:49:22.040891 - API:: Creating directories to store config files
     -> 2021-05-10 08:49:22.085476 - API:: Enabling RAC exclusions for config files.
     -> 2021-05-10 08:49:22.114211 - API:: Compressing config files into tar files
     -> 2021-05-10 08:49:22.173842 - API:: Uploading config files to NFS location
     -> 2021-05-10 08:49:22.222493 - API:: Removing temporary location /var/opt/oracle/log/dbsg2/obkup/7f6622f8b1a611eb865552540068a695.
     -> 2021-05-10 08:49:22.224071 - API:: Config files backup ended successfully
     -> 2021-05-10 08:49:26.052494 - API:: All requested tasks are completed
    *
    * RETURN CODE:0
    ##################################################
    
    


    Step #4 - Check my periodic backups


    Now it's been a few days (I started on Thursday and it's now Monday).
    I am going to check on the incremental backups, and the archive log backups.

    There are 2 ways I can do this.

    Using the bkup_api command to list the backups that have run.

    # /var/opt/oracle/bkup_api/bkup_api list --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_eddcd4e1_20210510064145.497707.log
    -> Listing all backups
      Backup Tag             Completion Date (UTC)            Type          keep    
    ----------------------   -----------------------      -----------    --------
       TAG20210506T123203     05/06/2021 19:32:03       full        False
       TAG20210506T131438     05/06/2021 20:14:38       incremental        False
       TAG20210507T012240     05/07/2021 08:22:40       incremental        False
       TAG20210508T012315     05/08/2021 08:23:15       incremental        False
       TAG20210509T012438     05/09/2021 08:24:38       full        False
       TAG20210510T012322     05/10/2021 08:23:22       incremental        False
    
    

    Using the RMAN catalog

    Backup Type         Encrypted Tag                                Backup Piece                                                 Backup Time           Day Of Week
    -------------------- --------- --------------------------------- ------------------------------------------------------------ -------------------- --------------------
    Full L0              YES       DBAAS_FULL_BACKUP20210506122626     /backup/dbaas_bkup_DBSG2_2005517379_0dvu5rp2_13_1          05/06/21 12:29:32    THURSDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210506131110     /backup/dbaas_bkup_DBSG2_2005517379_2avu5ud1_74_1          05/06/21 13:14:18    THURSDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210507011926     /backup/dbaas_bkup_DBSG2_2005517379_72vu792b_226_1         05/07/21 01:22:27    FRIDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210508011939     /backup/dbaas_bkup_DBSG2_2005517379_lbvu9tf3_683_1         05/08/21 01:22:51    SATURDAY
    Full L0              YES       DBAAS_FULL_BACKUP20210509011940     /backup/dbaas_bkup_DBSG2_2005517379_u3vuchr8_963_1         05/09/21 01:22:59    SUNDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210510011940     /backup/dbaas_bkup_DBSG2_2005517379_6rvuf672_1243_1        05/10/21 01:22:49    MONDAY
    
    
    

    NOTE: I can see that a periodic L1 (differential) backup is executed at 1:22 AM every day except Sunday, when a full backup is executed instead.

    Now to look at archive log backups -- I am going to show a subset.

    Again I can use the bkup_api "list_jobs" command and see all the backup jobs that have been run (which include archive logs).


    # /var/opt/oracle/bkup_api/bkup_api list_jobs --dbname dbsg2 | more
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list_jobs
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b2532724_20210510070545.552300.log
    UUID                             | DATE                | STATUS  | TAG                 | ACTION              
    e7ad1ef6aea011eb9c8252540068a695 | 2021-05-06 19:26:23 | success | TAG20210506T123203  | create-backup-full  
    03616d68aea211eba5aa52540068a695 | 2021-05-06 19:34:12 | success | TAG20210506T123516  | archivelog-backup   
    33fae162aea611eba0ed52540068a695 | 2021-05-06 20:04:12 | success | TAG20210506T130518  | archivelog-backup   
    267c21daaea711eb9d3852540068a695 | 2021-05-06 20:11:07 | success | TAG20210506T131438  | create-backup-incremental
    650fd222aeaa11ebb58652540068a695 | 2021-05-06 20:34:12 | success | TAG20210506T133516  | archivelog-backup   
    961831e4aeae11ebb0d452540068a695 | 2021-05-06 21:04:11 | success | TAG20210506T140517  | archivelog-backup   
    c6919f28aeb211eb957e52540068a695 | 2021-05-06 21:34:12 | success | TAG20210506T143518  | archivelog-backup   
    f7ce0d0caeb611eb97c552540068a695 | 2021-05-06 22:04:12 | success | TAG20210506T150522  | archivelog-backup   
    286e8ea6aebb11eb864c52540068a695 | 2021-05-06 22:34:11 | success | TAG20210506T153516  | archivelog-backup   
    598f77eeaebf11eb92c052540068a695 | 2021-05-06 23:04:11 | success | TAG20210506T160518  | archivelog-backup   
    89f4919aaec311eb9a9452540068a695 | 2021-05-06 23:34:11 | success | TAG20210506T163516  | archivelog-backup   
    bb5ba95eaec711ebb1ed52540068a695 | 2021-05-07 00:04:11 | success | TAG20210506T170518  | archivelog-backup   
     
    

    Step #5 - On demand backups 

    Now that I have my database configured, I am going to demonstrate some of the options you can add to your backup.

    I am going to create a keep backup and give it a tag using bkup_start

    $ /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2 --keep --tag=Maymonthlybackup
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_7d923417_20210507113940.052080.log
    UUID 958a58beaf6311eba98a52540068a695 for this backup
    ** process started with PID: 262102
    ** see log file for monitor progress
    -------------------------------------
    
    

    Now to list it.

    $ /var/opt/oracle/bkup_api/bkup_api list --dbname dbsg2 --keep
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_19714a18_20210507114254.007083.log
    -> Listing all backups
      Backup Tag                           Completion Date (UTC)      Type          keep    
    ----------------------                 -----------------------   -----------    --------
       Maymonthlybackup20210507T113125     05/07/2021 18:31:25       keep-forever   True
    
    

    Step #6 - Restore my database


    The last step I'm going to do in my database is to restore it to a previous point in time.

    Below is what you see in the console.
    NOTE - If you choose a specific time, it will be in UTC.


    I pick a time to restore to, and click on the 'Restore Database' option. I can follow the process by looking at 'Workload Requests'.




    Step #7 - Validating backups


    A great feature of the command-line tool is the ability to validate backups that have been taken.  This is easy to do with the 'bkup_api reval_start' command.

    I started a restore validation for my database dbbsg and saved the UUID so I can monitor it.

    # /var/opt/oracle/bkup_api/bkup_api reval_start --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : reval_start
    -> logfile: /var/opt/oracle/log/dbbsg/bkup_api_log/bkup_api_d0647aa8_20210511032638.300613.log
    UUID 5f204c4cb24311eb887252540068a695 for restore validation
    ** process started with PID: 15281
    ** Backup Request uuid     : 5f204c4cb24311eb887252540068a695
    
    

    Now I monitor it using the UUID until it's done, and I can see it completed successfully.

    # /var/opt/oracle/bkup_api/bkup_api --uuid=5f204c4cb24311eb887252540068a695 --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    @ STARTING CHECK STATUS 5f204c4cb24311eb887252540068a695
    [ REQUEST TICKET ]
    [UUID    ->  5f204c4cb24311eb887252540068a695 
    [DBNAME  ->  dbbsg 
    [STATE   ->  success 
    [ACTION  ->  start-restore-validate 
    [STARTED ->  2021-05-11 10:26:39 UTC 
    [ENDED   ->  2021-05-11 10:28:00 UTC 
    [PID     ->  15281 
    [TAG     ->  None 
    [PCT     ->  0.0 
    [LOG     ->  2021-05-11 03:26:39.780830 - API:: invoked with args : -dbname=dbbsg -reval=default  
    [LOG     ->  2021-05-11 03:26:42.324669 - API:: Wallet is in open AUTOLOGIN state 
    [LOG     ->  2021-05-11 03:26:42.996885 - API:: Oracle database state is up and running 
    [LOG     ->  2021-05-11 03:28:00.857565 - API:: ....... OK 
    [LOG     ->  2021-05-11 03:28:00.857645 - API:: Restore Validation is Completed 
    [ END TICKET ]
    
    

    Step #8 - Restoring/listing/backups with API

    There are many options for restoring with the API, either for the whole "database" (the CDB and all PDBs) or for just a specific PDB.

    Below are some of the commands that help with this.
    NOTE: All commands are executed using "bkup_api" from /var/opt/oracle/bkup_api as "oracle"


    Command          Option      Description
    ---------------  ----------  ------------------------------------------------------------------------
    bkup_start                   Start a new special backup now
    bkup_start       --keep      Create a keep backup
    bkup_start       --level0    Perform a new full level 0 backup
    bkup_start       --level1    Perform a new level 1 incremental backup
    bkup_start       --cron      Create an incremental backup through cron
    bkup_chkcfg                  Verify that backups have been configured
    bkup_status                  Show the status of the most recent backup
    list                         Show the list of the most recent backups
    reval_start                  Start a restore validation of datafiles
    archreval_start              Start a restore validation of archive logs
    recover_start    --latest    Recover from the latest backup
    recover_start    --scn       Recover to an SCN
    recover_start    --b         Recover using a specific backup tag and defuzzy to the following archive log
    recover_start    -t          Recover to a point in time (specify --nonutc to use a non-UTC timestamp)
    recover_status               Show the status of the most recent recovery of this database


    With recovery you can also recover just a single PDB:
    • --pdb={pdbname} - Recover just a single PDB
    You can also specify whether the config files should be restored:
    • --cfgfiles - Restore the configuration files (controlfiles, spfiles, etc.) along with the database files.

    Step #9 - Configuration changes

    You can execute "bkup_api get config --dbname={dbname}" to create a file containing the current configuration.  In that file you can see some of the other changes you can make.
    Below is what I see using the version available at the time of writing.

    Config Parameter           Settings   Description
    -------------------------  ---------  ----------------------------------------------------------------
    bkup_cron_entry            yes/no     Enable/disable automatic backups
    bkup_archlog_cron_entry    yes/no     Enable automatic archive log cleanup when not using the tooling
    bkup_cfg_files             yes/no     Enable backup of config files
    bkup_daily_time            hh24:mi    Time to execute the daily backup
    bkup_archlog_frequency     15,20,30…  How many minutes apart to execute archive log backups
    bkup_disk                  yes/no     Back up to the FRA
    bkup_disk_recovery_window  1-14       Recovery window of the FRA
    bkup_oss_xxx                          Backup settings when backing up to the Object Store in the Public Cloud
    bkup_zdlra_xx                         Backup settings when backing up to a ZDLRA
    bkup_nfs_xxx                          Backup settings when backing up to NFS
    bkup_set_section_size      yes/no     Set to yes to override the default setting
    bkup_section_size                     Value for overriding the default section size
    bkup_channels_node         xx         Number of channels to be used by RMAN
    bkup_use_rcat              yes/no     Whether you are using an RMAN catalog
    bkup_rcat_xxx                         RMAN catalog settings
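
    For example, to move the daily backup time or the archive log frequency, you would pull the config, edit those two parameters, and push it back with the same get config / set config workflow shown in Step #2 (the values below are just illustrative):

    # /var/opt/oracle/bkup_api/bkup_api get config --all --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2
    (edit dbsg2.cfg and set, for example, bkup_daily_time=03:00 and bkup_archlog_frequency=30)
    # /var/opt/oracle/bkup_api/bkup_api set config --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2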

    Step #10 - Scheduled backups


    Backups are scheduled in the crontab on the first node of the cluster. You can view the schedule by executing "sudo su -" to become root and looking at the /etc/crontab file.
    Below is what is there for my database (dbsg2)

    # Example of job definition:
    # .---------------- minute (0 - 59)
    # |  .------------- hour (0 - 23)
    # |  |  .---------- day of month (1 - 31)
    # |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
    # |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
    # |  |  |  |  |
    # *  *  *  *  * user-name  command to be executed
    
    15 * * * * oracle /var/opt/oracle/misc/backup_db_wallets.pl
    15 * * * * oracle /var/opt/oracle/dbaascli/dbaascli tde backup --alldb
    19 1 * * * oracle /var/opt/oracle/bkup_api/bkup_api bkup_start --cron --dbname=dbsg2
    4,34 * * * * oracle /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --cron --dbname=dbsg2
    
    
    The jobs that are scheduled to execute are:

    1. backup_db_wallets.pl - This script runs from the crontab at 15 minutes past every hour. It goes through the list of databases (regardless of whether database backups have been turned on) and makes a copy of the SEPS wallet file in the current wallet location, adding the current date/time to the name. The old copy is removed, so only one backup exists.

    The following 2 settings in my configuration file (/var/opt/oracle/creg/dbsg2.ini) are used as the source location of the wallet and the location for the backup:

     wallet_loc=/var/opt/oracle/dbaas_acfs/dbsg2/db_wallet
     wallet_loc_bak=/u02/app/oracle/admin/dbsg2/db_wallet

    NOTE: This wallet is used for storing user credentials and is an autologin wallet.
    I can see the credentials stored. In my case it is both the "sys" password and the password for "rco". If I were using OSS (object store), my login credential would be stored in this wallet, and if I backed up to a ZDLRA, this wallet would contain my connection to the ZDLRA(s) I was backing up to.
    2: CATALOG rco
    1: dbsg2 sys

    2. /var/opt/oracle/dbaascli/dbaascli tde backup --alldb - This script runs from the crontab at 15 minutes past every hour. It goes through the list of databases (regardless of whether database backups have been turned on) and makes a copy of the TDE wallet file under the $ORACLE_BASE directory.
    The location is $ORACLE_BASE/{db_name}/tde_wallet/tde/

    The output from this script is in /var/opt/oracle/log/misc/backup

    3. /var/opt/oracle/bkup_api/bkup_api bkup_start --cron --dbname={mydb} - Runs every day at 1:19 AM. This time is determined in the bkup_api configuration using the parameter "bkup_daily_time". This is the same API that is called to perform an on-demand backup from the command line, but with the '--cron' parameter added.

    4. /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --dbname={mydb} - Runs every 30 minutes, based on the 'bkup_archlog_frequency' bkup_api configuration setting. This script backs up my archive logs to the backup location.