
Monday, October 23, 2023

Oracle Recovery Service now offers retention lock

 Oracle DB Recovery Service recently added a new feature to protect backups from being prematurely deleted, even by a tenancy administrator.  This new feature adds a retention lock to the Backup Retention Period at the policy level. The image below shows the new settings that you see within the protection policy.

Enabling retention lock

The Recovery Service comes with some default policies that appear as "Oracle defined" policy types:

Name        Backup retention period
Platinum    95 days
Gold        65 days
Silver      35 days
Bronze      14 days

These policies can't be changed, and they do not enable retention lock.

In order to implement a retention lock you need to create a new protection policy, or update an existing user-defined protection policy.

Step #1 Set/Adjust "Backup retention period"

If you are creating a new "user defined" protection policy, you need to set the backup retention to a number of days between 14 and 95.  You should also take this opportunity to adjust the backup retention of an existing policy, if appropriate, before it is locked.

NOTE: Once a retention lock on the protection policy is activated (discussed in step #3), the backup retention period cannot be decreased, it can only be increased.

Step #2 Click on "enable retention lock"

This step is pretty straightforward, but the most important thing to know is that the retention lock is not immediately in effect.  Much like the retention lock that can be set on object storage, there is a minimum period of at least 14 days before the lock becomes active.

 Note: Once the grace period has expired for the policy (explained later in this blog post) the  "retention lock"  is permanent and cannot be removed.


Step #3 Set "Scheduled lock time"

As I said in the previous step, the lock isn't immediately active. In this step you set the future date/time at which the lock becomes active, and this date/time must be at least 14 days in the future.  This provides a grace period that delays when the lock on the policy becomes active. You have up until the lock activation date/time to push the scheduled lock time further into the future if it becomes necessary to further delay lock activation.

Grace Period 

I wanted to make sure I explain what happens with this grace period so that you can plan accordingly.

  • If you change an existing "user defined" policy to enable the retention lock, any databases that are a member of this policy will not have locked backups until the scheduled lock date/time activates the lock.  
  • If you add databases to a protection policy that has a retention lock enabled, the backups will not be locked until whichever of the following is further in the future:
    • The scheduled lock time for the policy, if the retention lock has not yet activated.
    • 14 days after the database is added to the protection policy.
  • Databases can be removed from a retention locked protection policy during this grace period.
  • If the policy itself is still within its grace period before activation, the backup retention period can be adjusted down for the protection policy.
NOTE: This 14-day grace period allows you to review the estimated space needed.  On the protected database summary page, for each database, you can see the "projected space for policy" in the Space Usage section.  This value can be used to estimate the "locked backup" utilization.


What happens with a retention lock?

Once the grace period expires the backups for the protected database are time locked and can't be prematurely deleted.  

The backups are protected by the following rules:

1. The database cannot be moved to another policy. No user within the tenancy, including an administrator, can remove a database from its retention-locked policy.  If it becomes necessary to move a database to another policy, an SR needs to be raised, and security policies are followed to ensure that this is an approved change.


2.  There is always a 14-day grace period in which changes can be made before the backups become locked. This is your window to verify the backup storage usage required before the lock activates.

3. Even if you check the "72 hour termination option" on the database, backups are locked throughout the retention window.


Comments:

This is a great new feature that protects backups from being deleted by anyone in the tenancy, including tenancy administrators.  This provides an extra layer of protection against an attack with compromised credentials.  Because the lock is permanent, always use the 14-day grace period to ensure the usage and duration are appropriate for your database.






Tuesday, September 5, 2023

Creating dynamic KEEP archival backups from ZDLRA

 This post covers how to utilize the new DBMS_RA.CREATE_ARCHIVAL_BACKUP procedure to dynamically create KEEP archival backups from a ZDLRA.

When using this package to schedule KEEP backups, I recommend creating restore points with every incremental backup.  Read this blog post to find out why.

PROCEDURE CREATE_ARCHIVAL_BACKUP(
   db_unique_name         IN VARCHAR2,
   from_tag               IN VARCHAR2 DEFAULT NULL,
   compression_algorithm  IN VARCHAR2 DEFAULT NULL,
   encryption_algorithm   IN VARCHAR2 DEFAULT NULL,
   restore_point          IN VARCHAR2 DEFAULT NULL,
   restore_until_scn      IN VARCHAR2 DEFAULT NULL,
   restore_until_time     IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   attribute_set_name     IN VARCHAR2,
   format                 IN VARCHAR2 DEFAULT NULL,
   autobackup_prefix      IN VARCHAR2 DEFAULT NULL,
   restore_tag            IN VARCHAR2 DEFAULT NULL,
   keep_until_time        IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   max_redo_to_apply      IN INTEGER DEFAULT 14,                   --> Added in 21.1 June PSU
   comments               IN VARCHAR2 DEFAULT NULL);

NOTE: This blog post was updated to include the MAX_REDO_TO_APPLY parameter which is not documented as of writing this post.

 The documentation can be found here.  
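For illustration, a call might look like the following, executed while connected to the ZDLRA's recovery catalog. The database name, restore point, attribute set name, and dates are made-up values for this sketch:

BEGIN
   DBMS_RA.CREATE_ARCHIVAL_BACKUP(
      db_unique_name     => 'fastdb_67s_iad',       -- protected database (example value)
      restore_point      => 'KEEP_BACKUP_103023',   -- restore point set after an incremental backup
      attribute_set_name => 'cloud_keep_attrs',     -- backup location (example value)
      restore_tag        => 'MONTHLY_10_2023',      -- easy-to-find tag for this KEEP backup
      keep_until_time    => TO_TIMESTAMP_TZ('2024-11-01 00:00:00 US/Eastern',
                                            'YYYY-MM-DD HH24:MI:SS TZR'),
      comments           => 'End-of-month KEEP backup');
END;
/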

These archival KEEP backups can be sent to either

  • TAPE - Using the copy-to-tape process you can send archival backups to physical tape, virtual tape, or any media manager that uses a "TAPE" backup type.
  • CLOUD - Using the copy-to-cloud process you can send archival backups to an OCI object store bucket which can be either on a local ZFSSA (using the OCI API protocol), or to the Oracle Cloud directly.



NOTE: When sending backups to a cloud location, retention rules can be set on the bucket LOCKING the cloud backups to ensure that they are immutable.  This is integrated with the new compliance settings on the RA21.



How to use this package

1. Identify the Database

Because this is more of an on-demand process, you have to execute the package for each database separately (rather than through a protection policy), and identify for each database the point-in-time you want to use for recovery.

2. Set Archival Restore Point

Because the archival backup is dynamically created using existing backups, the restore point works differently than if you create the KEEP backup on demand from the protected database. 


When you create a KEEP backup from the protected database, the backup contains 

    • Full backup of all datafiles
    • Backup of spfile and controlfile
    • Backup of archive logs created during the backup starting with a log switch at the beginning of the backup.
    • Final archive logs created by performing a log switch at the end of the backup.

 When you create an archival backup from the ZDLRA, the backup contains

    • Most current virtual full backup of each datafile prior to the point in time for recovery that you choose. 
    • Backup of spfile and controlfile 
    • Backup of the active archive logs generated when the oldest virtual full datafile backup started, up to the archive logs needed to recover until the point in time chosen for recovery.

As you can see, a normal KEEP backup generated by the protected database is a "self-contained" backup that can be recovered only to the point in time that the backup completed.  You can extend the recovery point by adding additional KEEP archival log backups after the backup.

The dynamically created KEEP backup generated by the ZDLRA is also a "self-contained" backup, but it can be recovered to any point in time between the completion of the last datafile backup and the restore point identified.  

Choices for a dynamic restore point 

 There are 3 options for choosing a specific restore point. If you do not set one of these options, the KEEP backup will be created as of the current point in time of the database.  

  •  RESTORE_POINT - If you set a unique restore point in the database immediately following an incremental backup (or at a later point in time), you can create a KEEP backup that will recover to that point-in-time.  When using this process, after creating the restore point you should ensure that you also perform a log switch and a log sweep to back up the archive logs.  The restore point name is used as the default RESTORE_TAG and should be unique.  The recommended name (because it is the default KEEP restore tag) is "KEEP_BACKUP_<yyyyMMddHH24miSS>".  BUT, in order to better identify the restore point, I would use a shorter name that contains just the date (assuming you are performing only a single daily incremental backup), for example "KEEP_BACKUP_MMDDYY".  By using a restore point, you can better control the amount of archive logs necessary to recover the database (see the example after this list).

 

    • Incremental forever backups ensure that the duration of the backup is much shorter than a typical full KEEP backup limiting the amount of archive logs necessary to have a recovery point.
    • Setting a restore point immediately following the backup ensures that the recovery window following the last datafile backup piece is short, also limiting the amount of archive logs necessary.

  • RESTORE_UNTIL_SCN or RESTORE_UNTIL_TIME - I am grouping these two choices together because they are so similar.  Unlike a preset restore point, either of these options will create the KEEP archival backup with a recovery point at the SCN given or the UNTIL TIME given (using the database's time zone). 


  • FROM_TAG - The documentation states that only backups containing the FROM_TAG will be considered if a FROM_TAG is set.  I am thinking this would make sense if you let the restore point default to the current time and you want to choose which backup pieces to include.  I am not sure of the full use of this option, however.
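As an example of the RESTORE_POINT approach referenced above, the nightly routine on the protected database could look like this; the restore point name uses the shortened date format suggested earlier:

-- run on the protected database immediately after the daily incremental backup completes
CREATE RESTORE POINT KEEP_BACKUP_103023;
-- switch the log so the redo covering the backup can be archived and backed up
ALTER SYSTEM ARCHIVE LOG CURRENT;
-- then run the regular archive log sweep so the logs needed for the recovery point are on the ZDLRA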


WARNING: This process only looks back 14 days for a full backup to start the KEEP backupset with.  If you do not have a full backup within the 14-day window, this can be overridden with the MAX_REDO_TO_APPLY parameter in the package call. This parameter was added in the 21.1 June PSU to allow customers to set a window farther back than 14 days.

 RECOMMENDATIONS 

  •  Because you can create up to 2048 restore points in a database, and normal restore points are automatically dropped when necessary, I would recommend creating a restore point following each incremental backup using the format mentioned above. This will allow you to create a self-contained FULL KEEP backup from any incremental backup as needed, which can be used to easily create an end-of-month KEEP backup (for example).

 

  • I would use the RESTORE_UNTIL options when it is necessary to create a KEEP backup as of a specific point-in-time regardless of when the backup completed. This would be used if the recovery point is critical.

WARNING

Before creating the archival backup, ensure you have the archive logs backed up that are needed to support the recovery point, and ensure there is enough time for the incremental backups to virtualize.  You may need to perform a log switch and execute an additional log sweep prior to scheduling the archival backup.

3. Set Archival Options


COMPRESSION_ALGORITHM - The default is no compression, and if the backup piece is already compressed, it will not try to compress the backup again.  The documentation does a good job of going through the options and why you would choose one or the other.  Keep in mind that if your database uses TDE for all the datafiles, there will be no gain with compression, and the extra resources required for compression may slow down the restore.  Also, the compression is performed by the ZDLRA (RMAN compression), but the decompression is performed by the protected database during the restore.

 ENCRYPTION_ALGORITHM - The default is no encryption, but it is important to understand that any copy-to-cloud processing MUST have encryption set.  It is also important to understand that the ZDLRA must be using OKV (Oracle Key Vault) to store the encryption keys when encryption is set. The list of algorithms can be found in the documentation.

 

4. Set Archival Location and Name

ATTRIBUTE_SET_NAME - This must be specified; it identifies the backup location to which the archival backups are sent.

FORMAT - By default the backup pieces are given handles automatically generated by the ZDLRA; this setting allows you to change the default backup piece format using normal RMAN formatting options.

AUTOBACKUP_PREFIX - By default the autobackup pieces retain their original names, but you can add a prefix to the original autobackup names. 

 

5. Set Restore TAG

 By default the RESTORE_TAG is "KEEP_BACKUP_<yyyyMMddHH24miSS>". This can be overridden to give the backup a more meaningful tag. For example, the end-of-month backup could be tagged as "MONTHLY_12_2023", making it easier to automate finding specific KEEP backups.

 RECOMMENDATIONS 

I would set the Restore Tag to a set format that makes the KEEP backups easy to find. You can see the example above. 

6. Set KEEP_UNTIL time

The default KEEP_UNTIL time is "FOREVER". In most cases you want to set an end date for the backup, allowing the ZDLRA to automatically remove the backup when it expires.  This date-time is based on the timezone of the protected database. 



 SUMMARY 

 If using this functionality to dynamically create Archival KEEP backups...

  • I would set a Restore Point in each database immediately following every incremental backup.  
  • I would schedule this procedure to create the archival backup with a formatted restore tag to make the backup easy to find.
  • If backing up to a CLOUD location, I would use retention rules to ensure the backups are immutable until they expire.

 

 

Tuesday, August 8, 2023

How to clone a single PDB onto another DB host.

Cloning a single PDB isn't always easy to do, especially if you are trying to use an existing backup rather than copying from an existing database.  In this blog post I will walk through how to restore a PDB from an existing multitenant backup to another host and plug it into another CDB.




My environment is:

DBCS database FASTDB

        db_name          = fastdb
        db_unique_name   = fastdb_67s_iad
        DB Version       = 19.19
        TDE              = Using local wallet
        Backup           = Object Storage using the Tooling
        RMAN catalog     = Using RMAN catalog to emulate ZDLRA
        PDB name         = fastdb_pdb1


Step #1 - Prepare destination

The first step is to copy over all the necessary pieces for restoring the database using the object store library.

  • TDE wallet
  • Tape Library
  • Tape Library config file
  • SEPS wallet used by backup connection
  • SPFILE contents to build a pfile
NOTE: When using a ZDLRA as a source you need to copy over the following pieces.
  • TDE wallet
  • ZDLRA library (or use the library in the $ORACLE_HOME)
  • SEPS wallet used by the channel allocation to connect to the ZDLRA
  • SPFILE contents to build a pfile

Also create any directories needed (like audit file location).
  • mkdir /u01/app/oracle/admin/fastdb_67s_iad/adump
I added the entry to the /etc/oratab file and changed my environment to point to this database name.
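For reference, the /etc/oratab entry is just the database name and the Oracle Home; mine looked something like the line below (the home path is an assumption for this environment):

fastdb:/u01/app/oracle/product/19.0.0.0/dbhome_1:N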

In my case I copied the following directories and subdirectories to the same destination on the host.
  • scp /opt/oracle/dcs/commonstore/wallets/fastdb_67s_iad/*
  • scp /opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/*
Finally, I copied some of the contents in the spfile.  Below are the critical entries.
audit_file_dest='/u01/app/oracle/admin/fastdb_67s_iad/adump'
*.compatible='19.0.0.0'
*.control_files='+RECO/FASTDB_67S_IAD/CONTROLFILE/current.256.1143303659'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain='subnet.vcn.oraclevcn.com'
*.db_files=1024
*.db_name='fastdb'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=8191g
*.db_unique_name='fastdb_67s_iad'
*.diagnostic_dest='/u01/app/oracle'
*.enable_pluggable_database=true
*.global_names=TRUE
*.log_archive_format='%t_%s_%r.dbf'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.processes=4000
*.sga_target=4g
*.tde_configuration='keystore_configuration=FILE'
*.undo_retention=900
*.undo_tablespace='UNDOTBS1'
*.wallet_root='/opt/oracle/dcs/commonstore/wallets/fastdb_67s_iad'




Step #2 - Restore controlfile

The next step is to restore the controlfile to my destination host.

I grabbed 2 pieces of information from the source database
  • DBID  - This is needed to restore the controlfile from the backup.
  • Channel configuration.
With this I executed the following to restore the controlfile.

startup nomount;
set dbid=1292000107;

 run
 {
 allocate CHANNEL sbt1 DEVICE TYPE  'SBT_TAPE' FORMAT   '%d_%I_%U_%T_%t' PARMS  'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
 restore controlfile ;
 }

and below is my output.

RMAN>  run
 {
 allocate CHANNEL sbt1 DEVICE TYPE  'SBT_TAPE' FORMAT   '%d_%I_%U_%T_%t' PARMS  'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
 restore controlfile ;
 }2> 3> 4> 5>

allocated channel: sbt1
channel sbt1: SID=1513 device type=SBT_TAPE
channel sbt1: Oracle Database Backup Service Library VER=19.0.0.1

Starting restore at 08-AUG-23

channel sbt1: starting datafile backup set restore
channel sbt1: restoring control file
channel sbt1: reading from backup piece c-1292000107-20230808-04
channel sbt1: piece handle=c-1292000107-20230808-04 tag=TAG20230808T122731
channel sbt1: restored backup piece 1
channel sbt1: restore complete, elapsed time: 00:00:01
output file name=+RECO/FASTDB_67S_IAD/CONTROLFILE/current.2393.1144350823
Finished restore at 08-AUG-23


Step #3 - Restore Datafiles for CDB and my PDB

Below are the commands I am going to execute to restore the datafiles for my CDB, my PDB, and the PDB$SEED.

First I'm going to mount the database, and I am going to spool the output to a logfile.



alter database mount;

SPOOL LOG TO '/tmp/restore.log';
set echo on;

run { 
            restore database root ;
            restore database FASTDB_PDB1;
            restore database "PDB$SEED";
     }

I went through the output, and I can see that it only restored the CDB, my PDB, and the PDB$SEED.


Step #4 - Execute report schema and review file locations


List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1040     SYSTEM               YES     +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313
3    970      SYSAUX               NO      +DATA/FASTDB_67S_IAD/DATAFILE/sysaux.284.1144351305
4    95       UNDOTBS1             YES     +DATA/FASTDB_67S_IAD/DATAFILE/undotbs1.280.1144351303
5    410      PDB$SEED:SYSTEM      NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/system.264.1143303695
6    390      PDB$SEED:SYSAUX      NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/sysaux.265.1143303695
7    50       PDB$SEED:UNDOTBS1    NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/undotbs1.266.1143303695
8    410      FASTDB_PDB1:SYSTEM   YES     +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/system.291.1144351333
9    410      FASTDB_PDB1:SYSAUX   NO      +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/sysaux.292.1144351331
10   70       FASTDB_PDB1:UNDOTBS1 YES     +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/undotbs1.281.1144351329
11   5        USERS                NO      +DATA/FASTDB_67S_IAD/DATAFILE/users.285.1144351303
12   5        FASTDB_PDB1:USERS    NO      +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/users.295.1144351329
13   420      RMANPDB:SYSTEM       YES     +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/system.285.1143999311
14   420      RMANPDB:SYSAUX       NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/sysaux.282.1143999317
15   50       RMANPDB:UNDOTBS1     YES     +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/undotbs1.281.1143999323
16   5        RMANPDB:USERS        NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/users.284.1143999309
17   100      RMANPDB:RMANDATA     NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/rmandata.280.1144001911

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +DATA/FASTDB_67S_IAD/TEMPFILE/temp.263.1143304005
2    131      PDB$SEED:TEMP        32767       +DATA/FASTDB_67S_IAD/017B5DDEB84167ACE063A100000AD816/TEMPFILE/temp.267.1143303733
4    224      FASTDB_PDB1:TEMP     4095        +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/TEMPFILE/temp.272.1143304235
6    224      RMANPDB:TEMP         4095        +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/TEMPFILE/temp.283.1143999305





Step #5 - Determine tablespaces to skip during recovery

I ran this query on my primary database and used it to build the RMAN command. The query returns the names of the tablespaces that are not part of this PDB so that I can skip them.



select '''' ||pdb_name||''':'||tablespace_name ||',' 
    from cdb_tablespaces a,
         dba_pdbs b
         where a.con_id=b.con_id(+)
         and b.pdb_name not in ('FASTDB_PDB1')
order by 1;

From the above, I built the script below that skips the tablespaces for the PDB "RMANPDB".



recover database skip forever tablespace 
'RMANPDB':RMANDATA,
'RMANPDB':SYSAUX,
'RMANPDB':SYSTEM,
'RMANPDB':TEMP,
'RMANPDB':UNDOTBS1,
'RMANPDB':USERS;
And then ran my RMAN script to recover my datafiles that were restored.
NOTE: the datafiles for my second PDB were "offline dropped"


Starting recover at 08-AUG-23
RMAN-06908: warning: operation will not run in parallel on the allocated channels
RMAN-06909: warning: parallelism require Enterprise Edition
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3771 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=4523 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=19.0.0.1
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313


...

Executing: alter database datafile 13, 14, 15, 16, 17 offline drop
starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=26
channel ORA_SBT_TAPE_1: reading from backup piece FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447
channel ORA_SBT_TAPE_1: piece handle=FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447 tag=TAG20230808T122727
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 thread=1 sequence=26
channel default: deleting archived log(s)
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 RECID=1 STAMP=1144352806
media recovery complete, elapsed time: 00:00:01
Finished recover at 08-AUG-23


Step #6 - Open database 

I opened the database and the PDB
SQL> alter database open;

Database altered.


SQL> alter pluggable database fastdb_pdb1 open;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FASTDB_PDB1                    READ ONLY  NO
         4 RMANPDB                        MOUNTED


I also went and updated my init{sid}.ora to point to the controlfile that I restored.
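Based on the controlfile restore output in Step #2, the entry in my pfile now looks like this:

*.control_files='+RECO/FASTDB_67S_IAD/CONTROLFILE/current.2393.1144350823'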


Step #7 - Create a shell PDB in the tooling

I created a new PDB that will have the name of the PDB I am going to plug in.  This is optional.




Step #8 - Switch my restored database to be a primary database

I found that the database was considered a standby database, and I needed to make it a primary in order to unplug my PDB.



SQL> RECOVER MANAGED STANDBY DATABASE FINISH;
Media recovery complete.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL> alter database commit to switchover to primary with session shutdown;



Database altered.


Step #9 - Unplug my PDB

I opened the database and unplugged my PDB.

SQL> alter database open;

Database altered.

SQL> alter pluggable database fastdb_pdb1 unplug into '/tmp/fastdb_pdb1.xml' ENCRYPT USING transport_secret;


Pluggable database altered.

SQL> drop pluggable database fastdb_pdb1 keep datafiles;

Pluggable database dropped.



Step #10 - Drop the placeholder PDB from the new CDB

Now I am unplugging and dropping the placeholder PDB.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 LAST21C_PDB1                   READ WRITE NO
         4 CLONED_FASTDB                  READ WRITE NO
SQL> alter pluggable database CLONED_FASTDB close;

Pluggable database altered.


SQL> alter pluggable database CLONED_FASTDB unplug into '/tmp/CLONED_FASTDB.xml' ENCRYPT USING transport_secret;

Pluggable database altered.

SQL> drop pluggable database CLONED_FASTDB keep datafiles;

Pluggable database dropped.



Step #11 - Plug in the PDB and open it up




create pluggable database CLONED_FASTDB USING '/tmp/fastdb_pdb1.xml' keystore identified by W3lCom3#123#123 decrypt using transport_secret
NOCOPY
TEMPFILE REUSE;
SQL>   2    3

Pluggable database created.

SQL> alter pluggable database cloned_fastdb open;



That's it.  It took a bit to track down the instructions, but this all seemed to work.


Step #12 - Clone the PDB to ensure that the tooling worked

I next cloned the PDB to make sure the tooling properly recognized my PDB, and it all worked fine. You can see that I now have a second copy of the PDB (test_clone).




Wednesday, May 10, 2023

ZDLRA real-time redo demonstrated

 One of the key features of the ZDLRA is the ability to capture changes from the database "real-time" just like a standby database does. In this blog post I am going to demonstrate what is happening during this process so that you can get a better understanding of how it works.

ZDLRA Real-time Redo


Using the GIF above, I will explain what is happening and show a demo of the process.

The ZDLRA uses the same process as a standby database.  In fact, if you look at the flow of the real-time redo, you will notice the redo blocks are sent BOTH to the local redo log files AND to the staging area on the ZDLRA.  The staging area on the ZDLRA acts just like a standby redo log does on a standby database.

As the ZDLRA receives the redo blocks from the protected database, they are validated to ensure that they contain valid Oracle redo block information.  This ensures that a man-in-the-middle attack does not change any of the backup information.  The validation process also ensures that if the database is attacked by ransomware (changing blocks), the redo received is not tainted.


The next thing to understand in this process is the logic when a LOG SWITCH occurs.  As we all know, when a log switch occurs on a database instance, the contents of the redo log are written to an archive log.  With real-time redo, this causes the contents of the redo staging area on the ZDLRA (picture a standby redo log) to become a backup set of an archive log.  The RMAN catalog on the ZDLRA is then updated with the internal location of the backup set.


Log switch operation

I am going to go through a demo of what you see happen when this process occurs.

ZDLRA is configured as a redo destination

Below you can see that my database has log archive destination 3 configured.  The destination itself is the database on the ZDLRA (zdl9); also notice that the log information will be sent for ALL_ROLES, meaning redo is sent regardless of whether the database is a primary or a standby.
Archive Dest
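For reference, a destination like this is configured the same way as a standby redo destination. A sketch of the setting is below; the connect string is an assumption for this environment, and real-time redo also requires the redo transport user and wallet configuration to authenticate to the ZDLRA:

log_archive_dest_3='SERVICE=zdl9-scan:1521/zdl9 ASYNC
                    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=zdl9'
log_archive_dest_state_3=ENABLE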


List backup of recent archive logs from RMAN catalog


Before I demonstrate what happens with the RMAN catalog, I am going to list the current archive log backups. Below you see that the most recent archive log backed up to the ZDLRA has sequence #10.

archive log backups prior
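The listing itself comes from a simple RMAN command against the catalog, for example:

RMAN> list backup of archivelog from time 'sysdate-1';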

Perform a log switch

As you see in the animation at the top of the post, when a log switch occurs, the contents of the redo log in the "redo staging area" are used to create an archive log backup that is stored and cataloged.  I am going to perform a log switch to force this process.

Log switch
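The switch itself is a single command on the protected database:

SQL> alter system switch logfile;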


List backup of archive logs from RMAN catalog

Now that the log switch occurred, you can see below that there is a new backup set created from the redo staging area.
There are a couple of interesting items to note when you look at the backup set created.

archive logs after


  1. The backup of the archive log is compressed.  As part of the policy on the ZDLRA, you have the option of having the backup of the archive log compressed when it is created from the "staged redo". This does NOT require the ACO (Advanced Compression) license. The compressed archive log will be sent back to the DB compressed during a restore operation, and the DB host will uncompress it.  Standard compression is the default option, and I recommend changing it; if you decide to compress, then MEDIUM or LOW is recommended. Keep in mind that this may put more workload on the client to uncompress the backup sets, which may affect recovery times.  NOTE: When using TDE, there will be little to no compression possible.
  2. The TAG is automatically generated. By looking at the timestamp in the RMAN catalog information, you can see that the TAG is automatically generated using the timestamp to make it unique.
  3. The handle begins with "$RSCN_", this is because the backup piece was generated by the ZDLRA itself, and archivelog backup sets will begin with these characters.

Restore and Recovery using partial log information


Now I am going to demonstrate what happens when the database crashes, and there is no time for the database to perform a log switch.

List the active redo log and current SCN

Below you can see that my currently active redo log is sequence # 12.  This is where I am going to begin my test.

begin test


Create a table 

To demonstrate what happens when the database crashes I am going to create a new table. In the table I am going to store the current date, and the current SCN. Using the current SCN we will be able to determine the redo log that contains the table creation.

table create
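The table creation is along these lines (the table name is made up); the current SCN comes from v$database:

SQL> create table redo_test as select sysdate time_created, current_scn from v$database;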


Abort the database


As you probably know, if I shut down the database gracefully, the DB will automatically clean out the redo logs and archive their contents. Because I want to demonstrate what happens with a crash, I am going to shut the database down with an ABORT to ensure the log switch doesn't occur.  Then I start the database in mount mode so I can look at the current redo log information.

abort


Verify that the log switch did not occur


Next I am going to look at the REDO Log information and verify that my table creation (SCN 32908369) is still in the active redo log and did not get archived during the shutdown.

Log switch doesn't occur
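A quick query of v$log shows the redo log status, for example:

SQL> select sequence#, first_change#, status from v$log order by sequence#;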

Restore the database


Next I am going to restore the database from backup.


restore

Recover the database


This is where the magic occurs, so I am going to show what happens step by step.

Recover using archive logs on disk


The first step the database performs is to use the available archive logs to recover the database. You can see in the screenshot below that the database recovers using archive logs on disk up to sequence #11 for thread 1.  This covers all the changes for this thread except what is in redo log sequence #12, which contains the create table we are interested in.

archives on disk

Recover using partial redo log


This step is where the magic of the ZDLRA occurs.  You can see from the screenshot below that the RMAN catalog on the ZDLRA returns the redo log information for sequence #12 even though it was never archived. The ZDLRA was able to create an archive log backup from the partial contents it had in the redo staging area.

rtr recovery

Open the database and display table contents.


This is where it all comes together.  Using the partial redo log information from Redo Log sequence #12, you can see that when the database is opened, the table creation transaction is indeed in the database even though the redo did not become an archive log.


Conclusion: I hope this post gives you a better idea of how real-time redo works on the ZDLRA, and how it handles recovering transactions after a database crash.

Monday, May 10, 2021

Configuring ExaCC backups of an Oracle Database

This post covers how to configure your backups of an ExaCC database beyond the web interface. 


First off, the documentation can be found below; you can also use the "--help" option at the command line with "bkup_api".

Configuration - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/customize-backup-configuration-using-bkup_api.html

Backup execution - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/create-demand-backup.html#GUID-2370EA04-3141-4D02-B328-5EE9A10F66F2



    Step #1 - Configure backup settings in ExaCC

    The first step is to configure my database to be backed up using the tooling. This is pretty straightforward: I click on the "edit backup" button, fill in the information for my database, and save it.  In my case I am using ZFS, and I need to make sure that I change my container to the container where the ZFS is configured.

    NOTE : The backup strategy is a weekly L0 (full) backup every Sunday and a daily L1 (differential incremental) backup on all other days. The time the backup is scheduled can be found either in the backup settings or by looking at the crontab file.



    Then I just wait until I see it complete. If I click on the work requests, I can see the progress until it's finished.



    Step #2 - Update the settings to use my RMAN catalog.

    First I need to get what the current settings are for my database (dbsg2) and save them in a config file so I can update them.

    I log into the first node, and su to root.
    Once there I execute "get config --all" and save all the settings to a file that I can update.

    NOTE : I am creating a new file under the bkup_api/cfg directory to make it easy to find.

    $ sudo su -
    Last login: Thu May  6 11:43:46 PDT 2021 on pts/0
    [root@ecc ~]## /var/opt/oracle/bkup_api/bkup_api get config --all --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : get_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_92303612_20210506125612.006275.log
    File /var/opt/oracle/bkup_api/cfg/dbsg2.cfg created
    
    

    Now I am going to edit it and make some changes.

    I changed the RMAN catalog settings to use my catalog.
    NOTE: The entry has to be the connect string, not a tnsnames.ora entry.

    #### This section is applicable when using a rman catalog ####
    # Enables RMAN catalog. Can be set to yes or no.
    bkup_use_rcat=yes
    
    ## Below parameters are required if rman catalog is enabled
    # RMAN catalog user
    bkup_rcat_user=rco
    
    
    # RMAN catalog password
    #bkup_rcat_passwd=RMan19c#_
    
    # RMAN catalog conn string
    bkup_rcat_conn=ecc-scan.bgrenn.com:1521:rmanpdb.bgrenn.com
    
    
    

    Now I am going to commit (set) the changes using the "set config" command
    # /var/opt/oracle/bkup_api/bkup_api set config --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2 
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : set_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b800281f_20210506130824.084259.log
    cfgfile : /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    Using configuration file: /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    API::Parameters validated.
    UUID d0845ea0aea611eb98fb52540068a695 for this set_config(configure-backup)
    ** process started with PID: 86143
    ** see log file for monitor progress
    -------------------------------------
    
    


    And after a few minutes, I am going to check and make sure it was successful by using the configure_status command

    
    /var/opt/oracle/bkup_api/bkup_api configure_status --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : configure_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_fa81558e_20210507060019.504831.log
    * Last registered operation: 2021-05-07 12:58:41 UTC 
    * Configure backup status: finished
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 120531
    *
    * RETURN CODE:0
    ##################################################
    
    
    Everything looks good!  It removed my configuration file (which is good, because it had the password in it).  
    I found that 2 things happened as part of adding an RMAN catalog:
    1. The password  for the RMAN catalog user is now stored in the wallet file.
    2. There is an entry in my tnsnames file on all nodes for "CATALOG" which points to the rman catalog.

    NOTE: Part of this process registers the database with the RMAN catalog; you do not have to register the database in the catalog manually.

    Step #3 - Take a manual backup

    Now, logged in as opc and becoming root, I can run a special backup using bkup_api.


    # /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_9458c30f_20210510084341.430481.log
    UUID 7f6622f8b1a611eb865552540068a695 for this backup
    ** process started with PID: 336757
    ** see log file for monitor progress
    -------------------------------------
    
    

    I can see the status while it's running

    /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_46545e6f_20210510084812.014419.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 336757
      API:: Starting dbaas backup process
    *
    * RETURN CODE:0
    ##################################################
    
    

    And I waited a few minutes, and I can see it was successful.

    # /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_8acd03e3_20210510085129.207757.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 336757
      API:: Starting dbaas backup process
    *************************************************
    * Backup steps
     -> 2021-05-10 08:44:20.651787 - API:: invoked with args : -dbname=dbsg2 -uuid=7f6622f8b1a611eb865552540068a695 -level1 
     -> 2021-05-10 08:44:23.458698 - API:: Wallet is in open AUTOLOGIN state
     -> 2021-05-10 08:44:24.204793 - API:: Oracle database state is up and running
     -> 2021-05-10 08:44:25.686134 - API:: CATALOG SETTINGS 
     -> 2021-05-10 08:45:19.767284 - API:: DB instance: dbsg2
     -> 2021-05-10 08:45:19.767424 - API:: Validating the backup repository ...... 
     -> 2021-05-10 08:46:38.263401 - API::      All backup pieces are ok
     -> 2021-05-10 08:46:38.263584 - API:: Validating the TDE wallet ...... 
     -> 2021-05-10 08:46:41.842706 - API:: TDE check successful.
     -> 2021-05-10 08:46:42.446560 - API:: Performing incremental backup to shared storage
     -> 2021-05-10 08:46:42.448228 - API:: Executing rman instructions
     -> 2021-05-10 08:49:21.161884 - API:: ....... OK
     -> 2021-05-10 08:49:21.162089 - API:: Incremental backup to shared storage is Completed
     -> 2021-05-10 08:49:21.163822 - API:: Starting backup of config files
     -> 2021-05-10 08:49:21.699197 - API:: Determining the oracle database id
     -> 2021-05-10 08:49:21.726308 - API::  DBID: 2005517379
     -> 2021-05-10 08:49:22.040891 - API:: Creating directories to store config files
     -> 2021-05-10 08:49:22.085476 - API:: Enabling RAC exclusions for config files.
     -> 2021-05-10 08:49:22.114211 - API:: Compressing config files into tar files
     -> 2021-05-10 08:49:22.173842 - API:: Uploading config files to NFS location
     -> 2021-05-10 08:49:22.222493 - API:: Removing temporary location /var/opt/oracle/log/dbsg2/obkup/7f6622f8b1a611eb865552540068a695.
     -> 2021-05-10 08:49:22.224071 - API:: Config files backup ended successfully
     -> 2021-05-10 08:49:26.052494 - API:: All requested tasks are completed
    *
    * RETURN CODE:0
    ##################################################
    
    


    Step #4 - Check my periodic backups


    Now it's been a few days (I started on Thursday and it's now Monday).
    I am going to check on the incremental backups, and the archive log backups.

    There are 2 ways I can do this.

    Using the bkup_api command to list the backups that have run.

    # /var/opt/oracle/bkup_api/bkup_api list --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_eddcd4e1_20210510064145.497707.log
    -> Listing all backups
      Backup Tag             Completion Date (UTC)            Type          keep    
    ----------------------   -----------------------      -----------    --------
       TAG20210506T123203     05/06/2021 19:32:03       full        False
       TAG20210506T131438     05/06/2021 20:14:38       incremental        False
       TAG20210507T012240     05/07/2021 08:22:40       incremental        False
       TAG20210508T012315     05/08/2021 08:23:15       incremental        False
       TAG20210509T012438     05/09/2021 08:24:38       full        False
       TAG20210510T012322     05/10/2021 08:23:22       incremental        False
    
    

    Using the RMAN catalog

    Backup Type         Encrypted Tag                                Backup Piece                                                 Backup Time           Day Of Week
    -------------------- --------- --------------------------------- ------------------------------------------------------------ -------------------- --------------------
    Full L0              YES       DBAAS_FULL_BACKUP20210506122626     /backup/dbaas_bkup_DBSG2_2005517379_0dvu5rp2_13_1          05/06/21 12:29:32    THURSDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210506131110     /backup/dbaas_bkup_DBSG2_2005517379_2avu5ud1_74_1          05/06/21 13:14:18    THURSDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210507011926     /backup/dbaas_bkup_DBSG2_2005517379_72vu792b_226_1         05/07/21 01:22:27    FRIDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210508011939     /backup/dbaas_bkup_DBSG2_2005517379_lbvu9tf3_683_1         05/08/21 01:22:51    SATURDAY
    Full L0              YES       DBAAS_FULL_BACKUP20210509011940     /backup/dbaas_bkup_DBSG2_2005517379_u3vuchr8_963_1         05/09/21 01:22:59    SUNDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210510011940     /backup/dbaas_bkup_DBSG2_2005517379_6rvuf672_1243_1        05/10/21 01:22:49    MONDAY
    
    
    

    NOTE: I can see that a periodic L1 (differential) backup is executed at 1:22 AM every day except Sunday, when a full L0 backup is executed instead.

    Now to look at archive log backups -- I am going to show a subset.

    Again, I can use the bkup_api "list_jobs" command to see all the backup jobs that have been run (which includes archive log backups).


    # /var/opt/oracle/bkup_api/bkup_api list_jobs --dbname dbsg2 | more
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list_jobs
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b2532724_20210510070545.552300.log
    UUID                             | DATE                | STATUS  | TAG                 | ACTION              
    e7ad1ef6aea011eb9c8252540068a695 | 2021-05-06 19:26:23 | success | TAG20210506T123203  | create-backup-full  
    03616d68aea211eba5aa52540068a695 | 2021-05-06 19:34:12 | success | TAG20210506T123516  | archivelog-backup   
    33fae162aea611eba0ed52540068a695 | 2021-05-06 20:04:12 | success | TAG20210506T130518  | archivelog-backup   
    267c21daaea711eb9d3852540068a695 | 2021-05-06 20:11:07 | success | TAG20210506T131438  | create-backup-incremental
    650fd222aeaa11ebb58652540068a695 | 2021-05-06 20:34:12 | success | TAG20210506T133516  | archivelog-backup   
    961831e4aeae11ebb0d452540068a695 | 2021-05-06 21:04:11 | success | TAG20210506T140517  | archivelog-backup   
    c6919f28aeb211eb957e52540068a695 | 2021-05-06 21:34:12 | success | TAG20210506T143518  | archivelog-backup   
    f7ce0d0caeb611eb97c552540068a695 | 2021-05-06 22:04:12 | success | TAG20210506T150522  | archivelog-backup   
    286e8ea6aebb11eb864c52540068a695 | 2021-05-06 22:34:11 | success | TAG20210506T153516  | archivelog-backup   
    598f77eeaebf11eb92c052540068a695 | 2021-05-06 23:04:11 | success | TAG20210506T160518  | archivelog-backup   
    89f4919aaec311eb9a9452540068a695 | 2021-05-06 23:34:11 | success | TAG20210506T163516  | archivelog-backup   
    bb5ba95eaec711ebb1ed52540068a695 | 2021-05-07 00:04:11 | success | TAG20210506T170518  | archivelog-backup   
     
    

    Step #5 - On-demand backups 

    Now that I have my database configured, I am going to demonstrate some of the options you can add to your backup.

    I am going to create a KEEP backup and give it a tag using bkup_start.

    $ /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2 --keep --tag=Maymonthlybackup
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_7d923417_20210507113940.052080.log
    UUID 958a58beaf6311eba98a52540068a695 for this backup
    ** process started with PID: 262102
    ** see log file for monitor progress
    -------------------------------------
    
    

    Now to list it.

    $ /var/opt/oracle/bkup_api/bkup_api list --dbname dbsg2 --keep
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_19714a18_20210507114254.007083.log
    -> Listing all backups
      Backup Tag                           Completion Date (UTC)      Type          keep    
    ----------------------                 -----------------------   -----------    --------
       Maymonthlybackup20210507T113125     05/07/2021 18:31:25       keep-forever   True
    
    

    Step #6 - Restore my database


    Next I am going to restore my database to a previous point in time.

    Below is what you see in the console.
    NOTE - If you choose a specific time, it will be in UTC.


    I pick a time to restore to, and click on the 'Restore Database' option. I can follow the process by looking at 'Workload Requests'.




    Step #7 - Validating backups


    A great feature of the command tool is the ability to validate backups that have been taken.  This is easy to do with the 'bkup_api reval_start' command.

    I started a restore validation for my database dbbsg, and I saved the UUID to monitor it.

    # /var/opt/oracle/bkup_api/bkup_api reval_start --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : reval_start
    -> logfile: /var/opt/oracle/log/dbbsg/bkup_api_log/bkup_api_d0647aa8_20210511032638.300613.log
    UUID 5f204c4cb24311eb887252540068a695 for restore validation
    ** process started with PID: 15281
    ** Backup Request uuid     : 5f204c4cb24311eb887252540068a695
    
    

    Now to monitor it using the uuid until it's done, and I can see it completed successfully.

    # /var/opt/oracle/bkup_api/bkup_api --uuid=5f204c4cb24311eb887252540068a695 --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    @ STARTING CHECK STATUS 5f204c4cb24311eb887252540068a695
    [ REQUEST TICKET ]
    [UUID    ->  5f204c4cb24311eb887252540068a695 
    [DBNAME  ->  dbbsg 
    [STATE   ->  success 
    [ACTION  ->  start-restore-validate 
    [STARTED ->  2021-05-11 10:26:39 UTC 
    [ENDED   ->  2021-05-11 10:28:00 UTC 
    [PID     ->  15281 
    [TAG     ->  None 
    [PCT     ->  0.0 
    [LOG     ->  2021-05-11 03:26:39.780830 - API:: invoked with args : -dbname=dbbsg -reval=default  
    [LOG     ->  2021-05-11 03:26:42.324669 - API:: Wallet is in open AUTOLOGIN state 
    [LOG     ->  2021-05-11 03:26:42.996885 - API:: Oracle database state is up and running 
    [LOG     ->  2021-05-11 03:28:00.857565 - API:: ....... OK 
    [LOG     ->  2021-05-11 03:28:00.857645 - API:: Restore Validation is Completed 
    [ END TICKET ]
    
    

    Step #8 - Restoring/listing backups with the API

    There are many options for restoring with the API, either for the whole "database" (the CDB and all PDBs) or for just a specific PDB.

    Below are some of the commands that help with this.
    NOTE: All commands are executed using "bkup_api" from /var/opt/oracle/bkup_api as "oracle"


    Command          Options    Description
    bkup_start                  Start a new special backup now
    bkup_start       --keep     Create a KEEP backup
    bkup_start       --level0   Perform a new FULL level 0 backup
    bkup_start       --level1   Perform a new level 1 incremental backup
    bkup_start       --cron     Create an incremental backup through cron
    bkup_chkcfg                 Verify that backups have been configured
    bkup_status                 Show the status of the most recent backup
    list                        Show the list of the most recent backups
    reval_start                 Start a restore validation of datafiles
    archreval_start             Start a restore validation of archive logs
    recover_start    --latest   Recover from the latest backup
    recover_start    --scn      Recover to an SCN
    recover_start    --b        Recover using a specific backup tag, applying the archive logs that follow to remove fuzziness
    recover_start    -t         Recover to a time; specify --nonutc to use a non-UTC timestamp
    recover_status              Show the status of the most recent recovery of this database


    With recovery you can also recover just a single PDB:
    • --pdb={pdbname} - Recover just a single PDB
    You can also specify whether the config files should be restored:
    • --cfgfiles - restore the configuration files (controlfiles, spfiles, etc.) along with the database files.
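
    As an example, combining these options, a single-PDB recovery from the latest backup might be invoked as shown below (the PDB name is made up for this sketch; check "bkup_api --help" for the exact syntax on your version):

    # /var/opt/oracle/bkup_api/bkup_api recover_start --latest --pdb=mypdb --dbname=dbsg2
    # /var/opt/oracle/bkup_api/bkup_api recover_status --dbname=dbsg2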

    Step #9 - Configuration changes

    You can execute "bkup_api get config --dbname={dbname}" to create a file containing the current configuration.  In that file you can see some of the other changes you can make.
    Below is what I see using the version available at the time of writing this.

    Config Parameter            Settings    Description
    bkup_cron_entry             yes/no      Enable/disable automatic backups
    bkup_archlog_cron_entry     yes/no      Enable automatic archive log cleanup when not using the tooling
    bkup_cfg_files              yes/no      Enable backup of config files
    bkup_daily_time             hh24:mi     Time to execute the daily backup
    bkup_archlog_frequency      15,20,30…   How many minutes apart to execute archive log backups
    bkup_disk                   yes/no      Back up to the FRA
    bkup_disk_recovery_window   1-14        Recovery window of the FRA
    bkup_oss_xxx                            Backup settings when backing up to Object Store in the public cloud
    bkup_zdlra_xx                           Backup settings when backing up to a ZDLRA
    bkup_nfs_xxx                            Backup settings when backing up to NFS
    bkup_set_section_size       yes/no      Set to yes to override the default section size
    bkup_section_size                       Value that overrides the default section size
    bkup_channels_node          xx          Number of channels to be used by RMAN
    bkup_use_rcat               yes/no      Whether an RMAN catalog is used
    bkup_rcat_xxx                           RMAN catalog settings
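
    For illustration, the parameters I touch most often would look like this in the generated config file (the values here are examples that match the schedule shown in the next step):

    bkup_cron_entry=yes
    bkup_daily_time=01:19
    bkup_archlog_frequency=30
    bkup_cfg_files=yes
    bkup_use_rcat=yes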

    Step #10 - Scheduled backups


    Backups are scheduled in the crontab on the first node of a cluster. You can view the schedule by executing "sudo su -" to become root and looking at the /etc/crontab file.
    Below is what is there for my database (dbsg2).

    # Example of job definition:
    # .---------------- minute (0 - 59)
    # |  .------------- hour (0 - 23)
    # |  |  .---------- day of month (1 - 31)
    # |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
    # |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
    # |  |  |  |  |
    # *  *  *  *  * user-name  command to be executed
    
    15 * * * * oracle /var/opt/oracle/misc/backup_db_wallets.pl
    15 * * * * oracle /var/opt/oracle/dbaascli/dbaascli tde backup --alldb
    19 1 * * * oracle /var/opt/oracle/bkup_api/bkup_api bkup_start --cron --dbname=dbsg2
    4,34 * * * * oracle /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --cron --dbname=dbsg2
    
    
    The jobs that are scheduled to execute are:

    1. backup_db_wallets.pl - Every hour (at 15 minutes past) this script executes from the crontab.  It goes through the list of databases (regardless of whether database backups have been turned on) and makes a copy of the SEPS wallet file in the current wallet location, adding the current date/time to the name. The old copy is removed so only one backup copy exists.

    The following two settings in my configuration file (/var/opt/oracle/creg/dbsg2.ini) are used as the source location of the wallet and the location for the backup:

     wallet_loc=/var/opt/oracle/dbaas_acfs/dbsg2/db_wallet
     wallet_loc_bak=/u02/app/oracle/admin/dbsg2/db_wallet

    NOTE: This wallet is used for storing user credentials and is an autologin wallet.
    I can see the credentials stored. In my case it is both the "sys" password and the password for "rco". If I were using OSS (object store), my login credential would be stored in this wallet, and if I backed up to a ZDLRA, this wallet would contain my connection to the ZDLRA(s) I was backing up to.
    2: CATALOG rco
    1: dbsg2 sys

    2. /var/opt/oracle/dbaascli/dbaascli tde backup --alldb - Every hour (at 15 minutes past) this script executes from the crontab.  It goes through the list of databases (regardless of whether database backups have been turned on) and makes a copy of the TDE wallet file in the $ORACLE_BASE directory.
    The location is $ORACLE_BASE/{db_name}/tde_wallet/tde/

    The output from this script is in /var/opt/oracle/log/misc/backup

    3. /var/opt/oracle/bkup_api/bkup_api bkup_start --cron --dbname={mydb} - Every day at 1:19 AM.  This time is determined in the bkup_api configuration by the parameter "bkup_daily_time".  This is the same API that is called to perform an on-demand backup from the command line, but with the '--cron' parameter added.

    4. /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --cron --dbname={mydb} - Every 30 minutes this script executes, based on the 'bkup_archlog_frequency' bkup_api configuration setting. This script backs up my archive logs to the backup location.