
Tuesday, September 5, 2023

Creating dynamic KEEP archival backups from ZDLRA

 This post covers how to utilize the new package DBMS_RA.CREATE_ARCHIVAL_BACKUP to dynamically create KEEP archival backups from a ZDLRA.

When using this package to schedule KEEP backups, I recommend creating restore points with every incremental backup.  Read this blog post to find out why.

PROCEDURE CREATE_ARCHIVAL_BACKUP(
   db_unique_name         IN VARCHAR2,
   from_tag               IN VARCHAR2 DEFAULT NULL,
   compression_algorithm  IN VARCHAR2 DEFAULT NULL,
   encryption_algorithm   IN VARCHAR2 DEFAULT NULL,
   restore_point          IN VARCHAR2 DEFAULT NULL,
   restore_until_scn      IN VARCHAR2 DEFAULT NULL,
   restore_until_time     IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   attribute_set_name     IN VARCHAR2,
   format                 IN VARCHAR2 DEFAULT NULL,
   autobackup_prefix      IN VARCHAR2 DEFAULT NULL,
   restore_tag            IN VARCHAR2 DEFAULT NULL,
   keep_until_time        IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   max_redo_to_apply      IN INTEGER DEFAULT 14,                  --> Added in 21.1 June PSU
   comments               IN VARCHAR2 DEFAULT NULL);

NOTE: This blog post was updated to include the MAX_REDO_TO_APPLY parameter, which is not documented as of the writing of this post.

 The documentation can be found here.  

These archival KEEP backups can be sent to either

  • TAPE - Using the copy-to-tape process you can send archival backups to physical tape, virtual tape, or any media manager that uses a "TAPE" backup type.
  • CLOUD - Using the copy-to-cloud process you can send archival backups to an OCI object store bucket which can be either on a local ZFSSA (using the OCI API protocol), or to the Oracle Cloud directly.



NOTE: When sending backups to a cloud location, retention rules can be set on the bucket, LOCKING the cloud backups to ensure that they are immutable.  This is integrated with the new compliance settings on the RA21.



How to use this package

1. Identify the Database

Because this is more of an on-demand process, you execute the package for each database separately (rather than through a protection policy), and you identify for each database the point-in-time you want to use for recovery.
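For example, to confirm the db_unique_name that the ZDLRA knows the database by, here is a quick sketch (run as RASYS on the ZDLRA) against the RA_DATABASE catalog view:

SQL> select db_unique_name from ra_database order by 1;

The db_unique_name returned is the value you pass to DBMS_RA.CREATE_ARCHIVAL_BACKUP.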

2. Set Archival Restore Point

Because the archival backup is dynamically created using existing backups, the restore point works differently than if you create the KEEP backup on demand from the protected database.


When you create a KEEP backup from the protected database, the backup contains 

    • Full backup of all datafiles
    • Backup of spfile and controlfile
    • Backup of archive logs created during the backup starting with a log switch at the beginning of the backup.
    • Final archive logs created by performing a log switch at the end of the backup.

 When you create an archival backup from the ZDLRA, the backup contains

    • Most current virtual full backup of each datafile prior to the point in time for recovery that you choose. 
    • Backup of spfile and controlfile 
    • Backup of the archive logs generated from the time the oldest virtual full datafile backup started, up to the archive logs needed to recover to the point in time chosen for recovery.

As you can see, a normal KEEP backup generated by the protected database is a "self-contained" backup that can be recovered only to the point in time that the backup completed.  You can extend the recovery point by adding additional KEEP archive log backups after the backup.

The dynamically created KEEP backup generated by the ZDLRA is also a "self-contained" backup, but it can be recovered to any point in time between the completion of the last datafile backup and the restore point identified.

Choices for a dynamic restore point 

 There are 3 options for choosing a specific restore point. If you do not set one of these options, the KEEP backup will be created using the database's current point in time.

  •  RESTORE_POINT - If you set a unique restore point in the database immediately following an incremental backup (or at a later point in time), you can create a KEEP backup that will recover to that point-in-time.  When using this process, after creating the restore point you should ensure that you also perform a log switch, and a log sweep to back up the archive logs.  The restore point name is used as the default RESTORE_TAG, and should be unique.  The recommended name (because it is the default KEEP restore tag) is "KEEP_BACKUP_<yyyyMMddHH24miSS>".  BUT, in order to better identify the restore point, I would use a shorter name that just contains the date (assuming you are only performing a single daily incremental backup), for example "KEEP_BACKUP_MMDDYY".  By using a restore point, you can better control the amount of archive logs necessary to recover the database (see the sketch below).

 

    • Incremental forever backups ensure that the duration of the backup is much shorter than a typical full KEEP backup, limiting the amount of archive logs necessary to reach a recovery point.
    • Setting a restore point immediately following the backup ensures that the recovery window following the last datafile backup piece is short, also limiting the amount of archive logs necessary.

  • RESTORE_UNTIL_SCN or RESTORE_UNTIL_TIME - I am grouping these 2 choices together because they are so similar.  Unlike using a preset restore point, either of these options will create the KEEP archival backup with a recovery point at the SCN given or the UNTIL TIME given (using the database's timezone).


FROM_TAG - The documentation states that only backups containing the FROM_TAG will be considered if a FROM_TAG is set. I am thinking this would make sense if you let the restore point default to the current time and you want to choose which backup pieces to include.  I am not sure of the full use of this option, however.
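As a concrete sketch of the RESTORE_POINT approach (the restore point name is illustrative, following the naming convention above), the sequence on the protected database immediately after the daily incremental backup could look like this:

SQL> create restore point KEEP_BACKUP_060123;

SQL> alter system archive log current;

RMAN> backup device type sbt archivelog all not backed up;

The log switch and log sweep ensure that the archive logs needed to recover to the restore point have been sent to the ZDLRA before the archival backup is created.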


WARNING: This process only looks back 14 days for a full backup to start the KEEP backupset with.  If you do not have a full backup within the 14-day window, this can be overridden with the MAX_REDO_TO_APPLY parameter in the package call, which was added in the 21.1 June PSU to allow customers to set a window farther back than 14 days.

 RECOMMENDATIONS 

  •  Because you can create up to 2048 RESTORE_POINTs in a database, and normal restore points are automatically dropped when necessary, I recommend creating a restore point following each incremental backup using the format mentioned above. This will allow you to create a self-contained FULL KEEP backup from any incremental backup as needed, which can be used to easily create an end-of-month KEEP backup (for example).

 

  • I would use the RESTORE_UNTIL options when it is necessary to create a KEEP backup as of a specific point-in-time, regardless of when the backup completed. This would be used when the recovery point is critical.

WARNING

Before creating the archival backup, ensure you have the archive logs backed up that are needed to support the recovery point, and ensure there is enough time for the incremental backups to virtualize.  You may need to perform a log switch and execute an additional log sweep prior to scheduling the archival backup.

3. Set Archival Options


COMPRESSION_ALGORITHM - The default is no compression, and if the backup piece is already compressed, the ZDLRA will not try to compress the backup again.  The documentation does a good job of going through the options, and why you would choose one or the other.  Keep in mind that if your database uses TDE for all the datafiles, there will be no gain from compression, and the extra resources required for compression may slow down the restore.  Also, the compression is performed by the ZDLRA (RMAN compression), but the de-compression is performed by the protected database during restore.

 ENCRYPTION_ALGORITHM - The default is no encryption, but it is important to understand that any copy-to-cloud processing MUST have encryption set.  It is also important to understand that the ZDLRA must be using OKV (Oracle Key Vault) to store the encryption keys when encryption is set. The list of algorithms can be found in the documentation.

 

4. Set Archival Location and Name

ATTRIBUTE_SET_NAME - This must be specified; it identifies the backup location to which the archival backups are sent.

FORMAT - By default the backup pieces are given handles automatically generated by the ZDLRA; this setting allows you to change the default backup piece format using normal RMAN formatting options.

AUTOBACKUP_PREFIX - By default the autobackup pieces will retain the original names, but you can add a prefix to the original autobackup names.

 

5. Set Restore TAG

 By default the RESTORE_TAG is "KEEP_BACKUP_<yyyyMMddHH24miSS>". This can be overridden to give the backup a more meaningful tag. For example, the end-of-month backup could be tagged as "MONTHLY_12_2023", making it easier to automate finding specific KEEP backups.

 RECOMMENDATIONS 

I would set the Restore Tag to a set format that makes the KEEP backups easy to find. You can see the example above. 

6. Set KEEP_UNTIL time

The default KEEP_UNTIL time is "FOREVER". In most cases you want to set an end date for the backup, allowing the ZDLRA to automatically remove the backup when it expires.  This date-time is based on the timezone of the protected database. 
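Putting the steps together, here is a minimal sketch of a call, run as RASYS on the ZDLRA. The database name, restore point, attribute set, and tag are all illustrative values for the options discussed above:

begin
  dbms_ra.create_archival_backup(
    db_unique_name     => 'MYDB_PROD',           -- illustrative database name
    restore_point      => 'KEEP_BACKUP_060123',  -- restore point set after the incremental backup
    attribute_set_name => 'MYCLOUD_LOCATION_2',  -- assumed attribute set for the cloud/tape location
    restore_tag        => 'MONTHLY_06_2023',     -- formatted tag to make the backup easy to find
    keep_until_time    => systimestamp + interval '60' day,
    comments           => 'End of month KEEP backup');
end;
/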



 SUMMARY 

 If using this functionality to dynamically create Archival KEEP backups...

  • I would set a Restore Point in each database immediately following every incremental backup.  
  • I would schedule this procedure to create the archival backup with a formatted restore tag to make the backup easy to find.
  • If backing up to a CLOUD location, I would use retention rules to ensure the backups are immutable until they expire.

 

 

Tuesday, August 8, 2023

How to clone a single PDB onto another DB host.

Cloning a single PDB isn't always easy to do, especially if you are trying to use an existing backup rather than copying from an existing database.  In this blog post I will walk through how to restore a PDB from an existing multi-tenant backup to another host, and plug it into another CDB.




My environment is:

DBCS database FASTDB

        db_name            = fastdb
        db_unique_name     = fastdb_67s_iad
        DB Version         = 19.19
        TDE                = Using local wallet
        Backup             = Object Storage using the Tooling
        RMAN catalog       = Using RMAN catalog to emulate ZDLRA
        PDB name           = fastdb_pdb1


Step #1 - Prepare destination

The first step is to copy over all the necessary pieces for restoring the database using the object store library.

  • TDE wallet
  • Tape Library
  • Tape Library config file
  • SEPS wallet used by backup connection
  • SPFILE contents to build a pfile
NOTE: When using a ZDLRA as a source you need to copy over the following pieces.
  • TDE wallet
  • ZDLRA library (or use the library in the $ORACLE_HOME)
  • SEPS wallet used by the channel allocation to connect to the ZDLRA
  • SPFILE contents to build a pfile

Also create any directories needed (like audit file location).
  • mkdir /u01/app/oracle/admin/fastdb_67s_iad/adump
I added the entry to the /etc/oratab file and changed my environment to point to this database name.
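For reference, a sketch of what that looks like (the ORACLE_HOME path is an assumption for this environment):

fastdb:/u01/app/oracle/product/19.0.0.0/dbhome_1:N

$ export ORACLE_SID=fastdb
$ . oraenv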

In my case I copied the following directories and subdirectories to the same destination on the host.
  • scp /opt/oracle/dcs/commonstore/wallets/fastdb_67s_iad/*
  • scp /opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/*
Finally, I copied some of the contents in the spfile.  Below are the critical entries.
audit_file_dest='/u01/app/oracle/admin/fastdb_67s_iad/adump'
*.compatible='19.0.0.0'
*.control_files='+RECO/FASTDB_67S_IAD/CONTROLFILE/current.256.1143303659'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain='subnet.vcn.oraclevcn.com'
*.db_files=1024
*.db_name='fastdb'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=8191g
*.db_unique_name='fastdb_67s_iad'
*.diagnostic_dest='/u01/app/oracle'
*.enable_pluggable_database=true
*.global_names=TRUE
*.log_archive_format='%t_%s_%r.dbf'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.processes=4000
*.sga_target=4g
*.tde_configuration='keystore_configuration=FILE'
*.undo_retention=900
*.undo_tablespace='UNDOTBS1'
*.wallet_root='/opt/oracle/dcs/commonstore/wallets/fastdb_67s_iad'




Step #2 - Restore controlfile

The next step is to restore the controlfile to my destination host.

I grabbed 2 pieces of information from the source database
  • DBID  - This is needed to restore the controlfile from the backup.
  • Channel configuration.
With this I executed the following to restore the controlfile.

startup nomount;
set dbid=1292000107;

 run
 {
 allocate CHANNEL sbt1 DEVICE TYPE  'SBT_TAPE' FORMAT   '%d_%I_%U_%T_%t' PARMS  'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
 restore controlfile ;
 }

and below is my output.

RMAN>  run
 {
 allocate CHANNEL sbt1 DEVICE TYPE  'SBT_TAPE' FORMAT   '%d_%I_%U_%T_%t' PARMS  'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
 restore controlfile ;
 }2> 3> 4> 5>

allocated channel: sbt1
channel sbt1: SID=1513 device type=SBT_TAPE
channel sbt1: Oracle Database Backup Service Library VER=19.0.0.1

Starting restore at 08-AUG-23

channel sbt1: starting datafile backup set restore
channel sbt1: restoring control file
channel sbt1: reading from backup piece c-1292000107-20230808-04
channel sbt1: piece handle=c-1292000107-20230808-04 tag=TAG20230808T122731
channel sbt1: restored backup piece 1
channel sbt1: restore complete, elapsed time: 00:00:01
output file name=+RECO/FASTDB_67S_IAD/CONTROLFILE/current.2393.1144350823
Finished restore at 08-AUG-23


Step #3 - Restore Datafiles for CDB and my PDB

Below are the commands I am going to execute to restore the datafiles for my CDB, my PDB, and the PDB$SEED.

First I'm going to mount the database, and I am going to spool the output to a logfile.



alter database mount;

SPOOL LOG TO '/tmp/restore.log';
set echo on;

run { 
            restore database root ;
            restore database FASTDB_PDB1;
            restore database "PDB$SEED";
     }

I went through the output, and I can see that it only restored the CDB, my PDB, and the PDB$SEED.


Step #4 - Execute report schema and review file locations


List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1040     SYSTEM               YES     +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313
3    970      SYSAUX               NO      +DATA/FASTDB_67S_IAD/DATAFILE/sysaux.284.1144351305
4    95       UNDOTBS1             YES     +DATA/FASTDB_67S_IAD/DATAFILE/undotbs1.280.1144351303
5    410      PDB$SEED:SYSTEM      NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/system.264.1143303695
6    390      PDB$SEED:SYSAUX      NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/sysaux.265.1143303695
7    50       PDB$SEED:UNDOTBS1    NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/undotbs1.266.1143303695
8    410      FASTDB_PDB1:SYSTEM   YES     +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/system.291.1144351333
9    410      FASTDB_PDB1:SYSAUX   NO      +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/sysaux.292.1144351331
10   70       FASTDB_PDB1:UNDOTBS1 YES     +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/undotbs1.281.1144351329
11   5        USERS                NO      +DATA/FASTDB_67S_IAD/DATAFILE/users.285.1144351303
12   5        FASTDB_PDB1:USERS    NO      +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/users.295.1144351329
13   420      RMANPDB:SYSTEM       YES     +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/system.285.1143999311
14   420      RMANPDB:SYSAUX       NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/sysaux.282.1143999317
15   50       RMANPDB:UNDOTBS1     YES     +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/undotbs1.281.1143999323
16   5        RMANPDB:USERS        NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/users.284.1143999309
17   100      RMANPDB:RMANDATA     NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/rmandata.280.1144001911

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +DATA/FASTDB_67S_IAD/TEMPFILE/temp.263.1143304005
2    131      PDB$SEED:TEMP        32767       +DATA/FASTDB_67S_IAD/017B5DDEB84167ACE063A100000AD816/TEMPFILE/temp.267.1143303733
4    224      FASTDB_PDB1:TEMP     4095        +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/TEMPFILE/temp.272.1143304235
6    224      RMANPDB:TEMP         4095        +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/TEMPFILE/temp.283.1143999305





Step #5 - Determine tablespaces to skip during recovery

I ran this query on my primary database and used the output to build the RMAN command. The query returns the names of the tablespaces that are not part of my PDB so that I can skip them.



select '''' ||pdb_name||''':'||tablespace_name ||',' 
    from cdb_tablespaces a,
         dba_pdbs b
         where a.con_id=b.con_id(+)
         and b.pdb_name not in ('FASTDB_PDB1')
order by 1;

From the above, I built the script below that skips the tablespaces for the PDB "RMANPDB".



recover database skip forever tablespace 
'RMANPDB':RMANDATA,
'RMANPDB':SYSAUX,
'RMANPDB':SYSTEM,
'RMANPDB':TEMP,
'RMANPDB':UNDOTBS1,
'RMANPDB':USERS;
I then ran my RMAN script to recover the datafiles that were restored.
NOTE: the datafiles for my second PDB were "offline dropped".


Starting recover at 08-AUG-23
RMAN-06908: warning: operation will not run in parallel on the allocated channels
RMAN-06909: warning: parallelism require Enterprise Edition
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3771 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=4523 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=19.0.0.1
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313


...

Executing: alter database datafile 13, 14, 15, 16, 17 offline drop
starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=26
channel ORA_SBT_TAPE_1: reading from backup piece FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447
channel ORA_SBT_TAPE_1: piece handle=FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447 tag=TAG20230808T122727
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 thread=1 sequence=26
channel default: deleting archived log(s)
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 RECID=1 STAMP=1144352806
media recovery complete, elapsed time: 00:00:01
Finished recover at 08-AUG-23


Step #6 - Open database 

I opened the database and the PDB
SQL> alter database open;

Database altered.


SQL> alter pluggable database fastdb_pdb1 open;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FASTDB_PDB1                    READ ONLY  NO
         4 RMANPDB                        MOUNTED


I also went and updated my init{sid}.ora to point to the controlfile that I restored.
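That is, the control_files entry in the pfile now points to the name reported when the controlfile was restored in step #2:

*.control_files='+RECO/FASTDB_67S_IAD/CONTROLFILE/current.2393.1144350823'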


Step #7 - Create a shell PDB in the tooling

I created a new placeholder PDB with the same name as the PDB I am going to plug in.  This step is optional.




Step #8 - Switch my restored database to be a primary database

I found that the restored database was considered a standby database, and I needed to make it a primary in order to unplug my PDB.



SQL> RECOVER MANAGED STANDBY DATABASE FINISH;
Media recovery complete.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL> alter database commit to switchover to primary with session shutdown;



Database altered.


Step #9 - Unplug my PDB

I opened the database and unplugged my PDB.

SQL> alter database open;

Database altered.

SQL> alter pluggable database fastdb_pdb1 unplug into '/tmp/fastdb_pdb1.xml' ENCRYPT USING transport_secret;


Pluggable database altered.

SQL> drop pluggable database fastdb_pdb1 keep datafiles;

Pluggable database dropped.



Step #10 - Drop the placeholder PDB from the new CDB

Now I am unplugging and dropping the placeholder PDB.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 LAST21C_PDB1                   READ WRITE NO
         4 CLONED_FASTDB                  READ WRITE NO
SQL> alter pluggable database CLONED_FASTDB close;

Pluggable database altered.


SQL> alter pluggable database CLONED_FASTDB unplug into '/tmp/CLONED_FASTDB.xml' ENCRYPT USING transport_secret;

Pluggable database altered.

SQL> drop pluggable database CLONED_FASTDB keep datafiles;

Pluggable database dropped.



Step #11 - Plug in the PDB and open it up




SQL> create pluggable database CLONED_FASTDB USING '/tmp/fastdb_pdb1.xml' keystore identified by W3lCom3#123#123 decrypt using transport_secret
  2  NOCOPY
  3  TEMPFILE REUSE;

Pluggable database created.

SQL> alter pluggable database cloned_fastdb open;



That's it.  It took a bit to track down the instructions, but this all seemed to work.


Step #12 - Clone the PDB to ensure that the tooling worked

I next cloned the PDB to make sure the tooling properly recognized my PDB, and it all worked fine. You can see that I now have a second copy of the PDB (test_clone).




Wednesday, August 2, 2023

ZDLRA - Copy-to-cloud step by step explained

 One of the best features of the ZDLRA is the ability to dynamically create a full KEEP backup and send it to the cloud (ZFSSA or OCI) for archival storage.

Here is a great article by Oracle Product Manager Marco Calmasini that explains how to use this feature.



In this blog post, I will go through the RACLI steps that you execute, and explain what is happening with each step.

The documentation I started with is the 21.1 Administrator's Guide, which can be found here.  If you are on a more current release, then you can find the steps in the chapter named "Archiving Backups to Cloud".


Deploying the OKV Client Software

To ensure that all the backup pieces are encrypted, you must use OKV (Oracle Key Vault) to manage the encryption keys being used by the ZDLRA.  Even if you are using TDE for the datafiles, the copy-to-cloud process encrypts ALL backup pieces, including the backups of the controlfile and spfile, which aren't already encrypted.

I am not going to go through the detailed steps that are in the documentation to configure OKV, but I will just go through the high level processes.

The most important items to note in this section are

  • Both nodes of the ZDLRA are added as endpoints, and they should have a descriptive name that identifies them, and ties them together.
  • A new endpoint group should be created with a descriptive name, and both nodes should be added to the new endpoint group.
  • A new virtual wallet is created with a descriptive name, and this needs to both be associated with the 2 endpoints and be the default wallet for the endpoints.
  • Both endpoints of the ZDLRA are enrolled through OKV and during the enrollment process a unique enrollment token file is created for each node. It is best to immediately rename the files to identify the endpoint it is associated with using the format <myhost>-okvclient.jar.
  • Copy the enrollment token files to the /radump directory on the appropriate host.
NOTE: It is critical that you follow these directions exactly, and that each node has the appropriate enrollment token with the appropriate name before continuing.

#1 Add credential_wallet

racli add credential_wallet


Fri Jan 1 08:56:27 2018: Start: Add Credential Wallet
Enter New Keystore Password: <OKV_endpoint_password>
Confirm New Keystore Password:
Enter New Wallet Password: <ZDLRA_credential_wallet_password> 
Confirm New Wallet Password:
Re-Enter New Wallet Password:
Fri Jan 1 08:56:40 2018: End: Add Credential Wallet

The first step to configure the ZDLRA to talk to OKV is to have the ZDLRA create a password protected SEPS wallet file that contains the OKV password.
This step asks for 2 new passwords when executing
  1. New Keystore Password - This password is the OKV endpoint password.  This password is used to communicate with OKV by the database, and can be used with okvutil to interact with OKV directly
  2. New Wallet Password - This password is used to protect the wallet file itself that will contain the OKV keystore password.
This password file is shared across both nodes.

Update contents      -  "racli add credential"
Change password    - "racli alter credential_wallet"

#2 Add keystore

racli add keystore --type hsm --restart_db

RecoveryAppliance/log/racli.log
Fri Jan 1 08:57:03 2018: Start: Configure Wallets
Fri Jan 1 08:57:04 2018: End: Configure Wallets
Fri Jan 1 08:57:04 2018: Start: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: End: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: Start: Start Listeners, and Database
Fri Jan 1 09:02:16 2018: End: Start Listeners, and Database

The second step to configure the ZDLRA to talk to OKV is to have the ZDLRA database be configured to communicate with OKV. The Database on the ZDLRA will be configured to use the OKV wallet for encryption keys which requires a bounce of the database.  


Backout         - "racli remove keystore" 
Status            - "racli status keystore"
Update          - "racli alter keystore"
Disable          - "racli disable keystore"
Enable            - "racli enable keystore"

#3 Install okv_endpoint (OKV client software)

racli install okv_endpoint

Wed August 23 20:14:40 2018: Start: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: End: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: Start: Install OKV End Point [node02]
Wed August 23 20:14:45 2018: End: Install OKV End Point [node02]

The third step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes (OKV endpoints) enrolled in OKV.  This step will install the OKV software on both nodes of the ZDLRA, and complete the enrollment of the 2 ZDLRA nodes with OKV.  The password that was entered in step #1 for OKV is used during the enrollment process.

Status            - "racli status okv_endpoint"

NOTE: At the end of this step, the status command should return a status of online from both nodes.

Node: node02
Endpoint: Online
Node: node01
Endpoint: Online

#4 Open the Keystore

racli enable keystore

The fourth step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes open the encryption wallet in the database. This step will use the saved passwords from step #1 and open up the encryption wallet.

NOTE: This will need to be executed after any restarts of the database on the ZDLRA.

#5 Create a TDE master key for the ZDLRA in the Keystore

racli alter keystore --initialize_key

The final step to configure the ZDLRA to talk to OKV is to have the ZDLRA create the master encryption key for the ZDLRA in the wallet.

Creating Cloud Objects for Copy-to-Cloud

These steps create the cloud objects necessary to send backups to a cloud location.

NOTE: If you are configuring multiple cloud locations, you may go through these steps for each location.

Configure public/private key credentials

Authentication with the object storage is done using an X.509 certificate.  The ZDLRA steps outlined in the documentation will generate a new pair of API signing keys and register the new set of keys.
You can also use any set of API keys that you previously generated by putting your private key in the shared location on the ZDLRA nodes.
In OCI each user can only have 3 sets of API keys, but the ZFSSA has no restrictions on the number of API signing keys that can be created.
Each "cloud_key" represents an API signing key pair, and each cloud_key contains 
  1. pvt_key_path - Shared location on the ZDLRA where the private key is located
  2. fingerprint      - fingerprint associated with the private key to identify which key to use.
You can use the same "cloud_key" to authenticate to multiple buckets, and even different cloud locations.

Documentation steps to create new key pair

#1 Add Cloud_key


racli add cloud_key --key_name=sample_key

Tue Jun 18 13:22:07 2019: Using log file /opt/oracle.RecoveryAppliance/log/racli.log
Tue Jun 18 13:22:07 2019: Start: Add Cloud Key sample_key
Tue Jun 18 13:22:08 2019: Start: Creating New Keys
Tue Jun 18 13:22:08 2019: Oracle Database Cloud Backup Module Install Tool, build 19.3.0.0.0DBBKPCSBP_2019-06-13
Tue Jun 18 13:22:08 2019: OCI API signing keys are created:
Tue Jun 18 13:22:08 2019:   PRIVATE KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pvt
Tue Jun 18 13:22:08 2019:   PUBLIC  KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pub
Tue Jun 18 13:22:08 2019: Please upload the public key in the OCI console.
Tue Jun 18 13:22:08 2019: End: Creating New Keys
Tue Jun 18 13:22:09 2019: End: Add Cloud Key sample_key

This step is used to generate a new set of API signing keys.
The output of this step is a shared set of files on the ZDLRA which are stored in:
/raacfs/raadmin/cloud/key/{key_name}/

In order to complete the cloud_key information, you need to add the public key to OCI, or to the ZFSSA, and save the fingerprint that is associated with the public key. The fingerprint is used in the next step.
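If you want to double-check the fingerprint yourself before entering it, the standard OCI technique works against the generated public key (a sketch; the key path comes from the previous step):

openssl rsa -pubin -inform PEM -in /raacfs/raadmin/cloud/key/sample_key/oci_pub -outform DER | openssl md5 -c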

#2 racli alter cloud_key


racli alter cloud_key --key_name=sample_key --fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef

The fingerprint that is associated with the public key (from the previous step) is added to the ZDLRA cloud_key information so that it can be used for authentication.  
Both the private key and the fingerprint are needed to use the API signing key for credentials.

Using your own API signing key pair

#1 Add cloud_key

racli add cloud_key --key_name=KEY_NAME [--fingerprint=PUBFINGERPRINT --pvt_key_path=PVTKEYFILE]

You can add your own API signing keys to the ZDLRA by using the "add cloud_key" command, identifying both the private key file location (it is best to follow the format and location used by the automated steps) and the fingerprint associated with the API signing keys.
It is assumed that the public key has already been added to OCI, or to the ZFSSA.

Status        - racli list cloud_key
Delete        - racli remove cloud_key
Update       - racli alter cloud_key

Documentation steps to create a new cloud_user 

This step is used to create the wallet entry on the ZDLRA that is used for authenticating to the object store.
This step combines the "cloud_key", which contains the API signing keys, with the user login information and the compartment (on ZFSSA the compartment is the share).
The cloud_user can be used for authentication with multiple buckets/locations that are identified as cloud_locations as long as they are within the same compartment (share on ZFSSA).

The format of the command to create a new cloud_user is below

racli add cloud_user 
--user_name=sample_user
--key_name=sample_key
--user_ocid=ocid1.user.oc1..abcedfghijklmnopqrstuvwxyz0124567901
--tenancy_ocid=ocid1.tenancy.oc1..abcedfghijklmnopqrstuvwxyz0124567902
--compartment_ocid=ocid1.compartment.oc1..abcedfghijklmnopqrstuvwxyz0124567903

The parameters for this command are

  • user_name        - This is the username associated with the cloud_user to uniquely identify it.
  • key_name         - This is the name of the "cloud_key" identifying the API signing keys to be used.
  • user_ocid          - This is the username for authentication. In OCI this is the user's OCID; on ZFSSA, it combines the OCID format with the username on the ZFSSA that owns the share.
  • tenancy_ocid    - This is the tenancy OCID in OCI; on ZFSSA it is ignored.
  • compartment_ocid - This is the compartment OCID in OCI; on ZFSSA it is the share.
For more information on configuring the ZFSSA see
How to configure Zero Data Loss Recovery Appliance to use ZFS OCI Object Storage as a cloud repository (Doc ID 2761114.1)


List                - racli list  cloud_user
Delete            - racli remove  cloud_user
Update           - racli alter cloud_user

Documentation steps to create a new cloud_location 

This step is used to associate the cloud_user (used for authentication) with both the location and the bucket that is going to be used for backups.

racli add cloud_location
--cloud_user=<CLOUD_USER_NAME>
--host=https://<OPC_STORAGE_LOCATION>
--bucket=<OCI_BUCKET_NAME>
--proxy_port=<HOST_PORT>
--proxy_host=<PROXY_URL>
--proxy_id=<PROXY_ID>
--proxy_pass=<PROXY_PASS>
--streams=<NUM_STREAMS>
[--enable_archive=TRUE]
--archive_after_backup=<number>:[YEARS | DAYS]
[--retain_after_restore=<number_hours>:HOURS]
--import_all_trustcert=<X509_CERT_PATH>
--immutable
--temp_metadata_bucket=<metadata_bucket>  


 

I am going to go through the key items that need to be entered here.  I am going to skip over the PROXY information and certificate.

  • cloud_user - This is the object store authentication information that was created in the previous steps.
  • host - This is the URL for the object storage location. On ZFSSA the namespace in the URL is the "share".
  • bucket - This is the bucket where the backups will be sent. The bucket will be created if it doesn't exist.
  • streams - The maximum number of channels to use when sending backups to the cloud.
  • enable_archive - Not used with ZFSSA. With OCI the default TRUE allows you to set an archival strategy; FALSE will automatically put backups in archival storage.
  • archive_after_backup - Not used with ZFSSA. Automatically configures an archival strategy in OCI.
  • retain_after_restore - Not used with ZFSSA. Sets the period of time that backups will remain in standard storage before returning to archival storage.
  • immutable - This allows you to set retention rules on the bucket; temporary files that need to be deleted after the backup are written to the <metadata_bucket> instead. When using immutable you must also specify a temp_metadata_bucket.
  • temp_metadata_bucket - This is used with immutable to configure backups to go to 2 buckets; this bucket will only contain a temporary object that gets deleted after the backup completes.
This command will create multiple attribute sets (between 1 and the number of streams) for the cloud_location that can be used for sending archival backups to the cloud with different numbers of channels.
The format of <copy_cloud_name> is a combination of  <bucket name> and <cloud_user>.
The format of the attributes used for the copy jobs is <Cloud_location_name>_<stream number>
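As an illustration, a minimal OCI invocation might look like this (all values are examples; the host, bucket, and streams depend on your tenancy):

racli add cloud_location --cloud_user=sample_user --host=https://objectstorage.us-ashburn-1.oraclecloud.com --bucket=zdlra_archive --streams=4

With streams=4, this would create attribute sets named following the format above (for example ZDLRA_ARCHIVE_SAMPLE_USER_1 through ZDLRA_ARCHIVE_SAMPLE_USER_4) that archival copy jobs can reference.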


Update          - racli alter cloud_location
Disable          - racli disable cloud_location  - This will pause all backups going to this location
Enable           - racli enable cloud_location  - This unpauses all backups going to this location
List                - racli list  cloud_location
Delete            - racli remove cloud_location

NOTE: There are quite a few items to note in this section.
  • When configuring backups to go to ZFSSA use the documentation previously mentioned to ensure the parameters are correct.
  • When executing this step with ZFSSA, make sure that the default OCI location on the ZFSSA is set to the share that you are currently configuring. If you are using multiple shares for buckets, then you will have to change the ZFSSA settings as you add cloud locations.
  • When using OCI for archival, ensure that you configure the archival rules using this command. This ensures that the metadata objects, which can't be archived, are excluded as part of the lifecycle management rules created during this step.


Create the job template using the documentation.
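The job template itself is created with the DBMS_RA package; here is a rough sketch (the template, policy, and attribute set names are illustrative, and you should confirm the full parameter list against your release's documentation):

begin
  dbms_ra.create_sbt_job_template(
    template_name          => 'GOLD_CLOUD_ARCH',              -- illustrative template name
    protection_policy_name => 'GOLD',                         -- illustrative policy
    attribute_set_name     => 'ZDLRA_ARCHIVE_SAMPLE_USER_1',  -- attribute set from the cloud_location step
    backup_type            => 'ALL');
end;
/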


Monday, July 24, 2023

RMAN - Create weekly archival backup from weekly full backups

 This blog post demonstrates a process to create KEEP archival backups dynamically by using backup pieces within a weekly full/daily incremental backup strategy.

Thanks to Battula Surya Shiva Prasad and Kameswara RaoIndrakanti for coming up with this process.


KEEP backups

First let's go through what keep a keep backup is and how it affects your backup strategy.

  1. A KEEP backup is a self-contained backupset.  The archive logs needed to de-fuzzy the database files are automatically included in the backupset (see the sketch after this list).
  2. The archive logs included in the backup are only the archive logs needed to de-fuzzy.
  3. The backup pieces in the KEEP backup (both datafile backups and included archive log pieces) are ignored in the normal incremental backup strategy, and in any log sweeps.
  4. When a recovery window is set in RMAN, KEEP backup pieces are ignored in any "Delete Obsolete" processing.
  5. KEEP backup pieces, once past the "until time", are removed using the "Delete expired" command.
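For reference, item 1 in practice: a traditional KEEP backup taken from the protected database looks something like this (the tag and restore point names are illustrative), and RMAN automatically appends the archive log backups needed to de-fuzzy:

RMAN> backup database tag 'KEEP_FULL_JUN23' keep until time 'sysdate+60' restore point EOM_JUN23;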

Normal process to create an archival KEEP backup.

  • Perform a weekly full backup and daily incremental backups that are deleted using an RMAN recovery window.
  • Perform archive log backups with the full/incremental backups along with log sweeps. These are also deleted using the RMAN recovery window.
  • One of these processes is used to create an archival KEEP backup:
    • A separate full KEEP backup is performed along with the normal weekly full backup, or
    • The weekly full backup (and archive logs based on tag) are copied to tape with "backup as backupset" and marked as "KEEP" backup pieces.

Issues with this process

  • The process of copying the full backup to tape using "backup as backupset" requires 2 copies of the same backup for a period of time.  You don't want to wait until the end of retention to copy it to tape.
  • If the KEEP full backups are stored on disk along with the weekly full backups, you cannot use "backup as backupset"; you must perform a second, separate backup.

Proposal to create a weekly KEEP backup

Problems with simple solution

The basic idea is that you perform a weekly full backup, along with daily incremental backups that are kept for 30 days. After the 30 day retention, just the full backups (along with archive logs to defuzzy) are kept for an additional 30 days.

The most obvious way to do this is to

  •  Set the RMAN retention to 30 days.
  • Create a weekly full backup that is a KEEP backup with an until time of 60 days in the future.
  • Create a daily incremental backup that is NOT a KEEP backup.
  • Create archive log backups as normal.
  • Allow delete obsolete to remove the "non-KEEP" backups after 30 days.

Unfortunately, when you create an incremental backup and there are only KEEP backups preceding it, the incremental level 1 backup is forced into an incremental level 0 backup.  And with delete obsolete, if you look through MOS note "RMAN Archival (KEEP) backups and Retention Policy (Doc ID 986382.1)" you find that the incremental backups and archive logs are kept for 60 days because there is no preceding non-KEEP backup.


Solution

The solution is to use tags, mark the weekly full as a KEEP backup after a week, and use the "delete backup completed before ... tag='xx'" command.

Weekly full backup scripts

run
{
   backup archivelog all filesperset=20  tag ARCHIVE_ONLY delete input;
   change backup tag='INC_LEVEL_0'  keep until time 'sysdate+53';
   backup incremental level 0 database tag='INC_LEVEL_0' filesperset=20  plus archivelog filesperset=20 tag='INC_LEVEL_0';

  delete backup completed before 'sysdate-61' tag= 'INC_LEVEL_0';
  delete backup completed before 'sysdate-31' tag= 'INC_LEVEL_1';
  delete backup completed before 'sysdate-31' tag= 'ARCHIVE_ONLY';
}

Daily Incremental backup scripts

run
{
  backup incremental level 1 database tag='INC_LEVEL_1'  filesperset=20 plus archivelog filesperset=20 tag='INC_LEVEL_1';
}

Archive log sweep backup scripts

run
{
  backup archivelog all tag='ARCHIVE_ONLY' delete input;
}


Example

I then took these scripts, and built an example using a 7 day recovery window.  My full backup commands are below.
run
{
   backup archivelog all filesperset=20  tag ARCHIVE_ONLY delete input;
   change backup tag='INC_LEVEL_0'  keep until time 'sysdate+30';
   backup incremental level 0 database tag='INC_LEVEL_0' filesperset=20  plus archivelog filesperset=20 tag='INC_LEVEL_0';

  delete backup completed before 'sysdate-30' tag= 'INC_LEVEL_0';
  delete backup completed before 'sysdate-8' tag= 'INC_LEVEL_1';
  delete backup completed before 'sysdate-8' tag= 'ARCHIVE_ONLY';
}


First I am going to perform a weekly backup and incremental backups for 7 days to see how the settings affect the backup pieces in RMAN.
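The listings below were pulled from the controlfile views; a query along these lines (a sketch joining the KEEP attributes from the backup set and the tag from the backup piece) produces them:

select d.file#, d.checkpoint_time, d.incremental_level, d.incremental_change#,
       d.checkpoint_change#, s.keep, s.keep_until, s.keep_options, p.tag
  from v$backup_datafile d, v$backup_set s, v$backup_piece p
 where d.set_stamp = s.set_stamp and d.set_count = s.set_count
   and s.set_stamp = p.set_stamp and s.set_count = p.set_count
 order by d.checkpoint_time;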

for Datafile #1.


 File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            NO                              INC_LEVEL_0
     3 06-02-23 00:00:03          1   3334337   3334513 INCR1           NO                              INC_LEVEL_1
     3 06-03-23 00:00:03          1   3334513   3334665 INCR1           NO                              INC_LEVEL_1
     3 06-04-23 00:00:03          1   3334665   3334805 INCR1           NO                              INC_LEVEL_1
     3 06-05-23 00:00:03          1   3334805   3334949 INCR1           NO                              INC_LEVEL_1
     3 06-06-23 00:00:03          1   3334949   3335094 INCR1           NO                              INC_LEVEL_1
     3 06-07-23 00:00:03          1   3335094   3335234 INCR1           NO                              INC_LEVEL_1

for  archive logs

Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      625    3333260   3334274 15-JUN-23         NO                              ARCHIVE_ONLY
      626    3334274   3334321 01-JUN-23         NO                              INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         NO                              INC_LEVEL_0
      628    3334375   3334440 01-JUN-23         NO                              ARCHIVE_ONLY
      629    3334440   3334490 01-JUN-23         NO                              INC_LEVEL_1
      630    3334490   3334545 02-JUN-23         NO                              INC_LEVEL_1
      631    3334545   3334584 02-JUN-23         NO                              ARCHIVE_ONLY
      632    3334584   3334633 02-JUN-23         NO                              INC_LEVEL_1
      633    3334633   3334695 03-JUN-23         NO                              INC_LEVEL_1
      634    3334695   3334733 03-JUN-23         NO                              ARCHIVE_ONLY
      635    3334733   3334782 03-JUN-23         NO                              INC_LEVEL_1
      636    3334782   3334839 04-JUN-23         NO                              INC_LEVEL_1
      637    3334839   3334876 04-JUN-23         NO                              ARCHIVE_ONLY
      638    3334876   3334926 04-JUN-23         NO                              INC_LEVEL_1
      639    3334926   3334984 05-JUN-23         NO                              INC_LEVEL_1
      640    3334984   3335023 05-JUN-23         NO                              ARCHIVE_ONLY
      641    3335023   3335072 05-JUN-23         NO                              INC_LEVEL_1
      642    3335072   3335124 06-JUN-23         NO                              INC_LEVEL_1
      643    3335124   3335162 06-JUN-23         NO                              ARCHIVE_ONLY
      644    3335162   3335211 06-JUN-23         NO                              INC_LEVEL_1
      645    3335211   3335273 07-JUN-23         NO                              INC_LEVEL_1
      646    3335273   3335311 07-JUN-23         NO                              ARCHIVE_ONLY


Next I'm going to execute the weekly full backup script that changes the last backup to a keep backup to see how the settings affect the backup pieces in RMAN.

for Datafile #1.
 File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-02-23 00:00:03          1   3334337   3334513 INCR1           NO                              INC_LEVEL_1
     3 06-03-23 00:00:03          1   3334513   3334665 INCR1           NO                              INC_LEVEL_1
     3 06-04-23 00:00:03          1   3334665   3334805 INCR1           NO                              INC_LEVEL_1
     3 06-05-23 00:00:03          1   3334805   3334949 INCR1           NO                              INC_LEVEL_1
     3 06-06-23 00:00:03          1   3334949   3335094 INCR1           NO                              INC_LEVEL_1
     3 06-07-23 00:00:03          1   3335094   3335234 INCR1           NO                              INC_LEVEL_1
     3 06-08-23 00:00:07          0         0   3335715 FULL            NO                              INC_LEVEL_0


for archive logs


Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      625    3333260   3334274 15-JUN-23         NO                              ARCHIVE_ONLY
      626    3334274   3334321 01-JUN-23         YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      628    3334375   3334440 01-JUN-23         NO                              ARCHIVE_ONLY
      629    3334440   3334490 01-JUN-23         NO                              INC_LEVEL_1
      630    3334490   3334545 02-JUN-23         NO                              INC_LEVEL_1
      631    3334545   3334584 02-JUN-23         NO                              ARCHIVE_ONLY
      632    3334584   3334633 02-JUN-23         NO                              INC_LEVEL_1
      633    3334633   3334695 03-JUN-23         NO                              INC_LEVEL_1
      634    3334695   3334733 03-JUN-23         NO                              ARCHIVE_ONLY
      635    3334733   3334782 03-JUN-23         NO                              INC_LEVEL_1
      636    3334782   3334839 04-JUN-23         NO                              INC_LEVEL_1
      637    3334839   3334876 04-JUN-23         NO                              ARCHIVE_ONLY
      638    3334876   3334926 04-JUN-23         NO                              INC_LEVEL_1
      639    3334926   3334984 05-JUN-23         NO                              INC_LEVEL_1
      640    3334984   3335023 05-JUN-23         NO                              ARCHIVE_ONLY
      641    3335023   3335072 05-JUN-23         NO                              INC_LEVEL_1
      642    3335072   3335124 06-JUN-23         NO                              INC_LEVEL_1
      643    3335124   3335162 06-JUN-23         NO                              ARCHIVE_ONLY
      644    3335162   3335211 06-JUN-23         NO                              INC_LEVEL_1
      645    3335211   3335273 07-JUN-23         NO                              INC_LEVEL_1
      646    3335273   3335311 07-JUN-23         NO                              ARCHIVE_ONLY
      647    3335311   3335652 07-JUN-23         NO                              ARCHIVE_ONLY
      648    3335652   3335699 08-JUN-23         NO                              INC_LEVEL_0
      649    3335699   3335760 08-JUN-23         NO                              INC_LEVEL_0
      650    3335760   3335833 08-JUN-23         NO                              ARCHIVE_ONLY


Finally, I'm going to execute the weekly full backup script that changes the last backup to a KEEP backup; this time it will also delete the older backup pieces, so we can see how the settings affect the backup pieces in RMAN.

for Datafile #1.

File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-08-23 00:00:07          0         0   3335715 FULL            YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-09-23 00:00:03          1   3335715   3336009 INCR1           NO                              INC_LEVEL_1
     3 06-10-23 00:00:03          1   3336009   3336183 INCR1           NO                              INC_LEVEL_1
     3 06-11-23 00:00:03          1   3336183   3336330 INCR1           NO                              INC_LEVEL_1
     3 06-12-23 00:00:03          1   3336330   3336470 INCR1           NO                              INC_LEVEL_1
     3 06-13-23 00:00:03          1   3336470   3336617 INCR1           NO                              INC_LEVEL_1
     3 06-14-23 00:00:04          1   3336617   3336757 INCR1           NO                              INC_LEVEL_1
     3 06-15-23 00:00:07          0         0   3336969 FULL            NO                              INC_LEVEL_0



for archive logs

Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      626    3334274   3334321 01-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      647    3335311   3335652 07-JUN-23         NO                              ARCHIVE_ONLY
      648    3335652   3335699 08-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      649    3335699   3335760 08-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      650    3335760   3335833 08-JUN-23         NO                              ARCHIVE_ONLY
      651    3335833   3335986 08-JUN-23         NO                              INC_LEVEL_1
      652    3335986   3336065 09-JUN-23         NO                              INC_LEVEL_1
      653    3336065   3336111 09-JUN-23         NO                              ARCHIVE_ONLY
      654    3336111   3336160 09-JUN-23         NO                              INC_LEVEL_1
      655    3336160   3336219 10-JUN-23         NO                              INC_LEVEL_1
      656    3336219   3336258 10-JUN-23         NO                              ARCHIVE_ONLY
      657    3336258   3336307 10-JUN-23         NO                              INC_LEVEL_1
      658    3336307   3336359 11-JUN-23         NO                              INC_LEVEL_1
      659    3336359   3336397 11-JUN-23         NO                              ARCHIVE_ONLY
      660    3336397   3336447 11-JUN-23         NO                              INC_LEVEL_1
      661    3336447   3336506 12-JUN-23         NO                              INC_LEVEL_1
      662    3336506   3336544 12-JUN-23         NO                              ARCHIVE_ONLY
      663    3336544   3336594 12-JUN-23         NO                              INC_LEVEL_1
      664    3336594   3336639 13-JUN-23         NO                              INC_LEVEL_1
      665    3336639   3336677 13-JUN-23         NO                              ARCHIVE_ONLY
      666    3336677   3336734 13-JUN-23         NO                              INC_LEVEL_1
      667    3336734   3336819 14-JUN-23         NO                              INC_LEVEL_1
      668    3336819   3336857 14-JUN-23         NO                              ARCHIVE_ONLY
      669    3336857   3336906 14-JUN-23         NO                              ARCHIVE_ONLY
      670    3336906   3336953 15-JUN-23         NO                              INC_LEVEL_0
      671    3336953   3337041 15-JUN-23         NO                              INC_LEVEL_0
      672    3337041   3337113 15-JUN-23         NO                              ARCHIVE_ONLY


Result

For my datafiles, I still have the weekly full backup, and it is a KEEP backup. For my archive logs, I still have the archive logs that were part of the full backup, which are needed to de-fuzzy my backup.


Restore Test


Now for the final test, using the next chg# on the June 1st archive logs: 3334375.


RMAN> restore database until scn=3334375;

Starting restore at 15-JUN-23
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=259 device type=DISK
...
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/da1tiok6_1450_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1
...
channel ORA_DISK_1: reading from backup piece /u01/ocidb/backups/db1tiola_1451_1_1
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/db1tiola_1451_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1

RMAN> recover database until scn=3334375;
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=627
channel ORA_DISK_1: reading from backup piece /u01/ocidb/backups/dd1tiom8_1453_1_1
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/dd1tiom8_1453_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/product/19c/dbhome_1/dbs/arch1_627_1142178912.dbf thread=1 sequence=627
media recovery complete, elapsed time: 00:00:00
Finished recover at 15-JUN-23
RMAN> alter database open resetlogs;

Statement processed



Success!

Monday, June 5, 2023

Autotuned_reserved_space is a new feature on the ZDLRA that you should be using

 Autotuned_reserved_space is a new policy setting that was released with 21.1, and you should be using it. When I talk to customers about how to manage databases on a ZDLRA, the biggest confusion comes in when I talk about reserved space.  Reserved space needs to be understood and properly managed.  This new feature in 21.1 allows the ZDLRA to handle the reserved space for you, and I explain how to use it in this blog post.  First let's go through space usage, and reserved space in general.


Space usage on the ZDLRA. 


Recovery Window goal (which drives the space utilization)

The recovery window goal is set at the policy level, and this value (in days) is the number of days that you want to keep as a recovery window for all databases that are a member of this policy.  This will drive the space utilization.

Total space

The ZDLRA comes with all the space pre-allocated.  When you are looking at OEM, or in the SAR report you will see the total space listed. You want to make sure that you have enough space for your database backups and any incoming new backups.

Used Space

When the ZDLRA purges backups beyond the Recovery Window Goal that you set, it does a bulk purge of backups.  This can be controlled by setting the maximum disk backup retention in days (which defaults to 1.5 times the recovery window goal).  Because of the bulk purge, more space is shown as used than is needed to support your recovery window goal.

Recovery Window Space

This is the amount of space that is needed to support the recovery window goal.  Because of the bulk purge, the recovery window space is less than the used space.


Reserved space

In order to control what happens with space, the concept of reserved space is used.  When a database is added to the ZDLRA, the reserved space value is set for this database.  This value should be updated regularly to ensure that there is enough space for the database backups to be stored.

The important things to know about reserved space are:
  • The sum of all the reserved space cannot be greater than the total space available on the ZDLRA.
  • When adding a new database, its reserved space must fit within the unreserved space.
  • When a new database is added, the reserved space must be set to at least the size of the database, and defaults to 2.5 times the size of the database.
  • The reserved space for a database needs to be at least the size of the largest datafile.
  • The reserved space should be larger than the amount of space needed to support the recovery window goal space for the database.  For databases with fluctuating usage, you need to reserve space for the peak usage.
The reserved space serves two purposes when properly set
  1. It can be used to determine how much space is available for new database backups.
  2. If the ZDLRA determines that it does not have enough space to support the recovery window goal of the supported databases, space is reclaimed from databases whose reserved space is too small.
It is critical to keep the reserved space updated, and many customers have used an automated process to set the reserved space to "recovery window space needed" + 10%.

Unfortunately, configuring an automated process for all databases does not take into account any fluctuations in usage.  Let's say I have a database which is much busier at month's end: I want to make sure that its reserved space is not adjusted down to the low value; I want it to stay adjusted based on the highest space usage value.

Autotuned_reserved_space 


This is where autotuned reserved space can help you manage the reserved space.  This setting is controlled at the policy level.

AUTOTUNED_RESERVED_SPACE

This value is set at the protection policy level and contains either "YES" or "NO", defaulting to "NO". "YES" will allow the ZDLRA to automatically manage reserved space for all databases that are members of this policy and whose disk_reserved_space is not set.

MAX_RESERVED_SPACE


This value is also set at the protection policy level.  This value is optional for autotuned_reserved_space, but if set, it will control the maximum amount of reserved space that can be set for an individual database in the protection policy. 

AUTOTUNE_SPACE_LIMIT


This value is set at the storage level for ALL databases. This sets a reserved space usage limit at which autotuning slows down large reserved space increases. When the limit is reached, autotune will limit databases to increasing their reserved space by 10% per week.  This value is optional and will default to the total space if not set.


SUMMARY:

  • autotuned_reserved_space - Enables autotuning of space within a protection policy
  • max_reserved_space - Controls the maximum reserved space of databases in a protection policy
  • autotune_space_limit - Slows the reserved space growth when a specified space limit is reached.
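As a sketch of how these might be set (run as RASYS; the policy and database names are illustrative, and since the autotune parameters were added in 21.1 you should confirm the exact signatures in your release's documentation):

begin
  dbms_ra.update_protection_policy(
    protection_policy_name  => 'GOLD',   -- illustrative policy name
    autotune_reserved_space => 'YES',
    max_reserved_space      => '800G');  -- assumed size format
end;
/

begin
  -- a database you prefer to manage manually: setting disk_reserved_space opts it out of autotuning
  dbms_ra.update_db(
    db_unique_name => 'MYDB_PROD',       -- illustrative database name
    reserved_space => '2T');
end;
/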

What does autotune reserved space do?

  • On a regular basis, if needed, the reserved space for each autotune-controlled database is adjusted to reserve space for the recovery window goal and incoming backups.
  • If the database has a disk_reserved_space set, autotuning will not be used for this database.  It is assumed that the disk_reserved_space will be set manually for this database.

Autotune will replace the need for the ZDLRA admin to constantly update the reserved space for each database as its space needs change over time. It will also allow them to configure a constant reserved space for databases with fluctuating storage usage.