Monday, July 29, 2019

Block change tracking and the ZDLRA (part 1)

Oracle Block Change Tracking(BCT) and the ZDLRA



One of the most common questions I get when explaining how the ZDLRA works has to do with Block Change Tracking.

The question is usually something like...

"If the Block Change Tracking file only keeps 7 days of history by default, how does it work with the ZDLRA, where you only perform a full backup once?"

And the second question is

"If I only perform incremental cumulative backups, how does it use the BCT if it gets recreated/invalidated?"

How does the BCT work?

Let's walk through what block change tracking does at a high level.

If you want the details of the internals, Alex Gorbachev wrote a great paper, and there is a MOS note explaining it all: ORACLE 10G BLOCK CHANGE TRACKING INSIDE OUT (Doc ID 1528510.1).

I'm going to start with a simple example.
First I'm going to turn on block change tracking, and then check the File# for my database files.


SQL> alter database enable block change tracking using file '/home/oracle/app/oracle/oradata/BSG18/bct.dbf';

SQL> select tablespace_name, file_id from dba_data_files;

TABLESPACE_NAME                   FILE_ID
------------------------------ ----------
USERS                                   7
UNDOTBS1                                4
SYSTEM                                  1
SYSAUX                                  3




I am going to use 2 internal structures to walk through the BCT information.

X$KRCFH - This structure contains the low SCN of the BCT file. RMAN uses it to determine the beginning SCN of the BCT file: if no changes have been captured for a datafile, RMAN knows that the file is unchanged back to that SCN.

X$KRCFBH - This structure contains the versions of block changes available to use for backups. It contains a row for each datafile and each version, and each row points to a bitmap containing the block changes associated with that version. The key columns I am interested in to explain how it works are:

        FNO    - File number.
        VERCNT - Version number identifying the change records. This starts at 1
                 and increases over the life of the BCT file.
        LOW    - Low SCN of the block changes. It is 0 for the first record,
                 indicating that it is not complete.
        HIGH   - High SCN of the block changes. The last SCN identifying the
                 block changes.

I am going to look at the internal structure X$KRCFBH to view what's in the block change tracking file.

select (select tablespace_name from dba_data_files where file_id=fno) tablespace_name,vercnt,to_char(vertime,'mm/dd/yy hh24:mi:ss') vertime,low, high from x$krcfbh
   where fno in (select file_id from dba_data_files);
TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSTEM              1 07/29/19 09:21:17          0          0
SYSAUX              1 07/29/19 09:21:18          0          0
UNDOTBS1            1 07/29/19 09:21:18          0          0
USERS               1 07/29/19 09:21:18          0          0



I can see that there are entries for all my tablespaces, with a version count of 1 and low/high SCNs of 0. This tells me that no backups have been executed yet using the BCT.
I am also going to look at X$KRCFH to see what the starting SCN is for the BCT file.


select lowscn from X$KRCFH;

LOWSCN
----------
8361930


Now I am going to execute a full backup and see what happens.



TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSTEM              1 07/29/19 09:21:17          0    8362001
SYSAUX              1 07/29/19 09:21:18          0    8362001
UNDOTBS1            1 07/29/19 09:21:18          0    8362001
USERS               1 07/29/19 09:21:18          0    8362001




You can see that the high SCN is now set to 8362001.
Now I am going to look at the RMAN backup (USERS tablespace) to see what the Checkpoint SCN number was for the backup.

  File LV Type Ckp SCN    Ckp Time          Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- ----------------- ----------- ------ ----
  7    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf


Ahhh.. Now I can see how it fits together. The high SCN in the BCT file is the SCN right before the checkpoint taken with the backup.

Now let's execute an incremental backup and see what happens.

TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSAUX              1 07/29/19 09:21:18          0    8362001
SYSTEM              1 07/29/19 09:21:17          0    8362001
UNDOTBS1            1 07/29/19 09:21:18          0    8362001
USERS               1 07/29/19 09:21:18          0    8362001
SYSAUX              2 07/29/19 09:46:59    8362001    8363961
SYSTEM              2 07/29/19 09:46:59    8362001    8363961
UNDOTBS1            2 07/29/19 09:46:59    8362001    8363961


1    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
1    1  Incr 8363962    07/29/19 10:04:00              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  


By looking at the SYSTEM tablespace (file 1) I can see exactly what is happening with the BCT file.
The first version marks the starting SCN prior to the first backup after creating the file.
The second version marks the checkpoint SCN of the first backup (low) and the SCN prior to the second backup (high).
These marks, the LOW/HIGH SCNs, can be used to identify the blocks that changed between the backups.
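The chaining of LOW/HIGH versions can be sketched in a few lines of Python. This is purely illustrative (the real bitmaps live inside the BCT file, and the version data here is hypothetical): to find blocks changed since an incremental's starting SCN, the bitmaps of every version covering SCNs past that point are unioned together.

```python
# Illustrative sketch (not Oracle internals): BCT version records chain
# LOW/HIGH SCN ranges, and an incremental backup unions the changed-block
# bitmaps of every version past its starting SCN.

def blocks_changed_since(versions, start_scn):
    """versions: list of dicts with 'low', 'high', 'blocks' (a set of block#s).
    Returns the union of changed blocks for all versions with high > start_scn."""
    changed = set()
    for v in versions:
        if v["high"] > start_scn:       # this version overlaps the needed range
            changed |= v["blocks"]
    return changed

# Hypothetical versions for one datafile, mirroring the LOW/HIGH chaining above
versions = [
    {"low": 8362001, "high": 8363961, "blocks": {10, 11}},
    {"low": 8363961, "high": 8364370, "blocks": {11, 42}},
]

# An incremental starting at SCN 8363961 only needs the second version
print(sorted(blocks_changed_since(versions, 8363961)))   # [11, 42]
```

Because each version's LOW equals the previous version's HIGH, there are no gaps: any incremental whose starting SCN is still covered by a retained version can be served from the bitmaps.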
Now I am going to perform a few more incremental backups with a few changes to the USERS tablespace and see what happens to the SYSTEM and USERS tablespaces versions.

First here is my query.

col tablespace_name format a10

select (select tablespace_name from dba_data_files where file_id=fno) tablespace_name,vercnt,to_char(vertime,'mm/dd/yy hh24:mi:ss') vertime,low, high from x$krcfbh
   where fno in (select file_id from dba_data_files)
    order by 1,2;

Now let's see what my backups look like for File 1 (SYSTEM tablespace).

List of Backup Sets
===================
  File LV Type Ckp SCN    Ckp Time          Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- ----------------- ----------- ------ ----
  1    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8363962    07/29/19 10:04:00              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8364371    07/29/19 10:13:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8364482    07/29/19 10:15:19              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8364637    07/29/19 10:16:38              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365075    07/29/19 10:26:23              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365373    07/29/19 10:29:13              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365578    07/29/19 10:30:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365763    07/29/19 10:30:58              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf


Let's see what my backups look like for File 7 (USERS tablespace)

List of Backup Sets
===================
  File LV Type Ckp SCN    Ckp Time          Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- ----------------- ----------- ------ ----
    7    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8363962    07/29/19 10:04:00              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8364371    07/29/19 10:13:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8364482    07/29/19 10:15:19              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8364637    07/29/19 10:16:38              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365075    07/29/19 10:26:23              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365373    07/29/19 10:29:13              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365578    07/29/19 10:30:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365763    07/29/19 10:30:58              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf




My backups for both of these match. They have the same Ckp SCN.
Notice that I performed an Incremental level 0 backup, and then 8 Incremental level 1 backups.
I actually performed both differential and cumulative backups, but it made no difference to the BCT records.

Now let's look at the block change tracking file for these 2 tablespaces (system and users)

TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSTEM              3 07/29/19 10:04:00    8363961    8364370
SYSTEM              4 07/29/19 10:13:21    8364370    8364481
SYSTEM              5 07/29/19 10:15:19    8364481    8364636
SYSTEM              6 07/29/19 10:16:38    8364636    8365074
SYSTEM              7 07/29/19 10:26:23    8365074    8365372
SYSTEM              8 07/29/19 10:29:13    8365372    8365577
SYSTEM              9 07/29/19 10:30:20    8365577    8365762
USERS               1 07/29/19 09:21:18          0    8362001
USERS               2 07/29/19 09:46:59    8362001    8364481
USERS               3 07/29/19 10:15:19    8364481    8364636


Very interesting.. Since I only made a few changes to the USERS tablespace, it has 3 versions, the oldest of which is from the full backup.
The SYSTEM tablespace has gone beyond 7 versions, and it no longer has the original version from the level 0 backup.

Now let's see if the BCT file was used for the backups.

     FILE# Creation Time     INCREMENTAL_LEVEL INCREMENTAL_CHANGE# CHECKPOINT_CHANGE# USED BCT
---------- ----------------- ----------------- ------------------- ------------------ ---
         1 07/29/19 09:47:24                 0                   0            8362002 YES
         1 07/29/19 10:04:02                 1             8362002            8363962 YES
         1 07/29/19 10:13:21                 1             8363962            8364371 YES
         1 07/29/19 10:15:19                 1             8364371            8364482 YES
         1 07/29/19 10:16:38                 1             8364482            8364637 YES
         1 07/29/19 10:26:24                 1             8364637            8365075 YES
         1 07/29/19 10:29:14                 1             8362002            8365373 YES
         1 07/29/19 10:30:22                 1             8362002            8365578 YES
         1 07/29/19 10:31:10                 1             8362002            8365763 NO

         7 07/29/19 09:47:00                 0                   0            8362002 YES
         7 07/29/19 10:04:02                 1             8362002            8363962 YES
         7 07/29/19 10:13:21                 1             8364300            8364371 YES
         7 07/29/19 10:15:19                 1             8364371            8364482 YES
         7 07/29/19 10:16:38                 1             8364482            8364637 YES
         7 07/29/19 10:26:24                 1             8364637            8365075 YES
         7 07/29/19 10:29:14                 1             8362002            8365373 YES
         7 07/29/19 10:30:22                 1             8362002            8365578 YES
         7 07/29/19 10:30:59                 1             8362002            8365763 YES



WOW...  Notice that the SYSTEM tablespace (FILE #1) could not use the BCT file for the last backup, but the backup of the USERS tablespace (FILE #7) could, because there were no changes between a few of the backups.

I also noticed the creation time of the RMAN backup and the creation time of the BCT file record. The new BCT file record (if changes occurred) is created BEFORE the backup begins.

Finally, I can also see that the high SCN for USERS, 8364636, is older than the high SCN for SYSTEM.
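The USED BCT column can be reasoned about with a simple check. This is a sketch of the logic as I understand it (assumed from the behavior above, not Oracle's actual implementation): the BCT file is usable for an incremental only if the retained versions for that datafile still cover the backup's starting SCN; once the oldest version has been purged past that SCN, RMAN has to scan the whole datafile.

```python
# Sketch (assumed logic, not Oracle source): an incremental backup starting at
# SCN `incr_change` can use block change tracking only if the versions still
# retained for that datafile reach back to that SCN.

def can_use_bct(versions, incr_change):
    """versions: list of (low, high) SCN ranges still retained for a datafile."""
    if not versions:
        return False
    oldest_low = min(low for low, high in versions)
    return oldest_low <= incr_change

# SYSTEM (file 1): only versions 3+ remain, oldest LOW is 8363961, but the last
# backup's INCREMENTAL_CHANGE# was 8362002 -> no longer covered -> BCT unusable.
system_versions = [(8363961, 8364370), (8364370, 8364481), (8365577, 8365762)]
print(can_use_bct(system_versions, 8362002))   # False

# USERS (file 7): version 1 is still retained with LOW = 0 -> still covered.
users_versions = [(0, 8362001), (8362001, 8364481), (8364481, 8364636)]
print(can_use_bct(users_versions, 8362002))    # True
```

This matches the output above: the slowly-changing USERS file kept its oldest versions and could still use the BCT, while the busy SYSTEM file aged its early versions out.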


I am going to change USERS and perform an incremental to see what happens to the LOW/HIGH SCN in the next version.

TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
USERS               1 07/29/19 09:21:18          0    8362001
USERS               2 07/29/19 09:46:59    8362001    8364481
USERS               3 07/29/19 10:15:19    8364481    8364636
USERS               4 07/29/19 10:16:38    8364636    8366975

After the backup, there is no gap in the SCNs. It created a new version that contains the changes between the previous version's HIGH and the SCN of the incremental backup.


So what have we learned about the BCT?



  1. The BCT file has a structure that contains the starting SCN of changes captured.
  2. The BCT file creates version records associated with block changes before the backup begins.
  3. If no BCT changes are captured for a datafile, it is assumed that no changes occurred since the starting SCN of the BCT file.
  4. The BCT file keeps a bitmap of changes between 2 SCNs.
  5. The BCT file keeps changes at the datafile level.
  6. If a datafile didn't change between backups, no new record is created in the BCT file. It doesn't matter if the backup is an incremental or a full backup.
  7. By default 7 versions are kept for EACH DATAFILE. After 7 backups that captured changes, some datafiles may still be able to use the BCT if they haven't changed.

Below is a flow chart of what happens.



I explain how it is used in part 2 found here.

Tuesday, May 14, 2019

Oracle Security on your database -- Combining SEPS and EUS Security

Oracle offers many methods to authenticate users, and often it becomes difficult to determine how to combine multiple methods.

In this blog I will talk about how to combine 2 commonly used methods.


Security Methods

SEPS 

What is SEPS ?

The first method is SEPS (Secure External Password Store).
SEPS uses a wallet to store default credentials for connecting to a database. The wallet can be used as the central location for username/password information when connecting from a system/application. Using the wallet credentials in scripts, etc. keeps the credentials secure and makes it easier to periodically update the password to follow security rules.

Creating the wallet

The location of the wallet file (WALLET_LOCATION) must be added to the sqlnet.ora file that is used by the OS session.
There is a second parameter, SQLNET.WALLET_OVERRIDE, that must also be set to true. This parameter determines whether SEPS security is used for this database.

SQLNET.ORA entries

WALLET_LOCATION= < Location of wallet on OS >
SQLNET.WALLET_OVERRIDE=true

******  Note that many databases can share the same wallet if they each point to the same sqlnet.ora file. ******

Below is an example of how to add an entry into the wallet.

mkstore -wrl < wallet_location > -createCredential < db_connect_string > < username > < password >

The creation of the wallet entry takes 3 parameters:

1) db_connect_string - The connect string for the database you wish to add. This can be either an entry in the tnsnames.ora file or an EZCONNECT string.
2) username - The user you wish to connect to the database as.
3) password - The current password for the user.

The wallet is encrypted to obscure the password, and permissions should limit the OS users that can read the wallet.

Using the wallet


To access the database using the entry in the wallet, use "/@" followed by the db_connect_string.

Examples for sqlplus and RMAN

SQLPLUS> connect /@<db_connect_string>

RMAN> connect catalog /@<db_connect_string>

OID/EUS


OID (Oracle Internet Directory)/EUS (Enterprise User Security) uses LDAP or other methods to validate user credentials.

OID/EUS also uses a wallet, which typically contains a unique certificate to authenticate a database.
Each database that utilizes OID/EUS has its own wallet file.

OID/EUS wallets

The default location for OID/EUS wallets is within the ${ORACLE_BASE} directory structure. By default, the wallet for a database (like other database files) is contained within ${ORACLE_BASE}/admin/${DB_NAME}/wallet.

Also, if utilizing the multitenant option, each PDB has its own wallet, and the wallet's default location is ${ORACLE_BASE}/admin/${DB_NAME}/pdbguid/wallet.


Combining SEPS and OID/EUS.


Question: How do we combine SEPS and OID/EUS security? SEPS requires WALLET_LOCATION to be set in the sqlnet.ora, and OID/EUS needs individual wallets for each database. Setting the WALLET_LOCATION to a central wallet location will "break" OID/EUS authentication.

Answer: The way to combine these two authentication methods is to use an environment variable in the sqlnet.ora file. A variable needs to be created identifying the current database.

Step 1 - Ensure that there is a variable set that identifies the database name.
  - This can be set either at the OS level (setenv DB_NAME=)
       or
  - Through srvctl, creating a variable that is set for all nodes in a cluster.

Step 2 - Use this variable to set the wallet location in the sqlnet.ora file.
   - WALLET_LOCATION = ${ORACLE_BASE}/admin/${DB_NAME}/wallet

*** Note: If multitenancy is used, the WALLET_LOCATION will be the root directory for PDB wallets.

Step 3 - Ensure that the SEPS connection/username/password entry is in each wallet that will utilize the SEPS entry.
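Putting the three steps together, the sqlnet.ora entries might look something like this (a sketch only; the exact directory layout and the variable name depend on your environment):

```
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = ${ORACLE_BASE}/admin/${DB_NAME}/wallet)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE
```

With this in place, each database resolves ${DB_NAME} to its own wallet directory, so OID/EUS finds its certificate wallet and SEPS finds the credential entry you added to that same wallet.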

Conclusion


That's all there is to it. By using environment variables within the sqlnet.ora file, you can point to the individual wallet for each database (satisfying OID/EUS requirements) and still use SEPS (by adding the SEPS entry to each wallet).




Monday, April 8, 2019

RMAN restore as encrypted

There is a new feature added to RMAN in 12.2 that allows you to restore tablespaces or a whole database "AS ENCRYPTED" or "AS DECRYPTED".





This feature is documented for 18c, but for some reason it isn't mentioned as a new feature for 12.2, or documented in 12.2.

Now I am going to demonstrate how to use this.

I have a 12.2 database (BSG). I have created an encryption wallet, but none of the datafiles are encrypted.
You can see this from the dba_tablespaces view.

[oracle@oracle-server BSG]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Mon Apr 8 15:37:52 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>  select tablespace_name,encrypted from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO

SQL>


Now I ran a Full backup of my database

[oracle@oracle-server BSG]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 14:43:50 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> backup incremental level 0 database;




I have a full backup, and now I am going to restore the USERS tablespace as encrypted.
NOTE: I am using "RESTORE TABLESPACE xxx AS ENCRYPTED";

[oracle@oracle-server BSG]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 14:43:50 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> alter tablespace users offline;

Statement processed

RMAN> restore tablespace users as encrypted;

Starting restore at 04/08/19 14:53:22
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T144422_gbq5n71p_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T144422_gbq5n71p_.bkp tag=TAG20190408T144422
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 04/08/19 14:53:23
RMAN> recover tablespace users;

Starting recover at 04/08/19 14:53:44
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 04/08/19 14:53:44

RMAN> alter tablespace users online;

Statement processed

RMAN> exit


Now that I have restored and recovered the tablespace, let's see if it shows as encrypted in the data dictionary.

[oracle@oracle-server BSG]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Mon Apr 8 14:54:13 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select tablespace_name,encrypted from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          YES



Now to back up the tablespace. Since it is newly encrypted, I need to make sure I perform a new level 0 incremental backup to capture the encrypted datafiles.


[oracle@oracle-server datafile]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 16:12:08 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> backup incremental level 0 tablespace users;

Starting backup at 04/08/19 16:12:25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=265 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
channel ORA_DISK_1: starting piece 1 at 04/08/19 16:12:26
channel ORA_DISK_1: finished piece 1 at 04/08/19 16:12:27
piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T161226_gbqbsbmv_.bkp tag=TAG20190408T161226 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 04/08/19 16:12:27

Starting Control File and SPFILE Autobackup at 04/08/19 16:12:27
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of Control File and SPFILE Autobackup command on ORA_DISK_1 channel at 04/08/2019 16:12:28
ORA-00237: snapshot operation disallowed: control file newly created



The backup was successful, but the autobackup of the controlfile failed.

There is a MOS note on this (Doc ID 2320446.1)

You cannot back up the controlfile until the database is bounced, because there is a data dictionary check pending. Once I bounce the database, all is good.

Now let's try it the other way..

RESTORE TABLESPACE xxx AS DECRYPTED;

[oracle@oracle-server trace]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 16:29:30 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> alter tablespace users offline;

using target database control file instead of recovery catalog
Statement processed

RMAN> restore tablespace users as decrypted;

Starting restore at 04/08/19 16:29:43
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=265 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T162748_gbqcp4vg_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T162748_gbqcp4vg_.bkp tag=TAG20190408T162748
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 04/08/19 16:29:45

RMAN> recover tablespace users;

Starting recover at 04/08/19 16:29:54
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 04/08/19 16:29:54

RMAN> alter tablespace users online;

Statement processed

RMAN>



Finally let's double check.

[oracle@oracle-server trace]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Mon Apr 8 16:31:15 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select tablespace_name,encrypted from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO



Tuesday, February 26, 2019

ZDLRA - Space management





When backing up to a ZDLRA you want to optimize your storage allocation to ensure each database can meet its recovery window goal.

You also want to make sure that the ZDLRA doesn't need to aggressively reclaim space.

***  NOTE - This was updated on 9/13/19 as I've learned more.


Values you can set to control space and purging

Mandatory values

  • Recovery Window Goal (RWG) - How far back (in days) I need to ensure a point-in-time recovery for my databases. This is set at the policy level.
        ** NOTE: This is a GOAL, not a guarantee.

  • Disk Reserved Space - How much storage I need to ensure I keep my recovery window goal for this database. This is set at the database level and should be adjusted as the database grows. The sum of Disk_Reserved_Space for all databases cannot be greater than the amount of space available in the storage location.

Optional values


  • Max_Retention_Window - The maximum number of days I want to keep backups for. The MOS note on purging does a great job of documenting the recommended value (if you want to set it), and the current default value based on the RA version.
  • Guaranteed_Copy -  This is set at the policy level and has 2 possible values
    • NO (default) - Follow purge rules regardless of whether backups have been replicated or copied to tape/downstream.
    • YES - Backups that have not been successfully copied to tape/replicated are NOT eligible for deletion.  If the space usage for the database is above the Disk_Reserved_Space, incoming backups (both through RMAN and real-time redo) will be rejected.

Derived values that control space and purging


Each storage location has the following values that are used to control purging.

  • Free_Space - The amount of space (GB) that is available for storing backups. This is for the storage location.
  • Free_Space_Goal - The estimated amount of space that is needed to support incoming backups to the storage location. This is evaluated every 3 hours. The estimate is the sum of space needed for the next 6 hours, taking into account 2 types of incoming backups.
    • Daily backups from databases that have historical data. The amount of space required to support normal incoming backups is calculated in the Free_Space_Goal.
    • Newly added databases. For databases that do not have good historical data, the Disk_Reserved_Space is used to calculate how much space needs to be available to support incoming backups.
  • Recovery_Window_Space - The amount of space needed by the database to support the RWG. This is available for each database.
  • Space_Usage - The amount of space currently being used by this protected database.
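A rough way to picture the Free_Space_Goal calculation is the sketch below. This is my simplification of the description above (the RA's actual estimator is internal, and the field names and numbers are hypothetical): databases with backup history contribute their expected ingest, while new databases without history contribute their Disk_Reserved_Space.

```python
# Simplified sketch (not the RA's real estimator): Free_Space_Goal combines the
# expected ingest for databases with backup history and the Disk_Reserved_Space
# of newly added databases that have no history yet.

def free_space_goal(dbs):
    """dbs: list of dicts; established databases carry an 'expected_ingest_gb'
    estimate, new databases fall back to their 'disk_reserved_gb'."""
    goal = 0
    for db in dbs:
        if db.get("has_history"):
            goal += db["expected_ingest_gb"]    # normal daily backup volume
        else:
            goal += db["disk_reserved_gb"]      # no history: assume reservation
    return goal

dbs = [
    {"name": "PROD1", "has_history": True,  "expected_ingest_gb": 50,
     "disk_reserved_gb": 800},
    {"name": "NEWDB", "has_history": False, "expected_ingest_gb": 0,
     "disk_reserved_gb": 200},
]
print(free_space_goal(dbs))   # 250
```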


Purge Process


How backups are prioritized for purging is described in this MOS note.
Zero Data Loss Recovery Appliance Purging Rules (Doc ID 2353188.1)


There are 3 purging rules, and they are outlined in the MOS note.




  • Rule 1 - This is the normal housekeeping purging and occurs on a regular basis for all databases. Any backups older than the Max_Retention_Window are periodically removed from the ZDLRA to make space for incoming backups. This is a low-priority background process.
The way this rule works is that the retention window for each database grows until the number of days set for Max_Retention_Window is reached for that database. When this occurs, the retention is purged down to the Recovery_Window_Goal for that database. For example, if my database has a Recovery_Window_Goal of 15 days and a Max_Retention_Window of 20 days, when the database reaches 20 days of retention, the oldest 5 days of backups are removed, leaving 15 days of backups after purging. This makes the purge more of a bulk process and spreads the purge workload out across all databases.

  • Rule 2 - When the Free_Space for a storage location falls below the Free_Space_Goal, regular purging begins. The database whose current retention is most above its RWG is added to the purging queue, and purging begins as a low-priority process.

The way this rule works is similar to Rule 1. Beginning with the database whose current retention extends furthest beyond its Recovery_Window_Goal, each database is purged down to its Recovery_Window_Goal until enough space is released. As I said, the processing is similar to Rule 1, except it doesn't wait until the Max_Retention_Window is reached.

     
  • Rule 3 - When the Free_Space is not enough to allow incoming backups, more aggressive purging begins, starting with the database that is most above (by percentage) its allocated Disk_Reserved_Space. This can impact the RWG if the Disk_Reserved_Space is not large enough to support the RWG.

** NOTE: there are a few exceptions to removing backups.
  • Any backup that is currently being restored by a client is exempt from the purging rules. This ensures that any restore in progress is successful.
  • Any replication or copy-to-tape process that is currently executing is exempt. This ensures that the replication/tape copy completes successfully.
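Rule 1's purge-down behavior described above can be sketched in a few lines (illustrative only, using the 15-day/20-day example; this is not the RA's internal code):

```python
# Sketch of Rule 1 (assumed logic): a database's retention grows until it hits
# Max_Retention_Window, then backups are purged down to the Recovery_Window_Goal,
# so purging happens in bulk rather than day by day.

def days_retained_after_purge(days_retained, rwg, max_retention):
    """Return retention (in days) after the Rule 1 check runs."""
    if days_retained >= max_retention:
        return rwg                  # bulk purge down to the goal
    return days_retained            # nothing to do yet

# Example from the text: RWG = 15 days, Max_Retention_Window = 20 days
print(days_retained_after_purge(20, 15, 20))   # 15  (oldest 5 days removed)
print(days_retained_after_purge(18, 15, 20))   # 18  (no purge yet)
```

Rule 2 applies the same purge-down, just triggered by Free_Space falling below the Free_Space_Goal instead of waiting for the Max_Retention_Window.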

How this works.

As long as there is enough space in the storage location to support the Max_Retention_Window for all databases, there is nothing to worry about with purging.  Rule 1 executes on a periodic basis.

Rule 2 - Free_Space_Goal.


As I explained above, the Free_Space_Goal is an estimate of how much space may be needed to ingest new backups.
If the RA believes it will need more space to ingest backups than is currently available, it starts purging older backups that are not needed to support each database's RWG.
This purging continues until the Free_Space exceeds the Free_Space_Goal, or until all databases hold only the backups needed to support their RWG.

This makes sense for a couple of reasons.

  • It takes into account newly added databases that don't have history.  This ensures that when you send the first few backups to the RA, there is sufficient space to ingest these backups.
  • If space has become tight on the RA, and it can no longer support the Max_Retention_Window for databases, there is enough space available to keep ingesting new backups.
Rule 2 only goes as far as removing backups that are not needed to support the RWG for all databases.

As you can imagine, this might not free enough space if the Disk_Reserved_Space is not adequate for all databases.


Rule 3 - Disk_Reserved_Space

Rule 3 kicks in when you run out of Free_Space.  This is the most aggressive purging, and its priority is higher than most processes that occur on the ZDLRA.  Accepting incoming backups is one of the highest priorities, and freeing up space for backups is critical.

This is where the Disk_Reserved_Space allocation for each database becomes important.

The amount of Disk_Reserved_Space allocated is compared to the amount of space used by each database.  The percentage by which the space used exceeds the space reserved determines the order in which database backups are purged.

Backups will continue to be purged to free up space.  
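This percentage-based ordering can be sketched the same way as Rule 2. Again, this is an illustration with invented names and sample sizes, not ZDLRA internals:

```python
# Hedged sketch of the Rule 3 purge ordering: databases are purged starting
# with the one whose space used most exceeds its Disk_Reserved_Space,
# measured by percentage over the reservation.
def rule3_purge_order(databases):
    """databases: list of dicts with 'name', 'space_used_gb', 'reserved_gb'."""
    def pct_over(db):
        return (db["space_used_gb"] - db["reserved_gb"]) / db["reserved_gb"] * 100
    over = [db for db in databases if db["space_used_gb"] > db["reserved_gb"]]
    return sorted(over, key=pct_over, reverse=True)

dbs = [
    {"name": "FIN", "space_used_gb": 120, "reserved_gb": 100},  # 20% over
    {"name": "HR",  "space_used_gb": 66,  "reserved_gb": 44},   # 50% over
    {"name": "CRM", "space_used_gb": 80,  "reserved_gb": 100},  # under, exempt
]
print([db["name"] for db in rule3_purge_order(dbs)])  # ['HR', 'FIN']
```

Note that HR is purged first even though FIN uses more absolute space, because the ordering is by percentage over the reservation.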

A good recommendation is to set the Disk_Reserved_Space at least 10% greater than what is needed to support your RWG, and to update it on a regular basis.

*** NOTE   -- Also keep in mind that you might have some databases with seasonal peaks, e.g. Cyber Monday.  For databases that have periodic peak periods, the Disk_Reserved_Space should reflect the amount of space needed to support the peak, and it should be exempt from any automated adjustment based on the space needed for the RWG.

Keep in mind that the combined Disk_Reserved_Space for all databases cannot be larger than the amount of space available in the storage pool.
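Taken together, the sizing guidance above (a roughly 10% buffer over the RWG space, sizing for seasonal peaks, and keeping the total within the storage pool) reduces to simple arithmetic. An illustrative sketch, with made-up function names and sample sizes:

```python
# Illustrative check of the reserved-space recommendations: reserve at least
# 10% over the space needed for the RWG (or the seasonal peak, if larger),
# and the sum of all reservations must fit in the storage pool.
def recommended_reserve(rwg_space_gb, peak_space_gb=0, buffer_pct=10):
    base = max(rwg_space_gb, peak_space_gb)
    return base * (100 + buffer_pct) / 100

def reservations_fit(reserves_gb, pool_size_gb):
    return sum(reserves_gb) <= pool_size_gb

print(recommended_reserve(100))                     # 110.0 GB
print(recommended_reserve(100, peak_space_gb=150))  # 165.0 GB for a seasonal peak
print(reservations_fit([110.0, 165.0], 300))        # True: 275 GB fits in 300 GB
```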


Important notes:

It is critical that you accurately set the Disk_Reserved_Space for all databases.  If you consistently set the Disk_Reserved_Space to be 20% greater than what is needed for the RWG for ALL databases, the ZDLRA will not get more than 80% full, giving you some buffer when space becomes tight.

Incorrectly setting the Disk_Reserved_Space for an existing database may cause the RWG to be lost.  If the Disk_Reserved_Space is less than what is needed, Rule 3 may purge backups needed to support the RWG in order to free up space. The RWG is NOT GUARANTEED.

Correctly setting the Disk_Reserved_Space for newly added databases is also important.  This value feeds into the Free_Space_Goal, and an incorrect value may force unnecessary purging to make room for the incoming backups.


Adjusting RWG


There is a package function you can use to estimate how much storage is needed to support a given recovery window goal.  This can be used to determine whether a database's reserved space is enough to meet the goal.  The function to call is DBMS_RA.ESTIMATE_SPACE.

ESTIMATE_SPACE

This function estimates the amount of storage in GB required for recovery of a given database and a desired recovery window.

Syntax

FUNCTION estimate_space (
   db_unique_name IN VARCHAR2,
   target_window IN DSINTERVAL_UNCONSTRAINED) RETURN NUMBER;

Parameters (Table 12-23, ESTIMATE_SPACE Parameters)

  • db_unique_name - The name of the database needing the storage estimate.
  • target_window - The desired recovery window for the database. Specify the goal as any valid INTERVAL DAY TO SECOND expression, such as INTERVAL '2' DAY (2 days), INTERVAL '4' HOUR (4 hours), and so on.


Recommendations on Purging and Space Management.


  • Correctly setting the Disk_Reserved_Space is critical to managing space.
  • Periodically adjust the Disk_Reserved_Space as each database grows or shrinks.  This can be done through the PL/SQL package and can be scheduled on a regular basis.
  • Know which databases, if any, have seasonal peaks, and ensure the Disk_Reserved_Space reflects any peak.
  • Properly set the Disk_Reserved_Space for new databases.  This helps prevent unnecessary purging, which could affect the ingest of backups.
  • Use the ESTIMATE_SPACE function to adjust the Disk_Reserved_Space when adjusting the RWG for a policy.
  • Create an alert on the Disk_Reserved_Space falling below the Recovery_Window_Space. This tells you that the RWG may be lost if the ZDLRA needs to reclaim space.
  • Keep track of Space_Usage vs. Disk_Reserved_Space vs. Recovery_Window_Space to understand how each database's space will be managed.
  • Periodically check the SAR report (MOS note 2275176.1) for issues.  Process issues, etc., that cause ordering waits affect the estimated "needed space for RWG" and may quickly inflate the Disk_Reserved_Space when using an automated process.
  • Use the Size_Estimate from the RA_DATABASE view to ensure the space utilization makes sense.  This is a check that the storage utilization of backups on the ZDLRA seems reasonable for the size of the database.

Thursday, January 31, 2019

TDE Implementation first 30 days

This blog post is on the immediate effect of implementing TDE on your database backup storage.


In the last blog post I mentioned that after implementing TDE you need to explicitly perform a full backup on all tablespaces you encrypt.
  • If you are performing an incremental strategy (level 0 and level 1) then explicitly perform a level 0 backup.
  • If you are using an incremental merge backup (BACKUP INCREMENTAL LEVEL 1 for RECOVERY of COPY), you must create a new image copy and start a new divergent backup
The reason a full explicit backup is needed is that the SCN numbers of the underlying data blocks do not change when you implement encryption.  Because of this, any backup strategy you are using does not recognize that the blocks have changed.  This is true even with the ZDLRA.

This is the reason for this post: if you have an unencrypted database, implement TDE on all tablespaces, and then perform a full backup of all tablespaces, you will have 2 copies of your backups for a period of time.

I am going to take a closer look at what this means to my backup storage when implementing TDE.

For my example, I'm going to use the same example I used in the last post.
Below is what it looks like.


Using my example dataset for just a weekly full backup and a daily incremental backup (using deduplication), you can see that:

The data in my database is 35 GB ( the fully allocated size is 50 GB).
The amount of change over 30 days is 70 GB.
Compressing both of these, the final size is 41.5 GB used.

Prior to implementing TDE, I am consistently using 41.5 GB to store 30 days of compressed backups for my 50 GB database.

The next column shows what happens after implementing TDE.  The compressed full backup is now 43.9 GB, and the amount of change over 30 days is 54 GB compressed.
After implementing TDE, I am using 97.9 GB consistently to store 30 days of compressed backups for my 50 GB database.
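The before-and-after figures above reduce to simple arithmetic (the numbers are taken from this example dataset):

```python
# Storage footprint before and after TDE, using the figures from this post.
pre_tde_total = 41.5       # GB: 30 days of compressed fulls + incrementals
post_tde_full = 43.9       # GB: compressed full backup after encryption
post_tde_change_30 = 54.0  # GB: 30 days of compressed change after encryption

post_tde_total = post_tde_full + post_tde_change_30
print(round(post_tde_total, 1))                   # 97.9 GB steady-state
print(round(post_tde_total / pre_tde_total, 2))   # 2.36x the pre-TDE footprint
```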

Now let's see what the first 30 days look like.



This shows the unencrypted backups (purple), how they start to age off, and how they get removed on day 31.  You can also see how the encrypted backups (gray) grow over time; they will stabilize to the size you see on day 31.

You can see that there is an immediate impact on backup storage usage, and that growth continues until the old backups finally age off.

You need to plan for TDE ahead of time.  For my example dataset, the storage needed more than doubled.



Friday, January 25, 2019

Implementing TDE

This blog post is about what happens when you implement TDE (Transparent Data Encryption) in your database.

I started with 2 other blog posts, but I soon realized that this is a HUGE topic with ramifications throughout the database system.

Well let's start at the beginning.
I wanted to know what happens when you implement TDE.
The 3 items that I keep hearing are

  1. Encrypted data will not compress
  2. Encrypted data will not dedupe
  3. Implementing Advanced Compression in the database will mitigate the loss of compression to encryption by compressing the data before it is encrypted


Now in order to investigate this further, I had some restrictions that affected my testing.

  • I had no access to a deduplicating appliance to test point 2
  • I didn't have a real data set to use, so I ended up using the SOE schema in Swingbench.
  • The only way I could test the effect of compression was to compress the backupset.  Most appliances (including the ZDLRA) break the file up into pieces and compress each piece; I couldn't emulate the effect of doing that.
Now for my testing dataset.  I created a dataset in the SOE tablespace that is uncompressed.  I then took that dataset and created a second copy of the schema, with Advanced Compression in place, in the SOE_COMPRESSED tablespace.

When I finished this I had 2 tablespaces comprised of 10 x 5 GB datafiles.
SOE                              --> uncompressed copy of data
SOE_COMPRESSED --> Advanced compression (ACO) copy of data

In the graphs above you can see that, out of the 50 GB of allocated space:
uncompressed  --> 35.8 GB of used space
compressed      -->  24.9 GB of used space

These are the datasets I am going to use throughout my testing.

I started by taking these 2 datasets and seeing what happens with the 2 full backup types
L0               --> Full backups saved as backupset
image copy --> Full copy of datafile.

Below is what I found when I compressed both these backup types.

This was very helpful to see what happens.

For my uncompressed dataset (35 GB), I was able to compress the file down to 20 GB.
For my uncompressed image copy (35 GB used) I was able to compress the file down to 18 GB
For my compressed dataset (24 GB), I was able to compress the file down to 18 GB
For my compressed image copy (24 GB), I was able to compress the file down to 18 GB.

So what I learned is that no matter how I compress the data, whether in the database (ACO) or just by compressing the backup, the final size is still ~18 GB.
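The savings above can be expressed as rough compression ratios (sizes in GB, taken from the rounded figures in this post):

```python
# Compression results from this post's test dataset (GB, rounded in the text):
# (size before compression, size after compression)
results = {
    "uncompressed backupset":  (35, 20),
    "uncompressed image copy": (35, 18),
    "ACO backupset":           (24, 18),
    "ACO image copy":          (24, 18),
}
for name, (before, after) in results.items():
    saved_pct = round((before - after) / before * 100)
    print(f"{name}: {before} GB -> {after} GB ({saved_pct}% saved)")
```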

Now we have some benchmarks on what happens when I just compress the backups.

I started with encrypting the image copy.  This was one area that I wanted to learn more about for 2 reasons.

  1. Many of the newer backup strategies use the "incremental merge" process that started with Oracle 10g. I was curious what happens to image copies when you implement TDE.
  2. Many flash arrays use compression to get more usable storage from smaller SSD drives.  Looking at what happens when compressing the datafile copies gives a lot of insight into how those flash arrays will be affected by TDE.


You can see from the above testing that there was a very significant impact on the compressed size of the image copy.  Both compressed and uncompressed data compressed nicely while unencrypted.  After encrypting the dataset, the compressed size is about 3x larger than the unencrypted compressed size.

This gives me the answer on how it will affect these 2 areas.


  1. If using an "incremental merge" based backup that is stored on compressed storage, you will need 3x the amount of storage for your backups.
  2. If using Flash storage for your database files that utilizes compression, you will need 3x the amount of database storage.
OK.. There's the first answer I was looking for.

Now let's see what happens with my Level 0 backups.

With level 0 backups, I can see the benefit of compressing the data first.
I can also see the effect of encryption.

Compressing the data first saved me about 1/3 of the size of the data, and about 1/3 of the size of the compressed backupset (once encrypted).


Now let's take a look at the effect of encryption on the change rate.  I updated one of the tables in the database to create a change rate, and I looked at both the archive logs and the incremental backup.

Wow.. What stood out to me in this testing is the size of the archive logs.  Adding Advanced Compression to the database decreased the size of the incremental backups (expected), but increased the size of the archive logs.  And when I looked at the compressed sizes (both with and without encryption), compressing the data in the database also increased the compressed size of the archive logs.

This showed me a lot about what happens with TDE.  What I learned through this was:


  1. Encrypted data indeed does not compress; in fact, it can get much bigger.
  2. Implementing Advanced Compression does not always mitigate the effects of encryption. Depending on the dataset, it might have a very limited effect.
  3. The compressed size of datafiles and image copy backups may increase by 3x or more, depending on the amount of free space you typically have in your datafiles.



Thursday, January 3, 2019

Verifying you have a recoverable backup

As you probably know, the best way to validate that you can recover your database to a specific point-in-time is to perform a point-in-time recovery of the database and successfully open it. That isn't always possible due to the size of the database, the infrastructure necessary, and the time it takes to go through the process.  To verify recoverability there are several commands that give you all the pieces necessary, but it is important to understand what each command does.

Restore Validate -  The restore validate command can be performed for any database objects, including the whole database, to verify the backup files are valid.  The command validates a restore as of the current time (the default), but you can also specify a previous point-in-time.  The validation verifies that the backup files are available and also checks them for corruption.  By default it checks for physical corruption, but it can also check for logical corruption. This command reads through the backup pieces without actually restoring any files to disk.
Examples of restore validate for different cases.
verify current backup -
RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
This will verify that the last full backup (level 0) does not contain corruption, that archive logs exist throughout the recovery window, and that they do not contain corruption.
verify previous backup -
RMAN> RESTORE DATABASE VALIDATE UNTIL TIME "to_date('xx/xx/xx xx:xx:xx','mm/dd/yy hh24:mi:ss')";
This will verify that the full backup performed prior to the "until time" is available and does not contain corruption.
Restore Preview - The restore preview can be used to identify the backup pieces necessary for a restore AND recovery of the database.  The preview will show all the datafile backups (both full and incremental), along with the archive logs needed.  The restore preview does not check for corruption; it simply lists the backup pieces necessary.

Validate -  The validate command can be performed on any database objects along with backupsets. Unlike restore validate, you cannot give it an "until time"; you must identify the object you want to validate.  By default it checks for physical corruption, but it can also check for logical corruption.

It is critical to understand exactly what all three commands do to ensure you can efficiently recover to a previous point-in-time.  It takes a combination of 2 of the commands (plus an extra step to be sure).

So let's see how I can test if I can recover to midnight on the last day of the previous month.  My backup strategy is "Weekly full/Daily incremental" backups.  I perform my full backups on Saturday night and keep my backups for 90 days. My archive logs are kept on disk for 20 days.

It is now 12/15/18 and I want to verify that I can recover my database to 23:59:59 on 11/30.

First I want to perform a restore validate of my database using until time.
RMAN > restore validate database UNTIL TIME "to_date('11/30/18 23:59:59','mm/dd/yy hh24:mi:ss')";
By looking through the logs, I can see that this command performed a validation of the level 0 (full) backup I performed on 11/24/18.  It did not validate any of the incremental backups or archive logs that I will need to recover the database.
Now I want to perform a restore validate of my archive logs using until time.
RMAN > restore validate archivelog UNTIL TIME "to_date('11/30/18 23:59:59','mm/dd/yy hh24:mi:ss')";
By looking through the logs, I can see that this command validated the backup of all my older archive logs (older than 20 days), and it validated the FRA copy of my archive logs that were 20 days or newer.
There are a couple of issues with this that jump out at me.
1) My incremental backups are NOT validated.  Yes, I know I can recover to the point-in-time I specified, but all I verified is that it is possible by applying 6 days of archive logs.
2) I don't know if any of my newer backups of archive logs are corrupt.  The validation of archive logs only validated backups of archive logs that are NOT on disk. I still need to validate the archive log backups for the archive logs that will age off from the FRA in a few days, just to be sure.
3) The process doesn't check or validate a backup of the controlfile.  If I am restoring to another server (or if I lose my database completely), I want to make sure I have a backup of the controlfile to restore.
The only way to be sure that I have an efficient, successful recovery to this point in time is to combine the 2 commands and use that information to validate all the pieces necessary for recovery.
Here are the steps to ensure I have valid backups of all the pieces necessary to restore efficiently.
1) Perform a restore preview and write the output to a log file.
2) Go through the output from step 1 and identify all the "BS KEY" (backupset key) values.
3) Perform a "validate backupset xx;" using the keys from step 2 to ensure all datafile backups (both full and incremental backups) are valid.
4) Go through the output from step 1 and identify the archive log sequences needed.
5) Identify the backupsets that contain all log sequences needed (my restore preview pointed to the archive logs on disk).
6) Perform a "validate backupset xx;" using the keys from step 5.
7) Perform a restore controlfile to a dummy location using the UNTIL time.
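Steps 1-3 (and 5-6) lend themselves to a small script. Here is a hedged sketch in Python: it scrapes the "BS Key" values out of a saved restore preview log and emits the corresponding VALIDATE commands. The sample log text is invented, modeled on the standard RMAN backupset listing, and the file handling is left to the reader:

```python
# Sketch: extract "BS Key" values from a saved `restore ... preview` log
# and generate the VALIDATE BACKUPSET commands for steps 3 and 6.
import re

def validate_commands(preview_log_text):
    # Each backupset line in RMAN's listing starts with the numeric BS Key
    # followed by the backup type, e.g.:
    #   1201    Full 0  24.50G ...
    keys = re.findall(r"^(\d+)\s+(?:Full|Incr)", preview_log_text, re.MULTILINE)
    # dict.fromkeys dedups while preserving the order the keys appeared in.
    return [f"VALIDATE BACKUPSET {key};" for key in dict.fromkeys(keys)]

sample = """\
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1201    Full 0  24.50G     DISK        00:12:41     24-NOV-18
1215    Incr 1  1.20G      DISK        00:01:03     30-NOV-18
"""
print(validate_commands(sample))
# ['VALIDATE BACKUPSET 1201;', 'VALIDATE BACKUPSET 1215;']
```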
As you can see, "restore validate" is not enough to ensure efficient recovery. It only ensures that you have a valid RESTORE, not a valid RECOVERY.
The ZDLRA does constant recovery checks using the virtual full backups, and archive logs.
The ZDLRA ensures you can quickly AND successfully recover to any point-in-time within your retention window.