Autotuned_reserved_space is a new policy setting released with 21.1, and you should be using it. When I talk to customers about how to manage databases on a ZDLRA, the biggest confusion comes up when I talk about reserved space. Reserved space needs to be understood and properly managed. This new feature in 21.1 allows the ZDLRA to handle the reserved space for you, and I explain how to use it in this blog post. First let's go through space usage, and reserved space, in general.
Space usage on the ZDLRA.
Recovery Window goal (which drives the space utilization)
The recovery window goal is set at the policy level, and this value (in days) is the recovery window that you want to keep for all databases that are members of this policy. This is what drives the space utilization.
Total space
The ZDLRA comes with all the space pre-allocated. When you are looking at OEM, or in the SAR report you will see the total space listed. You want to make sure that you have enough space for your database backups and any incoming new backups.
Used Space
When the ZDLRA purges backups beyond the Recovery Window Goal that you set, it does a bulk purge of backups. This can be controlled by setting the maximum disk backup retention in days (which defaults to 1.5 times the recovery window goal). Because of the bulk purge, more space is shown as used than is needed to support your recovery window goal.
Recovery Window Space
This is the amount of space that is needed to support the recovery window goal. Because of the bulk purge, the recovery window space is less than the used space.
Reserved space
In order to control what happens with space, the concept of reserved space is used. When a database is added to the ZDLRA, the reserved space value is set for this database. This value should be updated regularly to ensure that there is enough space for the database backups to be stored.
The important things to know about reserved space are:
The sum of all the reserved space cannot be greater than the total space available on the ZDLRA.
When adding a new database, its reserved space must fit within the unreserved space.
When a new database is added, the reserved space must be set to at least the size of the database, and defaults to 2.5 times the size of the database.
The reserved space for a database needs to be at least the size of the largest datafile.
The reserved space should be larger than the amount of space needed to support the recovery window goal for the database. For databases with fluctuating space usage, you need to reserve space for the peak usage.
The reserved space serves two purposes when properly set
It can be used to determine how much space is available for new database backups.
If the ZDLRA determines that it does not have enough space to support the recovery window goal of the supported databases, space is reclaimed from databases whose reserved space is too small.
It is critical to keep the reserved space updated, and many customers have used an automated process to set the reserved space to "recovery window space needed" + 10%.
Unfortunately, configuring an automated process for all databases does not take into account fluctuations in usage. Let's say I have a database which is much busier at month's end; I want to make sure my reserved space is not adjusted down to the low value, and instead stays adjusted based on the highest space usage value.
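For reference, this kind of automated adjustment is usually a small PL/SQL job run against the ZDLRA. The block below is only a sketch of the idea: it assumes the RA_DATABASE view reports RECOVERY_WINDOW_SPACE in GB and that DBMS_RA.UPDATE_DB accepts a reserved_space value in the 'nG' format, so verify both against your release before using anything like it.

begin
  for db in (select db_unique_name, recovery_window_space
               from ra_database) loop
    -- reserved space = recovery window space needed + 10% headroom
    dbms_ra.update_db(
      db_unique_name => db.db_unique_name,
      reserved_space => ceil(db.recovery_window_space * 1.10) || 'G');
  end loop;
end;
/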
Autotuned_reserved_space
This is where autotuned reserved space can help you manage the reserved space. This setting is controlled at the policy level.
AUTOTUNED_RESERVED_SPACE
This value is set at the protection policy level and contains either "YES" or "NO", defaulting to "NO". "YES" allows the ZDLRA to automatically manage the reserved space for all databases in this policy whose disk_reserved_space is not set.
MAX_RESERVED_SPACE
This value is also set at the protection policy level. This value is optional for autotuned_reserved_space, but if set, it will control the maximum amount of reserved space that can be set for an individual database in the protection policy.
AUTOTUNE_SPACE_LIMIT
This value is set at the storage level and applies to ALL databases. It sets a reserved space usage limit at which autotuning slows down large reserved space increases: once the limit is reached, autotune restricts each database's reserved space growth to 10% per week. This value is optional and defaults to the total space if not set.
SUMMARY:
autotuned_reserved_space - Enables autotuning of space within a protection policy
max_reserved_space - Controls the maximum reserved space of databases in a protection policy
autotune_space_limit - Slows the reserved space growth when a specified space limit is reached.
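As a point of reference, enabling the feature is a one-time change to the protection policy. The call below is only a sketch: the parameter names (autotune_reserved_space, max_reserved_space) and value formats are my assumption based on the setting names above, so check the 21.1 DBMS_RA documentation for the exact signature.

begin
  dbms_ra.update_protection_policy(
    protection_policy_name  => 'GOLD',   -- hypothetical policy name
    autotune_reserved_space => 'YES',    -- assumed parameter name
    max_reserved_space      => '50T');   -- assumed parameter name and format
end;
/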
What does autotune reserved space do ?
On a regular basis, if needed, the reserved space for each autotune controlled database is adjusted to reserve space for the recovery window goal, and incoming backups.
If the database has a disk_reserved_space set, autotuning will not be used for this database. It is assumed that the disk_reserved_space will be set manually for this database.
Autotune will replace the need for the ZDLRA admin to constantly update the reserved space for each database as its space needs change over time. It will also allow them to configure a constant reserved space for databases with fluctuating storage usage.
One of the key features of the ZDLRA is the ability to capture changes from the database "real-time" just like a standby database does. In this blog post I am going to demonstrate what is happening during this process so that you can get a better understanding of how it works.
Using the GIF above, I will explain what is happening, and then show what happens with a demo of the process.
The ZDLRA uses the same process as a standby database. In fact, if you look at the flow of the real-time redo you will notice the redo blocks are sent to BOTH the local redo log files AND to the staging area on the ZDLRA. The staging area on the ZDLRA acts just like a standby redo log does on a standby database.
As the ZDLRA receives the REDO blocks from the protected database they are validated to ensure that they are valid Oracle Redo block information. This ensures that a man-in-the-middle attack does not change any of the backup information. The validation process also assures that if the database is attacked by ransomware (changing blocks), the redo received is not tainted.
The next thing that happens during the process is the logic when a LOG SWITCH occurs. As we all know, when a log switch occurs on a database instance, the contents of the redo log are written to an archive log. With real-time redo, this causes the contents of the redo staging area on the ZDLRA (picture a standby redo log) to become a backup set of an archive log. The RMAN catalog on the ZDLRA is then updated with the internal location of the backup set.
Log switch operation
I am going to go through a demo of what you see happen when this process occurs.
ZDLRA is configured as a redo destination
Below you can see that my database has a "Log archive destination" 3 configured. The destination itself is the database on the ZDLRA (zdl9), and also notice that the log information will be sent for ALL_ROLES, which will send the log information regardless of whether it is a primary database or a standby database.
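If you want to see what such a destination looks like in text form, the setting is along these lines. This is a sketch only: the service string and attribute list vary by environment, and real-time redo also requires redo_transport_user to be set to the VPC user (the user name below is hypothetical).

alter system set log_archive_dest_3='SERVICE="<zdlra-scan>:1521/zdl9:dedicated" ASYNC NOAFFIRM DB_UNIQUE_NAME=zdl9 VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' scope=both;
alter system set log_archive_dest_state_3=ENABLE scope=both;
-- real-time redo also expects the VPC user, for example:
-- alter system set redo_transport_user=VPCUSER scope=both;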
List backup of recent archive logs from RMAN catalog
Before I demonstrate what happens with the RMAN catalog, I am going to list out the current archive log backup. Below you see that the current archive log backed up to the ZDLRA has the "SEQUENCE #10".
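This listing comes from a standard RMAN command against the recovery catalog on the ZDLRA, along the lines of:

rman> list backup of archivelog from time 'sysdate-1';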
Perform a log switch
As you see in the animation at the top of the post, when a log switch occurs, the contents of the redo log in the "redo staging area" are used to create an archive log backup that is stored and cataloged. I am going to perform a log switch to force this process.
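The switch itself is a single command from SQL*Plus:

alter system switch logfile;
-- or, to switch and wait for the archiving to complete:
alter system archive log current;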
List backup of archive logs from RMAN catalog
Now that the log switch occurred, you can see below that there is a new backup set created from the redo staging area.
There are a couple of interesting items to note when you look at the backup set created.
The backup of the archive log is compressed. As part of the policy on the ZDLRA you have the option of having the backup of the archive log compressed when it is created from the "staged redo". This does NOT require the ACO (Advanced Compression) license. The compressed archive log will be sent back to the DB compressed during a restore operation, and the DB host will uncompress it. Standard compression is the default option, and I recommend changing it; if you decide to compress, then MEDIUM or LOW is recommended. Keep in mind that this may put more workload on the client to uncompress the backup sets, which may affect recovery times. NOTE: When using TDE, there will be little to no compression possible.
The TAG is automatically generated. By looking at the timestamp in the RMAN catalog information, you can see that the TAG is automatically generated using the timestamp to make it unique.
The handle begins with "$RSCN_", this is because the backup piece was generated by the ZDLRA itself, and archivelog backup sets will begin with these characters.
Restore and Recovery using partial log information
Now I am going to demonstrate what happens when the database crashes, and there is no time for the database to perform a log switch.
List the active redo log and current SCN
Below you can see that my currently active redo log is sequence # 12. This is where I am going to begin my test.
Create a table
To demonstrate what happens when the database crashes I am going to create a new table. In the table I am going to store the current date, and the current SCN. Using the current SCN we will be able to determine the redo log that contains the table creation.
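The table creation is just a simple CTAS; something like the following captures the current date and the current SCN (the table name here is only an example):

create table redo_demo as
  select sysdate created_time, current_scn scn
    from v$database;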
Abort the database
As you probably know, if I shut down the database gracefully, the DB will automatically clean out the redo logs and archive their contents. Because I want to demonstrate what happens with a crash, I am going to shut the database down with an ABORT to ensure the log switch doesn't occur. Then I will start the database in mount mode so I can look at the current redo log information.
Verify that the log switch did not occur
Next I am going to look at the REDO Log information and verify that my table creation (SCN 32908369) is still in the active redo log and did not get archived during the shutdown.
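Checking this is just a query against v$log from the mounted instance, something like:

select group#, sequence#, first_change#, status, archived
  from v$log
 order by sequence#;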
Restore the database
Next I am going to restore the database from backup.
Recover the database
This is where the magic occurs, so I am going to show what happens step by step.
Recover using archive logs on disk
The first step the database does is to use the current archive logs to recover the database. You can see in the screenshot below that the database recovers the database using archive logs on disk up to sequence #11 for thread 1. This contains all the changes for this thread, but does not include what is in the REDO log sequence #12. Sequence #12 contains the create table we are interested in.
Recover using partial redo log
This step is where the magic of the ZDLRA occurs. You can see from the screen shot below that the RMAN catalog on the ZDLRA returns the redo log information for Sequence #12 even though it was never archived. The ZDLRA was able to create an archive log backup from the partial contents it had in the Redo Staging area.
Open the database and display table contents.
This is where it all comes together. Using the partial redo log information from Redo Log sequence #12, you can see that when the database is opened, the table creation transaction is indeed in the database even though the redo did not become an archive log.
Conclusion: I am hoping this post gives you a better idea of how real-time redo works on the ZDLRA, and how it handles recovering transactions after a database crash.
This is post #1 in a series of posts explaining how to implement TDE (Transparent Data Encryption). In this first post I will take my Multitenant 19c database (remember Multitenant is mandatory with 21c) and configure TDE in my 3 (no extra license needed) pluggable databases.
The database I created for this example contains 3 PDBs as this will give me the flexibility to unplug and move PDBs around.
The names I used are
TDETEST - This is my CDB, and I will only be encrypting the USERS tablespace and the catalog.
TDEPDB1, TDEPDB2, TDEPDB3 - My 3 PDBs. I will be encrypting all user tablespaces that make up these 3 PDBs.
Since I have only a single instance, the location I chose to put the local wallet file is under $ORACLE_BASE/admin/$ORACLE_SID/wallet (/home/oracle/app/oracle/admin/tdetest/wallet).
In a RAC environment you have a few choices of where to put the wallet file. The most important thing though, is to ensure each node has an identical copy in a RAC environment.
In a RAC environment you can put the wallet file:
On the local file system on each node, but be sure they are all identical. It is best to put them within the $ORACLE_BASE to make it easier in an out-of-place upgrade.
On a shared filesystem. This can be ACFS, DBFS, or just NFS.
With ASM. It is possible to set the location to be an ASM file for example "+DATA/TDETEST/wallet"
NOTE: When deciding where to store your TDE wallet there are 2 items to keep in mind.
1) You need to back up your wallet. Without the wallet file you can't access your database. Once encryption is implemented, the wallet needs to be available.
2) The wallet needs to be backed up separately from the database. If the wallet and the database backup are stored together, anyone can read the backup.
In my further blogs I will be converting from using a local wallet file to store my encryption keys, to using OKV along with a local wallet that caches the autologin information.
To migrate to TDE, I chose to perform the quickest method "Restore tablespace as encrypted". With my test database, I did not have a standby database.
NOTE: With a standby database the fastest way to convert to TDE would be with a "standby first" strategy. This is explained in this MAA document which includes an automated procedure. With this strategy you would convert your standby database to utilize TDE with a restore as encrypted, while the primary database stays untouched. Once the standby is converted, you would perform a switchover (to the standby) and encrypt the "current standby", which was the primary. Once both are encrypted, you would switch back and the process is completed.
Step 1 - Perform a full backup of the database immediately prior to starting the process. Since I am using "restore tablespace as encrypted" this will allow me to open the database with minimal recovery. Once backed up, you also should create a restore point to quickly identify the point after the full backup prior to the encryption.
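The backup itself is a standard RMAN level 0; a minimal version (channel allocation and formats omitted, so treat it as a sketch) would be:

rman> backup incremental level 0 database plus archivelog;

followed by the restore point: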
create restore point pretde;
Step 2 - Set the location of the wallet_root and the TDE configuration. I chose to use the WALLET_ROOT parameter (new with 19c, I believe) because it gives the most flexibility. Keep in mind that in order to complete step 2, the database will need to be bounced.
alter system set WALLET_ROOT='/home/oracle/app/oracle/admin/tdetest/wallet/' scope=spfile;
startup force;
alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;
Step 3 - We are going to take a look at the database and the parameters that are set for the encryption wallet. Below is the formatted query I am going to be using throughout this post.
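It is essentially a join of v$encryption_wallet to the container names, along these lines (column formatting commands trimmed; not necessarily identical to my formatted version):

select c.name            pdb_name,
       w.wrl_type        type,
       w.wrl_parameter,
       w.status,
       w.wallet_type,
       w.keystore_mode   keystore,
       w.fully_backed_up backed_up
  from v$encryption_wallet w
  join v$containers c on c.con_id = w.con_id
 order by c.con_id;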
Below is the output of the query and the current settings as of this point. You can see that there are rows for all my PDBs, and that the status is "NOT_AVAILABLE" since I have not created a wallet or any master keys yet. You can also see that the keystore is UNITED, meaning that all the keys (both for the CDB and all the PDBs) are assumed to be contained in the same Wallet file.
Also note that the WRL_PARAMETER is set based on the WALLET_ROOT setting. The TDE wallet file is defaulted to be the WALLET_ROOT/tde directory for the CDB.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ NOT_AVAILABLE UNKNOWN NONE UNDEFINED
PDB$SEED FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
TDEPDB2 FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
TDEPDB3 FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
Step 4. Now I need to create the keystore and open it for the CDB, and all my individual PDBs. Note that each PDB shares the keystore with the CDB. In isolated mode, I would create an individual keystore for each PDB and they would be in subdirectories under the WALLET_ROOT location.
But first I need to create the directory to hold the keystore wallet.
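Creating the directory is just an OS-level step under the WALLET_ROOT location set earlier:

$ mkdir -p /home/oracle/app/oracle/admin/tdetest/wallet/tde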
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde' IDENTIFIED BY "F1LE2021!";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!";

alter session set container=tdepdb1;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;

alter session set container=tdepdb2;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;

alter session set container=tdepdb3;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
Now let's look at the encryption settings in v$encryption_wallet. Below you can see that there is a single wallet setting (UNITED keystore), and the status is "OPEN_NO_MASTER_KEY". The master key has not been set for CDB, or the PDBs.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN_NO_MASTER_KEY PASSWORD NONE UNDEFINED
PDB$SEED FILE CLOSED UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE OPEN_NO_MASTER_KEY PASSWORD UNITED UNDEFINED
TDEPDB2 FILE OPEN_NO_MASTER_KEY PASSWORD UNITED UNDEFINED
TDEPDB3 FILE OPEN_NO_MASTER_KEY PASSWORD UNITED UNDEFINED
Step 5. Now we create the master keys for the CDB and each PDB. A master key is needed to encrypt the tablespace encryption keys stored in the datafiles.
NOTE: I added a tag that identifies the key with the CDB or PDB it is created for. I highly recommend using tags to identify the keys within the wallet. Identifying the master encryption key for an individual PDB will be important when moving PDBs between CDBs.
ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'TDETEST MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDETEST_TDEKEY_APR1_backup';

alter session set container=tdepdb1;
ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'TDEPDB1 MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDEPDB1_TDEKEY_APR1_backup' CONTAINER = CURRENT;

alter session set container=tdepdb2;
ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'TDEPDB2 MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDEPDB2_TDEKEY_APR1_backup' CONTAINER = CURRENT;

alter session set container=tdepdb3;
ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'TDEPDB3 MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDEPDB3_TDEKEY_APR1_backup' CONTAINER = CURRENT;
And once again let's look at the settings in v$encryption_wallet. This time you will see that the wallet is open for all CDBs/PDBs except for the PDB$SEED. The wallet type is "PASSWORD" which means that the wallet needs to be manually opened with a password.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN PASSWORD NONE NO
PDB$SEED FILE CLOSED UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE OPEN PASSWORD UNITED NO
TDEPDB2 FILE OPEN PASSWORD UNITED NO
TDEPDB3 FILE OPEN PASSWORD UNITED NO
Step 6 - We have the master keys set and the wallets are open. We now need to implement TDE by encrypting the tablespaces in my PDBs. As I said before, in my example I used "restore tablespace as encrypted".
Another option is to encrypt online (as of 12c). In this process the database will encrypt each datafile sequentially while the database is online and active.
NOTE : If using online encryption be aware that
It takes much longer than performing a restore, as datafiles are encrypted sequentially. Using "restore tablespace as encrypted" you can parallelize the restore across multiple channels.
The process needs enough space for 2 copies of the largest datafile. If using bigfiles, this can be quite a bit of storage.
You need to monitor the process to ensure it completes successfully.
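For completeness, an online conversion of a single tablespace looks something like this (12.2 and later syntax; run it in each PDB for every tablespace you want encrypted):

alter tablespace users encryption online using 'AES256' encrypt;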
The next step is to startup mount, open the wallets, restore the USERS tablespaces in my 3 PDBs along with the USERS tablespace in my CDB, and then recover and open the database.
sqlplus> shutdown immediate;
sqlplus> startup mount;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!";
sqlplus> alter session set container=tdepdb1;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
sqlplus> alter session set container=tdepdb2;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
sqlplus> alter session set container=tdepdb3;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
rman target / catalog rmancat/oracle@rmancat
rman> restore tablespace users as encrypted;
rman> restore tablespace tdepdb1:users as encrypted;
rman> restore tablespace tdepdb2:users as encrypted;
rman> restore tablespace tdepdb3:users as encrypted;
rman> recover database;
rman> alter database open;
Step 7 - Make sure all new tablespaces are encrypted by default
In order to ensure all new tablespaces are encrypted I am going to set the database parameter.
sql>alter system set encrypt_new_tablespaces = ALWAYS scope = both sid = '*';
Step 8 - Encrypt all credentials that are contained in the root container
In order to encrypt all credentials (like scheduler credentials and DB Link credentials) that are stored in the system catalogs, you need to log in as a user granted the "SYSKM" role and execute:
sql> alter database dictionary encrypt credentials container = current;
Step 9 - I am going to verify that the pluggable databases are encrypted, along with the catalog.
First let's look at the existing keys using the query below
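The query is along these lines, run from the root with the keystores open (assuming the standard v$encryption_keys columns):

select creator_pdbname pdb_name,
       activation_time,
       key_id,
       tag
  from v$encryption_keys
 order by activation_time;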
I can see the keys that are created in each container, including the ROOT
PDB Name Activation Time Key ID Tag
--------------- -------------------------- ------------------------------------------------------- ----------------------------------------
CDB$ROOT 19-Apr-2021 05:12:41pm AbwcWGicr0+rvyhrSB+rKQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDETEST MASTERKEY_APRIL19
TDEPDB1 19-Apr-2021 05:19:11pm AX9pkB+zQE/Wv6Qek13TeLkAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDEPDB1 MASTERKEY_APRIL19
TDEPDB2 19-Apr-2021 05:19:11pm AUKkROD1TE8wv0jfJhN63JYAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDEPDB2 MASTERKEY_APRIL19
TDEPDB3 19-Apr-2021 05:19:13pm AZUWZDWpxk9sv0GrljDFr7sAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDEPDB3 MASTERKEY_APRIL19
I am going to use the query below to look at the encryption setting on each datafile.
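It is roughly the sketch below: CDB_TABLESPACES supplies the encryption flag and V$ENCRYPTED_TABLESPACES the master key id. Note that in this simplified version the key id is only populated for encrypted tablespaces, whereas my formatted output also shows the container's key for the unencrypted ones.

select c.name pdb_name,
       t.tablespace_name,
       t.encrypted,
       utl_raw.cast_to_varchar2(utl_encode.base64_encode(e.masterkeyid)) master_key_id
  from cdb_tablespaces t
  join v$containers c
    on c.con_id = t.con_id
  left join v$tablespace v
    on v.con_id = t.con_id
   and v.name   = t.tablespace_name
  left join v$encrypted_tablespaces e
    on e.con_id = v.con_id
   and e.ts#    = v.ts#
 order by c.con_id, t.tablespace_name;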
Below is the output. I see that all the datafiles were properly encrypted and are available.
PDB Name Tablespace Name Encrypted Master Key ID
--------------- ------------------------------ ---------- ------------------------------------------------------------
CDB$ROOT SYSAUX NO AbwcWGicr0+rvyhrSB+rKQA=
SYSTEM NO AbwcWGicr0+rvyhrSB+rKQA=
TEMP NO AbwcWGicr0+rvyhrSB+rKQA=
UNDOTBS1 NO AbwcWGicr0+rvyhrSB+rKQA=
USERS YES AbwcWGicr0+rvyhrSB+rKQA=
TDEPDB1 SYSAUX NO AX9pkB+zQE/Wv6Qek13TeLk=
SYSTEM NO AX9pkB+zQE/Wv6Qek13TeLk=
TEMP NO AX9pkB+zQE/Wv6Qek13TeLk=
UNDOTBS1 YES AX9pkB+zQE/Wv6Qek13TeLk=
USERS YES AX9pkB+zQE/Wv6Qek13TeLk=
TDEPDB2 SYSAUX NO AUKkROD1TE8wv0jfJhN63JY=
SYSTEM NO AUKkROD1TE8wv0jfJhN63JY=
TEMP NO AUKkROD1TE8wv0jfJhN63JY=
UNDOTBS1 YES AUKkROD1TE8wv0jfJhN63JY=
USERS YES AUKkROD1TE8wv0jfJhN63JY=
TDEPDB3 SYSAUX NO AZUWZDWpxk9sv0GrljDFr7s=
SYSTEM NO AZUWZDWpxk9sv0GrljDFr7s=
TEMP NO AZUWZDWpxk9sv0GrljDFr7s=
UNDOTBS1 YES AZUWZDWpxk9sv0GrljDFr7s=
USERS YES AZUWZDWpxk9sv0GrljDFr7s=
And I am going to look at the catalog to be sure
select enforcement from DICTIONARY_CREDENTIALS_ENCRYPT;
ENFORCEM
--------
ENABLED
Step 10 - I am adding the ability to access the keystore without having to enter the password. This can make it much easier to open the keystore if you don't choose to make the keystore wallet AUTO_LOGIN.
I am putting my keystore password in an AUTO_LOGIN wallet as a secret. This wallet needs to be created in a directory called tde_seps. In my case, since I am using WALLET_ROOT, the directory containing the secret must be WALLET_ROOT/tde_seps.
The password portion of "ADMINISTER KEY MANAGEMENT" then becomes IDENTIFIED BY EXTERNAL STORE.
NOTE: I am using the phrase "FOR CLIENT 'TDE_WALLET'". 'TDE_WALLET' is the client identifier the database looks for in this wallet when you open the keystore using IDENTIFIED BY EXTERNAL STORE.
ADMINISTER KEY MANAGEMENT
ADD SECRET 'F1LE2021!' FOR CLIENT 'TDE_WALLET'
USING TAG 'TDE file keystore password'
TO LOCAL AUTO_LOGIN KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde_seps';
Now to verify it I am going to restart my database and open the wallet using the external store (rather than hardcoding in the password).
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE;
alter session set container=tdepdb1;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE CONTAINER = CURRENT;
alter pluggable database open;
alter session set container=tdepdb2;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE CONTAINER = CURRENT;
alter pluggable database open;
alter session set container=tdepdb3;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE CONTAINER = CURRENT;
alter pluggable database open;
And finally, verify that the keystores are open (using the external store) for my CDB and PDBs.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN PASSWORD NONE NO
PDB$SEED FILE CLOSED UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE OPEN PASSWORD UNITED NO
TDEPDB2 FILE OPEN PASSWORD UNITED NO
TDEPDB3 FILE OPEN PASSWORD UNITED NO
Step 11 - I am going to change the wallets to be AUTO_LOGIN, bounce the database and verify that the encryption settings are all correct.
sqlplus / as sysdba
sql> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY EXTERNAL STORE;
sql> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde/' IDENTIFIED BY "F1LE2021!";
sql> shutdown immediate
sql> startup
And v$encryption_wallet shows me that my wallets are all open, and that they are AUTOLOGIN.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN AUTOLOGIN NONE NO
PDB$SEED FILE OPEN AUTOLOGIN UNITED NO
TDEPDB1 FILE OPEN AUTOLOGIN UNITED NO
TDEPDB2 FILE OPEN AUTOLOGIN UNITED NO
TDEPDB3 FILE OPEN AUTOLOGIN UNITED NO
Now I am ready to perform a new FULL backup of the pluggable databases, and they are ready for use.
That's all there is to implementing TDE with a wallet file. Next post, I am going to convert my wallet to OKV managed wallets.
Recently version 19.1.1.2 of the ZDLRA software was released, and one of the features is something called "Smart Incremental". I will walk through how this feature works, and help you understand why features like this are "ZDLRA only".
I am going to start by walking through how incremental backups become "virtual full backups", and that will give you a better picture of how "smart incremental" is possible.
The most important thing to understand about these features is that the RMAN catalog itself is within the ZDLRA AND the ZDLRA has the ability to update the RMAN catalog.
How does a normal backup strategy work ?
That is probably the best place to start. What DBAs typically do is perform a WFDI (Weekly Full Daily Incremental) backup. To keep my example simple, I will use the following assumptions.
My database contains 3 datafiles: SYSTEM, SYSAUX, and USERS, but I will only use the example of backing up the USERS datafile.
Each of these 3 datafiles is 50 GB in size
I am only performing a differential backup which creates a backup containing the changes since the last backup (full OR incremental).
My database is in archivelog mode *
* NOTE: With ZDLRA you can back up a database running in NOARCHIVELOG mode and still take advantage of virtual fulls. The database needs to be in a MOUNTED state when performing the incremental backup.
If placed in a table, the backups for datafile USERS would look like this. Checkpoint SCN is the current SCN of the database at the start of the backup.
If I were to look at what is contained in the RMAN catalog (RC_BACKUP_DATAFILE), I would see the same backup information, but I would see the SCN information in 2 columns.
Incremental change # is the oldest SCN contained in the backupset. This is the starting SCN of the previous backup that this backup is based on.
Checkpoint Change # is the starting SCN of the backup. Everything newer than this SCN (including this SCN) needs to be defuzzied.
Normal backup progression (differential)
When performing an incremental RMAN backup of a datafile, the first thing RMAN does is decide which blocks need to be backed up. Because you are performing an incremental backup, you may be backing up all of the blocks, only some of the blocks, or even none of the blocks if the file has not changed.
This is a decision RMAN makes by querying the RMAN catalog entries (or the controlfile entries if you are not using an RMAN catalog).
Now let's walk through this decision process. Each RMAN incremental differential's starting SCN is based on the beginning SCN of the previous backup (except for the full).
By looking at the RMAN catalog (or controlfile), RMAN determines which blocks need to be contained in each incremental backup.
Normal backup progression (cumulative differential)
Up to release 19.1.1.2, the recommendation was to perform a Cumulative Differential backup. A cumulative differential backup uses the starting SCN of the last full backup (rather than the last incremental backup) to determine the starting point of the incremental backup.
The advantage of cumulative over differential is that a single cumulative backup can be applied to the last full, taking the place of applying multiple differential backups. However, cumulative backups get bigger with every day that passes between full backups, because they contain all blocks changed since the last full.
Below is what a cumulative schedule would look like and you can compare this to the differential above.
You can see that each cumulative backup starts with the Checkpoint SCN of the last full to ensure that all blocks changed since the full backup started are captured.
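The difference between the two strategies is just one keyword on the RMAN command line:

# differential: blocks changed since the most recent level 1 or level 0
backup incremental level 1 database;

# cumulative: blocks changed since the most recent level 0
backup incremental level 1 cumulative database;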
The RMAN catalog entries would look like this.
If you were astute, you would notice a few things happened with the cumulative differential vs the differential:
The backup size got bigger every day
The time it took to perform the incremental backup got longer
The range of SCNs contained in the incremental is larger for a cumulative backup.
As you most likely know, one of the most important features of the ZDLRA is the ability to create a "virtual full" from an incremental backup.
If we look at what happens with a cumulative differential (from above), I will fill in the virtual full RMAN catalog entries by shading them light green.
The process of performing backups on the ZDLRA is exactly the same as it is for the above cumulative, but the RMAN catalog looks like this.
What you will notice by looking at this, compared to the normal cumulative process, is that:
For every cumulative incremental backup there is a matching virtual full backup.
The virtual full backup appears (from the newly inserted catalog entry) to have been taken at the same time, and with the same starting SCN, as the cumulative incremental.
The size of the virtual full is 0 since it is virtual and does not take up any space.
The completion time for the cumulative incremental backup is the same as the differential backups. Because the RMAN logic can see the virtual full entry in the catalog, it executes the cumulative incremental EXACTLY as if it is the first differential incremental following a full backup.
Smart Incremental backups -
Now all of this leads us to smart incremental backups. Sometimes the cumulative backup process doesn't work quite right. A few of the reasons this can happen are:
You perform a full backup to a backup location other than the ZDLRA. This could be because you are backing up to the ZDLRA for the first time and replacing a current backup strategy, or maybe you created a special backup to disk to seed a test environment (using a KEEP backup for this will alleviate this issue). The cumulative incremental backup will compare against the last full regardless of where it was taken (there are exceptions if you always use tags to compare).
You implement TDE or rekey the database. Implementing TDE (Transparent Data Encryption) changes the blocks, but does not change the SCN numbers of the blocks. A new full backup is required.
Previously, you would have to perform a special full backup to correct these issues. In the example below you can see what happens (without smart incremental) to the RMAN catalog if you perform a backup on Thursday at 12:00 to disk to refresh a development environment.
Since the cumulative backups are based on the last full backup, the Thursday - Saturday backups contain all the blocks that have changed since the disk backup started on Thursday at 12:00.
And, since it is cumulative, each day's backup is larger, and takes longer.
This is when you would typically have to force a new level 0 backup of the datafile.
What the smart incremental does
Since the RMAN catalog is controlled by the ZDLRA it can correct the problem for you. You no longer need to perform cumulative backups as the ZDLRA can fill in any issues that occur.
In the case of the full backup to disk, it can "hide" that entry and continue to correctly perform differential backups. It would "hide" the disk backup that occurred, and inform the RMAN client that the last full backup as of Thursday night is NOT the disk backup, but the previous virtual full backup.
In the case of TDE, it can "hide" all of the level 0 virtual full backups and the L1 differential backups (which will force a new level 0).
All of this is done without updating the DB client version. All the magic is done within the RMAN catalog on the ZDLRA.
One of the questions that keeps coming up with ZDLRA is how to manage the backups for a database that has either:
Been migrated to another ZDLRA
Been retired, but the backup needs to be kept for a period of time
The best way to deal with this is by using Protection Policies.
How Protection Policies work:
If you remember right, Protection Policies are a way of grouping together databases that have the same basic characteristics.
The most important of which are :
Name/Description - Used to identify the Protection Policy
Recovery Window Goal - How many days of recovery do you want to store at a minimum
Max Retention Window - (Optional) Maximum number of days of backups you want to keep
Unprotected Window - (Optional) Used to set alerts for databases that are no longer receiving recovery data.
One of the common questions I get is.. What happens if I change the Protection Policy associated with my database ?
Answer: By changing the Protection Policy a database is associated with, you are only changing the metadata. Once the change is made, the database follows the Protection Policy rules it is now associated with, and is no longer associated with the old Protection Policy.
How this plays out with a real example is...
My Database (PRODDB) is a member of a Protection Policy (GOLD) which has a Recovery Window Goal of 20 days, and a Max Retention Window of 40 days (the default value being 2x the Recovery Window Goal).
My Database (PRODDB) currently has 30 days of backups, which is right in the middle.
What would normally happen for this database is (given enough space), backups will continue to be kept until PRODDB has 40 days of backups. On day 41, a maintenance job (which runs daily) will execute, and find that my database, PRODDB, has exceeded its Recovery Window Goal. This job will remove all backups (in a batch process for efficiency) that are older than 20 days.
BUT ........................
Today, I moved my database, PRODDB, to a new protection policy (Silver) which only has a 10 day Recovery Window Goal, and a Max Retention Window of 20 days.
As I pointed out, the characteristics of the NEW Protection Policy will be used, and the next time the daily purge occurs, this database will be flagged, and all backups greater than the Recovery Window Goal will be purged.
Retiring databases: -
One very common question is how to handle retiring a database. As you might know, when you remove a database from the ZDLRA, ALL backups are removed from the ZDLRA.
When a database is no longer sending backups to the ZDLRA, the backups will continue to be purged until only a single level 0 backup remains. This is to ensure that at least one backup is kept, regardless of the Max Retention Window.
The best way to deal with retiring a database (and still keep the last level 0 backup) is through the use of Protection Policies.
In my example for my database PRODDB, I am going to retire the database instead of moving it to the Silver policy. My company's standard is to keep the final backup for my database available for 90 days, and on day 91 all backups can be removed.
These are the requirements from the above information:
At least 1 backup is kept for 90 days, even though my Max Retention Window was 40 days.
I want to know when my database has been retired for 90 days so I can remove it from the ZDLRA.
In order to accomplish both of these items, I am going to create a Protection Policy named RETIRED_DB with the following attributes
Recovery Window Goal of 2 days
Max Retention Window of 3 days
Unprotected Data Window of 90 days
New Alert in OEM to tell me when a database in this policy violates its Unprotected Data Window
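Creating such a policy is done through the DBMS_RA API on the ZDLRA. The call below is only a sketch: the parameter names are from my recollection of DBMS_RA.CREATE_PROTECTION_POLICY, and the storage location name ('DELTA', the default) is an assumption, so verify both for your appliance. The OEM alert is configured separately.

begin
  dbms_ra.create_protection_policy(
    protection_policy_name => 'RETIRED_DB',
    description            => 'Retired databases - keep final L0, alert at 90 days',
    storage_location_name  => 'DELTA',              -- assumed default location name
    recovery_window_goal   => interval '2' day,
    max_retention_window   => interval '3' day,
    unprotected_window     => interval '90' day);
end;
/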
If you look closely at the attributes, you will notice that I decreased the Recovery Window Goal to allow backups to be removed after 3 days. I also set the Unprotected Data Window to be 90 days.
What this looks like over time is
As you can see, by moving it to the new policy, within a few days all backups except for the most recent full backup are removed. You can also see that on day 91 (when it's time to remove this database) I will be getting an alert.
Migrating Databases:
Migrating databases is very similar to retiring databases, except that I don't want to remove the old backups until they naturally expire. For my example of PRODDB with a Recovery Window Goal of 20 days, as soon as I have a new level 0 on the new ZDLRA, I will move this database to a new policy (GOLD_MIGRATED) with the following attributes.
Recovery Window Goal of 20 days, since I still need to preserve old backups
Max Retention Window of 21 days. This will remove the old backups as they age off.
Unprotected Data Window of 21 days, which will alert me that it is time to remove this database.
Here is how this would look over time.
Conclusion:
When retiring or migrating databases, Protection Policies can be leveraged to both
Ensure backups are removed as they age out until only a single L0 (Full) remains
Alert you when it is time to remove the database from the ZDLRA.
One of the key items to concentrate on is the DISK_RESERVED_SPACE for each database. Simply put, DISK_RESERVED_SPACE is the amount of space you set aside, on a database-by-database basis, to keep enough backups to support the recovery window goal.
A simple example of the DISK_RESERVED_SPACE is.....
My Full backup takes up 40 TB. Keeping 14 days of recovery (which is my recovery window goal) takes up an additional 1 TB/day.
In this example, I need 54 TB of storage to keep 14 days of recovery.
For this database, I would set the reserved space to roughly 56.7 TB (54 TB plus 5%) to ensure I have extra space available to handle any unexpected peaks.
Easy right ? The value for RECOVERY_WINDOW_SPACE in the RA_DATABASE view gives you the space needed to support the Recovery Window.
But.. the reason I called this a checkup is that I wanted to make sure some thought is given to the setting. If your database is changing (which it almost always is), then this needs to be reviewed and adjusted.
Below are some simple rules of thumb of what to think about when adjusting DISK_RESERVED_SPACE
Stable Mature Databases - If your database is mature and the workload is consistent, the DISK_RESERVED_SPACE should be 5-10% larger than the RECOVERY_WINDOW_SPACE. This setting should be reviewed at least 4 times a year to be sure it is accurate.
Actively Changing Databases - If the database has a changing workload (maybe it is still growing, or maybe new features are being added to the application), the DISK_RESERVED_SPACE should be set 5-10% larger than RECOVERY_WINDOW_SPACE, plus a percentage for growth. This should be reviewed monthly (at a minimum), or whenever a big growth spurt is planned.
Databases with Peaks - For some businesses, there may be databases that have peaks. Maybe they support "Black Friday", or maybe they support huge sales around the Super Bowl. The DISK_RESERVED_SPACE should be 5-10% larger than the RECOVERY_WINDOW_SPACE needed during this peak. This will ensure that the space is available when the peak comes.
TDE databases - when a database migrates to TDE, there is a time period where storage is needed for the Pre-TDE backup, and the Post-TDE backup. You need to adjust the DISK_RESERVED_SPACE to take this into account. NOTE: Staggering the migration when you migrate to TDE can avoid running out of DISK_RESERVED_SPACE for all databases.
Databases with ILM - If you have databases performing ILM activities, this affects the backup space needed. A simple example would be a database whose historical data is moved to an HCC tablespace when it becomes inactive. Space needs to be reserved in DISK_RESERVED_SPACE to hold the old structure, the new structure, and the archive logs created during this change.
My suggestion to simplify this is to use PROTECTION POLICIES. Each type of database can be in its own protection policy. Review the DISK_RESERVED_SPACE at the appropriate time for each policy.
One of the items that comes up with the ZDLRA is a storage checkup. The DBAs want to know more detail about the storage utilization of each database.
Once you understand the above concepts, you realize that there are 2 major pieces that affect the storage utilization for a database.
1) How much space a level 0 backup takes. Since the ZDLRA virtualizes full backups, each database has at least 1 copy of each block on the ZDLRA. There could be only 1 copy if the block doesn't change, or 30 copies of the same block if it changes every day (like system tablespace data). What you are interested in is the size of 1 full backup.
2) The amount of storage 1 day of changes takes up (on average). This is the stored size of an incremental backup (if you perform an incremental every day), plus the stored size of the archive logs for a day of workload.
By combining these 2 pieces you can then calculate how much storage is needed for X number of days of backups.
Now how do I do this ? Below is the query I use, and I will explain the columns in it.
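Everything comes from the RA_DATABASE view on the ZDLRA. The sketch below is a simplified approximation of the idea rather than my exact query: the column names come from RA_DATABASE, the values are assumed to be reported in GB, and the ONE_DAY_SPACE calculation simply spreads the recovery-window space (minus one full backup) over a 14 day window.

select db_unique_name,
       round((recovery_window_space - size_estimate) / 14, 1) one_day_space,
       round(recovery_window_space, 1)                        recovery_window_space,
       round(disk_reserved_space, 1)                          disk_reserved_space,
       round(size_estimate, 1)                                estimate_db_size
  from ra_database
 order by db_unique_name;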
ONE_DAY_SPACE
Estimated space usage (GB) for a single day of backups
RECOVERY_WINDOW_SPACE
How much space is needed for a 14 day recovery window.
DISK_RESERVED_SPACE
How much space is set aside for backups ?
ESTIMATE_DB_SIZE
How big is the database (GB) estimated to be ?
ESTIMATE_RWG_SPACE
This returns the space (GB) needed for the recovery window from RA_DATABASE, which may not match calculating it using the columns returned.
Now let's take a look at what I can do with this..
This is an example, where I summarize the space utilization for a couple of ZDLRAs.
And here I looked at the detail for the databases.
This report (just above this) gives me some really useful information.
I can see DB02 has a really big change rate. The database is only about 2.5 TB, but it is using 14 TB of storage.
I can see the disk_reserved_space is way too small for DB01. DB01 needs about 15 TB to keep its recovery window goal, but the disk_reserved_space is only 500 GB.
DB03 looks good. Change rate is not significant, and disk_reserved_space is set a little higher than the RECOVERY_WINDOW_SPACE.
Now finally, I was able to use the one_day_space and graph out the space utilization for each day's recovery window.
This graph shows each day of RWG and its storage needs, the current USED space, and the USABLE space. I can see that even though my used space is close to the usable space, there is still room for growth. I can also use this to see what happens to my storage utilization if I change the RWG to 10 days.
I highly recommend periodically doing a health check on your storage utilization, and review the disk_reserved_space.
I hope this gives some information you can use to take a closer look.
**NOTE** this query is accurate as of 9/30/20. It might need to be adjusted with future releases.
Also, it is only as accurate as the data. The longer a database has been backing up with a consistent workload, the better the estimate.