This post covers what you need to consider around backup/recovery when implementing true multitenant databases (more than one PDB).
First, let's look at the pieces of the CDB and PDB architecture that matter for recovery.
What you notice (and what I highlighted) is that redo logs, archived redo logs, and flashback logs are associated with the CDB.
I verified this by querying the v$parameter view to display where these parameters can be modified.
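A query along these lines against v$parameter produces the output below (the column aliases are mine, just for readability):

SQL> select name "Parameter Name",
            isses_modifiable "Session",
            issys_modifiable "System",
            isinstance_modifiable "Instance",
            ispdb_modifiable "PDB"
     from   v$parameter
     where  name in ('log_archive_dest_1', 'db_recovery_file_dest');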
Parameter Name Session System Instance PDB
------------------------------ ---------- ---------- ---------- ----------
log_archive_dest_1 TRUE IMMEDIATE TRUE FALSE
db_recovery_file_dest FALSE IMMEDIATE FALSE FALSE
This is important to understand when considering backup/recovery.
This means:
- Archiving is an all-or-nothing choice. The CDB is either in ARCHIVELOG or NOARCHIVELOG mode; all PDBs inherit this from the CDB.
- Force logging is at the CDB level, the PDB level and the tablespace level.
- There is one set of archive logs and redo logs.
- Dataguard (DG) is all or nothing.
- There is one location for flashback logs.
- There is one BCT file (block change tracking).
- PDB recovery is within the same CDB
Let's understand the implications one step at a time.
Archiving is all or nothing.
If you want to perform a large nologging load into a PDB, which I commonly do for POCs, you cannot turn off archive logging for just that PDB.
Make sure you group together databases that share the NOARCHIVELOG requirement. In my case, where I perform POCs and need to load data quickly, I can load the PDB in a NOARCHIVELOG container, then unplug it and plug it into the proper container later, as sketched below.
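A minimal sketch of that unplug/plug flow, assuming a hypothetical PDB named poc_pdb and a staging path of /tmp:

In the NOARCHIVELOG container, after the load is finished:
SQL> alter pluggable database poc_pdb close immediate;
SQL> alter pluggable database poc_pdb unplug into '/tmp/poc_pdb.xml';
SQL> drop pluggable database poc_pdb keep datafiles;

Then in the ARCHIVELOG container:
SQL> create pluggable database poc_pdb using '/tmp/poc_pdb.xml' nocopy;
SQL> alter pluggable database poc_pdb open;

Remember (as covered below) that a new full backup is needed once the PDB lands in its new CDB.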
Force logging is at the CDB level, the PDB level and the tablespace level.
There are now three levels at which you can set or override force logging. Here is a great article going through the possible logging levels. The most interesting one to me was the PDB-level logging and the introduction of this command:
SQL> alter pluggable database pdb1 disable force logging;
Pluggable database altered.
SQL> select pdb_name, logging, force_logging, force_nologging from cdb_pdbs;
PDB_NAME LOGGING FORCE_LOGGING FORCE_NOLOGGING
---------- --------- --------------- ---------------
PDB1 LOGGING NO NO
Enabling force logging is recommended when using Dataguard or GoldenGate to ensure you capture all changes. However, there may be some controlled circumstances where you want to allow nologging operations. In my prior blog I wrote about the logical corruption issues that can occur after recovery.
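For reference, the corresponding enable commands at each level look like this (pdb1 and users are just example names):

SQL> alter database force logging;                          -- CDB level
SQL> alter pluggable database pdb1 enable force logging;    -- PDB level
SQL> alter tablespace users force logging;                  -- tablespace level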
There is one set of archive logs and redo logs.
This is something you need to keep in mind for very active databases. If you combine two databases into the same CDB, you may have to double the size and/or number of the redo logs. It also affects the recovery time of a single PDB: the archive logs that are restored and read contain the transactions for all PDBs.
The extra workload of the combined logging is something to keep in mind.
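To size that combined workload, you can look at the current redo configuration at the CDB level with something like:

SQL> select group#, thread#, bytes/1024/1024 size_mb, members, status from v$log;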
DG is all or nothing.
Yes, as you would imagine from the above, the entire redo stream goes to the Dataguard database and gets applied, and every PDB has a corresponding Dataguard PDB.
This is something to think about as it's a little different from non-CDB procedures.
For new PDBs
If you have active DG, the Dataguard PDB will be cloned from the seed database just like it is on the primary database.
If you don't have active DG, you have to copy over the datafiles from the primary database (or restore) onto the Dataguard database server.
For existing PDBs unplugging and plugging
If you want to keep DG, you have to plug into a CDB that already has DG configured. You would unplug from the primary database, and also unplug from the standby database.
When plugging in, you would plug into the standby CDB first, then plug into the primary CDB. This ensures that the standby database is ready when redo starts coming over from the primary CDB.
There is one location for flashback logs
This item isn't a big deal. All the archive logs and all the flashback logs go to the same location, and the size of the fast recovery area is managed as a whole.
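You can confirm the fast recovery area is sized and consumed as a whole at the CDB level with a query like:

SQL> select name, space_limit/1024/1024/1024 limit_gb,
            space_used/1024/1024/1024 used_gb, number_of_files
     from   v$recovery_file_dest;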
There is one BCT file (block change tracking).
This means that when you unplug/plug between CDBs, there is no chance of performing an incremental backup.
I did notice that the V$BLOCK_CHANGE_TRACKING view contains the container ID. I also noticed that 18.1 contains the ability to restore between CDBs (I believe).
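Block change tracking is enabled once, at the CDB level; a quick sketch (the +DATAC1 disk group is just an example location):

SQL> alter database enable block change tracking using file '+DATAC1';
SQL> select status, filename from v$block_change_tracking;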
PDB recovery is within the same CDB
I believe this changed with 18.1, but I'm not positive. When you register your "database" in the RMAN catalog, the CDB database gets registered. All the PDBs within the CDB are backed up with the CDB's DBID. Each individual PDB's backups are cataloged within the CDB by container ID. Yes, you can back up PDBs individually, and you can recover PDBs individually. However, once you unplug from the CDB and plug into another CDB, you need to take a new full backup. From an RMAN catalog standpoint, the PDB is now part of a new database.
Below is an example of the output from:
RMAN> List backup of pluggable database pocpdb2;
Piece Name: +RECOC1/POCCDB/675073AC990B6478E05370A1880AEC9C/BACKUPSET/2018_03_14/nnndn1_tag20180314t131530_0.330.970751731
List of Datafiles in backup set 498531
Container ID: 4, PDB Name: POCPDB2
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
15 1 Incr 111150927 14-MAR-18 NO +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/system.298.970659025
16 1 Incr 111150927 14-MAR-18 NO +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/sysaux.299.970659025
17 1 Incr 111150927 14-MAR-18 NO +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/undotbs1.297.970659025
18 1 Incr 111150927 14-MAR-18 NO +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/undo_2.301.970659041
19 1 Incr 111150927 14-MAR-18 NO +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/users.302.970659041
Notice that the container ID and the PDB name are given.
Backing up a PDB
PDBs can be backed up with the CDB, or by themselves (just like datafiles can). The entire CDB, including all of its PDBs, is backed up with:
RMAN> backup incremental level 1 database plus archivelog not backed up;
A single PDB is backed up by specifying the PDB.
RMAN> backup incremental level 1 pluggable database pocpdb2 plus archivelog not backed up;
Restoring a PDB
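Note that the PDB generally needs to be closed before you restore and recover it, so the full sequence looks roughly like this:

RMAN> alter pluggable database pocpdb1 close immediate;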
RMAN> restore pluggable database pocpdb1;
RMAN> recover pluggable database pocpdb1;
RMAN> alter pluggable database pocpdb1 open;
If you are migrating from non-CDB databases to multiple PDBs (true multitenant), you should think through all the ramifications for backup/recovery.