In this post I will explain what typically happens when RMAN either compresses or encrypts backups, and how the new space efficient encrypted backup feature of the ZDLRA solves these issues.
TDE - What does a TDE encrypted block look like?
In the image above you can see that only the data is encrypted with TDE. The header information (metadata) remains unencrypted. The metadata is used by the database to determine the information about the block, and is used by the ZDLRA to create virtual full backups.
Normal backup of TDE encrypted datafiles
First let's go through what happens when TDE is utilized and you perform an RMAN backup of the database.
In the image below, you can see that the blocks are written and are not changed in any way.
NOTE: Because the blocks are encrypted, they cannot be compressed outside of the database.
Compressed backup of TDE encrypted datafiles
Next let's go through what happens if you perform an RMAN backup of the database AND tell RMAN to create compressed backupsets. As I said previously, the encrypted data will not compress, and because the data is protected with TDE, the backup must remain encrypted.
Below you can see that RMAN handles this with a series of steps.
RMAN will:
Decrypt the data in the block using the tablespace encryption key.
Compress the data in the block (it is unencrypted in memory).
Re-encrypt the whole block (including the headers) using a new encryption key generated by the RMAN job.
You can see in the image below that after executing two RMAN backup jobs, the blocks are encrypted with two different encryption keys. Each subsequent backup job will also have new encryption keys.
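To put the flow above in context, here is a minimal sketch of how a compressed, encrypted RMAN backup is typically taken. These are standard RMAN commands, and they assume the TDE wallet is open so transparent backup encryption can be used:
RMAN> configure encryption for database on;
RMAN> backup as compressed backupset database;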
Compression or Deduplication
This leaves you having to choose one or the other when performing RMAN backup jobs to a deduplication appliance. If you execute a normal RMAN backup, there is no compression available, and if you utilize RMAN compression, it is not possible to dedupe the data. The ZDLRA, since it needs to read the header data, did not support using RMAN compression.
How space efficient encrypted backups work with TDE
So how does the ZDLRA solve this problem to be able to provide both compression and the creation of virtual full backups?
The flow is similar to using RMAN compression, BUT instead of using RMAN encryption, the ZDLRA library encrypts the blocks in a special format that leaves the header data unencrypted. The ZDLRA library only encrypts the data contents of blocks, using these steps:
Decrypt the data in the block using the tablespace encryption key.
Compress the data in the block (it is unencrypted in memory).
Re-encrypt the data portion of the block (not the headers) using a new encryption key generated by the RMAN job.
In the image below you can see the flow as the backup migrates to this feature. The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear so the ZDLRA can still create a virtual full backup.
This allows the ZDLRA to both compress the blocks AND provide space efficient virtual full backups.
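For reference, my understanding is that the space efficient encrypted backup format is requested on the channel that uses the ZDLRA library. Treat the sketch below as an assumption to verify against your ZDLRA documentation: the RA_FORMAT setting and the placeholder wallet values are not a tested configuration.
run
{
# RA_FORMAT=TRUE is the assumed parameter that requests the space efficient encrypted format
allocate channel sbt1 device type 'SBT_TAPE'
 parms 'SBT_LIBRARY=libra.so, ENV=(RA_WALLET=<wallet location and credential alias>, RA_FORMAT=TRUE)';
backup incremental level 1 filesperset 1 database;
}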
How space efficient encrypted backups work with non-TDE blocks
So how does the ZDLRA feature work with non-TDE data?
The flow is similar to that of TDE data, but the data does not have to be decrypted first. The blocks are compressed using RMAN compression and are then encrypted using the new ZDLRA library.
In the image below you can see the flow as the backup migrates to this feature. The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear so the ZDLRA can still create a virtual full backup.
I hope this helps to show you how space efficient encrypted backups work, and how they are a much more efficient way to both protect your backups with encryption and utilize compression.
NOTE: using space efficient encrypted backups does not require the ACO or ASO options.
Cloning a single PDB isn't always easy to do, especially if you are trying to use an existing backup rather than copying from an existing database. In this blog post I will walk through how to restore a PDB from an existing multitenant backup to another host and plug it into another CDB.
My environment is:
DBCS database FASTDB
db_name= fastdb
db_unique_name = fastdb_67s_iad
DB Version = 19.19
TDE = Using local wallet
Backup = Object Storage using the Tooling
RMAN catalog = Using RMAN catalog to emulate ZDLRA
PDB name = fastdb_pdb1
Step #1 - Prepare destination
The first step is to copy over all the necessary pieces for restoring the database using the object store library.
TDE wallet
Tape Library
Tape Library config file
SEPS wallet used by backup connection
SPFILE contents to build a pfile
NOTE: When using a ZDLRA as a source you need to copy over the following pieces.
TDE wallet
ZDLRA library (or use the library in the $ORACLE_HOME)
SEPS wallet used by the channel allocation to connect to the ZDLRA
SPFILE contents to build a pfile
Also create any directories needed (like audit file location).
mkdir /u01/app/oracle/admin/fastdb_67s_iad/adump
I added the entry to the /etc/oratab file and changed my environment to point to this database name.
In my case I copied the following directories and subdirectories to the same destination on the host.
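As a rough sketch, the copy can be as simple as scp of those directories. The source host name and the wallet path below are examples based on a DBCS-style layout, not fixed locations:
# TDE wallet (example DBCS location)
scp -r oracle@source-host:/opt/oracle/dcs/commonstore/wallets/fastdb_67s_iad /opt/oracle/dcs/commonstore/wallets/
# object storage (tape) library, its config file, and the SEPS wallet used by the channel
scp -r oracle@source-host:/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad /opt/oracle/dcs/commonstore/oss/
# pfile built from the source SPFILE contents
scp oracle@source-host:/tmp/initfastdb.ora $ORACLE_HOME/dbs/initfastdb.ora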
Step #2 - Restore the controlfile
The next step is to restore the controlfile to my destination host.
I grabbed 2 pieces of information from the source database:
DBID - This is needed to restore the controlfile from the backup.
Channel configuration.
With this I executed the following to restore the controlfile.
startup nomount;
set dbid=1292000107;
run
{
allocate CHANNEL sbt1 DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%I_%U_%T_%t' PARMS 'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
restore controlfile ;
}
and below is my output.
RMAN> run
{
allocate CHANNEL sbt1 DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%I_%U_%T_%t' PARMS 'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
restore controlfile ;
}2> 3> 4> 5>
allocated channel: sbt1
channel sbt1: SID=1513 device type=SBT_TAPE
channel sbt1: Oracle Database Backup Service Library VER=19.0.0.1
Starting restore at 08-AUG-23
channel sbt1: starting datafile backup set restore
channel sbt1: restoring control file
channel sbt1: reading from backup piece c-1292000107-20230808-04
channel sbt1: piece handle=c-1292000107-20230808-04 tag=TAG20230808T122731
channel sbt1: restored backup piece 1
channel sbt1: restore complete, elapsed time: 00:00:01
output file name=+RECO/FASTDB_67S_IAD/CONTROLFILE/current.2393.1144350823
Finished restore at 08-AUG-23
Step #3 - Restore Datafiles for CDB and my PDB
Below are the commands I am going to execute to restore the datafiles for my CDB, my PDB, and the PDB$SEED.
First I'm going to mount the database, and I am going to spool the output to a logfile.
alter database mount;
SPOOL LOG TO '/tmp/restore.log';
set echo on;
run {
restore database root ;
restore database FASTDB_PDB1;
restore database "PDB$SEED";
}
I went through the output, and I can see that it only restored the CDB, my PDB, and the PDB$SEED.
Step #4 - Execute report schema and review file locations
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 1040 SYSTEM YES +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313
3 970 SYSAUX NO +DATA/FASTDB_67S_IAD/DATAFILE/sysaux.284.1144351305
4 95 UNDOTBS1 YES +DATA/FASTDB_67S_IAD/DATAFILE/undotbs1.280.1144351303
5 410 PDB$SEED:SYSTEM NO +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/system.264.1143303695
6 390 PDB$SEED:SYSAUX NO +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/sysaux.265.1143303695
7 50 PDB$SEED:UNDOTBS1 NO +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/undotbs1.266.1143303695
8 410 FASTDB_PDB1:SYSTEM YES +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/system.291.1144351333
9 410 FASTDB_PDB1:SYSAUX NO +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/sysaux.292.1144351331
10 70 FASTDB_PDB1:UNDOTBS1 YES +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/undotbs1.281.1144351329
11 5 USERS NO +DATA/FASTDB_67S_IAD/DATAFILE/users.285.1144351303
12 5 FASTDB_PDB1:USERS NO +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/users.295.1144351329
13 420 RMANPDB:SYSTEM YES +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/system.285.1143999311
14 420 RMANPDB:SYSAUX NO +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/sysaux.282.1143999317
15 50 RMANPDB:UNDOTBS1 YES +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/undotbs1.281.1143999323
16 5 RMANPDB:USERS NO +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/users.284.1143999309
17 100 RMANPDB:RMANDATA NO +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/rmandata.280.1144001911
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 20 TEMP 32767 +DATA/FASTDB_67S_IAD/TEMPFILE/temp.263.1143304005
2 131 PDB$SEED:TEMP 32767 +DATA/FASTDB_67S_IAD/017B5DDEB84167ACE063A100000AD816/TEMPFILE/temp.267.1143303733
4 224 FASTDB_PDB1:TEMP 4095 +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/TEMPFILE/temp.272.1143304235
6 224 RMANPDB:TEMP 4095 +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/TEMPFILE/temp.283.1143999305
Step #5 - Determine tablespaces to skip during recovery
I ran the query below on my primary database and used the output to build the RMAN command. It returns the names of the tablespaces that are not part of this PDB so that I can skip them during recovery.
select '''' ||pdb_name||''':'||tablespace_name ||','
from cdb_tablespaces a,
dba_pdbs b
where a.con_id=b.con_id(+)
and b.pdb_name not in ('FASTDB_PDB1')
order by 1;
From the above, I built the script below that skips the tablespaces for the PDB "RMANPDB".
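The script I used is not reproduced here verbatim, but based on the query output and the "offline drop" message in the recovery log below, it looked roughly like this sketch (the tablespace list is assumed from my environment):
run {
recover database skip forever tablespace
 'RMANPDB':SYSTEM,
 'RMANPDB':SYSAUX,
 'RMANPDB':UNDOTBS1,
 'RMANPDB':USERS,
 'RMANPDB':RMANDATA;
}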
And then ran my RMAN script to recover my datafiles that were restored.
NOTE: the datafiles for my second PDB were "offline dropped"
Starting recover at 08-AUG-23
RMAN-06908: warning: operation will not run in parallel on the allocated channels
RMAN-06909: warning: parallelism require Enterprise Edition
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3771 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=4523 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=19.0.0.1
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313
...
Executing: alter database datafile 13, 14, 15, 16, 17 offline drop
starting media recovery
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=26
channel ORA_SBT_TAPE_1: reading from backup piece FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447
channel ORA_SBT_TAPE_1: piece handle=FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447 tag=TAG20230808T122727
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 thread=1 sequence=26
channel default: deleting archived log(s)
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 RECID=1 STAMP=1144352806
media recovery complete, elapsed time: 00:00:01
Finished recover at 08-AUG-23
Step #6 - Open database
I opened the database and the PDB
SQL> alter database open;
Database altered.
SQL> alter pluggable database fastdb_pdb1 open;
Pluggable database altered.
SQL> show pdbs;
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 FASTDB_PDB1 READ ONLY NO
4 RMANPDB MOUNTED
I also went and updated my init{sid}.ora to point to the controlfile that I restored.
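A sketch of the relevant pfile entry, using the controlfile name from the restore output in Step #2:
*.control_files='+RECO/FASTDB_67S_IAD/CONTROLFILE/current.2393.1144350823'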
Step #7 - Create shell PDB in the tooling
I created a new PDB that is going to be the name of the PDB I am going to plug in. This step is optional.
Step #8 - Switch my restored database to be a primary database
I found that the database was considered a standby database, and I needed to make it a primary to unplug my PDB.
SQL> RECOVER MANAGED STANDBY DATABASE FINISH;
Media recovery complete.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
--------------------
TO PRIMARY
SQL> alter database commit to switchover to primary with session shutdown;
Database altered.
Step #9 - Unplug my PDB
I opened the database and unplugged my PDB.
SQL> alter database open;
Database altered.
SQL> alter pluggable database fastdb_pdb1 unplug into '/tmp/fastdb_pdb1.xml' ENCRYPT USING transport_secret;
Pluggable database altered.
SQL> drop pluggable database fastdb_pdb1 keep datafiles;
Pluggable database dropped.
Step #10 - Drop the placeholder PDB from the new CDB
Now I am unplugging and dropping the placeholder PDB.
SQL> show pdbs;
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 LAST21C_PDB1 READ WRITE NO
4 CLONED_FASTDB READ WRITE NO
SQL> alter pluggable database CLONED_FASTDB close;
Pluggable database altered.
SQL> alter pluggable database CLONED_FASTDB unplug into '/tmp/CLONED_FASTDB.xml' ENCRYPT USING transport_secret;
Pluggable database altered.
SQL> drop pluggable database CLONED_FASTDB keep datafiles;
Pluggable database dropped.
Step #11 - Plug in the PDB and open it up
create pluggable database CLONED_FASTDB USING '/tmp/fastdb_pdb1.xml' keystore identified by W3lCom3#123#123 decrypt using transport_secret
NOCOPY
TEMPFILE REUSE;
SQL> 2 3
Pluggable database created.
SQL> alter pluggable database cloned_fastdb open;
That's it. It took a bit to track down the instructions, but this all seemed to work.
Step #12 - Clone the PDB to ensure that the tooling worked
I next cloned the PDB to make sure the tooling properly recognized my PDB, and it all worked fine. You can see that I now have a second copy of the PDB (test_clone).
Migrating an Oracle database from on-premise to OCI is especially challenging when the database is quite large. In this blog post I will walk through the steps to migrate to OCI leveraging an on-disk local backup copied to object storage.
The basic steps to perform this task are shown in the image above.
Step #1 - Upload backup pieces to object storage.
The first step to migrate my database (acmedb) is to copy the RMAN backup pieces to the OCI object storage using the OCI Client tool.
In order to make this easier, I am breaking this step into a few smaller steps.
Step #1A - Take a full backup to a separate location on disk
This can also be done by moving the backup pieces, or creating them with a different backup format. By creating the backup pieces in a separate directory, I am able to take advantage of the bulk upload feature of the OCI client tool. The alternative is to create an upload statement for each backup piece.
For my RMAN backup example (acmedb) I am going to change the location of the disk backup and perform a disk backup. I am also going to compress my backup using medium compression (this requires the ACO license). Compressing the backup sets allows me to make the backup pieces as small as possible when transferring to the OCI object store.
Below is the output from my RMAN configuration that I am using for the backup.
RMAN> show all;
RMAN configuration parameters for database with db_unique_name ACMEDBP are:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/acmedb/ocimigrate/backup_%d_%U';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
I created a new level 0 backup including archive logs and below is the "list backup summary" output showing the backup pieces.
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4125 B A A DISK 21-JUN-22 1 1 YES TAG20220621T141019
4151 B A A DISK 21-JUN-22 1 1 YES TAG20220621T141201
4167 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4168 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4169 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4170 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4171 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4172 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4173 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4174 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4175 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4176 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4208 B A A DISK 21-JUN-22 1 1 YES TAG20220621T141309
4220 B F A DISK 21-JUN-22 1 1 YES TAG20220621T141310
From the output you can see that there are a total of 14 backup pieces:
3 Archive log backup sets (two created before the backup of datafiles, and one after).
TAG20220621T141019
TAG20220621T141201
TAG20220621T141309
10 Level 0 datafile backups
TAG20220621T141202
1 controlfile backup
TAG20220621T141310
Step #1B - Create the bucket in OCI and configure OCI Client
Now we need a bucket to upload the 14 RMAN backup pieces to.
Before I can upload the objects, I need to download and configure the OCI Client tool. You can find the instructions to do this here.
Once the client tool is installed I can create the bucket and verify that the OCI Client tool is configured correctly.
The command to create the bucket is shown below.
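A minimal sketch using the OCI CLI (the compartment OCID is a placeholder):
oci os bucket create --compartment-id <compartment_ocid> --name acmedb_migrate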
Below is the output when I ran it for my compartment and created the bucket "acmedb_migrate"
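With the bucket in place, the bulk upload mentioned in Step #1A can be done with a single command; a sketch, assuming the backup pieces are still in /acmedb/ocimigrate:
oci os object bulk-upload --bucket-name acmedb_migrate --src-dir /acmedb/ocimigrate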
Step #2 - Create the manifest for the backup pieces.
The next step covers creating the "metadata.xml" for each object, which is the manifest that the RMAN library uses to read the backup pieces.
Again this is broken down into a few different steps.
Step #2A - Download and configure the Oracle Database Cloud Backup Module.
The link for the instructions (which includes the download) can be found here.
I executed the jar file, which downloads/creates the following files:
libopc.so - This is the library used by the Cloud Backup module, and I downloaded it into "/home/oracle/ociconfig/lib/" on my host
acmedb.ora - This is the configuration file for my database backup. This was created in "/home/oracle/ociconfig/config/" on my host
This information is used to allocate the channel in RMAN for the manifest.
Step #2b - Generate the manifest for each backup piece.
The next step is to dynamically create the script to build the manifest for each backup piece. This needs to be done for each backup piece, and the command is "send channel <channel> 'export backuppiece <backup piece name>'".
The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
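A sketch of that generation script, run against the source database; it builds one send command per backup piece from V$BACKUP_PIECE (the channel name t1 and the directory filter are assumptions for illustration):
set pagesize 0 linesize 200 feedback off
select 'send channel t1 ''export backuppiece '
       || substr(handle, instr(handle, '/', -1) + 1) || ''';'
  from v$backup_piece
 where handle like '/acmedb/ocimigrate/%'
   and status = 'A';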
Step #2c - Execute the script with an allocated channel.
The next step is to execute the script in RMAN within a run block after allocating a channel to the bucket in object storage. This needs to be done for each backup piece. You create a run block with one channel allocation followed by "send" commands.
NOTE: This does not have to be executed on the host that generated the backups. In the example below, I set my ORACLE_SID to "dummy" and performed the manifest creation with the "dummy" instance started nomount.
Below is an example of allocating a channel to the object storage and creating the manifest for one of the backup pieces.
export ORACLE_SID=dummy
rman target /
RMAN> startup nomount;
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/dbhome_1/dbs/initdummy.ora'
starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started
Total System Global Area 1073737792 bytes
Fixed Size 8904768 bytes
Variable Size 276824064 bytes
Database Buffers 780140544 bytes
Redo Buffers 7868416 bytes
RMAN> run {
allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
send channel t1 'export backuppiece backup_RADB_3r10k6ec_123_1_1';
}
2> 3> 4>
allocated channel: t1
channel t1: SID=19 device type=SBT_TAPE
channel t1: Oracle Database Backup Service Library VER=23.0.0.1
sent command to channel: t1
released channel: t1
Step #2d - Validate the manifest is created.
I logged into the OCI console, and I can see that there is a directory called "sbt_catalog". This is the directory containing the manifest files. Within this directory you will find a subdirectory for each backup piece. And within those subdirectories you will find a "metadata.xml" object containing the manifest.
Step #3 - Catalog the backup pieces.
The next step covers cataloging the backup pieces in OCI. You need to download the controlfile backup from OCI and start up the database in mount mode.
Again this is broken down into a few different steps.
Step #3A - Download and configure the Oracle Database Cloud Backup Module.
The link for the instructions (which includes the download) can be found here.
Again, you need to configure the backup module (or you can copy the files from your on-premise host).
Step #3b - Catalog each backup piece.
The next step is to dynamically create the script to catalog each backup piece. This needs to be done for each backup piece, and the command is
catalog device type 'sbt_tape' backuppiece '<object name>';
The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
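This mirrors the manifest script from Step #2b; a sketch that builds one catalog command per backup piece (the directory filter is again an assumption):
select 'catalog device type ''sbt_tape'' backuppiece '''
       || substr(handle, instr(handle, '/', -1) + 1) || ''';'
  from v$backup_piece
 where handle like '/acmedb/ocimigrate/%'
   and status = 'A';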
Step #3c - Execute the script with a configured channel.
I created a configure channel command, and cataloged the backup pieces that are in the object store.
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
run {
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3r10k6ec_123_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3s10k6hh_124_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3t10k6hj_125_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3u10k6hj_126_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3v10k6hj_127_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4010k6hj_128_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4110k6hk_129_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4210k6id_130_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4310k6ie_131_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4410k6ie_132_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4510k6jh_133_1_1';
}
old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete
RMAN>
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=406 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=22 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=407 device type=SBT_TAPE
...
...
...
channel ORA_SBT_TAPE_4: SID=23 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: cataloged backup piece
backup piece handle=backup_RADB_4510k6jh_133_1_1 RECID=212 STAMP=1107964867
RMAN>
Step #3d - List the backup pieces cataloged
I performed a list backup summary to view the newly cataloged tape backup pieces.
RMAN> list backup summary;
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4220 B F A DISK 21-JUN-22 1 1 YES TAG20220621T141310
4258 B A A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141019
4270 B A A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141201
4282 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4292 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4303 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4315 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4446 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4468 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4490 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4514 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4539 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
RMAN>
Step #4 - Restore the database.
The last step is to restore the cataloged backup pieces. Remember you might have to change the location of the datafiles.
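A sketch of what that restore can look like when the datafiles move to a new location; the +DATA destination is a placeholder for whatever location fits your environment, and the channel settings are the ones configured earlier:
run {
allocate channel t1 device type sbt parms 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
set newname for database to '+DATA';
restore database;
switch datafile all;
recover database;
}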
The process above can be used to upload and catalog both additional archive logs (to bring the files forward) and incremental backups to bring the database forward.
The latest release of ZFSSA software, OS8.8.45, includes file retention locking, which joins object retention lock and snapshot retention lock in providing both versatility and protection for your data.
3 types of retention lock
Legal Hold
You might need to preserve certain business data in response to potential or on-going lawsuits. A legal hold does not have a defined retention period and remains in effect until removed. Once the legal hold is removed, all protected data is immediately eligible for deletion unless other retention rules still apply.
NOTE: Both Data Governance and Regulatory Compliance can be used to protect from any kind of cyber/ransomware attack.
Data Governance
Data Governance locks data sets (snapshot, object or file) for a period of time protecting the data from deletion. You might need to protect certain data sets as a part of internal business process requirements or protect data sets as part of your cyber protection strategy. Data Governance allows for adjustments in the retention strategy from privileged users.
Regulatory Compliance
Your industry might require you to retain a certain class of data for a defined length of time. Your data retention regulations might also require that you lock the retention settings. Regulatory Compliance allows the retention time to be increased, if it allows any change at all. Regulatory Compliance is the most restrictive locking strategy and often does not allow anyone, even an administrator, to make changes affecting retention.
3 implementations of retention lock
Object storage
Object storage retention is managed through the OCI client tool, and object retention is enforced through the API. Current retention settings are applied to all objects when they are accessed. Adding a rule immediately takes effect for all objects.
Administration of retention rules can be managed through the use of RSA certificates. It is recommended to create a separation of duties between a security administrator, and the object owner.
Retention on object storage is implemented in the following way based on the retention lock type.
Legal hold
Legal holds are implemented by placing an indefinite retention rule on a bucket. Creating this rule ensures that all objects within the bucket cannot be deleted or changed; only new objects can be stored.
Data Governance
Data Governance is implemented by placing a time bound retention rule on a bucket. The rule sets a lock on all objects for a set length of time. The rule can be later deleted. For cyber protection it is recommended to implement this with a separation of duties.
Regulatory Compliance
Regulatory Compliance is implemented by placing a locked time bound retention rule on a bucket with a grace period. When a locked time bound retention rule is created it immediately takes effect, but there is a grace period of at least 14 days before the rule becomes permanent which allows you to test the rule. Once the grace period expires (defined by a specific date and time) the rule cannot be deleted even by an administrator.
Snapshots
Snapshot locking is managed through the BUI or CLI. Individual snapshots can be locked, and scheduled snapshots can be created and automatically locked. Permission for controlling snapshot locking can be assigned to ZFSSA users, allowing you to create a separation of duties. Shares or projects cannot be removed if they contain locked snapshots.
Retention on snapshots is implemented in the following way based on the retention lock type.
Legal hold
Because snapshots only affect data that is on the project/share when the snapshot is taken, it is not possible to lock all new data as it is written. Manual snapshots can be used to capture the content of a share as of the current time, which could suffice for a Legal Hold. A manual snapshot can be created with a retention lock setting of UNLOCKED, creating a snapshot that cannot be removed. The only way to remove the snapshot is by changing the retention lock to OFF, unlocking it for deletion. This creates a hold on the current data for an indefinite period of time. Permission for releasing the hold on the snapshot can be assigned to a specific individual account, allowing for a separation of duties.
Data Governance
Data governance of snapshots is handled through the use of scheduled locked snapshots and enabling the retention policy for scheduled snapshots. A LOCKED schedule is created with both a retention setting and a "keep at most" setting. This allows you to keep a locked number of snapshots while automatically cleaning up snapshots that are beyond the retention number. The snapshots within the retention number cannot be unlocked, and the schedule cannot be removed as long as there is data contained in a locked snapshot.
Regulatory Compliance
Regulatory compliance of snapshots is handled through the same method as Data Governance. Snapshots cannot be removed when they are locked, and the schedule remains locked.
File Retention
File retention is set at the share or project level and controls updating and deletion of all data contained on the share/project. A default file retention length is set and all new files will inherit the default setting in effect when the file is created. It is also possible to manually set the retention on a file increasing the default setting inherited by the file.
Legal Hold
Legal Holds on files are implemented by manually increasing the retention on individual files. Because a Legal Hold may be required for an indefinite period of time, it is recommended to periodically extend the retention on the files covered by the legal hold. This allows the files' retention to expire once the need for the Legal Hold has passed.
Data Governance
Data governance is implemented by creating a NEW project and share with a file retention policy of privileged. Privileged mode allows you to create a default retention setting for all new files, and change that setting (longer or shorter) going forward. Files created inherit the retention setting in effect when they are created. Retention can also be adjusted manually to be longer by changing the unlock timestamp. Projects/shares cannot be deleted as long as they have locked files remaining on them.
Regulatory Compliance
Regulatory compliance is implemented by creating a NEW project and share with a file retention policy of mandatory (no override). Mandatory mode does not allow you to decrease the default file retention. Retention can also be adjusted manually to be longer by changing the unlock timestamp. Regulatory Compliance uses the same mechanisms as Data Governance but is much more restrictive. The project/share cannot be removed when locked files exist, and the storage pool cannot be removed when locked files exist within the pool. This mode also requires an NTP server be utilized, and root is locked out of any remote access.
The best way to explore these new features is by using the ZFSSA image in OCI to test different scenarios.
This post is going to go a little deeper on how to quickly download objects from the OCI object store onto your host.
In my example, I needed to download RMAN disk backup files that were copied to the Object Store in OCI.
I have over 10 TB of RMAN backup pieces, so I am going to create an ACFS mount point to store them on.
1) Create ACFS mount point
Creating the mount point is made up of multiple small steps that are documented here. This is a link to the 19c documentation so note it is subject to change over time.
Use ASMCMD to create a volume on the data disk group that is large enough to hold the backup pieces (the example below uses a 20 GB volume).
- Start ASMCMD connected to the Oracle ASM instance. You must be a user in the OSASM operating system group.
- Create the volume "volume1" on the "data" disk group
ASMCMD [+] > volcreate -G data -s 20G volume1
Use ASMCMD to list the volume information. NOTE: my volume device is volume1-123.
ASMCMD [+] > volinfo -G data volume1
Diskgroup Name: DATA
Volume Name: VOLUME1
Volume Device: /dev/asm/volume1-123
State: ENABLED
...
SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME
WHERE volume_name ='VOLUME1';
VOLUME_NAME VOLUME_DEVICE
----------------- --------------------------------------
VOLUME1 /dev/asm/volume1-123
Create the file system with mkfs from the volume "/dev/asm/volume1-123"
$ /sbin/mkfs -t acfs /dev/asm/volume1-123
mkfs.acfs: version = 19.0.0.0.0
mkfs.acfs: on-disk version = 46.0
mkfs.acfs: volume = /dev/asm/volume1-123
mkfs.acfs: volume size = 21474836480 ( 20.00 GB )
mkfs.acfs: Format complete.
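The formatted volume still needs to be mounted; a sketch run as root, mounting it at the path I use later in this post:
mkdir -p /home/opc/acfs
/bin/mount -t acfs /dev/asm/volume1-123 /home/opc/acfs
chown opc:opc /home/opc/acfs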
2) Use rclone to view the objects in the object store
The next step is to look at the objects I want to copy to my new ACFS file system. The format for accessing the object store in the commands is
"rclone {command} [connection name]:{bucket/partial object name - optional}".
NOTE: For all examples my connection name is oci_s3
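For reference, the oci_s3 connection lives in rclone.conf; a sketch of what such an entry typically looks like when pointing rclone at the OCI S3-compatible endpoint (namespace, region, and the customer secret key values are placeholders):
[oci_s3]
type = s3
provider = Other
access_key_id = <customer secret key id>
secret_access_key = <customer secret key>
region = <region>
endpoint = https://<namespace>.compat.objectstorage.<region>.oraclecloud.com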
I am going to start with the simplest command list buckets (lsd).
NOTE: We are using the s3 interface to view the objects in the namespace. There is a single namespace for the entire tenancy. With OCI there is the concept of "compartments", which can be used to separate applications and users. The S3 interface does not have this concept, which means that all buckets are visible.
rclone lsd - This is the simplest command to list the buckets, and as I noted previously, it lists all buckets, not just my bucket.
If I want to list what is within my bucket (bsgbucket) I can list that bucket. In this case it treats the flat structure of the object name as if it is a file system, and lists only the top "directories" within my bucket.
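A sketch of those two listing commands as I run them from the rclone directory:
./rclone lsd oci_s3:            # list all buckets in the tenancy namespace
./rclone lsd oci_s3:bsgbucket   # list the top level "directories" within my bucket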
3) Use rclone to copy the objects to my local file system.
There are 2 commands you can use to copy the files from the object store to the local file system.
copy - This is as you expect. It copies the files to the local file system and overwrites the local copy.
sync - This synchronizes the local file system with the objects in the object store, and will not copy down an object if it already has a local copy.
In my case I am going to use the sync command. This will allow me to restart copying the objects, and it will ignore any objects that were previously copied successfully.
Below is the command I am using to copy (synchronize) the objects from my bucket in the object store (oci_s3:bsgbucket) to the local filesystem (/home/opc/acfs).
-vv This option to rclone gives me "verbose" output so I can see more of what is being copied as the command is executed.
-P This option to rclone gives me feedback on how much of the object has downloaded so far to help me monitor it.
--multi-thread-streams 12 This option to rclone breaks larger objects into chunks to increase the concurrency.
--transfers 64 This option to rclone allows for 64 concurrent transfers to occur. This increases the download throughput
oci_s3:bsgbucket - This is the source to copy/sync.
/home/opc/acfs - This is the destination to copy/sync with.
Finally, this is what the command looks like when it is executing.
[opc@rlcone-test rclone]$ ./rclone -vv sync -P --multi-thread-streams 12 --transfers 64 oci_s3:bsgbucket /home/opc/acfs
2021/08/15 00:15:32 DEBUG : rclone: Version "v1.56.0" starting with parameters ["./rclone" "-vv" "sync" "-P" "--multi-thread-streams" "12" "--transfers" "64" "oci_s3:bsgbucket" "/home/opc/acfs"]
2021/08/15 00:15:32 DEBUG : Creating backend with remote "oci_s3:bsgbucket"
2021/08/15 00:15:32 DEBUG : Using config file from "/home/opc/.config/rclone/rclone.conf"
2021/08/15 00:15:32 DEBUG : Creating backend with remote "/home/opc/acfs"
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : expdat.dmp: md5 = f97060f5cebcbcea3ad6fadbda136f4e OK
2021-08-15 00:15:33 INFO : expdat.dmp: Copied (new)
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for checks to finish
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for transfers to finish
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: Starting multi-thread copy with 2 parts of size 160.875Mi
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 2/2 (168689664-337379328) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001: Copied (new)
2021-08-15 00:15:34 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi finished
Transferred: 333.398Mi / 356.554 MiByte, 94%, 194.424 MiByte/s, ETA 0s
Transferred: 6 / 7, 86%
Elapsed time: 2.0s
Transferring:
NOTE: it broke up the larger object into chunks, and you can see that it downloaded 2 chunks simultaneously. At the end you can see the file that it was in the middle of transferring.
Conclusion.
rclone is a great alternative to the OCI CLI for managing and downloading your objects. It has more intuitive commands (like "rclone ls"), and the best part is that it doesn't require Python or special privileges to install.