
Wednesday, September 17, 2025

Building a Cyber Vault? Don't forget your keys

When building a cyber vault, one of the most important items to manage is encryption keys. Encrypting your data is a fundamental pillar of ransomware protection, but encryption key management is often forgotten.
Ensuring your Cyber Vault has a current copy of the encryption keys associated with your Oracle Databases is critically important for a successful restore and recovery after an attack.

Starting with the Oracle Key Vault (OKV) 21.11 release, Oracle Key Vault includes a preview of the Key Transfer Across Oracle Key Vault Clusters Integration Accelerator. You can use this feature to transfer security objects from one OKV cluster to another.

You can find more detail on this feature here, and I will describe its benefits in this post.

The diagram below shows a typical cyber vault architecture to protect Oracle Databases with the Zero Data Loss Recovery Appliance (ZDLRA) and OKV.

Transferring keys into a cyber vault with OKV

Encryption Key Architecture

Encryption key management is a critical piece of data protection, and good cyber protection begins with properly managing your keys.

Local Wallets

With Oracle databases, the default (and simplest) location for encryption keys is a locally managed wallet file.  The keys are often stored in an auto-login wallet, which is automatically opened by the database at startup, making key management transparent and simple, but not very secure.

Why aren't wallets secure?

  • They are often auto-login (cwallet.sso), which allows the database to open the wallet without requiring a password. This wallet file gives full access to the encryption keys.
  • The wallet file is stored with the database files.  A privileged administrator, such as a DBA, has access to both the database and the keys to decrypt the data directly from the hosts.  They also have the ability to delete the wallet file.
  • Often the wallet file is backed up with the database, including the auto-login wallet.  This allows anyone with access to the backups to also decrypt the data.
  • Securely backing up the wallet file separately from the database is often forgotten, especially when ASM is used as the wallet location.  Not having the wallet file when restoring the database makes restore and recovery impossible.

Steps you can take

  • Create both a passworded wallet (ewallet.p12) and a local auto-login wallet (cwallet.sso). With a local auto-login wallet, the wallet can only be opened on the host where the wallet was created.
  • Back up the passworded wallet only (ewallet.p12).  The auto-login wallet can always be recreated from the passworded wallet (see the sketch below).
  • Properly store the password for your passworded encryption wallet. You will need the password to rotate your encryption keys and to create the auto-login wallet.
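
A minimal sketch of these steps using ADMINISTER KEY MANAGEMENT is shown below. The wallet location and password are placeholders; adjust them to match your own WALLET_ROOT configuration.

-- Create the passworded keystore (ewallet.p12), open it, and set a master key
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/mydb' IDENTIFIED BY "MyWalletPwd#1";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "MyWalletPwd#1";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "MyWalletPwd#1" WITH BACKUP;

-- Create a LOCAL auto-login wallet (cwallet.sso) from the passworded wallet;
-- it can only be opened on the host where it was created
ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE
  FROM KEYSTORE '/etc/ORACLE/WALLETS/mydb' IDENTIFIED BY "MyWalletPwd#1";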

Oracle Key Vault (OKV)

The best way to securely manage encryption keys is with OKV.

Why?

  • Keys are managed outside of the database and cannot be accessed locally on the database host.
  • Access to keys is granted to a specific database instance.
  • OKV is clustered for high availability, and the OKV cluster can be securely backed up.
  • Key access is audited, allowing early detection of suspicious access.

Encryption Keys in a cyber vault architecture

Below is a diagram of a typical cyber vault architecture using ZDLRA.  Because the backups are encrypted, either because the databases are using TDE or because they are creating RMAN encrypted backups sent to the ZDLRA, the keys also need to be available in the vault.
Not only is the cyber vault a separate, independent database and backup architecture, the vault also contains a separate, independent OKV cluster.
This isolates the cyber vault from any attack to the primary datacenter, including any attack that could compromise encryption key availability.


Encryption Keys in an advanced cyber vault architecture

Below is a diagram of an advanced cyber vault architecture using ZDLRA.  Not only are the backups replicated to a separate ZDLRA in the vault, they are internally replicated to an Isolated Recovery Environment (IRE).  In this architecture, the recovery area is further isolated, and the OKV cluster is even further isolated from the primary datacenter. This provides the highest level of protection.


OKV Encryption Key Transfer


This blog post highlights the benefit of the newly released (21.11) OKV feature to allow for the secure transfer of encryption keys.
Periodic rotation of encryption keys is a required practice to protect encryption keys, and ensuring you have the current key available in a cyber vault is challenging.
OKV solves this challenge by providing the ability to transfer any new or changed keys between clusters.

Implementing OKV in a cyber Vault

When building a cyber vault, it is recommended to build an independent OKV cluster.  The OKV cluster in the vault is isolated from the primary datacenter and protected by an air gap.  The nodes in this cluster are not able to communicate with the OKV cluster outside of the vault.
The OKV cluster in the vault can be created using a full, secure, backup from the OKV cluster in the primary datacenter. The backup can be transferred into the vault, and then restored to the new, independent OKV cluster providing a current copy of the encryption keys.

Keeping OKV managed keys updated in a cyber Vault

The challenge, once an isolated OKV cluster has been created, has been keeping the encryption keys within the cluster current as new keys are created.  This was typically accomplished by transferring a full backup of OKV into the vault and rebuilding the cluster using this backup.

OKV 21.11 provides the solution with secure Encryption Key Transfer.  Leveraging this feature, you can securely transfer just the keys that have recently changed, allowing you to manage independent OKV clusters that are synchronized on a regular basis.

The diagram below shows the flow of the secure Encryption Key Transfer package from the primary OKV cluster into the vault when the air-gap is opened.

This new OKV feature provides a much better way to securely manage encryption keys in a Cyber Vault.



Summary

As ransomware attacks increase, it is critical to protect a backup copy of your critical databases in a cyber vault.  It is also critical to protect a copy of your encryption keys to ensure you can recover those databases.  OKV provides the architecture for key management in both your primary datacenter and in a cyber vault.  The new secure key transfer feature within OKV allows you to synchronize keys across independent OKV clusters.

Wednesday, December 11, 2024

Listing Databases on an Oracle DB node

In this blog post I am sharing a script that I wrote that will give you the list of databases running on a DB node.  The information provided by the script is:

  • DB_UNIQUE_NAME
  • ORACLE_SID
  • DB_HOME

WHY


I have been working on a script to automatically configure OKV for all of the Oracle Databases running on a DB host.  In order to install OKV in a RAC cluster, I want to ensure the unique OKV software files are in the same location on every host when I set the WALLET_ROOT variable for my database.  The optimal location is to put the software under $ORACLE_BASE/admin/${DB_NAME}, which should exist on both single-instance and RAC nodes.

Easy right?


I thought it would be easy to determine the name of all of the databases on a host so that I could make sure the install goes into $ORACLE_BASE/admin/{DB_NAME}/okv directory on each DB node.

The first item I realized is that the directory structure under $ORACLE_BASE/admin is actually the DB_UNIQUE_NAME rather than DB_NAME. This allows for 2 different instances of the same DB_NAME (primary and standby) to be running on the same DB node without any conflicts. 

Along with determining the DB_UNIQUE_NAME, I wanted to take the following items into account:
  • A RAC environment with, or without, srvctl properly configured
  • A non-RAC environment
  • Exclude directories under $ORACLE_BASE/admin that are not a DB_UNIQUE_NAME running on the host.
  • Don't match on ORACLE_SID.  The ORACLE_SID name on a DB node can be completely different from the DB_UNIQUE_NAME.

Answer:

After searching around Google and not finding a good answer, I checked with my colleagues.  Still no good answer. There were just suggestions like "srvctl config", which would only work on a RAC node where all databases are properly registered.

The way I decided to do this was to:
  • Identify the possible DB_UNIQUE_NAME entries by looking in $ORACLE_BASE/admin
  • Match the possible DB_UNIQUE_NAME with ORACLE_SIDs by looking in $ORACLE_BASE/diag/rdbms/${DB_UNIQUE_NAME} to find the ORACLE_SID name.  I would only include DB_UNIQUE_NAMEs that exist in this directory structure and have a subdirectory.
  • Find the possible ORACLE_HOME by matching the ORACLE_SID against /etc/oratab.  If there is no entry in /etc/oratab, still include the database.

Script:


Below is the script I came up with; it displays a report of the databases on the host.  It can be changed to store the output in a temporary file and read it into a script that loops through the databases.
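
A minimal sketch following the same logic, assuming the standard $ORACLE_BASE/admin and $ORACLE_BASE/diag/rdbms directory layout (paths may need adjusting for your environment), looks like this:

#!/bin/bash
# Sketch: report DB_UNIQUE_NAME, ORACLE_SID and ORACLE_HOME for databases on this node.
ORACLE_BASE=${ORACLE_BASE:-/u01/app/oracle}

for admin_dir in "${ORACLE_BASE}"/admin/*/ ; do
  db_unique_name=$(basename "${admin_dir}")
  diag_dir="${ORACLE_BASE}/diag/rdbms/${db_unique_name,,}"
  # Skip admin directories that have no matching diag directory (not a database on this host)
  [ -d "${diag_dir}" ] || continue
  for sid_dir in "${diag_dir}"/*/ ; do
    [ -d "${sid_dir}" ] || continue
    oracle_sid=$(basename "${sid_dir}")
    # Match the ORACLE_SID against /etc/oratab to find the ORACLE_HOME (if registered)
    oracle_home=$(awk -F: -v sid="${oracle_sid}" '$1 == sid {print $2}' /etc/oratab 2>/dev/null)
    echo "DB_UNIQUE_NAME : ${db_unique_name}"
    echo "ORACLE_SID     : ${oracle_sid}"
    echo "ORACLE_HOME    : ${oracle_home:-******  NOT IN /etc/oratab **** Cannot determine ORACLE_HOME *****}"
    echo ""
  done
done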




Output:

Below is the sample output from the script. You can see that it doesn't require the DB to exist in the /etc/oratab file.



DB_UNIQUE_NAME : cdb1db1
ORACLE_SID     : cdb1db11
ORACLE_HOME    :  ******  NOT IN /etc/oratab **** Cannot determine ORACLE_HOME *****


DB_UNIQUE_NAME : daver
ORACLE_SID     : daver1
ORACLE_HOME    : /u01/app/oracle/product/19.0.0.0/dbhome_1


DB_UNIQUE_NAME : dbsgadat
ORACLE_SID     : dbsgadat1
ORACLE_HOME    : /u01/app/oracle/product/19.0.0.0/dbhome_1


DB_UNIQUE_NAME : dbsgprd
ORACLE_SID     : dbsgprd1
ORACLE_HOME    : /u01/app/oracle/product/19.0.0.0/dbhome_1



Finally:


If you are also trying to get a list of databases that are running on a DB node I hope this helps you.

Sunday, September 29, 2024

ZDLRA backups -- How do I know if they are encrypted?

The ZDLRA introduced a new feature with release 23.1 that can both encrypt backups (if they are not already encrypted from TDE) and compress the backups.  The combining of both encryption and compression with this feature is unique to the ZDLRA.



I talked about this exciting new feature in a blog post on Oracle.com that you can find here.

What I am going to cover in this blog post is how to audit the RMAN catalog on the ZDLRA to validate that your backups are completely RMAN encrypted.

There are two big advantages of ensuring your backups are fully encrypted:

1) With the prevalence of data exfiltration, and the advent of new regulations in many industries, full encryption of backups is mandatory.

2) When sending a backup to the Oracle cloud (either in OCI or to object storage on ZFS), full encryption is required to protect the backup data.

The question I often get asked about this feature is:

 "How do you tell if your backups are encrypted?"

You can determine that your backups are encrypted by looking at the RMAN catalog.

The RC_BACKUP_PIECE view contains a column identifying if the backup is encrypted.  This column is set to "YES" only when the backup piece is encrypted.

Keep in mind that there are multiple types of backup pieces contained in the catalog:

  • Controlfile backups
  • Spfile backups
  • Archive log sweeps
  • Archive log backups from real-time redo
  • Datafile backups
  • Virtual Full backups created from incremental backups.
All of these backups except for two are sent from RMAN with "encryption on", and the backup set will be marked as encrypted based on the RMAN encryption setting.

The two that are not set by RMAN directly are:
  • Real-time redo backups. Real-time redo backups are identified in the RMAN catalog as encrypted when the destination setting on the protected database has ENCRYPTION=ENABLE set (see the example below).
  • Virtual Full backups.  Virtual full backups are identified, for each datafile backup set, as encrypted ONLY after a new L0 is taken with RMAN encryption on, and all subsequent L1 backups are encrypted.  I know that is a lot of stipulations on identifying the virtual full backup as encrypted.  Only when a new FULL encrypted backup is taken, and all future incremental backups are encrypted, can the ZDLRA be sure the backup has remained completely encrypted.
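
As a sketch of what that destination setting looks like, the redo transport destination pointing at the ZDLRA carries the ENCRYPTION attribute. The destination number, service string, and DB_UNIQUE_NAME below are placeholders; keep whatever other attributes your existing destination already uses.

ALTER SYSTEM SET log_archive_dest_3=
  'SERVICE="zdlra-scan:1521/zdlra_ingest" ASYNC NOREGISTER ENCRYPTION=ENABLE DB_UNIQUE_NAME=zdlra'
  SCOPE=BOTH;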

Checking the catalog

  The script below takes two parameters (&db_name and &days_to_compare). It checks the RMAN catalog and displays the status of the backups by backup type, making it easier to identify any backup pieces that may not be encrypted.
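
A simplified version of such a query is sketched below; it groups backup pieces by their encryption and compression flags using RC_BACKUP_PIECE and RC_BACKUP_SET. The backup-type decoding is a simplified assumption: it does not split archive log pieces into log sweeps versus real-time redo the way the report output further down does.

select bp.encrypted                           "Encrypted",
       bp.compressed                          "Compressed",
       count(*)                               "Backup pieces",
       decode(bs.backup_type,
              'D', 'Full backup',
              'I', 'Incremental L'||bs.incremental_level||' backup',
              'L', 'Archive Log backup',
              bs.backup_type)                 "Backup piece type"
from   rc_backup_piece bp
       join rc_backup_set bs
         on  bs.db_key = bp.db_key
         and bs.bs_key = bp.bs_key
where  bp.db_name         = upper('&db_name')
and    bp.completion_time > sysdate - &days_to_compare
group by bp.encrypted, bp.compressed, bs.backup_type, bs.incremental_level
order by 4, 1;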



This provides a nicely formatted output as you can see below.


                                             Database backup summary for last 15 days database: DBSG23AI

Encrypted  Compressed Backup
 Yes or No  Yes or No pieces Backup piece type
========== ========== ====== ========================================
YES        YES            69  Full backup
YES        NO             39 Archive Log - log sweep
NO         YES             1 Incremental L1  backup
YES        NO           3958 Archive Log - real-time redo
YES        YES            67 Incremental L1  backup
NO         YES             3  Full backup
NO         NO              1 Controlfile/SPFILE backup
YES        NO             26 Controlfile/SPFILE backup
YES        NO            221 Incremental L1  backup


In the report you can see that there are a few backups that are not encrypted, along with some controlfile/spfile backups.


NOTE: In order to run this report, I created a REPORT user in the database on the ZDLRA as a "monitor" user. A monitor user has enough permissions to run this report.

OKV and ZDLRA 

Previously, when sending backups to the cloud (which included OCI object storage on ZFSSA), OKV was required. When using Space Efficient Encrypted backups, you can ensure that EVERY backup piece is fully encrypted and that RMAN recognizes them as encrypted.

If you follow the information in this blog, and what I have posted in the past, you will no longer need to configure OKV when sending backups to the cloud.

If all backup pieces are encrypted, and the RMAN catalog reports that all backups are encrypted, you can create backups using DBMS_RA.CREATE_ARCHIVAL_BACKUP, setting the "encryption_algorithm" to "CLIENT" or "ENC_CLIENT". This tells the ZDLRA not to utilize OKV to encrypt backups, but if any backup pieces are NOT encrypted, the archival backups will fail.
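
As a sketch (parameter names based on the DBMS_RA documentation; verify them against your appliance's release, and treat the database name, attribute set, and retention below as placeholders), an archival backup that relies only on client-encrypted pieces could be created like this:

BEGIN
  DBMS_RA.CREATE_ARCHIVAL_BACKUP(
    db_unique_name       => 'TESTDB',
    attribute_set_name   => 'TESTDB',
    encryption_algorithm => 'ENC_CLIENT',
    keep_until_time      => SYSTIMESTAMP + INTERVAL '3' YEAR,
    comments             => 'Monthly keep backup - requires all pieces to be encrypted');
END;
/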





Tuesday, August 27, 2024

Oracle Backup Compression and Encryption layers explained

 When working with customers who are applying compression and/or encryption to their Oracle DB backups, I found that it isn't always clear if backups are compressed or encrypted, or both. In this blog post I will break compression and encryption of Oracle backups down into the levels where these operations could occur.  Below is a high level view of these 3 levels.



Database

Compression

Data in the database can be compressed in any one of the following formats, or in a combination of them:

HCC - Available only on Exadata or ZFS storage, this is a columnar compression format with different options that allow you to choose the appropriate access speed and compression ratio for your data.

Advanced Compression - A licensable option that will automatically compress data in the background to optimize storage without compromising performance.

Basic Compression - Requires a lock on the object during insert and is typically used within a data warehouse.

 External Compression - In some cases the data stored in the database may already be compressed externally. An example of this is image files which are already stored in a compressed format.

 

Encryption

Data in the database can be encrypted in any one of the following formats, or in a combination of them:

TDE - All data in the tablespace is encrypted by the database.

Column Encryption - Specific data within a column is encrypted, SSN for example.  This is less widely used and most customers use TDE instead.

 External Encryption - In some cases the data stored in the database may already be encrypted by the application.

 

 NOTE: 

1. If the data is compressed and/or encrypted in these manners, it will continue in that format when backed up.

  • Any data that is encrypted in the database will remain encrypted in the backups
  • Any data that is compressed in the database will remain compressed in the backups
  • Backups of data that is compressed and/or encrypted will get little to no additional compression when backed up


2. RMAN does not know that the data  is either compressed or encrypted, and querying the RMAN views will not tell you that either has occurred.


3. Having data encrypted and/or compressed in the database may not stop you from further compressing and/or encrypting the backups.


ZDLRA

Compression

Datafile Compression - With datafile compression you have two choices for compressing the backups:

    • RA_FORMAT = TRUE - This compresses all datafile backups in the new ZDLRA 23.1 format.  If the datafile is part of a TDE tablespace, the blocks will be decrypted prior to compression to ensure the best compression ratio.
    • RA_FORMAT not set or FALSE - Backups of datafiles will be sent uncompressed (unless you create an RMAN compressed backupset, which the ZDLRA will uncompress before ingesting).  Once they are received on the ZDLRA they will be compressed in storage on the ZDLRA.  When replicated to another ZDLRA, or restored, they are uncompressed.

Real-time Redo Compression - When sending real-time redo to the ZDLRA you can have the ZDLRA create an RMAN compressed backupset for the archive logs.  The level of compression can be set on the policy.  Once stored in an RMAN compressed backupset format, it is replicated and restored as a compressed backupset.  

          NOTE: If the redo stream contains changes to a TDE tablespace, or you are configuring encryption on the ZDLRA destination, you may get little to no actual compression.

SPFILE, Controlfile, archivelog backups - The ZDLRA will NOT attempt to compress these backupsets internally.  Only datafile backups are compressed on the ZDLRA.

 

Encryption

Datafile Encryption - Whether a datafile is encrypted by the ZDLRA in the new ZDLRA 23.1 format depends on these 2 conditions.

    • RA_FORMAT = TRUE and "RMAN Encryption ON" - If the datafile is NOT part of a TDE tablespace, this will force BOTH compression and encryption of that datafile backup.
    • RA_FORMAT = TRUE and "RMAN Encryption OFF" - If the datafile is part of a TDE tablespace, the backup of this datafile will remain encrypted.  If the datafile is NOT part of a TDE tablespace, the backup will NOT be encrypted.

Real-time Redo Encryption - If real-time redo is utilized and your database has implemented TDE, the change data in the archive log backups will be encrypted.  However, this backup is not considered RMAN encrypted, and ENCRYPTION=ENABLE must be set on the destination definition to ensure that the real-time redo backupsets are considered fully encrypted by RMAN.

SPFILE, Controlfile, archivelog backup Encryption - These are not encrypted by the ZDLRA.

 

 NOTE: 

1. The new Space Efficient Encrypted backup feature of the ZDLRA only affects datafile backups.

2. Real-time redo backups can be compressed and/or encrypted by the ZDLRA.

3. If you are using the new RA_FORMAT=TRUE for a non-TDE datafile backup, you will only get a compressed backupset.  You have to set RMAN encryption on along with RA_FORMAT=TRUE in order to encrypt the backupset.

4. If you are backing up a non-TDE  datafile, and wish to encrypt it with the library, it will also be compressed.  You cannot encrypt without compression, but you can compress without encryption.

5. If datafile backups are sent to the ZDLRA  without RA_FORMAT=TRUE, they will appear as compressed in the RMAN catalog.  With RA_FORMAT=TRUE they will not appear as compressed.

6. If real-time redo is sent to the ZDLRA, and the profile for the database is set to compress the archivelogs, they will appear as compressed in the RMAN catalog.

 

RMAN

Compression

Datafile Compression - With datafile compression you have two choices for compressing the backups:

  • RA_FORMAT = TRUE - RMAN compression is ignored when this option is set.  
  • RA_FORMAT not set or FALSE - RMAN can create a compressed backupset for datafiles.  If the datafile is part of a TDE tablespace, the ZDLRA will not be able to create a virtual full from this backup.  If the datafile is NOT part of a TDE tablespace, the backupset will be decompressed on the ZDLRA and will not be stored as a compressed backupset.


SPFILE, Controlfile, archivelog backups - The ZDLRA will NOT attempt to uncompress these backupsets internally.  They remain compressed.

 

Encryption

Datafile Encryption - RMAN encryption ON creates an encrypted backupset, which cannot be virtualized by the ZDLRA.  RMAN encryption should only be set when also using RA_FORMAT=TRUE, which bypasses RMAN encryption.


SPFILE, Controlfile, archivelog backup Encryption - These can be encrypted by setting RMAN Encryption on.


Wednesday, July 10, 2024

Creating Archival Backups from ZDLRA using EM Cloud Control


The ability for the ZDLRA to create archival backups was added with release 21.1, and I wrote a blog post (here) on how to do this.  I recently noticed that the latest plugin for ZDLRA (13.5.1.0.0) allows you to dynamically schedule your archival jobs from EM Cloud Control.

Create Archival Backup


In this blog post I will go through how to use this new feature.

First the release that I am using for this is

  •  EM Cloud Control 13.5.0.19
  • Zero Data Loss Recovery Appliance Plugin Release 13.5.1.0.0

Where to find the feature:

If you have the correct plugin, you will notice that there is a new choice in the "Recovery Appliance"  pull down menu provided by the plugin.


There is an entry for "Archival Backups" that appears just below "Replication".  When you choose this option it will bring up a new window that you can use to prepare to create an archival backup.


Notice that there is nothing listed here.  I did create an archival backup earlier, but it isn't listed.

In order to create an archival backup, click on the "Create Archival Backup" button and continue to one of the next sections.  You can either create a "one-time" archival backup, or schedule a recurring backup.  The default is to create a recurring scheduled backup.

Create a recurring scheduled Backup:

Protected Databases

I am going to create a recurring scheduled backup for my database "testdb".   I can choose only one database.

Recovery Point Time

  • This should run every month.  I chose every month individually, and I ensured that I chose all 12 months.
  • This should occur on the "last" day of the month.
  • The recovery point should be 11:00 PM based on the browser time (I can also choose the DB time, or UTC).
  • I want to set the restore point prefix to be "MONTHLY_KEEP_BACKUP_". The job will append the timestamp to the end of the prefix.

Retention Time

  • Keep this backup for 3 years (I can also choose a time period based on months or weeks).

Properties

  • Use the attribute set "TESTDB" that I created earlier.
  • Leave the default format of the backup pieces, but I can change the format if I'd like to.
  • I am not setting an encryption algorithm (I would need to for a copy-to-cloud job).
  • I am not setting a compression algorithm on.
My screen for creating the recurring backup looks like the image below.


Once I complete everything I can click on OK, and it will submit my schedule to run.


Viewing recurring scheduled Backup  Procedures:

The recurring backups are not scheduled as jobs, they are scheduled as Procedures because they have a few steps to execute.
You can find these scheduled backups in EM Cloud Control under Enterprise --> Provisioning and Patching --> Procedure Activity.
At this point I had scheduled 2 jobs (actually procedures) previously, and you can see them in this section.


In order to see more detail on these 2 procedures I can select one of them and click on the "Reschedule" button at the top of the list of procedures.
I know the first procedure is for executing scheduled archival backups for TESTDB because the name of the procedure contains TESTDB followed by the timestamp.

Below is what it shows when I choose to reschedule it.


You can see that during this test, I created a monthly schedule that creates a new backup at 7:00 AM PT on the 10th of the months listed.  During my test I did not include all months, and I did not choose the months I included in order.
When I go back to the list of procedures and drill into the procedure, I can see that there are just a couple of steps, and I can't see any detail as to what the steps do.


Viewing executed scheduled Backup Procedures:

In order to view any executed scheduled backup, you would look in the same place as you do for scheduled procedures.  Along with the 2 scheduled procedures I had above, I also had one of them actually execute, and I see it in the list.


You can see that the first scheduled job executed successfully; now let's take a look at the executed step and output.
If you click on the highlighted "Run" name, you can drill into the procedure and steps. Below is what I see for the step detail for this execution.


Below is what the output of the last step looks like.

You can see all of the attributes that were set when I created this procedure, and you can see the actual command that executed to create the archival backup.


Create a One-time only archival Backup:

Similar to creating a recurring backup, you go to the "Create Archival Backup" section within the ZDLRA plugin.

Protected Databases

I am going to create a One-time archival backup for my database "testdb".   I can choose only one database.

Create Archival Backup For


Within this section there are 3 choices

Point-in-Time : Using a date picker choose the point in time you want to create the archival backup as of. 


SCN : Enter the SCN you want to use. The text tells the range of SCN numbers you can use.


Restore Point : Enter the restore point from the drop down menu.



Retention Time (same as recurring backups)

  • Keep this backup for 3 years (I can also choose a time period based on months or weeks).

Properties (same as recurring backups)

  • Use the attribute set "TESTDB" that I created earlier.
  • Leave the default format of the backup pieces, but I can change the format if I'd like to.
  • I am not setting an encryption algorithm (I would need to for a copy-to-cloud job).
  • I am not setting a compression algorithm on.
Click "OK" after filling in all of the detail, and submit the job.


Viewing archival Backups:


In the window where you chose "Create Archival Backup", you can view existing backups.  In order to view the backups, you must first choose the "Protected Database" you want to view the backups for. Below is what you would see once a backup is initiated.



Summary:

You still might find it easier to create the archival backup yourself using the PL/SQL package, either manually or through scripting.  The GUI gives you a nice way to schedule individual database jobs, but for hundreds or thousands of databases with varying requirements, scripting can be more flexible.


Friday, May 31, 2024

ZDLRA's space efficient encrypted backups with TDE explained

In this post I will explain what typically happens when RMAN either compresses or encrypts backups, and how the new space efficient encrypted backup feature of the ZDLRA solves these issues.


TDE - What does a TDE encrypted block look like?

Oracle Block contents

In the image above you can see that only the data is encrypted with TDE.  The header information (metadata) remains unencrypted.  The metadata is used by the database to determine the information about the block, and is used by the ZDLRA to create virtual full backups.


Normal backup of TDE encrypted datafiles

First let's go through what happens when TDE is utilized and you perform an RMAN backup of the database.

In the image below, you can see that the blocks are written and are not changed in any way. 

NOTE: Because the blocks are encrypted, they cannot be compressed outside of the database.  


TDE backup no compression

Compressed backup of TDE encrypted datafiles

Next let's go through what happens if you perform an RMAN backup of the database AND tell RMAN to create compressed backupsets.  As I said previously, the encrypted data will not compress, and because the data is TDE, the backup must remain encrypted.
Below you can see that RMAN handles this with a series of steps.

RMAN will:
  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the whole block (including the headers) using a new encryption key generated by the RMAN job.

You can see in the image below, after executing two RMAN backup jobs the blocks are encrypted with two different encryption keys. Each subsequent backup job will also have new encryption keys.

Compressed TDE data



Compression or Deduplication

This leaves you having to choose one or the other when performing RMAN backup jobs to a deduplication appliance.  If you execute a normal RMAN backup, there is no compression available, and if you utilize RMAN compression, it is not possible to dedupe the data. The ZDLRA, since it needs to read the header data, didn't support using RMAN compression.

How space efficient encrypted backups work with TDE

So how does the ZDLRA solve this problem to be able to provide both compression and the creation of virtual full backups?
The flow is similar to using RMAN compression, BUT instead of using RMAN encryption, the ZDLRA library encrypts the blocks in a special format that leaves the header data unencrypted.  The ZDLRA library only encrypts the data contents of blocks.

  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the data portion of the block (not the headers) using a new encryption key generated by the RMAN job.
In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full backup.

This allows the ZDLRA to both compress the blocks AND provide space efficient virtual full backups




How space efficient encrypted backups work with non-TDE blocks


So how does the ZDLRA feature work with non-TDE data?
The flow is similar to that of TDE data, but the data does not have to be decrypted first.  The blocks are compressed using RMAN compression, and are then encrypted using the new ZDLRA library.


In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full.





I hope this helps to show you how space efficient encrypted backups work, and how they are a much more efficient way to both protect your backups with encryption and utilize compression.

NOTE: Using space efficient encrypted backups does not require the ACO or ASO options.









Wednesday, April 17, 2024

Autonomous Recovery Service Prechecks

If you are configuring backups to utilize the Autonomous Recovery Service, there are some prerequisites that you need to be aware of.  If your Oracle Database was originally created in OCI and has always been in OCI, those prerequisites are already configured for your database.  But if you migrated a database to an OCI service, you might not realize that these items are required.


Prerequisites for Autonomous Recovery Service


1) WALLET_ROOT must be configured in the SPFILE.

WALLET_ROOT is a new parameter that was added in 19c, and its purpose is to replace the SQLNET.ENCRYPTION_WALLET_LOCATION setting in the sqlnet.ora file. Configuring the encryption wallet location in the sqlnet.ora file is deprecated.
WALLET_ROOT points to the directory path on the DB node(s) where the encryption wallet is stored for this database, and possibly the OKV endpoint client if you are using OKV to manage your encryption keys.
WALLET_ROOT allows each database to have its own configuration location.

There is a second parameter that goes with WALLET_ROOT that tells the database what kind of wallet is used (file, HSM or OKV), and that parameter is tde_configuration.


Running the script below should return the WALLET_ROOT location, and the tde_configuration information.


Checking the WALLET_ROOT and tde_configuration
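
A minimal sketch of that check is the query below, run as a privileged user on the database:

set linesize 100
column Parameter format a20
column Value     format a60
select name  "Parameter",
       value "Value"
from   v$parameter
where  name in ('wallet_root', 'tde_configuration');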


Below you can see that both of these parameters are configured and I am using a wallet file.


Parameter            Value
-------------------- ------------------------------------------------------------
wallet_root          /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root
tde_configuration    keystore_configuration=FILE


If you need to migrate to WALLET_ROOT, use the dbaascli tooling.

dbaascli tde enableWalletRoot - enable wallet_root spfile parameter for existing database.

           Usage: dbaascli tde enableWalletRoot --dbname <value> [--dbRestart <value>] [--executePrereqs] [--resume [--sessionID <value>]]
                     Where:
                          --dbname - Oracle database name.
                          [--dbRestart - database restart option. Valid values : full|rolling ]
                          [ --executePrereqs - run the prerequisite checks and report the results. ]
                          [--resume - to resume the previous operation]
                          [--sessionID - to resume a specific session id.]
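
For example, based on the usage above, a prerequisite check followed by a rolling enable might look like this (the database name is a placeholder):

dbaascli tde enableWalletRoot --dbname mydb --executePrereqs
dbaascli tde enableWalletRoot --dbname mydb --dbRestart rolling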


2) Encryption keys must be configured and available

In order to leverage the Autonomous Recovery Service, you must have an encryption key set and available for the CDB and each PDB.  If you migrated a non-TDE database (or plugged in a non-TDE PDB) to OCI, you might not have configured encryption for one or more PDBs.  The next step is to ensure that you have an encryption key set and the wallet is open.  The query below should return "OPEN" for each CDB/PDB, showing that the encryption key is available.
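
A sketch of such a query, joining GV$ENCRYPTION_WALLET to the container names, might look like this (column formatting is left out for brevity):

select ew.inst_id,
       c.name           "PDB Name",
       ew.wrl_type      "Type",
       ew.wrl_parameter,
       ew.status
from   gv$encryption_wallet ew
       join gv$containers c
         on  c.con_id  = ew.con_id
         and c.inst_id = ew.inst_id
order by ew.inst_id, c.name;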


Below is the output from the query showing that the wallet is open for the CDB and the PDBs. 



   INST_ID PDB Name   Type       WRL_PARAMETER                                                Status
---------- ---------- ---------- ------------------------------------------------------------ ---------------
         1 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN

         2 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN



3) All tablespaces are TDE encrypted

TDE encryption is mandatory in OCI, and the Autonomous Recovery Service cannot be used unless all of your tablespaces are encrypted.  Below is a query that will tell you if your tablespaces are all encrypted.
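
A sketch of such a summary, based on the ENCRYPTED column of DBA_TABLESPACES (run it in each container, or use CDB_TABLESPACES from the root to cover every PDB), might look like this:

select 'Number of encrypted tablespaces   : ' || count(*) "Encrypted tablespace information"
from   dba_tablespaces
where  encrypted = 'YES'
union all
select 'Number of unencrypted tablespaces : ' || count(*)
from   dba_tablespaces
where  encrypted = 'NO'
union all
select 'Total Number of tablespaces       : ' || count(*)
from   dba_tablespaces;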


In my case I can see that all of the tablespaces are encrypted

Encrypted tablespace information
------------------------------------------------------------
Number of encrypted tablespaces   :      12
Number of unencrypted tablespaces :      0
                                         ----
Total Number of tablespaces       :      12



To find any tablespaces that are not encrypted you can run the query below.
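
For example, again as a sketch using DBA_TABLESPACES:

select tablespace_name,
       contents,
       status
from   dba_tablespaces
where  encrypted = 'NO'
order by tablespace_name;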



4) Turn Off Any Manual Operational Backups.


In some cases, OCI users perform manual operational backups. These backups are run outside the standard tooling and support point-in-time recovery (non-KEEP backups).
Performing incremental backups to multiple locations can cause integrity issues with recovery.
The original backups can be kept to support the original retention window, and ensure that you have operational backups for a point-in-time prior to onboarding to the Recovery Service.  
Choose an appropriate cutover time, switch to the Recovery Service, and slowly remove older backups as they expire until they are completely removed.



Wednesday, February 14, 2024

DB Script management through pre-authenticated URLs

 Pre-authenticated URLs in OCI are fast becoming one of my favorite features of using object storage.  In this blog post I will explain how I am using them for both:

  • Storing the current copy of my backup scripts and dynamically pulling them from my central repository
  • Uploading all my log files to a central location
Pre-authenticated URL creation

PROBLEM:


The problem I was trying to solve is that I wanted to create a script to run on all my database nodes to create a weekly archival backup.
Since I have both Base DB and ExaCS databases, I found that I was continuously making changes to my backup script.  Sometimes it was checking something different in my environment, and sometimes it was improving the error checking.
Each time I made a change, I was going out to every DB host and copying the new version of my script.
Eventually I realized that Pre-authenticated URLs could not only help me ensure all my DB hosts are running the current copy of my backup script, they could also provide the central log location.

Solution:


Solution #1 - Script repository


The first problem I wanted to solve was configuring a script repository that I could dynamically pull the most current copy of my scripts from. Since I am running in OCI, I was looking for a "Cloud Native" solution rather than using NFS mounts that are shared across all my DB hosts.
To complicate things, I have databases that are running in different tenancies.

Step #1 - Store scripts in a bucket

The first step was to create a bucket in OCI to store both the scripts and logs.  Within that bucket, under "More Actions", I chose "Create New Folder" and created 2 new folders, "logs" and "scripts".
Then within the scripts folder I uploaded my current scripts:
  • rman.sh - My executable script that will set the environment and call RMAN
  • backup.rman - My RMAN script that contains the RMAN command to backup my database.

Step #2 - Create a Pre-Authenticated Request

The next step was to create a Pre-Authenticated request on the "scripts" folder.  Next to the scripts folder I clicked on the  3 dots and chose "Create Pre-Authenticated Request".
On the window that came up, I changed the expiration to be 1 year in the future (the default is 7 days).  I chose the "Objects with prefix" box so that I could download any scripts that I put in this folder to the DB hosts.  I also made sure the "Access Type" is "Permit object reads on those with specified prefix".
I did not choose "Enable Object Listing".
These settings will allow me to download the scripts from this bucket using the Pre-Authenticated URL only.  From this URL you will not be able to list the objects, or upload any changes.


Step #3 - Create wrapper script to download current scripts

Then, using the Pre-Authenticated URL in a wrapper script, I downloaded the current copies of the scripts to the host and then executed my execution script (rman.sh) with a parameter.

Below you can see that I am using curl to download my script (rman.sh) and storing it in my local script directory (/home/oracle/archive_backups/scripts).  I am doing the same thing for the RMAN command file.
Once I download the current scripts, I execute the shell script (rman.sh).


curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/rman.sh --output /home/oracle/archive_backups/scripts/rman.sh
curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/backup.rman --output /home/oracle/archive_backups/scripts/backup.rman


/home/oracle/archive_backups/scripts/rman.sh $1


Solution #2 - Log repository

The second problem I wanted to solve was to make it easy to review the execution of my scripts.  I don't want to go to each DB host and look at the log file.  I want to have the logs stored in a central location that I can check.  Again, Pre-Authenticated URLs to the rescue!

Step #1 - Create the Pre-Authenticated URL

In the previous steps I already created a "logs" folder within the bucket. In this step I want to create a Pre-Authenticated URL like I did for the scripts, but in this case I want to use it to store the logs.
Like before, I chose "Create Pre-Authenticated Request" for the "logs" folder.
This time, I am choosing "Permit object writes to those with the specified prefix". This will allow me to write my log files to this folder in the bucket, but not list the logs or download any logs.


Step #2 - Upload the log output from my script

The nice thing was that once I implemented Solution #1, all of my DB nodes were already downloading the current script.  I updated the script to add an upload of the log file to object storage, and they all pick up the new version automatically.
In my script I already had 2 variables set:
  • NOW - The current date in "yyyymmdd" format
  • LOGFILE - The name of the output log file from my RMAN backup.
Now all I had to do was to add a curl command to upload my log file to the bucket.

Note I am using the NOW variable to create a new folder under "logs" with the date, so that my script executions are organized by date.

curl --request PUT --upload-file /home/oracle/archive_backups/logs/${LOGFILE} https://{My tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{URL removed}/n/id20skavsofo/b/bgrenn/o/logs/${NOW}/${LOGFILE}

BONUS


If I wanted to get fancy I could have put my logs in a new bucket and configured a lifecycle management rule to automatically delete logs from the bucket after a period of time.