
Thursday, March 13, 2025

Oracle database wallets for TDE, ZDLRA and External Authentication

One topic that I spend a lot of time on is "wallets" and the Oracle database. When working with multiple features in the database, there are multiple wallets used for different purposes. There are also two tools for managing wallets (mkstore and orapki), and there are multiple types of wallets: passworded, auto-login, and local auto-login.


Wallet Use Cases

Below is a subset of all the places where wallets are used.  

Encryption Wallet : This wallet contains the encryption keys used by DBMS_CRYPTO, TDE and/or RMAN encrypted backups

Strong authentication: Often when external authentication is configured in the database, each database has unique certificates that are stored in a wallet.  In this blog I will refer to this as Strong Authentication, which covers the various DB authentication methods: EUS, OUD, Kerberos, RADIUS, etc.

Certificate authorities and Self-signed certificates : These are used by the database to establish external calls to websites using SSL (HTTPS).  The database can validate the certificate with an external certificate authority, or the self-signed certificate can be stored directly in the wallet.

SEPS authentication : SEPS authentication is used by Oracle clients (including the ZDLRA) to allow scripts to authenticate with a username and password that is stored in an auto-login wallet.  The connection string to the DB is used as the key to retrieve the encrypted connection information.

Real-time redo and TLS certificates for ZDLRA : When the ZDLRA is configured to utilize HTTPS for sending/receiving backups, a self-signed certificate is stored in a wallet. This is the same wallet that is used for SEPS authentication of the VPC user.

You can imagine the confusion when you try to combine multiple products that use a wallet, and you want to manage those wallets separately.


Encryption Wallet 

The encryption wallet is the easiest wallet to manage because it is typically isolated from the other wallets that are in use.

Oracle uses the hierarchy below to find the location of an encryption wallet; it searches in this order and uses the first wallet it finds.

WALLET_ROOT : This is the recommended location for the encryption wallet as of 19c. WALLET_ROOT is a spfile/pfile setting that allows you to specify a different location for each database.  It is recommended that the wallet is stored under $ORACLE_BASE/admin/{DB name}/wallet on each node to allow for out of place upgrades.

ENCRYPTION_WALLET_LOCATION in the sqlnet.ora : This was the recommended location prior to 19c.  When multiple databases were sharing the same $ORACLE_HOME (and thus the same sqlnet.ora file), this became confusing. The workaround was to set the location using a variable representation of the DB_NAME.  

$ORACLE_BASE/admin/{DB name}/wallet : This is the recommended physical location, but you should point WALLET_ROOT at it, or on an older release (pre-19c) set ENCRYPTION_WALLET_LOCATION.  Relying on this being the "default" location can cause issues when you start using a wallet for other purposes, since this same location is the default location for any Strong Authentication implementations.

Since you should be on 19c, you should be using WALLET_ROOT for encryption wallet location.

NOTE: If you are running databases in OCI, it is mandatory to use WALLET_ROOT in order to utilize the recovery service.
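
As a minimal sketch (paths and the DB name are illustrative), setting WALLET_ROOT for TDE looks like this:

   -- run as SYSDBA; WALLET_ROOT is an spfile-only parameter, so a bounce is required
   alter system set wallet_root='/u01/app/oracle/admin/MYDB/wallet' scope=spfile;

   -- after the restart, tell TDE to use a software keystore under WALLET_ROOT/tde
   alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;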

Recommendation :

My recommendation is to always use OKV to manage TDE encryption keys, but I understand that it is a licensable product and it isn't feasible to expect that all customers are using it.

When working in a RAC environment (non-OKV) it becomes critical to have a shared TDE wallet. You may be tempted to store the wallet on ASM, or Exascale. I recommend that you DO NOT.  This makes it much more difficult to backup the wallet, and it makes it more difficult to have a shared SEPS wallet if backing up to a ZDLRA.

Store the TDE encryption wallet on ACFS, and point the WALLET_ROOT to the ACFS location mounted on each node.  When backing up the encryption wallet, copy ONLY the passworded wallet ewallet.p12 to another location to be backed up outside of the DB backups.
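
For example, a copy along those lines (locations are illustrative) is all that is needed:

   # back up ONLY the passworded wallet; never copy the auto-login cwallet.sso
   cp /acfs/wallets/MYDB/tde/ewallet.p12 /backup/wallets/MYDB/ewallet_$(date +%Y%m%d).p12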

Strong authentication wallets

This wallet typically causes the most headaches for users.  Like the encryption wallet, Oracle searches the locations below in order and uses the first wallet it finds.

WALLET_LOCATION in the sqlnet.ora : When multiple databases are sharing the same $ORACLE_HOME (and thus the same sqlnet.ora file), this becomes confusing. The workaround was to set the location using a variable representation of the DB_NAME as part of the location string.

$ORACLE_BASE/admin/{DB name}/wallet : This is the location where most customers place their Strong Authentication wallets, since it is isolated to the database associated with the wallet.

NOTE: The issue arises when customers use a product/feature that updates the WALLET_LOCATION in the sqlnet.ora, which breaks authentication since the WALLET_LOCATION is checked first.

Use separate wallets, and leverage the TNS_ADMIN variable to point to different sqlnet.ora files while sharing the same $ORACLE_HOME.
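
A minimal sketch of that layout (paths are illustrative):

   # per-database TNS_ADMIN directory with its own sqlnet.ora
   mkdir -p /u01/app/oracle/admin/MYDB/tns_admin
   cat > /u01/app/oracle/admin/MYDB/tns_admin/sqlnet.ora <<EOF
   WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/MYDB/wallet)))
   EOF

   # scripts for this database export TNS_ADMIN before connecting
   export TNS_ADMIN=/u01/app/oracle/admin/MYDB/tns_admin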


Certificate authorities and Self-signed certificates 

The most common use case for certificate authorities is when utilizing the DBMS_CLOUD family of packages.  Packages such as DBMS_CLOUD call out to object storage and require a secure (HTTPS) connection. In order to open a secure connection the client needs to validate the certificate against a certificate authority, or the self-signed certificate must be stored in the wallet.
This same issue arises when using DBMS_CLOUD_AI and DBMS_VECTOR_CHAIN, which make calls to external LLMs that often require a secure connection.

This wallet is controlled by setting the database property "SSL_WALLET". 
For simplicity I would recommend creating a central wallet that can be used by ALL databases on the host and is stored within $ORACLE_BASE. My favorite location is $ORACLE_BASE/cert_wallet which identifies it as containing certificate authorities.

I do not recommend adding certificates to the Strong Authentication wallet, or the SEPS wallet (discussed next), as it becomes more difficult to manage multiple wallets when making updates.
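
A hedged sketch of creating such a central wallet and pointing a database at it (the certificate file, password, and paths are illustrative):

   # create the wallet and load the CA (or self-signed) certificate
   orapki wallet create -wallet /u01/app/oracle/cert_wallet -auto_login -pwd "WalletPwd#1"
   orapki wallet add -wallet /u01/app/oracle/cert_wallet -trusted_cert -cert /tmp/ca_cert.pem -pwd "WalletPwd#1"

   -- then point the database property at the wallet directory (run as SYSDBA)
   alter database property set ssl_wallet='/u01/app/oracle/cert_wallet';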

SEPS authentication 

The next wallet I want to discuss is the SEPS authentication wallet. This wallet is used by Oracle clients (sqlplus, RMAN, and ZDLRA) to store the credentials for a database.

The connection string (either an ezconnect string or a tnsnames.ora entry) is added to the wallet, along with the username and password that will be used when connecting using this entry.  

The location of the wallet is stored in the sqlnet.ora file, and there are 2 parameters associated with this setting.

SQLNET.WALLET_OVERRIDE=true

WALLET_LOCATION={location on disk}

NOTE: Setting the WALLET_OVERRIDE to true disables any OPS$ usage and allows the usage of SEPS wallets for authentication. 
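
A minimal sketch of building a SEPS wallet (the connect string and username are illustrative; mkstore prompts for the passwords):

   # create a passworded wallet plus its auto-login component
   mkstore -wrl /u01/app/oracle/admin/MYDB/seps_wallet -create
   # store the credential keyed by the connect string
   mkstore -wrl /u01/app/oracle/admin/MYDB/seps_wallet -createCredential myhost:1521/MYPDB scott
   # with WALLET_OVERRIDE=true and WALLET_LOCATION set, this now works:
   sqlplus /@myhost:1521/MYPDB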

Setting the WALLET_LOCATION on a host that supports databases utilizing Strong Authentication often causes issues if it does not specify a separate location for each database using a variable.  The sqlnet.ora file is only read at startup, so changes to the WALLET_LOCATION might not take effect until after a database bounce.

Recommendation :

If you are using multiple products that use a wallet AND share the same Oracle Home, I recommend using the TNS_ADMIN variable to manage which wallet to use in scripts.

As wallets become more common for security, separating out the use cases, if possible, will make it easier to manage and rotate authentication information.  With TNS_ADMIN you can point to a directory containing a sqlnet.ora file specific to the database, and leave the original sqlnet.ora file without a WALLET_LOCATION entry. 

Real-time redo and TLS certificates for ZDLRA 

Prior to the 19.18 DB release, configuring real-time redo for databases sending backups to the ZDLRA required a bounce of the database (to refresh the DB's copy of the sqlnet.ora), and it required the WALLET_LOCATION to be set in the sqlnet.ora.

This changed with 19.18, and I recommend you use the new location.

The hierarchy Oracle uses to find the real-time redo wallet is below.  Like the encryption wallet, it searches in this order and uses the first wallet it finds.

WALLET_ROOT/server_seps : If the variable WALLET_ROOT is set, and a wallet exists in the server_seps subdirectory, that wallet is used by the real-time redo.  This is a HUGE improvement as it doesn't require a bounce, and it makes it much easier to avoid issues with Strong authentication, and databases that share the same $ORACLE_HOME.

NOTE: WALLET_ROOT was added in 18c. If you are still using 12.x, you need to use the sqlnet.ora.

WALLET_LOCATION in the sqlnet.ora : When multiple databases are sharing the same $ORACLE_HOME (and thus the same sqlnet.ora file), this becomes confusing. The workaround was to set the location using a variable representation of the DB_NAME.  This is the same workaround I mentioned for Strong Authentication.

Recommendation :

When backing up to a ZDLRA, especially with real-time redo, you should be using a SEPS wallet that is stored under WALLET_ROOT.
Since the ZDLRA supports encrypted backups, even if you don't own ASO, I recommend creating an encryption wallet with keys to encrypt your backups.  This is much more secure, and this ability is included in the ZDLRA license.
The steps I would recommend for any customer using the ZDLRA are:
  • If you don't have an encryption wallet (because you don't own ASO), create one and set the encryption keys for both the CDB and PDB (if it is multi-tenant). This does require a DB bounce to set the WALLET_ROOT, but it will allow you to have RMAN encrypted backups.
  • In a RAC environment store the encryption wallet on ACFS and point WALLET_ROOT to the ACFS location.
  • Store the SEPS wallet containing the VPC user credentials for the ZDLRA in the WALLET_ROOT/server_seps directory.  This will automatically be used by real-time redo starting with 19.18.
  • Ensure your channel configuration for RMAN points to the WALLET_ROOT/server_seps directory on ACFS for the wallet (see the sketch after this list).
  • In your RMAN scripts ensure that you are pointing to a TNS_ADMIN location that has a sqlnet.ora file with WALLET_LOCATION set to the WALLET_ROOT/server_seps location, or ensure that OEM has the correct SEPS wallet location set.
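
A hedged sketch of such a channel configuration, run from RMAN connected to the protected database and the RA catalog (the SCAN name, paths, and VPC credential alias are illustrative):

   CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
     PARMS 'SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libra.so,
     ENV=(RA_WALLET=location=file:/acfs/wallets/MYDB/server_seps
     credential_alias=zdlra-scan:1521/zdlra:dedicated)';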

MKSTORE vs ORAPKI

orapki 

The orapki utility manages public key infrastructure (PKI) elements, such as wallets and certificate revocation lists, from the command line.  This is the recommended method of managing wallet files.

You can use the orapki command-line utility to perform the following tasks:

  • Creating and viewing signed certificates for testing purposes

  • Manage Oracle wallets (except for Transparent Data Encryption keystores):

    • Create and display Oracle wallets

    • Add and remove certificate requests

    • Add and remove certificates

    • Add and remove trusted certificates

  • Manage certificate revocation lists (CRLs):

    • Renaming CRLs with a hash value for certificate validation

    • Uploading, listing, viewing, and deleting CRLs in Oracle Internet Directory

NOTE: The above is directly from the 19c documentation.  You can see that orapki is used to manage certificates with no mention of managing SEPS credentials.

mkstore

The first thing you will notice about mkstore is that the command should be considered deprecated.  Upon digging into this some more, I found a comment from Russ Lowenthal (VP of Database Security products) who mentions that SEPS credential wallet management will not be added to orapki until AFTER 23c.

NOTE: Even though it is considered deprecated, mkstore is the only way to manage SEPS credentials from the command line, and should only be used to manage SEPS credentials.

Administer key management

I added the "Administer Key Management" command to this section because it can also be used to manage both secrets and SEPS credentials.
The following options are available and can be found in the documentation (an example follows the list).
  • add/update/delete Secret '{secret name}' for client '{client identifier}' --> secret
  • add/update/delete secret '{secret name}' for client '{client identifier}' to {local optionally} auto_login keystore {keystore location}  --> SEPS
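
Following the second template above, a hedged example of adding a SEPS credential (the secret, connect string, and keystore location are illustrative; check the exact syntax against your release's documentation):

   administer key management
     add secret 'vpcUserPassword#1' for client 'zdlra-scan:1521/zdlra:dedicated'
     to local auto_login keystore '/acfs/wallets/MYDB/server_seps';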

How to manage wallets


Wallet Type                                           How to manage contents
Encryption Keys                                       Utilize the "ADMINISTER KEY MANAGEMENT" statement from the database
External user authentication                          Use orapki to manage certificates, or the OWM tool which uses orapki
Certificate authorities and Self-signed certificates  Use orapki to manage certificates
SEPS authentication                                   Use mkstore for now, as orapki does not support SEPS
Real-time redo for ZDLRA                              Use mkstore for now, as orapki does not support SEPS
TLS certificates for ZDLRA                            Use orapki to manage the certificates


Wallet names and type

When you look in the wallet directory you will see one or both of these wallet files.

cwallet.sso - This is an auto-login wallet.  With an auto-login wallet you can access the contents without having to provide a password. In almost all cases, you will have this type of wallet entry.

ewallet.p12 - This is the passworded wallet. In order to add/change/delete entries you need to specify a password when making those changes.

NOTE:

  • If only the cwallet.sso exists, you can assume it is an auto-login only wallet.
  • If both wallets exist, you can access the contents without a password, but any add/change/delete commands will require a password and update both the passworded wallet and the auto-login wallet.
  • If only the ewallet.p12 exists, you must provide a password to access the contents of the wallet.


Standard Password Protected wallet

This is the least common wallet type (at least alone, without an auto-login wallet), since it requires a password to access the contents. It is most commonly used to protect encryption keys for databases, since it requires entering a password to open the wallet when the database is started.  In this configuration you create a new wallet using orapki or ADMINISTER KEY MANAGEMENT and provide a password.  In this case there will be only a single wallet file, ewallet.p12.

NOTE: You cannot create a non auto-login wallet with mkstore 

  • orapki wallet create -wallet {wallet location}
  • administer key management create keystore {wallet location}

Auto-login only wallets

You can create an auto-login only wallet using mkstore, orapki, or the administer key management command.  The idea of an auto-login wallet is that you can add entries to it without needing a password. You can also list the entries in the wallet using either CLI tool. In this configuration there is only a cwallet.sso file in the wallet directory.
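
For example, a minimal sketch with orapki (the location is illustrative):

   # -auto_login_only creates just a cwallet.sso; no password is involved
   orapki wallet create -wallet /u01/app/oracle/admin/MYDB/wallet -auto_login_only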

Auto-login wallets

This is the most common configuration that you will see.  There is both a passworded wallet, and an auto-login wallet. With both wallets, it requires a password to make changes, but no password is required to open the wallet and use it.  The two wallets are synchronized when you make changes.

There are two ways to create auto-login wallets.

    1. Create a non auto-login wallet using orapki or within the database, then create an auto-login wallet from it.

  • orapki wallet create -wallet {wallet location}
    • then: orapki wallet create -wallet {wallet location} -auto_login  OR
    • then: mkstore -wrl {wallet location} -createSSO
  • administer key management create keystore {wallet location}
    • then: administer key management create auto_login keystore from keystore {wallet location}

    2. Create the auto-login wallet and non auto-login wallet together:

  • orapki wallet create -wallet {wallet location} -auto_login

Local Auto-login wallets

Local auto-login wallets work the same way as auto-login wallets, EXCEPT the wallet is encrypted in a way that makes it usable only on the host where it was created.  This limits the security exposure if the wallet is copied (or restored) onto a different host.

When creating a local auto-login wallet you would use one of:
  • mkstore -wrl {wallet location} -createLSSO
  • orapki wallet create -wallet {wallet location} -auto_login_local
  • administer key management create local auto_login keystore from keystore {wallet location}

NOTE:

  • Local auto-login wallets are much more secure as they can only be used on the host where the wallet was created.
  • When backing up wallets, including encryption wallets, only back up the ewallet.p12 file.  This ensures that a password is required to utilize the wallet.
    NOTE: When only backing up the ewallet.p12, be sure you know the password so that you can recreate the auto-login wallet.
  • ALWAYS review the permissions on your wallet files, especially the auto-login wallet files containing credentials.  Any user that can access the auto-login wallet file can utilize the credentials contained within it.

ASM/Exascale for Encryption wallets

You probably noticed that I am not a fan of ASM/Exascale as an encryption wallet location, even though ASM is mentioned in the documentation.
I will add more to this section, but this is my reasoning for not preferring ASM:
  1. It's easy to forget to back up the wallet file.  Having it on ASM requires copying it back to the file system to get backed up.  It is very easy to forget about this, rotate the keys, and not have a wallet backup.
  2. WALLET_ROOT is becoming the starting point for different wallet files, not just encryption wallets; ZDLRA is the first example. When WALLET_ROOT points to ASM or Exascale, the same wallet cannot be used by many tools because they only expect wallets on the file system.
Shared wallets make sense; that's why I prefer ACFS, or a mounted filesystem, for WALLET_ROOT.


Summary 

Starting with DB 19.18, you have the ability to store individual credential wallets for real-time redo transportation when leveraging ZDLRA for backups.  You can also use the TNS_ADMIN variable to set a different location when using SEPS authentication.  It is now possible to manage multiple wallets separately without having conflicts between products and features.


MY RECOMMENDATIONS (summary):

  • Use Oracle Key Vault (OKV) for encryption keys.  OKV is an Oracle product specifically designed to securely store and manage encryption keys, and much more.  OKV has tight integration with the Oracle Database.  If you are not using OKV, at least store Encryption Keys on ACFS as the shared location (not ASM or Exascale).
  • Use WALLET_ROOT if you are on 18c+.  This will continue to be used by products to help separate wallet locations for different use cases.  The ZDLRA is the first of many products to use the hierarchy for wallet files.
  • Backup only the ewallet.p12.  This is the passworded wallet and with the password it can be used to recreate the auto-login wallet. This is especially critical for Encryption keys.

BUT - Make sure you know the password. Without the password, you can't recreate the auto-login wallet.

  • Lock down permissions on wallet files to only the account that needs access, especially the cwallet.sso file (auto-login).
  • Whenever possible create local auto-login wallets that can only be used on the source host where the wallet was created. This wallet, however,  cannot be shared across nodes.
  • Keep your SEPS wallets separate by utilizing the TNS_ADMIN variable and having a custom sqlnet.ora file.
  • If you are backing up to a ZDLRA, create an encryption wallet with keys, and set the WALLET_ROOT location.  Put the SEPS wallet for ZDLRA under WALLET_ROOT/server_seps.  This wallet can also be used for the TLS certificate if you configure HTTPS.   Keep this configuration separate to avoid conflicts with other products.


Sunday, September 29, 2024

ZDLRA backups -- How do I know if they are Encrypted

The ZDLRA introduced a new feature with release 23.1 that can both encrypt backups (if they are not already encrypted with TDE) and compress the backups.  Combining encryption and compression in this feature is unique to the ZDLRA.



I talked about this new exciting feature in a blog post on Oracle.com you can find here.

What I am going to cover in this blog post is how to audit the RMAN catalog on the ZDLRA to validate that your backups are completely RMAN encrypted.

There are two big advantages of ensuring your backups are fully encrypted

1) With the prevalence of data exfiltration, and the advent of new regulations in many industries,  full encryption of backups is mandatory

2) When sending a backup to the Oracle cloud (either in OCI or to object storage on ZFS) full encryption is required to protect the backup data.

The question I often get asked with this feature is..

 "How do you tell  if your backups are encrypted ?"

You can determine that your backups are encrypted by looking at the RMAN catalog.

The RC_BACKUP_PIECE view contains a column identifying if the backup is encrypted.  This column is set to "YES" only when the backup piece is encrypted.

Keep in mind that there are multiple types of backup pieces contained in the catalog:

  • Controlfile backups
  • Spfile backups
  • Archive log sweeps
  • Archive log backups from real-time redo
  • Datafile backups
  • Virtual Full backups created from incremental backups.
All of these backups except for two are sent from RMAN with "encryption on", and the backup set will be marked as encrypted based on the RMAN encryption setting.

The two that are not set by RMAN directly are
  • Real-time redo backups. Real-time redo backups are identified in the RMAN catalog as encrypted when the destination setting on the protected database has ENCRYPTION=ENABLE set (an example follows this list).
  • Virtual Full backups.  Virtual full backups are identified, for each datafile backup set, as encrypted ONLY after a new L0 is taken with RMAN encryption on, and all subsequent L1 backups are encrypted.  I know that is a lot of stipulations on identifying the virtual full backup as encrypted.  Only when a new FULL encrypted backup is taken, and all future incremental backups are encrypted, can the ZDLRA be sure the backup has remained completely encrypted.
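
As a hedged sketch, the destination attribute mentioned above is set on the protected database like this (the service name and destination number are illustrative):

   alter system set log_archive_dest_3=
     'SERVICE=zdlra_redo ENCRYPTION=ENABLE ASYNC NOAFFIRM
      DB_UNIQUE_NAME=zdlra VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
     scope=both;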

Checking the catalog

The script below takes two parameters (&db_name and &days_to_compare); it checks the RMAN catalog and displays the status of the backups by backup type, making it easier to identify any backup pieces that may not be encrypted.
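
The author's script was shared as an image; below is a minimal sketch of an equivalent query (the breakdown by backup piece type is simplified relative to the original, and it assumes the standard RC_BACKUP_PIECE/RC_BACKUP_SET catalog views):

   -- run while connected to the RMAN catalog on the ZDLRA (e.g. sqlplus /@zdlra)
   select bp.encrypted  "Encrypted",
          bp.compressed "Compressed",
          count(*)      "Pieces",
          decode(bs.backup_type, 'D', 'Full backup',
                                 'I', 'Incremental backup',
                                 'L', 'Archive Log') "Backup piece type"
     from rc_backup_piece bp
     join rc_backup_set   bs on bs.bs_key = bp.bs_key
     join rc_database     db on db.db_key = bp.db_key
    where db.name = upper('&db_name')
      and bp.completion_time > sysdate - &days_to_compare
    group by bp.encrypted, bp.compressed, bs.backup_type
    order by 4;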



This provides a nicely formatted output as you can see below.


                                             Database backup summary for last 15 days database: DBSG23AI

Encrypted  Compressed Backup
 Yes or No  Yes or No pieces Backup piece type
========== ========== ====== ========================================
YES        YES            69  Full backup
YES        NO             39 Archive Log - log sweep
NO         YES             1 Incremental L1  backup
YES        NO           3958 Archive Log - real-time redo
YES        YES            67 Incremental L1  backup
NO         YES             3  Full backup
NO         NO              1 Controlfile/SPFILE backup
YES        NO             26 Controlfile/SPFILE backup
YES        NO            221 Incremental L1  backup


In the report you can see that there are a few backups that are not encrypted, along with some controlfile/spfile backups.


NOTE: In order to run this report, I created a REPORT user in the database on the ZDLRA. A REPORT user has enough permissions to create this report.






Tuesday, August 27, 2024

Oracle Backup Compression and Encryption layers explained

 When working with customers who are applying compression and/or encryption to their Oracle DB backups, I found that it isn't always clear if backups are compressed or encrypted, or both. In this blog post I will break compression and encryption of Oracle backups down into the levels where these operations could occur.  Below is a high level view of these 3 levels.



Database

Compression

Data in the database can be compressed in any one, or all, of the following formats:

HCC - Available only on Exadata, or ZFS storage, this compression is a columnar compression format with different options that allow you to choose the appropriate access speed and compression ratio for your data

Advanced Compression - A licensable option that will automatically compress data in the background to optimize storage without compromising performance.

Basic Compression - Requires a lock on the object during insert and is typically used within a data warehouse.

 External Compression - In some cases the data stored in the database may already be compressed externally. An example of this is image files which are already stored in a compressed format.

 

Encryption

Data in the database can be encrypted in any one, or all, of the following formats:

TDE - All data in the tablespace is encrypted by the database.

Column Encryption - Specific data within a column is encrypted, SSN for example.  This is less widely used and most customers use TDE instead.

 External Encryption - In some cases the data stored in the database may already be encrypted by the application.

 

 NOTE: 

1. If the data is compressed and/or encrypted in these ways it will continue in that format when backed up.

  • Any data that is encrypted in the database will remain encrypted in the backups
  • Any data that is compressed in the database will remain compressed in the backups
  • Backups of data that is compressed and/or encrypted will get little to no further compression when backed up


2. RMAN does not know that the data is either compressed or encrypted, and querying the RMAN views will not tell you that either has occurred.


3. Having data encrypted and/or compressed in the database may not stop you from further compressing and/or encrypting the backups.


ZDLRA

Compression

Datafile Compression - With Datafile compression you have 2 choices to compress the backups

    • RA_FORMAT = TRUE - This  compresses all datafile backups in the new ZDLRA 23.1 format.  If the datafile is part of a TDE tablespace, the blocks will be decrypted prior to compression to ensure the best compression ratio.  
    • RA_FORMAT not set or  FALSE - Backups of datafiles will be sent as uncompressed (unless you create a RMAN compressed backupset which the ZDLRA will uncompress before ingesting).  Once they are received on the ZDLRA they will be compressed in storage on ZDLRA.  When replicated to another ZDLRA, or restored, they are uncompressed.

Real-time Redo Compression - When sending real-time redo to the ZDLRA you can have the ZDLRA create an RMAN compressed backupset for the archive logs.  The level of compression can be set on the policy.  Once stored in an RMAN compressed backupset format, it is replicated and restored as a compressed backupset.  

          NOTE: If the redo stream contains changes to a TDE tablespace, or you are configuring encryption on the RA destination, you may get little to no actual compression.

SPFILE, Controlfile, archivelog backups - The ZDLRA will NOT attempt to compress these backupsets internally.  Only datafile backups are compressed on the ZDLRA.

 

Encryption

Datafile Encryption - Whether a datafile is encrypted by the ZDLRA in the new ZDLRA 23.1 format depends on these 2 conditions.

    • RA_FORMAT = TRUE and "RMAN Encryption ON" - If the datafile is NOT part of a TDE tablespace, this will force BOTH compression and encryption of that datafile backup.
    • RA_FORMAT = TRUE and "RMAN Encryption OFF" - If the datafile is part of a TDE tablespace, the backup of this datafile will remain encrypted.  If the datafile is NOT part of a TDE tablespace, the backup will NOT be encrypted.

Real-time Redo Encryption - If real-time redo is utilized and your database has implemented TDE, the change data in the archive log backups will be encrypted.  However, this backup is not considered RMAN encrypted, and ENCRYPTION=ENABLED must be set on the destination definition to ensure that the real-time redo backupsets are considered fully encrypted by RMAN.

SPFILE, Controlfile, archivelog backup Encryption - These are not encrypted by the ZDLRA.

 

 NOTE: 

1. The new Space Efficient Encrypted backup feature of the ZDLRA only affects datafile backups.

2. Real-time redo backups can be compressed and/or encrypted by the ZDLRA.

3. If you are using the new RA_FORMAT=TRUE for a non-TDE datafile backup, you will only get a compressed backupset.  You must set RMAN Encryption ON along with RA_FORMAT=TRUE in order to encrypt the backupset.

4. If you are backing up a non-TDE  datafile, and wish to encrypt it with the library, it will also be compressed.  You cannot encrypt without compression, but you can compress without encryption.

5. If datafile backups are sent to the ZDLRA  without RA_FORMAT=TRUE, they will appear as compressed in the RMAN catalog.  With RA_FORMAT=TRUE they will not appear as compressed.

6. If real-time redo is sent to the ZDLRA, and the profile for the database is set to compress the archivelogs, they will appear as compressed in the RMAN catalog.

 

RMAN

Compression

Datafile Compression - With Datafile compression you have 2 choices to compress the backups

  • RA_FORMAT = TRUE - RMAN compression is ignored when this option is set.  
  • RA_FORMAT not set or FALSE - RMAN can create a compressed backupset for datafiles.  If the datafile is part of a TDE tablespace, the ZDLRA will not be able to create a virtual full from it.  If the datafile is NOT part of a TDE tablespace, the backupset will be decompressed on the ZDLRA and will not be stored as a compressed backupset.


SPFILE, Controlfile, archivelog backups - The ZDLRA will NOT attempt to compress these backupsets internally; if RMAN compressed them, they remain compressed as sent.

 

Encryption

Datafile Encryption - RMAN Encrypt ON creates an encrypted backupset which cannot be virtualized by the ZDLRA.  This should only be set when using RA_FORMAT=TRUE, which bypasses RMAN encryption.


SPFILE, Controlfile, archivelog backup Encryption - These can be encrypted by setting RMAN Encryption on.

 NOTE: 

1. The new Space Efficient Encrypted backup feature of the ZDLRA only affects datafile backups.

2. Real-time redo backups can be compressed and/or encrypted by the ZDLRA.

3. If you are using the new RA_FORMAT=TRUE for a non-TDE datafile backup, you will only get a compressed backupset.  You must set RMAN Encryption ON along with RA_FORMAT=TRUE in order to encrypt the backupset.

4. If you are backing up a non-TDE  datafile, and wish to encrypt it with the library, it will also be compressed.  You cannot separate encryption from compression, but you can compress only.

Friday, May 31, 2024

ZDLRA's space efficient encrypted backups with TDE explained

In this post I will explain what typically happens when RMAN either compresses or encrypts backups, and how the new space efficient encrypted backup feature of the ZDLRA solves these issues.


TDE - What does a TDE encrypted block look like ?

Oracle Block contents

In the image above you can see that only the data is encrypted with TDE.  The header information (metadata) remains unencrypted.  The metadata is used by the database to determine the information about the block, and is used by the ZDLRA to create virtual full backups.


Normal backup of TDE encrypted datafiles

First let's go through what happens when TDE is utilized, and you perform a RMAN backup of the database.

In the image below, you can see that the blocks are written and are not changed in any way. 

NOTE: Because the blocks are encrypted, they cannot be compressed outside of the database.  


TDE backup no compression

Compressed backup of TDE encrypted datafiles

Next let's go through what happens if you perform an RMAN backup of the database AND tell RMAN to create compressed backupsets.  As I said previously, the encrypted data will not compress, and because the data is TDE the backup must remain encrypted.
Below you can see that RMAN handles this with a series of steps.

RMAN will
  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the whole block (including the headers) using a new encryption key generated by the RMAN job.

You can see in the image below, after executing two RMAN backup jobs the blocks are encrypted with two different encryption keys. Each subsequent backup job will also have new encryption keys.

Compressed TDE data



Compression or Deduplication

This leaves you having to choose one or the other when performing RMAN backup jobs to a deduplication appliance.  If you execute a normal RMAN backup, there is no compression available, and if you utilize RMAN compression, it is not possible to dedupe the data. The ZDLRA, since it needs to read the header data, didn't support using RMAN compression.

How space efficient encrypted backups work with TDE

So how does the ZDLRA solve this problem to be able to provide both compression and the creation of virtual full backups?
The flow is similar to using RMAN compression, BUT instead of using RMAN encryption, the ZDLRA library encrypts the blocks in a special format that leaves the header data unencrypted.  The ZDLRA library only encrypts the data contents of the blocks.

  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the data portion of the block (not the headers) using a new encryption key generated by the RMAN job.
In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full backup.

This allows the ZDLRA to both compress the blocks AND provide space efficient virtual full backups




How space efficient encrypted backups work with non-TDE blocks


So how does the ZDLRA feature work with non-TDE data ?
The flow is similar to that of TDE data, but the data does not have to be decrypted first.  The blocks are compressed using RMAN compression, and are then encrypted using the new ZDLRA library.


In the image below you can see the flow as the backup migrates to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full.





I hope this helps to show you how space efficient encrypted backups work, and how they are a much more efficient way to both protect your backups with encryption and utilize compression.

NOTE: Using space efficient encrypted backups does not require the ACO or ASO options.









Monday, October 23, 2023

Oracle Recovery Service now offers retention lock

 Oracle DB Recovery Service recently added a new feature to protect backups from being prematurely deleted, even by a tenancy administrator.  This new feature adds a retention lock to the Backup Retention Period at the policy level. The image below shows the new settings that you see within the protection policy.

Enabling retention lock

The recovery service comes with some default policies that appear as "oracle defined" policy types

Name        Backup retention period
Platinum    46 days
Gold        65 days
Silver      35 days
Bronze      14 days

These policies can't be changed, and they do not enable retention lock.

In order to implement a retention lock you need to create a new protection policy or  update an existing user defined protection policy.

Step #1 Set/Adjust "Backup retention period"

If you are creating a new "user defined" protection policy, you need to set the backup retention to a number of days between 14 and 95.  You should also take this opportunity to adjust the backup retention of an existing policy, if appropriate, before it is locked.

NOTE: Once a retention lock on the protection policy is activated (discussed in step #3), the backup retention period cannot be decreased, it can only be increased.

Step #2 Click on "enable retention lock"

This step is pretty straightforward. But the most important item to know is that the retention lock is not immediately in effect.  Much like the "retention lock" that is set on object storage, there is a minimum period of at least 14 days before the lock is "active".

 Note: Once the grace period has expired for the policy (explained later in this blog post) the  "retention lock"  is permanent and cannot be removed.


Step #3 Set "Scheduled lock time"

As I said in the previous step, the lock isn't immediately active. In this step you set the future date/time when the lock becomes active, and this date/time must be at least 14 days in the future.  This provides a grace period that delays when the lock on the policy becomes active. You have up until the lock activation date/time to push the scheduled lock time further into the future if it becomes necessary to further delay lock activation.

Grace Period 

I wanted to make sure I explain what happens with this grace period so that you can plan accordingly.

  • If you change an existing "user defined" policy to enable the retention lock, any databases that are a member of this policy will not have locked backups until the scheduled lock date/time activates the lock.  
  • If you add databases to a protection policy that has a retention lock enabled, the backups will not be locked until whichever time is farther in the future.
    • Scheduled lock time for the policy if the retention lock has not yet activated.
    • 14 days after the database is added to the protection policy.
  • Databases can be removed from a retention locked protection policy during this grace period.
  • If the policy itself is still within its grace period from activating, the backup retention period can be adjusted down for the protection policy.
NOTE: This 14 day grace period allows you to review the estimated space needed.  On the protected database summary page, for each database, you can see the "projected space for policy"  in the Space Usage section.  This value can be used to estimate the "locked backup" utilization.


What happens with a retention lock ?

Once the grace period expires the backups for the protected database are time locked and can't be prematurely deleted.  

The backups are protected by the following rules.

1. The database cannot be moved to another policy. No user within the tenancy, including an administrator, can remove a database from its retention-enabled policy.  If it becomes necessary to move a database to another policy, an SR needs to be raised, and security policies are followed to ensure that this is an approved change.


2.  There is always a 14 day grace period in which changes can be made before the backups become locked. This is your window to verify the backup storage usage required before the lock activates.

3. Even if you check the "72 hour termination option" on the database, backups are locked throughout the retention window.


Comments:

This is a great new feature that protects backups from being deleted by anyone in the tenancy, including tenancy administrators.  This provides an extra layer of security from an attack with compromised credentials.  Because the lock is permanent, always use the 14 day grace period to ensure the usage and duration are appropriate for your database.






Wednesday, October 4, 2023

Cyber Vault Characteristics

One topic that has been coming up over and over this year is the Cyber Vault. In this post I am going to go through the characteristics I commonly see when a customer builds a Cyber Vault.  The image below gives you a good idea of what is involved.

Characteristics of a Cyber Vault



  • NTP and DNS services: Because a Cyber Vault is often isolated from the rest of the datacenter, it is critical to have NTP service.  Proper time management is critical to ensuring backups are kept for the proper retention.  DNS isn't critical, but it is definitely very helpful in configuring infrastructure.  In many cases "/etc/hosts" can get around this, but it is a pain to maintain.
  • Firewalls:  Configuring firewalls and isolated networks is critical to ensure the Cyber Vault is isolated.  The vault is often physically in the same datacenter, with network isolation providing the protection.  Be sure to understand what ports, networks, and traffic direction are utilized on all infrastructure so you can properly set firewall rules.
  • Air Gap:  Creating an Air Gap has become the standard way to protect backups in the Cyber Vault. The Air Gap is often open for only a few hours a day, at random times, to ensure that the opening isn't predictable.  To limit the exposure time, it is critical to maximize the networking into the vault and minimize the amount of data necessary to transfer.
 NOTE: Not all customers choose to have an Air Gap.  Having an Air Gap that is closed for long periods of time ensures there is less chance of intrusion, BUT it guarantees long periods of data loss when a restoration is performed.  This trade-off is most critical for databases that are always changing.
  • Break-the-glass: There needs to be control over who gets access into the vault, and an approval process to ensure that all access is planned and controlled.
  • Backup validation: There needs to be a validation process in the vault to ensure that the backups are untouched.  When the backups contain executables, this is typically scanning for ransomware signatures. When the backups are Oracle backups, performing "Restore Database Validate" is the gold standard for validating them.
  • Clean Room: A clean room is an environment where backups can be tested. This can be a small environment (a server or two) or it can be large enough to restore and run the whole application.
  • Monitoring and reporting infrastructure: For Oracle this is OEM (Cloud Control). It is critical that any issues are alerted and reported outside the vault.
  • Audit Reports: Audit reports are critical to ensuring that the backups in the Cyber Vault are secured.  Audit reports will capture any changes to the environment, and any issues with the backups themselves.

BONUS: One thing that customers don't often think about is encryption keys.  Implementing TDE on Oracle Databases is an important part of protecting your data from exfiltration. But you should also ensure that you have a secure backup of your encryption keys in the vault.
OKV (Oracle Key Vault) is the best way of managing the keys for Oracle databases.

Tuesday, September 5, 2023

Creating dynamic KEEP archival backups from ZDLRA

 This post covers how to utilize the new package DBMS_RA.CREATE_ARCHIVAL_BACKUP to dynamically create KEEP archival backups from a ZDLRA.

When using this package to schedule KEEP backups, I recommend creating restore points with every incremental backup.  Read this blog post to find out why.

PROCEDURE CREATE_ARCHIVAL_BACKUP(
   db_unique_name         IN VARCHAR2,
   from_tag               IN VARCHAR2 DEFAULT NULL,
   compression_algorithm  IN VARCHAR2 DEFAULT NULL,
   encryption_algorithm   IN VARCHAR2 DEFAULT NULL,
   restore_point          IN VARCHAR2 DEFAULT NULL,
   restore_until_scn      IN VARCHAR2 DEFAULT NULL,
   restore_until_time     IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   attribute_set_name     IN VARCHAR2,
   format                 IN VARCHAR2 DEFAULT NULL,
   autobackup_prefix      IN VARCHAR2 DEFAULT NULL,
   restore_tag            IN VARCHAR2 DEFAULT NULL,
   keep_until_time        IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
   max_redo_to_apply      IN INTEGER DEFAULT 14,   --> Added in 21.1 June PSU
   comments               IN VARCHAR2 DEFAULT NULL);

NOTE: This blog post was updated to include the MAX_REDO_TO_APPLY parameter which is not documented as of writing this post.

 The documentation can be found here.  

These archival KEEP backups can be sent to either

  • TAPE - Using the copy-to-tape process you can send archival backups to physical tape, virtual tape, or any media manager that uses a "TAPE" backup type.
  • CLOUD - Using the copy-to-cloud process you can send archival backups to an OCI object store bucket which can be either on a local ZFSSA (using the OCI API protocol), or to the Oracle Cloud directly.



NOTE: When sending backups to a cloud location, retention rules can be set on the bucket LOCKING the cloud backups to ensure that they are immutable.  This is integrated with the new compliance settings on the RA21.



How to use this package

1. Identify the Database

Because this is more of an on-demand process, you have to execute the package for each database separately (rather than by using a protection policy), and identify for each database the point-in-time you want to use for recovery.

2. Set Archival Restore Point

Because the archival backup is dynamically created using existing backups the restore point works differently than if you create the KEEP backup on demand from the protected database. 


When you create a KEEP backup from the protected database, the backup contains 

    • Full backup of all datafiles
    • Backup of spfile and controlfile
    • Backup of archive logs created during the backup starting with a log switch at the beginning of the backup.
    • Final archive logs created by performing a log switch at the end of the backup.

 When you create an Archival backup from the ZDLRA , the backup contains

    • Most current virtual full backup of each datafile prior to the point in time for recovery that you choose. 
    • Backup of spfile and controlfile 
    • Backup of the active archive logs generated when the oldest virtual full datafile backup started, up to the archive logs needed to recover until the point in time chosen for recovery.

As you can see, a normal KEEP backup generated by the protected database is a "self-contained" backup that can be recovered only to the point in time that the backup completed.  You can extend the recovery point by adding additional KEEP archival log backups after the backup.

The dynamically created KEEP backup generated by the ZDLRA is also a "self-contained" backup that can be recovered to any point in time after the last datafile backup completed, but it also includes any point in time up to the restore point identified.  

Choices for a dynamic restore point 

 There are 3 options to choose a specific restore point. If you do not set one of these options, the KEEP backup will be created using the current restore point of the database.  

  • RESTORE_POINT - If you set a unique restore point in the database immediately following an incremental backup (or at a later point in time), you can create a KEEP backup that will recover to that point-in-time.  When using this process, after creating the restore point you should ensure that you also perform a log switch, and a log sweep to back up the archive logs.  This restore point name is used as the default RESTORE_TAG, and should be unique.  The recommended name (because it is the default KEEP restore tag) is "<KEEP_BACKUP_><yyyyMMddHH24miSS>".  BUT, in order to better identify the restore point, I would use a shorter name that just contains the date (assuming you are only performing a single daily incremental backup), for example "KEEP_BACKUP_MMDDYY".  By using a restore point, you can better control the amount of archive logs necessary to recover the database.

 

    • Incremental forever backups ensure that the duration of the backup is much shorter than a typical full KEEP backup limiting the amount of archive logs necessary to have a recovery point.
    • Setting a restore point immediately following the backup ensures that the recovery window following the last datafile backup piece is short, also limiting the amount of archive logs necessary.

  • RESTORE_UNTIL_SCN or RESTORE_UNTIL_TIME - I am grouping these two choices together because they are so similar.  Unlike using a preset restore point, either of these options will create the KEEP archival backup with a recovery point at the SCN number given or the UNTIL TIME given (using the database's timezone).


FROM_TAG - The documentation states that only backups containing the FROM_TAG will be considered if a FROM_TAG is set. I am thinking this would make sense if you let the restore point default to the current time and you want to choose which backup pieces to include.  I am not sure of the full use of this option, however.


WARNING: This process only looks back 14 days for a full backup to start the KEEP backupset with.  If you do not have a full backup within the 14 day window, this can be overridden with the MAX_REDO_TO_APPLY parameter in the package call. This parameter was added in the 21.1 June PSU to allow customers to set a window of more than 14 days.

 RECOMMENDATIONS 

  • Because you can create up to 2048 RESTORE_POINTs in a database, and normal restore points are automatically dropped when necessary, I would recommend creating a restore point following each incremental backup with the format mentioned above. This will allow you to create a self-contained FULL KEEP backup from any incremental backup as needed, which can be used to easily create an end-of-month KEEP backup (for example).

 

  • I would use the RESTORE_UNTIL options when it is necessary to create a KEEP backup as of a specific point-in-time regardless of when the backup completed. This would be used if the recovery point is critical.

WARNING

Before creating the archival backup, ensure you have the archive logs backed up that are needed to support the recovery point, and ensure there is enough time for the incremental backups to virtualize.  You may need to perform a log switch and execute an additional log sweep prior to scheduling the archival backup.
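
A minimal sketch of that sequence on the protected database (the restore point name follows the convention suggested above):

   -- in SQL*Plus, right after the incremental backup completes
   create restore point KEEP_BACKUP_063023;
   alter system archive log current;

   # then, in RMAN, sweep the archive logs so the restore point is covered
   BACKUP DEVICE TYPE SBT ARCHIVELOG ALL NOT BACKED UP;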

3. Set Archival Options


COMPRESSION_ALGORITHM - The default is no compression, and if the backup piece is already compressed, the ZDLRA will not try to compress the backup again.  The documentation does a good job of going through the options, and why you would choose one or the other.  Keep in mind that if your database uses TDE for all the datafiles, there will be no gain with compression, and the extra resources required for compression may slow down the restore.  Also, the compression is performed by the ZDLRA (RMAN compression), but the de-compression is performed by the protected database during restore.

 ENCRYPTION_ALGORITHM - The default is no encryption, but it is important to understand that any copy-to-cloud processing MUST have encryption set.  It is also important to understand that the ZDLRA must be using OKV (Oracle Key Vault) to store the encryption keys when encryption is set. The list of algorithms can be found in the documentation.

 

4. Set Archival Location and Name

ATTRIBUTE_SET_NAME - This must be specified, and this identifies the backup location to send the archival backups.

FORMAT - By default the backup pieces are given handles automatically generated by the ZDLRA; this setting allows you to change the default backup piece format using normal RMAN formatting options.

AUTOBACKUP_PREFIX - By default the autobackup pieces will retain their original names, but you can add a prefix to the original autobackup names.

 

5. Set Restore TAG

The RESTORE_TAG defaults to "<KEEP_BACKUP_><yyyyMMddHH24miSS>". This can be overridden to give the backup a more meaningful tag. For example, the end-of-month backup could be tagged as "MONTHLY_12_2023", making it easier to automate finding specific KEEP backups.

 RECOMMENDATIONS 

I would set the Restore Tag to a set format that makes the KEEP backups easy to find. You can see the example above. 

6. Set KEEP_UNTIL time

The default KEEP_UNTIL time is "FOREVER". In most cases you want to set an end date for the backup, allowing the ZDLRA to automatically remove the backup when it expires.  This date-time is based on the timezone of the protected database. 
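
Putting the steps together, a hedged sketch of the call, executed on the ZDLRA (the database name, attribute set, tag, and KEEP window are illustrative):

   begin
     dbms_ra.create_archival_backup(
       db_unique_name     => 'MYDB',
       restore_point      => 'KEEP_BACKUP_063023',
       attribute_set_name => 'TAPE_ARCH',
       restore_tag        => 'MONTHLY_06_2023',
       keep_until_time    => systimestamp + interval '12' month,
       comments           => 'End-of-month KEEP backup');
   end;
   /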



 SUMMARY 

 If using this functionality to dynamically create Archival KEEP backups...

  • I would set a Restore Point in each database immediately following every incremental backup.  
  • I would schedule this procedure to create the archival backup with a formatted restore tag to make the backup easy to find.
  • If backing up to a CLOUD location, I would use retention rules to ensure the backups are immutable until they expire.

 

 

Monday, June 5, 2023

Autotuned_reserved_space is a new feature on the ZDLRA that you should be using

Autotuned_reserved_space is a new policy setting that was released with 21.1, and you should be using it. When I talk to customers about how to manage databases on a ZDLRA, the biggest confusion comes in when I talk about reserved space.  Reserved space needs to be understood and properly managed. This new feature in 21.1 allows the ZDLRA to handle the reserved space for you, and I explain how to use it in this blog post.  First let's go through space usage, and reserved space in general.


Space usage on the ZDLRA. 


Recovery Window goal (which drives the space utilization)

The recovery window goal is set at the policy level, and this value (in days) is the number of days that you want to keep as a recovery window for all databases that are a member of this policy.  This will drive the space utilization.

Total space

The ZDLRA comes with all the space pre-allocated.  When you are looking at OEM, or in the SAR report you will see the total space listed. You want to make sure that you have enough space for your database backups and any incoming new backups.

Used Space

When the ZDLRA purges backups beyond the Recovery Window Goal that you set, it does a bulk purge of backups.  This can be controlled by setting the maximum disk backup retention in days (which defaults to 1.5 times the recovery window goal).  Because of the bulk purge, more space is shown as used than is needed to support your recovery window goal.

Recovery Window Space

This is the amount of space that is needed to support the recovery window goal.  Because of the bulk purge, the recovery window space is less than the used space.


Reserved space

In order to control what happens with space, the concept of reserved space is used.  When a database is added to the ZDLRA, the reserved space value is set for this database.  This value should be updated regularly to ensure that there is enough space for the database backups to be stored.

The important things to know about reserved space are:
  • The sum of all the reserved space cannot be greater than the total space available on the ZDLRA.
  • When adding a new database, it's reserved space must fit within the unreserved space.
  • When a new database is added, the reserved space must be set to at least the size of the database, and defaults to 2.5 times the size of the database.
  • The reserved space for a database needs to be at least the size of the largest datafile.
  • The reserved space should be larger than the amount of space needed to support the recovery window goal space for the database.  For databases with fluctuation, you need to reserve space for the peak usage. 
The reserved space serves two purposes when properly set
  1. It can be used to determine how much space is available for new database backups.
  2. If the ZDLRA determines that it does not have enough space to support the recovery window goal of the supported databases, space is reclaimed from databases whose reserved space is too small.
It is critical to keep the reserved space updated, and many customers have used an automated process to set the reserved space to "recovery window space needed" + 10%.

Unfortunately, configuring an automated process for all databases does not take into account any fluctuations in usage.  Let's say I have a database which is much busier at month's end; I want to make sure that my reserved space is not adjusted down to the low value. I want it to stay adjusted based on the highest space usage value.

Autotuned_reserved_space 


This is where autotuned reserved space can help you manage the reserved space.  This setting is controlled at the policy level.

AUTOTUNED_RESERVED_SPACE

This value is set at the protection policy level and contains either "YES" or "NO", defaulting to "NO". "YES" allows the ZDLRA to manage reserved space automatically for all databases that are members of this policy and whose disk_reserved_space is not set.

MAX_RESERVED_SPACE


This value is also set at the protection policy level.  This value is optional for autotuned_reserved_space, but if set, it will control the maximum amount of reserved space that can be set for an individual database in the protection policy. 

AUTOTUNE_SPACE_LIMIT


This value is set at the storage level for ALL databases. This sets a reserved space usage limit, where autotuning can slow down large reserved space increases. When reached, autotune will limit databases from increasing their reserved space growth to 10% per week.  This value is optional and will default to the total space if not set.  


SUMMARY:

  • autotuned_reserved_space - Enables autotuning of space within a protection policy
  • max_reserved_space - Controls the maximum reserved space of databases in a protection policy
  • autotune_space_limit - Slows the reserved space growth when a specified space limit is reached (a sketch of enabling these settings follows this list).
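
As a hedged sketch of enabling these settings (the parameter names mirror the policy attributes above; verify them, and the policy name, against the DBMS_RA documentation for your release):

   begin
     dbms_ra.update_protection_policy(
       protection_policy_name   => 'GOLD_POLICY',
       autotuned_reserved_space => 'YES',    -- assumption: parameter matches the attribute name
       max_reserved_space       => '50T');
   end;
   /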

What does autotune reserved space do ?

  • On a regular basis, if needed, the reserved space for each autotune controlled database is adjusted to reserve space for the recovery window goal, and incoming backups.
  • If the database has a disk_reserved_space set, autotuning will not be used for this database.  It is assumed that the disk_reserved_space will be set manually for this database.

Autotune will replace the need for the ZDLRA admin to constantly update the reserved space for each database as its space needs change over time. It will also allow them to configure a constant reserved space for databases with fluctuating storage usage.