Thursday, March 13, 2025

Oracle database wallets for TDE, ZDLRA and External Authentication

One topic that I spend a lot of time on is "wallets" and the Oracle database. When working with multiple features in the database, there are multiple wallets used for different purposes. Along with multiple wallets, there are two ways to manage wallets (mkstore and orapki), and there are multiple types of wallets: passworded, auto-login, and local.


Wallet Use Cases

Below is a subset of all the places where wallets are used.  

Encryption Wallet : This wallet contains the encryption keys used by DBMS_CRYPTO, TDE and/or RMAN encrypted backups

Strong authentication: Often when external authentication is configured in the database, each database has unique certificates that are stored in a wallet.  In this blog I will refer to this as Strong Authentication, a term that covers all of the external DB authentication methods: EUS, OUD, Kerberos, RADIUS, etc.

Certificate authorities and Self-signed certificates : These are used by the database to establish external calls to websites using SSL (HTTPS).  The database can validate the certificate with an external certificate authority, or the self-signed certificate can be stored directly in the wallet.

SEPS authentication : SEPS authentication is used by Oracle clients (including the ZDLRA) to allow scripts to authenticate with a username and password that is stored in an auto-login wallet.  The connection string to the DB is used as the key to retrieve the encrypted connection information.

Real-time redo and TLS certificates for ZDLRA : When the ZDLRA is configured to utilize HTTPS for sending/receiving backups, a self-signed certificate is stored in a wallet. This is the same wallet that is used for SEPS authentication of the VPC user.

You can imagine the confusion when you try to combine multiple products that use a wallet, and you want to manage those wallets separately.


Encryption Wallet 

The encryption wallet is the easiest wallet to manage because it is typically isolated from the other wallets that are in use.

The hierarchy Oracle uses to find the location of an encryption wallet is below.  Oracle follows this hierarchy and uses the first wallet it finds.

WALLET_ROOT : This is the recommended location for the encryption wallet as of 19c. WALLET_ROOT is an spfile/pfile setting that allows you to specify a different location for each database.  It is recommended that the wallet is stored under $ORACLE_BASE/admin/{DB name}/wallet on each node to allow for out-of-place upgrades.

ENCRYPTION_WALLET_LOCATION in the sqlnet.ora : This was the recommended location prior to 19c.  When multiple databases were sharing the same $ORACLE_HOME (and thus the same sqlnet.ora file), this became confusing. The workaround was to set the location using a variable representation of the DB_NAME.  

$ORACLE_BASE/admin/{DB name}/wallet : This is the recommended location, but you should point to it by setting WALLET_ROOT, or on an older release (pre-19c) by setting ENCRYPTION_WALLET_LOCATION.  Depending on this location being the "default" location can cause issues when you start using a wallet for other purposes.  This same location is the default location for any Strong authentication implementations.

Since you should be on 19c, you should be using WALLET_ROOT for the encryption wallet location.

NOTE: If you are running databases in OCI, it is mandatory to use WALLET_ROOT in order to utilize the Recovery Service.
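Below is a minimal sketch of pointing a 19c+ database at a WALLET_ROOT location; the path and the database name (MYDB) are placeholders for your environment.

ALTER SYSTEM SET WALLET_ROOT='/u01/app/oracle/admin/MYDB/wallet' SCOPE=SPFILE;
-- WALLET_ROOT is a static parameter, so bounce the database, then:
ALTER SYSTEM SET TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=FILE' SCOPE=BOTH;
-- the database now looks for the TDE wallet under /u01/app/oracle/admin/MYDB/wallet/tde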

Recommendation :

My recommendation is to always use OKV to manage TDE encryption keys, but I understand that it is a licensable product and it isn't feasible to expect that all customers are using it.

When working in a RAC environment (non-OKV) it becomes critical to have a shared TDE wallet. You may be tempted to store the wallet on ASM, or Exascale. I recommend that you DO NOT.  This makes it much more difficult to backup the wallet, and it makes it more difficult to have a shared SEPS wallet if backing up to a ZDLRA.

Store the TDE encryption wallet on ACFS, and point the WALLET_ROOT to the ACFS location mounted on each node.  When backing up the encryption wallet, copy ONLY the passworded wallet ewallet.p12 to another location to be backed up outside of the DB backups.
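As a sketch of that backup step (the paths are placeholders), copy only the passworded wallet to a location that is backed up separately from the database backups:

cp /acfs/MYDB/wallet/tde/ewallet.p12 /secure_backup/MYDB/ewallet_$(date +%Y%m%d).p12
# do NOT copy cwallet.sso -- the auto-login wallet can be recreated from ewallet.p12 with the password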

Strong authentication wallets

This wallet typically causes the most headaches for users.  The hierarchy Oracle uses to find the location of a Strong authentication wallet is below.  Like the encryption wallet, Oracle follows this hierarchy and uses the first wallet it finds.

WALLET_LOCATION in the sqlnet.ora : When multiple databases are sharing the same $ORACLE_HOME (and thus the same sqlnet.ora file), this becomes confusing. The workaround was to set the location using a variable representation of the DB_NAME as part of location string.  

$ORACLE_BASE/admin/{DB name}/wallet : This is the location where most customers place their Strong authentication wallets, since it is isolated to the database associated with the wallet.

NOTE: The issue arises when customers use a product/feature that updates the WALLET_LOCATION in the sqlnet.ora, which breaks authentication since the WALLET_LOCATION is checked first.

Use separate wallets, and leverage the TNS_ADMIN variable to point to different sqlnet.ora files while sharing the same $ORACLE_HOME.
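A minimal sketch of that separation (the directory names are assumptions): each database gets its own TNS_ADMIN directory and sqlnet.ora, and the sqlnet.ora in the shared $ORACLE_HOME never needs a WALLET_LOCATION entry.

export TNS_ADMIN=/u01/app/oracle/admin/MYDB/network

# /u01/app/oracle/admin/MYDB/network/sqlnet.ora
WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)
    (METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/MYDB/wallet)))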


Certificate authorities and Self-signed certificates 

The most common use case for certificate authorities is when utilizing the DBMS_CLOUD family of packages.  Packages such as DBMS_CLOUD call out to object storage and require a secure (HTTPS) connection. In order to open a secure connection, the client needs to validate the certificate as a valid certificate, or use a self-signed certificate that is stored in the wallet.
This same issue is true when using DBMS_CLOUD_AI and DBMS_VECTOR_CHAIN, which make calls to external LLMs that often require a secure connection.

This wallet is controlled by setting the database property "SSL_WALLET". 
For simplicity I would recommend creating a central wallet that can be used by ALL databases on the host and is stored within $ORACLE_BASE. My favorite location is $ORACLE_BASE/cert_wallet, which identifies it as containing certificate authorities.
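A sketch of creating that central wallet and pointing the database at it (the paths and the certificate file name are assumptions):

orapki wallet create -wallet $ORACLE_BASE/cert_wallet -auto_login -pwd {wallet password}
orapki wallet add -wallet $ORACLE_BASE/cert_wallet -trusted_cert -cert /tmp/my_ca.crt -pwd {wallet password}

-- then, in the database, point the SSL_WALLET property at it:
ALTER DATABASE PROPERTY SET ssl_wallet='/u01/app/oracle/cert_wallet';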

I do not recommend adding certificates to the Strong authentication wallet, or the SEPS wallet (discussed next), as it becomes more difficult to manage multiple wallets when making updates.

SEPS authentication 

The next wallet I want to discuss is the SEPS authentication wallet. This wallet is used by Oracle clients (sqlplus, RMAN, and ZDLRA) to store the credentials for a database.

The connection string (either an ezconnect string or a tnsnames.ora entry) is added to the wallet, along with the username and password that will be used when connecting using this entry.  

The location of the wallet is stored in the sqlnet.ora file, and there are 2 parameters associated with this setting.

SQLNET.WALLET_OVERRIDE=true

WALLET_LOCATION={location on disk}

NOTE: Setting the WALLET_OVERRIDE to true disables any OPS$ usage and allows the usage of SEPS wallets for authentication. 

Setting the WALLET_LOCATION on a host that supports databases utilizing Strong authentication often causes issues if it does not specify a separate location for each database using a variable.  The sqlnet.ora file is only read at startup, so changes to the WALLET_LOCATION might not become apparent until after a database bounce.
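As a sketch of setting up a SEPS wallet (the alias, username, and paths are assumptions):

mkstore -wrl /u01/app/oracle/admin/MYDB/wallet -create
mkstore -wrl /u01/app/oracle/admin/MYDB/wallet -createCredential mydb_alias backup_user

# with the sqlnet.ora parameters above pointing at that wallet,
# clients can now connect without a password:
sqlplus /@mydb_alias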

Recommendation :

If you are using multiple products that use a wallet AND share the same Oracle Home, I recommend using the TNS_ADMIN variable to manage which wallet to use in scripts.

As wallets become more common for security, separating out the use cases, if possible, will make it easier to manage and rotate authentication information.  With TNS_ADMIN you can point to a directory containing a sqlnet.ora file specific to the database, and leave the original sqlnet.ora file without a WALLET_LOCATION entry. 

Real-time redo and TLS certificates for ZDLRA 

Prior to the 19.18 DB release, configuring real-time redo for databases sending backups to the ZDLRA required a bounce of the database (to refresh the DB's copy of the sqlnet.ora), and it required the WALLET_LOCATION to be set in the sqlnet.ora.

This changed with 19.18, and I recommend you use the new location.

The hierarchy Oracle uses to find the location of the real-time redo wallet is below.  Like the encryption wallet, Oracle follows this hierarchy and uses the first wallet it finds.

WALLET_ROOT/server_seps : If the variable WALLET_ROOT is set, and a wallet exists in the server_seps subdirectory, that wallet is used by the real-time redo.  This is a HUGE improvement as it doesn't require a bounce, and it makes it much easier to avoid issues with Strong authentication, and databases that share the same $ORACLE_HOME.

NOTE: WALLET_ROOT was added in 18c. If you are still using 12.x, you need to use the sqlnet.ora.

WALLET_LOCATION in the sqlnet.ora : When multiple databases are sharing the same $ORACLE_HOME (and thus the same sqlnet.ora file), this becomes confusing. The workaround was to set the location using a variable representation of the DB_NAME.  This is what I mentioned for Strong authentication.

Recommendation :

When backing up to a ZDLRA, especially with real-time redo, you should be using a SEPS wallet that is stored under WALLET_ROOT.
Since the ZDLRA supports encrypted backups, even if you don't own ASO, I recommend creating an encryption wallet with keys to encrypt your backups.  This is much more secure, and this ability is included in the ZDLRA license.
The steps I would recommend for any customer using the ZDLRA are:
  • If you don't have an encryption wallet (because you don't own ASO), create one and set the  encryption keys for both the CDB and PDB (if it is multi-tenant). This does require a DB bounce to set the WALLET_ROOT, but this will allow you to have RMAN encrypted backups.
  • In a RAC environment store the encryption wallet on ACFS and point WALLET_ROOT to the ACFS location.
  • Store the SEPS wallet containing the VPC user credentials for the ZDLRA in the WALLET_ROOT/server_seps directory.  This will automatically be used by real-time redo starting with 19.18.
  • Ensure your channel configuration for RMAN points to the WALLET_ROOT/server_seps directory on ACFS for the wallet.
  • In your RMAN scripts ensure that you are pointing to a TNS_ADMIN location that has a sqlnet.ora file pointing to the WALLET_ROOT/server_seps location for WALLET_LOCATION or ensure that OEM has the correct SEPS wallet location set. 
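As a sketch of that channel configuration (the library path, wallet location, and ZDLRA scan/service names are assumptions for your environment):

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS
  "SBT_LIBRARY=/u01/app/oracle/lib/libra.so,
   ENV=(RA_WALLET='location=file:/acfs/wallet/server_seps
   credential_alias=zdlra-scan:1521/zdlra:dedicated')";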

MKSTORE vs ORAPKI

orapki 

The orapki utility manages public key infrastructure (PKI) elements, such as wallets and certificate revocation lists, from the command line.  This is the recommended method of managing wallet files.

You can use the orapki command-line utility to perform the following tasks:

  • Creating and viewing signed certificates for testing purposes

  • Manage Oracle wallets (except for Transparent Data Encryption keystores):

    • Create and display Oracle wallets

    • Add and remove certificate requests

    • Add and remove certificates

    • Add and remove trusted certificates

  • Manage certificate revocation lists (CRLs):

    • Renaming CRLs with a hash value for certificate validation

    • Uploading, listing, viewing, and deleting CRLs in Oracle Internet Directory

NOTE: The above is directly from the 19c documentation.  You can see that orapki is used to manage certificates with no mention of managing SEPS credentials.

mkstore

The first thing you will notice with mkstore is that the command should be considered deprecated.  Upon digging into this some more, I found a comment from Russ Lowenthal (VP of Database Security products) who mentions that SEPS credential wallet management will not be added to orapki until AFTER 23c.

NOTE: Even though it is considered deprecated, mkstore is the only way to manage SEPS credentials from the command line, and should only be used to manage SEPS credentials.

Administer key management

I added the "Administer Key Management" command to this section because it can also be used to manage both secrets and SEPS credentials.
The following options are available and can be found in the documentation.
  • add/update/delete secret '{secret name}' for client '{client identifier}' --> secret
  • add/update/delete secret '{secret name}' for client '{client identifier}' to [local] auto_login keystore {keystore location} --> SEPS
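For example, a sketch of adding a SEPS credential for a ZDLRA VPC user directly from the database (the connect string, secret, and keystore location are assumptions):

ADMINISTER KEY MANAGEMENT
   ADD SECRET 'vpc_user_password' FOR CLIENT 'zdlra-scan:1521/zdlra:dedicated'
   TO LOCAL AUTO_LOGIN KEYSTORE '/acfs/wallet/server_seps';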

How to manage wallets


Wallet Type                                           How to manage contents
Encryption Keys                                       Utilize the "ADMINISTER KEY MANAGEMENT" statement from the database
External user authentication                          Use orapki to manage certificates, or the OWM tool which uses orapki
Certificate authorities and Self-signed certificates  Use orapki to manage certificates
SEPS authentication                                   Use mkstore for now, as orapki does not support SEPS
Real-time redo for ZDLRA                              Use mkstore for now, as orapki does not support SEPS
TLS certificates for ZDLRA                            Use orapki to manage the certificates


Wallet names and type

When you look in the wallet directory you will see one, or both, of these wallet files.

cwallet.sso - This is an auto-login wallet.  With an auto-login wallet you can access the contents without having to provide a password. In almost all cases, you will have this type of wallet file.

ewallet.p12 - This is the passworded wallet. In order to add/change/delete entries you need to specify a password when making those changes.

NOTE:

  • If only the cwallet.sso exists, you can assume it is an auto-login only wallet.
  • If both wallets exist, you can access the contents without a password, but any add/change/delete commands will require a password and will update both the passworded wallet and the auto-login wallet.
  • If only the ewallet.p12 exists, you must provide a password to access the contents of the wallet.


Standard Password Protected wallet

This is the least common wallet type (at least alone, without an auto-login wallet), since it requires a password to access the contents. It is most commonly used to protect encryption keys for databases, since it requires entering a password to open the wallet when the database is started.  In this configuration you create a new wallet using orapki or ADMINISTER KEY MANAGEMENT and provide a password.  In this case there will only be a single wallet file, ewallet.p12.

NOTE: You cannot create a non auto-login wallet with mkstore 

  • orapki wallet create -wallet {wallet location}
  • administer key management create keystore '{wallet location}' identified by {password}

Auto-login only wallets

You can create an auto-login only wallet using mkstore, orapki, or the administer key management command.  The idea of an auto-login wallet is that you can add entries to the wallet without needing a password. You can also list the entries in the wallet using either CLI tool. In this configuration there is only a cwallet.sso file in the wallet directory. A sketch of the creation commands follows.
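A minimal sketch of creating an auto-login only wallet with either tool:

mkstore -wrl {wallet location} -createALO
orapki wallet create -wallet {wallet location} -auto_login_only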

Auto-login wallets

This is the most common configuration that you will see.  There is both a passworded wallet, and an auto-login wallet. With both wallets, it requires a password to make changes, but no password is required to open the wallet and use it.  The two wallets are synchronized when you make changes.

There are two ways to create auto-login wallets.

    1. Create a non auto-login wallet using orapki or within the database, then create an auto-login wallet from the non auto-login wallet.
  • orapki wallet create -wallet {wallet location}
    followed by: orapki wallet create -wallet {wallet location} -auto_login
    OR: mkstore -wrl {wallet location} -createSSO
  • administer key management create keystore '{wallet location}' identified by {password}
    followed by: administer key management create auto_login keystore from keystore '{wallet location}' identified by {password}
    2. Create an auto-login wallet and non auto-login wallet together
  • orapki wallet create -wallet {wallet location} -auto_login

Local Auto-login wallets

Local auto-login wallets work the same way as the auto-login wallet, EXCEPT the wallet is encrypted in a way that makes it usable only on the host where it was created.  This limits any security risk if the wallet is copied (or restored) onto a different host.

When creating a local auto-login wallet you would use one of:
  • mkstore -wrl {wallet location} -createLSSO
  • orapki wallet create -wallet {wallet location} -auto_login_local
  • administer key management create local auto_login keystore from keystore '{wallet location}' identified by {password}

NOTE:

  • Local auto-login wallets are much more secure, as they can only be used on the host where the wallet was created.
  • When backing up wallets, including encryption wallets, only back up the ewallet.p12 file.  This ensures that a password is required to utilize the wallet.
NOTE: When only backing up the ewallet.p12, be sure you know the password so that you can recreate the auto-login wallet.
  • ALWAYS review the permissions on your wallet files, especially the auto-login wallet files containing credentials.  Any user that can access the auto-login wallet file can utilize the credentials contained within it.

ASM/Exascale for Encryption wallets

You probably noticed that I am not a fan of ASM/Exascale as an encryption wallet location, even though ASM is mentioned in the documentation.
I will add more to this section, but this is my reasoning for not preferring ASM.
  1. It's easy to forget to back up the wallet file.  Having it on ASM requires copying it back to the file system to get backed up.  It is very easy to forget about this, rotate the keys, and not have a wallet backup.
  2. WALLET_ROOT is becoming the starting point for different wallet files, not just encryption wallets.  ZDLRA is the first example. When WALLET_ROOT points to ASM or Exascale, the same wallet cannot be used by many tools because they only expect wallets on the file system.
Shared wallets make sense; that's why I prefer ACFS, or a mounted filesystem, for WALLET_ROOT.


Summary 

Starting with DB 19.18, you have the ability to store individual credential wallets for real-time redo transport when leveraging the ZDLRA for backups.  You can also use the TNS_ADMIN variable to set a different location when using SEPS authentication.  It is now possible to manage multiple wallets separately without having conflicts between products and features.


MY RECOMMENDATIONS (summary):

  • Use Oracle Key Vault (OKV) for encryption keys.  OKV is an Oracle product specifically designed to securely store and manage encryption keys, and much more.  OKV has tight integration with the Oracle Database.  If you are not using OKV, at least store Encryption Keys on ACFS as the shared location (not ASM or Exascale).
  • Use WALLET_ROOT if you are on 18c+.  This will continue to be used by products to help separate wallet locations for different use cases.  The ZDLRA is the first of many products to use this hierarchy for wallet files.
  • Backup only the ewallet.p12.  This is the passworded wallet and with the password it can be used to recreate the auto-login wallet. This is especially critical for Encryption keys.

BUT - Make sure you know the password. Without the password, you can't recreate the auto-login wallet.

  • Lock down permissions on wallet files to only the account that needs access, especially the cwallet.sso file (auto-login).
  • Whenever possible create local auto-login wallets that can only be used on the source host where the wallet was created. This wallet, however,  cannot be shared across nodes.
  • Keep your SEPS wallets separate by utilizing the TNS_ADMIN variable and having a custom sqlnet.ora file.
  • If you are backing up to a ZDLRA, create an encryption wallet with keys and set the WALLET_ROOT location.  Put the SEPS wallet for the ZDLRA under WALLET_ROOT/server_seps.  This wallet can also be used for the TLS certificate if you configure HTTPS.  Keep this configuration separate to avoid conflicts with other products.


Wednesday, March 5, 2025

Oracle DB release 23.7 includes "Select AI" with the DBMS_CLOUD_AI package

 The latest release of Oracle DB23ai (23.7) now includes the promised packages for DBMS_CLOUD.  

I'm not talking about the ADB release; this is the general 23.7 DB release, and it even includes Select AI!



You can find the documentation for how to install DBMS_CLOUD here.  This is updated documentation that supersedes the MOS note 2748362.1 - How To Setup And Use DBMS_CLOUD Package.


What's Included in 23.7

The following packages are included in 23.7

DBMS_CLOUD - The SQL to install this package has been included with the DB release since 19.9.  More procedures have been added over time to provide more functionality with object storage.

DBMS_CLOUD_AI - This is the most interesting part of the release (at least to me).  This package is used as the basis for Select AI.

DBMS_CLOUD_NOTIFICATION -  This package allows you to send messages, or the output of a query to an e-mail or to Slack.

DBMS_CLOUD_PIPELINE -  This package allows you to create a data pipeline for loading and exporting data in the cloud.  This is mainly used to interact with data in object storage on a scheduled basis.

DBMS_CLOUD_REPO -  This package allows you to interact with hosted code repositories from the Oracle Database. Repositories like GitHub are supported.


Where to start

The following are some great places to learn more about how to use the packages.

Videos:

Documentation:


Installing in your Database

I started by going through the install and prerequisites found here.
  1. Install the DBMS_CLOUD packages in a 23.7 CDB using the instructions in the 23.7 documentation (20.2)
  2. Create the SSL wallet with certificates (20.3)
  3. Configure your environment with the new wallet (20.4).
NOTE: If you are using SEPS (the ZDLRA uses SEPS) or other user authentication, this is the same wallet that those authentication methods use.
  4. Configure the ACL list to allow DB calls to the LLM that you are going to be using (20.5)
  5. Verify the configuration for DBMS_CLOUD (20.6)
  6. Configure users or roles to use DBMS_CLOUD (20.7).  In my case I granted access to "SH".
  7. Create the credential for the LLM you are using in your PDB
  8. Create the profile which identifies the tables that you want to use in your PDB (a sketch of steps 7 and 8 follows this list)
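Below is a minimal sketch of steps 7 and 8, assuming an SH schema and placeholder credential/profile names (LLM_CRED, SH_PROFILE); the provider and credential values depend on the LLM you are using.

BEGIN
   -- step 7: credential used to authenticate against the LLM provider
   DBMS_CLOUD.CREATE_CREDENTIAL(
      credential_name => 'LLM_CRED',
      username        => 'my_llm_user',      -- placeholder, provider specific
      password        => 'my_api_key');      -- placeholder, provider specific
   -- step 8: profile identifying the schema objects Select AI can use
   DBMS_CLOUD_AI.CREATE_PROFILE(
      profile_name => 'SH_PROFILE',
      attributes   => '{"provider"        : "openai",
                        "credential_name" : "LLM_CRED",
                        "object_list"     : [{"owner": "SH", "name": "CUSTOMERS"},
                                             {"owner": "SH", "name": "COUNTRIES"}]}');
   -- make the profile active for this session before running "select ai"
   DBMS_CLOUD_AI.SET_PROFILE(profile_name => 'SH_PROFILE');
END;
/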

Example


I installed the Sample sales schema into my PDB (SH user) and followed the instructions in the documentation found here.


Below is the output of one of the queries that I ran using "Select AI" once I went through these steps to install it with the sample SH schema.

SQL> select ai tell me how many customers are in each country;

COUNTRY_NAME                             CUSTOMER_COUNT
---------------------------------------- --------------
Italy                                              7780
Singapore                                           597
Brazil                                              832
United Kingdom                                     7557
Australia                                           831
Japan                                               624
Canada                                             2010
Argentina                                           403
Poland                                              708
China                                               712
Germany                                            8173
United States of America                          18520
France                                             3833
Spain                                              2039
New Zealand                                         244
Denmark                                             383
South Africa                                         88
Saudi Arabia                                         75
Turkey                                               91


I am just getting started determining how to best use this feature, and this should be enough to get you started.



Monday, January 20, 2025

Oracle DB 23ai in your datacenter

 Oracle DB 23ai is available for Exadata and I've been spending a lot of time working on building some demos in my lab environment. Below is the architecture.


To help you get started below are the pre-steps I did to create this demo.

  1. Download and install DB 23ai (the latest version, which was 23.6 when I created my demo).
  2. Install APEX within the database.  Most existing demos use APEX, and it makes it easy to build a simple application.  Here is a link to a blog that I used to explain the install process, and the ORDS setup for the webserver.
  3. Optional - Install the embedding model in your database to convert text to its vector representation. Here is a link to how to do this. You can also use an external model with Ollama.
  4. Optional - Install DBMS_CLOUD to access object storage.  Most demos access object storage to read in documents. Here is a link to my blog on how to install it.  I actually used ZFS for my object storage after installing DBMS_CLOUD. You can use OCI, or even a PAR against any object storage.
  5. Install Ollama. Ollama is used to host the LLM, and within Ollama you can download any open-source model. For my demo, I downloaded and installed llama3.2.
The demo I started with was the Texas Legislation demo, which can be found here. This link points to a video showing the demo, and within the description is a link to the code and instructions on how to recreate the demo in your environment, which are located in Github.

The majority of the application is written in APEX, and can be downloaded using the instructions on github which can be found here.

The major changes I had to make to get this demo working on-premises had to do with using Ollama rather than accessing OCI for the LLM.

Documentation for using Ollama can be found here.

The biggest challenge was the LLM calls.  The embedding and document search used the same DBMS_VECTOR calls regardless of the model.  The demo, however, uses DBMS_CLOUD.send_request, which does not support Ollama.

I changed the functions to call DBMS_VECTOR_CHAIN.UTL_TO_GENERATE_TEXT instead, and I built a "prompt" instead of a message.  This is outlined below.

Description: Call the LLM with chat history and results

Demo request - dbms_cloud.send_request:
    Message:
    Question:

Ollama request - DBMS_VECTOR_CHAIN.UTL_TO_GENERATE_TEXT:
    Question:
    Chat History:
    Context:
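As a sketch of the replacement call (the host URL and the prompt layout are assumptions; bind variables stand in for the values the APEX application supplies):

DECLARE
   l_prompt  CLOB := 'Question: '     || :question     || chr(10) ||
                     'Chat History: ' || :chat_history || chr(10) ||
                     'Context: '      || :context;
   l_params  JSON := json('{
      "provider" : "ollama",
      "host"     : "local",
      "url"      : "http://ollama-host:11434/api/generate",
      "model"    : "llama3.2"}');
   l_answer  CLOB;
BEGIN
   -- one call replaces the demo's dbms_cloud.send_request message
   l_answer := DBMS_VECTOR_CHAIN.UTL_TO_GENERATE_TEXT(l_prompt, l_params);
END;
/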

SUMMARY : This RAG demo is a great place to start learning how to create a RAG architecture, and with just a few changes many of the demos created for Autonomous can be used on-premises as well!






Wednesday, December 11, 2024

Listing Databases on an Oracle DB node

In this blog post I am sharing a script that I wrote that will give you the list of databases running on a DB node.  The information provided by the script is:

  • DB_UNIQUE_NAME
  • ORACLE_SID
  • DB_HOME

WHY


I have been working on a script to automatically configure OKV for all of the Oracle Databases running on a DB host.  In order to install OKV in a RAC cluster, I want to ensure the unique OKV software files are in the same location on every host when I set the WALLET_ROOT variable for my database.  The optimal location is to put the software under $ORACLE_BASE/admin/${DB_NAME} which should exist on single instance nodes, and RAC nodes.

Easy right?


I thought it would be easy to determine the name of all of the databases on a host so that I could make sure the install goes into $ORACLE_BASE/admin/{DB_NAME}/okv directory on each DB node.

The first item I realized is that the directory structure under $ORACLE_BASE/admin is actually the DB_UNIQUE_NAME rather than DB_NAME. This allows for 2 different instances of the same DB_NAME (primary and standby) to be running on the same DB node without any conflicts. 

Along with determining the DB_UNIQUE_NAME, I wanted to take the following items into account
  • A RAC environment with, or without srvctl properly configured
  • A non-RAC environment 
  • Exclude directories that are under $ORACLE_BASE/admin that are not a DB_UNIQUE_NAME running on the host.
  • Don't match on ORACLE_SID.  The ORACLE_SID name on a DB node can be completely different from the DB_UNIQUE_NAME.

Answer:

After searching around Google and not finding a good answer, I checked with my colleagues.  Still no good answer. There were just suggestions like "srvctl config", which would only work on a RAC node where all databases are properly registered.

The way I decided to do this was to:
  • Identify the possible DB_UNIQUE_NAME entries by looking in $ORACLE_BASE/admin
  • Match the possible DB_UNIQUE_NAME with ORACLE_SIDs by looking in $ORACLE_BASE/diag/rdbms/${DB_UNIQUE_NAME} to find the ORACLE_SID name.  I would only include DB_UNIQUE_NAMEs that exist in this directory structure and have a subdirectory.
  • Find the possible ORACLE_HOME by matching the ORACLE_SID to the /etc/oratab.  If there is no entry in /etc/oratab still include it.

Script:


Below is the script I came up with, and it displays a report of the databases on the host.  This can be changed to store the output in a temporary file and read it into a script that loops through the databases.
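The script itself is embedded in the original post; below is a minimal sketch that implements the logic described above (it assumes the standard $ORACLE_BASE layout and bash 4+ for the lowercase expansion):

#!/bin/bash
# List the databases on this DB node:
#  - DB_UNIQUE_NAME comes from the directories under $ORACLE_BASE/admin
#  - ORACLE_SID comes from the diag destination for that DB_UNIQUE_NAME
#  - ORACLE_HOME comes from /etc/oratab, when an entry exists
ORACLE_BASE=${ORACLE_BASE:-/u01/app/oracle}

for admin_dir in "$ORACLE_BASE"/admin/*/ ; do
  db_unique_name=$(basename "$admin_dir")
  diag_dir="$ORACLE_BASE/diag/rdbms/${db_unique_name,,}"
  # skip admin directories that are not a DB_UNIQUE_NAME running on this host
  [ -d "$diag_dir" ] || continue
  for sid_dir in "$diag_dir"/*/ ; do
    [ -d "$sid_dir" ] || continue
    oracle_sid=$(basename "$sid_dir")
    oracle_home=$(grep "^${oracle_sid}:" /etc/oratab 2>/dev/null | cut -d: -f2)
    echo "DB_UNIQUE_NAME : $db_unique_name"
    echo "ORACLE_SID     : $oracle_sid"
    if [ -n "$oracle_home" ]; then
      echo "ORACLE_HOME    : $oracle_home"
    else
      echo "ORACLE_HOME    :  ******  NOT IN /etc/oratab **** Cannot determine ORACLE_HOME *****"
    fi
    echo ""
  done
done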




Output:

Below is the sample output from the script. You can see that it doesn't require the DB to exist in the /etc/oratab file.



DB_UNIQUE_NAME : cdb1db1
ORACLE_SID     : cdb1db11
ORACLE_HOME    :  ******  NOT IN /etc/oratab **** Cannot determine ORACLE_HOME *****


DB_UNIQUE_NAME : daver
ORACLE_SID     : daver1
ORACLE_HOME    : /u01/app/oracle/product/19.0.0.0/dbhome_1


DB_UNIQUE_NAME : dbsgadat
ORACLE_SID     : dbsgadat1
ORACLE_HOME    : /u01/app/oracle/product/19.0.0.0/dbhome_1


DB_UNIQUE_NAME : dbsgprd
ORACLE_SID     : dbsgprd1
ORACLE_HOME    : /u01/app/oracle/product/19.0.0.0/dbhome_1



Finally:


If you are also trying to get a list of databases that are running on a DB node I hope this helps you.

Sunday, September 29, 2024

ZDLRA backups -- How do I know if they are Encrypted

The ZDLRA introduced a new feature with release 23.1 that can both encrypt backups (if they are not already encrypted from TDE) and compress the backups.  Combining both encryption and compression with this feature is unique to the ZDLRA.



I talked about this new exciting feature in a blog post on Oracle.com you can find here.

What I am going to cover in this blog post is how to audit the RMAN catalog on the ZDLRA to validate that your backups are completely RMAN encrypted.

There are two big advantages of ensuring your backups are fully encrypted

1) With the prevalence of data exfiltration, and the advent of new regulations in many industries,  full encryption of backups is mandatory

2) When sending a backup to the Oracle cloud (either in OCI or to object storage on ZFS) full encryption is required to protect the backup data.

The question I often get asked with this feature is..

 "How do you tell  if your backups are encrypted ?"

You can determine that your backups are encrypted by looking at the RMAN catalog.

The RC_BACKUP_PIECE view contains a column identifying if the backup is encrypted.  This column is set to "YES" only when the backup piece is encrypted.

Keep in mind that there are multiple types of backup pieces contained in the catalog:

  • Controlfile backups
  • Spfile backups
  • Archive log sweeps
  • Archive log backups from real-time redo
  • Datafile backups
  • Virtual Full backups created from incremental backups.
All of these backups except for two are sent from RMAN with "encryption on", and the backup set will be marked as encrypted based on the RMAN encryption setting.

The two that are not set by RMAN directly are
  • Real-time redo backups. Real-time redo backups are identified in the RMAN catalog as encrypted when the destination setting on the protected database has ENCRYPTION=ENABLE set.
  • Virtual Full backups.  Virtual full backups are identified, for each datafile backup set, as encrypted ONLY after a new L0 is taken with RMAN encryption on, and all subsequent L1 backups are encrypted.  I know that is a lot of stipulations on identifying the virtual full backup as encrypted.  Only when a new FULL encrypted backup is taken, and all future incremental backups are encrypted can the ZDLRA be sure the backup has remained completely encrypted.

Checking the catalog

The script below takes 2 parameters (&db_name and &days_to_compare), and it will check the RMAN catalog and display the status of the backups by backup type, making it easier to identify any backup pieces that may not be encrypted.
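The full script is embedded in the original post; the sketch below shows the idea against the standard recovery catalog views (it does not separate real-time redo archive logs from log sweeps, which the full script does, and column values should be checked against your catalog version):

select encrypted, compressed, count(*) pieces, backup_piece_type
from (
   select bp.encrypted, bp.compressed,
          case
             when nvl(bs.controlfile_included,'NO') not in ('NO','NONE') then 'Controlfile/SPFILE backup'
             when bs.backup_type = 'L' then 'Archive Log'
             when bs.backup_type = 'D' then 'Full backup'
             when bs.backup_type = 'I' then 'Incremental L'||bs.incremental_level||' backup'
          end backup_piece_type
   from  rc_backup_piece bp
         join rc_backup_set bs on bs.bs_key = bp.bs_key
         join rc_database   d  on d.db_key  = bs.db_key
   where d.name = upper('&db_name')
   and   bp.start_time > sysdate - &days_to_compare
)
group by encrypted, compressed, backup_piece_type
order by backup_piece_type;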



This provides a nicely formatted output as you can see below.


                                             Database backup summary for last 15 days database: DBSG23AI

Encrypted  Compressed Backup
 Yes or No  Yes or No pieces Backup piece type
========== ========== ====== ========================================
YES        YES            69  Full backup
YES        NO             39 Archive Log - log sweep
NO         YES             1 Incremental L1  backup
YES        NO           3958 Archive Log - real-time redo
YES        YES            67 Incremental L1  backup
NO         YES             3  Full backup
NO         NO              1 Controlfile/SPFILE backup
YES        NO             26 Controlfile/SPFILE backup
YES        NO            221 Incremental L1  backup


In the report you can see that there are a few backups that are not encrypted, along with some controlfile/spfile backups.


NOTE: In order to run this report, I created a REPORT user in the database on the ZDLRA. A report user has enough permissions to create this report.






Tuesday, August 27, 2024

Oracle Backup Compression and Encryption layers explained

When working with customers who are applying compression and/or encryption to their Oracle DB backups, I found that it isn't always clear if backups are compressed, encrypted, or both. In this blog post I will break compression and encryption of Oracle backups down into the levels where these operations can occur.  Below is a high-level view of these 3 levels.



Database

Compression

Data in the database can be compressed in any one of the following formats, or all of them

HCC - Available only on Exadata, or ZFS storage, this compression is a columnar compression format with different options that allow you to choose the appropriate access speed and compression ratio for your data

Advanced Compression - A licensable option that will automatically compress data in the background to optimize storage without compromising performance.

Basic Compression - Requires a lock on object during insert and is typically used within a data warehouse.

 External Compression - In some cases the data stored in the database may already be compressed externally. An example of this is image files which are already stored in a compressed format.

 

Encryption

Data in the database can be encrypted in any one of the following formats, or all of them

TDE - All data in the tablespace is encrypted by database.

Column Encryption - Specific data within a column is encrypted, SSN for example.  This is less widely used and most customers use TDE instead.

 External Encryption - In some cases the data stored in the database may already be encrypted by the application.

 

 NOTE: 

1. If the data is compressed and/or encrypted in these manners, it will continue in that format when backed up.

  • Any data that is encrypted in the database will remain encrypted in the backups
  • Any data that is compressed in the database will remain compressed in the backups
  • Backups of data that is compressed and/or encrypted will get little to no additional compression when backed up


2. RMAN does not know that the data  is either compressed or encrypted, and querying the RMAN views will not tell you that either has occurred.


3. Having data encrypted and/or compressed in the database may not stop you from further compressing and/or encrypting the backups.


ZDLRA

Compression

Datafile Compression - With Datafile compression you have 2 choices to compress the backups

    • RA_FORMAT = TRUE - This  compresses all datafile backups in the new ZDLRA 23.1 format.  If the datafile is part of a TDE tablespace, the blocks will be decrypted prior to compression to ensure the best compression ratio.  
    • RA_FORMAT not set or  FALSE - Backups of datafiles will be sent as uncompressed (unless you create a RMAN compressed backupset which the ZDLRA will uncompress before ingesting).  Once they are received on the ZDLRA they will be compressed in storage on ZDLRA.  When replicated to another ZDLRA, or restored, they are uncompressed.

Real-time Redo Compression - When sending real-time redo to the ZDLRA you can have the ZDLRA create an RMAN compressed backupset for the archive logs.  The level of compression can be set on the policy.  Once stored in an RMAN compressed backupset format, it is replicated and restored as a compressed backupset.  

          NOTE: If the redo stream contains changes to a TDE tablespace, or you are configuring encryption on the ZDLRA destination, you may get little to no actual compression.

SPFILE, Controlfile, archivelog backups - The ZDLRA will NOT attempt to compress these backupsets internally.  Only datafile backups are compressed on the ZDLRA.

 

Encryption

Datafile Encryption - Whether a datafile is encrypted by the ZDLRA in the new ZDLRA 23.1 format depends on these 2 conditions.

    • RA_FORMAT = TRUE and "RMAN Encryption ON" - If the datafile is NOT part of a TDE tablespace, this will force BOTH compression and encryption of that datafile backup.
    • RA_FORMAT = TRUE and "RMAN Encryption OFF" - If the datafile is part of a TDE tablespace, the backup of this datafile will remain encrypted.  If the datafile is NOT part of a TDE tablespace, the backup will NOT be encrypted.

Real-time Redo Encryption - If real-time redo is utilized and your database has implemented TDE, the change data in the archive log backups will be encrypted.  However, this backup is not considered RMAN encrypted, and ENCRYPTION=ENABLE must be set on the destination definition to ensure that the real-time redo backupsets are considered fully encrypted by RMAN.

SPFILE, Controlfile, archivelog backup Encryption - These are not encrypted by the ZDLRA.

 

 NOTE: 

1. The new Space Efficient Encrypted backup feature of the ZDLRA only affects datafile backups.

2. Real-time redo backups can be compressed and/or encrypted by the ZDLRA.

3. If you are using the new RA_FORMAT=TRUE for a non-TDE datafile backup, you will only get a compressed backupset.  You have to set RMAN Encryption ON along with RA_FORMAT=TRUE in order to encrypt the backupset.

4. If you are backing up a non-TDE  datafile, and wish to encrypt it with the library, it will also be compressed.  You cannot encrypt without compression, but you can compress without encryption.

5. If datafile backups are sent to the ZDLRA  without RA_FORMAT=TRUE, they will appear as compressed in the RMAN catalog.  With RA_FORMAT=TRUE they will not appear as compressed.

6. If real-time redo is sent to the ZDLRA, and the profile for the database is set to compress the archivelogs, they will appear as compressed in the RMAN catalog.

 

RMAN

Compression

Datafile Compression - With Datafile compression you have 2 choices to compress the backups

  • RA_FORMAT = TRUE - RMAN compression is ignored when this option is set.  
  • RA_FORMAT not set or FALSE - RMAN can create a compressed backupset for datafiles.  If the datafile is part of a TDE tablespace, the ZDLRA will not be able to create a virtual full from this backup.  If the datafile is NOT part of a TDE tablespace, the backupset will be decompressed on the ZDLRA and will not be stored as a compressed backupset.


SPFILE, Controlfile, archivelog backups - The ZDLRA will NOT uncompress these backupsets; if RMAN created them as compressed backupsets, they remain compressed.

 

Encryption

Datafile Encryption - RMAN Encrypt ON  creates an Encrypted backupset which cannot be virtualized by the ZDLRA.  This should only be set when using RA_FORMAT=TRUE which bypasses RMAN encryption


SPFILE, Controlfile, archivelog backup Encryption - These can be encrypted by setting RMAN Encryption on.

 NOTE: 

1. The new Space Efficient Encrypted backup feature of the ZDLRA only affects datafile backups.

2. Real-time redo backups can be compressed and/or encrypted by the ZDLRA.

3. If you are using the new RA_FORMAT=TRUE for a non-TDE datafile backup, you will only get a compressed backupset.  You have to set RMAN Encryption ON along with RA_FORMAT=TRUE in order to encrypt the backupset.

4. If you are backing up a non-TDE  datafile, and wish to encrypt it with the library, it will also be compressed.  You cannot separate encryption from compression, but you can compress only.

Thursday, July 18, 2024

DBMS_CLOUD Debugging with ZFS Object Storage

In the course of testing the DBMS_CLOUD functionality against OCI object storage on ZFS, I wanted to perform debugging by looking at the packets sent to the web listener on my ZFS.

Unfortunately for debugging purposes, DBMS_CLOUD requires all calls to object storage to be HTTPS calls which are encrypted.

In this blog post, I will go through the architecture below to show you how I was able to use a Load Balancer in OCI on port 443 (HTTPS traffic) to send the requests to my ZFS using Port 80 (HTTP traffic).

By doing this I was able to see all the packets going to ZFS.

You can use this same process to debug network traffic, while leaving the application interface encrypted.


Below are the steps in the OCI console, but I am not going to include the policies that need to be configured.

1) Create a vault

You can find create Vault under "Identity & Security" --> "Key Management & Secret Management".

Click on "Create Vault" and all you need to do is to give the vault a name, and choose the compartment to store the vault.

Once you fill them in click on "Create Vault" to have the vault created.

2) Create a Master Encryption Key

Once the Vault is created, click on the vault name, and this will bring up the window where you can enter a Master Encryption Key to be created within the vault.

Click on "Create Key" and enter the information to create a new Key in this vault.  Note that 

  • The key MUST be an HSM key, you cannot use a software key
  • The key must be asymmetric. The default is symmetric and must be changed.

3) Create a Certificate Authority

Under "Identity & Security" --> "Certificates" you will see "Certificate Authorities". We need to create a new one.

Click on "Create Certificate Authority", and in this case we are creating a Root Certificate Authority. You need to give it a "Name" and "Description" and click on the 'Next" button in the lower left corner.

Then on the next window give it a "Common Name" and click on Next.


On the next window, you must choose a "not valid before". In my case, I chose today.

Then you must enter the Vault and the Encryption key that you had created previously.

Then click on 'Next"


Then set the expiry rule and click on "next".  I left the defaults.


On the next window I changed "Revocation Configuration"  to "skip" and I clicked on "Next"


Then on the "Summary" window I clicked on "Create Certificate Authority" to create the Certificate Authority.


4) Create a Load Balancer

This can be found under "Networking" --> "Load balancers". Click on Load Balancer.

Once here, click on the "Create Load balancer" button.
Give the load balancer a name (if you want) to make it easier to find.
You then need to scroll down to the bottom of the screen to choose your network and subnet for the Load balancer.
Once you fill these in Click on "NEXT".





After clicking on Next, I left everything defaulted. This will do a health check on the ZFS using port 80.  Then I clicked on "Next" again.


In this window, I changed from HTTPS to HTTP. This allows me to create the Load Balancer without having a Certificate yet.  


I left the logging off, and clicked on "Submit" to create the Load Balancer.


5) Determine the Public IP for the Load Balancer

Once the load balancer is created, I go to the list of load balancers under Networking --> Load balancers --> Load Balancer, and it shows me the public IP for the Load Balancer that was created.  The overall health is showing "incomplete" since I haven't added any backend hosts yet.



6) Create the certificate


Now that I know the Public IP address (129.146.220.252) I can create a certificate for it in my Certificate Authority.
I go back to "Identity & Security" --> Certificates and click on "certificates"

I click on "Create Certificate" and I enter the name and description and Click on "next"







I give the "Common Name" my IP address so that the Certificate Name matches the URL I am going to use to connect.  Then I click on "Next".


In the next window I fill in the "not valid before" and click on "next".





I leave the rules default for the certificate and click on "next"


Then when I get to the "Summary" window I click on "Create Certificate".

7) Create a Backend set for the load balancer

I now go back to Networking --> Load Balancers --> Load Balancer and choose the Load Balancer I had previously created.

On the left hand side of the window I click on "Backend Sets" to list the existing Backend sets.  By default a backend set was created for me, but it has no members.
I click on the default Backend set to bring up the window to add members.
This will bring up a window showing that the backend set is "incomplete"
From here I click on "Backends(0)" on left hand side of the window.


This brings up a window with an "Add backends" button. Click on this button to bring up the window to enter backends.



On the window above I entered the IP address of the HTTP interface I am using on ZFS, leaving the port as 80 so that the traffic will be unencrypted, and clicked on "ADD" to add it to the backend list.

8) Change the Health Check to TCP

On the Backends window I changed the "Update Health Check" to use TCP protocol from HTTP protocol and clicked on "Save Changes".




9) Change the Load Balancer to HTTPS

I now go back to Networking --> Load Balancers --> Load Balancer and choose the Load Balancer I had previously created.

From the left hand side, I click on "Listeners" and then I click on "Create Listener".



In the window that comes up, I want to make this an HTTPS listener, so I change the protocol to HTTPS and choose the certificate I created in the previous step. This allows the load balancer to receive encrypted traffic with a registered certificate.
In this step, I also need to ensure it is using the Backend set I just updated. Once complete choose "Create Listener".




That's all there is to it.

Now I can access the Object storage on ZFS using the "Public IP" using DBMS_CLOUD (which is encrypted) and it will be passed on to the ZFS as HTTP traffic.
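Once the listener is in place, a quick way to verify the path end to end is a DBMS_CLOUD call against the public IP. The credential values and the bucket URI below are assumptions; the URI format depends on which OCI-style API you enabled on the ZFS share.

BEGIN
   DBMS_CLOUD.CREATE_CREDENTIAL(
      credential_name => 'ZFS_CRED',
      username        => 'oracle',            -- placeholder ZFS local user
      password        => 'my_zfs_password');  -- placeholder
END;
/

SELECT object_name
FROM   DBMS_CLOUD.LIST_OBJECTS(
          'ZFS_CRED',
          'https://129.146.220.252/n/zfs/b/mybucket/o/');  -- hypothetical bucket URI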