Monday, October 6, 2025

ZDLRA best practices (updated 2025)

I have seen a lot of changes with the ZDLRA over the last 10 years or so, and because of that I wanted to document the best practices that have evolved up to now (2025), and what customers should think about doing today.

1. Channel Configuration

There are a few items to talk about with channel configuration. First, let's break down the pieces of the channel configuration.


a) Library

Version

Up until version 19.26, you could use a shared version of the library (libra.so) based on the OS.  Updates to the library could be downloaded directly from MOS.

Starting with version 19.27 on Linux, you must use the version that is stored in $ORACLE_HOME and patched using OPatch.  More information can be found in the MOS note below.

Location (channel setting)

In the previous section I mentioned that the library is tied to the version of Oracle starting with 19.27. Prior to this release, the default channel setting for the library with the ZDLRA was
  • PARMS 'SBT_LIBRARY=$ORACLE_HOME/lib/libra.so'
In order to always utilize the current library for the $ORACLE_HOME, the recommendation is to set the channel setting to
  • "SBT_LIBRARY=libra.so" - No path; it will default to the current $ORACLE_HOME/lib
    Or
  • "SBT_LIBRARY=oracle.zdlra" - This will default to the current $ORACLE_HOME/lib/libra.so

b) Environment

In the environment section, you specify the SEPS wallet location. Because there could be conflicts with other Oracle products that use a wallet (like OUD), the default location for the SEPS wallet should be set under the WALLET_ROOT location, rather than within the $ORACLE_HOME location.

In the past,  the channel setting for ZDLRA  was
  • "RA_WALLET=location=file:${ORACLE_HOME}/dbs/ra_wallet credential_alias={SEPS entry in wallet}"
The recommendation is to set the WALLET_ROOT in the spfile, and store the SEPS wallet within the server_seps directory under this location. If WALLET_ROOT is set to $ORACLE_BASE/admin/{$DB_NAME}/wallet, the setting would be

  • "RA_WALLET=location=file:${ORACLE_BASE}/admin/${DB_NAME}/wallet/server_seps/ credential_alias={SEPS entry in wallet}"

c) TLS or non-TLS

If a customer has the ZDLRA configured to encrypt backups with TLS/SSL, the library, by default, will attempt to encrypt the backup using SSL/HTTPS. If TLS/SSL is configured as optional, and the client does not have the certificate information, backups will fail. To avoid this, it is recommended to add the following setting to the channel configuration. This setting will allow you to send backups even if optional TLS/SSL is configured.

  • _RA_NO_SSL=TRUE

d) Space Efficient Encrypted Backups

In order to use Space Efficient Encrypted Backups, you must set the following in your channel configuration. This is only available on Linux with DB version 19.18+. Keep in mind this setting will only compress datafile backup pieces (a combined channel configuration example follows below).
  • RA_FORMAT=TRUE
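
Putting the pieces of this section together, a sketch of a full channel configuration following these recommendations (database name, wallet path, and credential alias are placeholders) might look like this:

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%U'
 PARMS 'SBT_LIBRARY=libra.so, ENV=(_RA_NO_SSL=TRUE,RA_FORMAT=TRUE,RA_WALLET="location=file:${ORACLE_BASE}/admin/${DB_NAME}/wallet/server_seps credential_alias={SEPS entry in wallet}")';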

2. RMAN encryption

One misconception with Space Efficient Encrypted Backups is that by setting RA_FORMAT=TRUE you are both compressing and encrypting backups. This is not true. You are only compressing the backups. If the tablespace is encrypted with TDE, the backup will remain encrypted, but any non-TDE tablespace backups will be unencrypted. In order to encrypt the backup pieces (datafiles, controlfile, archive logs, and spfile) you need to both set an encryption key (if not already set) and turn RMAN encryption on. If you are backing up any tablespaces that are not TDE, you must also ensure you set RA_FORMAT=TRUE.
You should also set the default encryption algorithm to AES256, as most customers use this today rather than the default of AES128.
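
As a minimal sketch, assuming an encryption key is already set in the keystore, turning RMAN encryption on with AES256 looks like this:

RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256';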

NOTE: Use of RA_FORMAT=TRUE (RMAN Compression) and RMAN encryption is included in the license for ZDLRA and you do not need the ACO or ASO license for backups to the ZDLRA.

3. Real-time redo settings

a) Default SEPS wallet location


Wallets are becoming more and more common with databases, and this increases the chances that there will be a conflict with other Oracle features that also utilize a wallet. To avoid this, there is a new default location for the SEPS wallet used by real-time redo. The database will first look in the {WALLET_ROOT}/server_seps location for the wallet. It is recommended to set WALLET_ROOT for all databases and use this default location for the SEPS wallet. Of course, you may use links to simplify management.

b) Encrypt Redo


If you have an encryption key set, and you wish to fully encrypt your backups, you need to ensure that real-time redo is also encrypted. You need to set encryption on the destination for real-time redo (a sketch follows the bullet below):
  • ENCRYPTION=ON
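
A sketch of a real-time redo destination with encryption enabled (destination number, service string, and DB_UNIQUE_NAME are placeholders):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_<n>='SERVICE=<string from wallet> VALID_FOR=(ALL_LOGFILES,ALL_ROLES) ASYNC ENCRYPTION=ON DB_UNIQUE_NAME=<ZDLRA DB>' SCOPE=BOTH;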

4. Reserve Space

Reserve space has been difficult to explain, and it has become even more important when implementing a compliance window. If no compliance window is set, then the reserve space is only checked when the ZDLRA runs out of space. When a compliance window is set, the reserve space is checked while backing up, to ensure that compliance-locked backups fit within the reserved space. If they do not, backups are rejected. Because of this, I always recommend setting "auto-tune reserve space" for the policies, especially when setting a compliance window. This setting will automatically adjust the reserve space for you as databases grow.
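
As a sketch, assuming your racli version supports altering protection policies, enabling auto-tune on an existing policy might look like this (verify the exact syntax for your release):

racli alter protection_policy --protection_policy_name={POLICY} --autotune_reserved_space=yes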

5. When encrypting backups set "secure mode"

If you need to fully encrypt your backups with RMAN encryption, it is recommended to use "secure mode" on your protection policy. This setting checks all backup pieces (including real-time redo) to ensure that they are RMAN encrypted.  Any unencrypted backup pieces will be rejected.

6. Use racli commands

One of the biggest changes is the increase in the functionality available with racli commands vs. executing DBMS_RA PL/SQL packages. It is recommended to utilize RACLI commands when possible, although I know that when trying to automate onboarding databases, it is often easier to utilize the PL/SQL packages.

7. Pair ZDLRAs when using replication

Another change is the ability to pair ZDLRAs that have replication configured. This can be done with the "racli add ra_partner" command. Using the ZDLRA pairing commands for replication greatly simplifies the configuration of replication. Existing configurations can be converted to the pairing model.

8. Review types of DB users

There have been a few changes to DB users. There are now 4 types of DB users.
  • ADMIN - Used to administer ZDLRA configuration settings
  • MONITOR - Read-only account that can view metadata
  • REPLICATION - Used to replicate backups between ZDLRAs
  • VPC - Used by protected databases to send/receive backups
    • INSECURE=TRUE - VPC user's password will expire, but it can be reset to the same password. This type of user does not support password rollover.
    • INSECURE=FALSE - VPC user's password will expire, but this user can leverage password rollover (STIG) to manage password rotation without having backups fail.
NOTE: If you need to rotate passwords for VPC users, you should leverage the STIG/password rollover function, which allows you to have two passwords associated with the same VPC user as you update wallets.

Summary:

  1. Set WALLET_ROOT in the database and store the SEPS in the {WALLET_ROOT}/server_seps directory
  2. Ensure the library location in the channel configuration defaults to the current $ORACLE_HOME
  3. Set _RA_NO_SSL=TRUE to ensure that converting to TLS/SSL will not cause existing backups to fail
  4. Set RA_FORMAT=TRUE to leverage space efficient encrypted backups --> Linux only
  5. Enable RMAN encryption, and set the algorithm to AES256
  6. Encrypt Real-time redo if applicable
  7. If you require fully encrypted backups, set SECURE_MODE on the policy
  8. Enable auto-tune of reserved space for all databases, especially those using a compliance window.
  9. Use RACLI commands to manage the ZDLRA
  10. Pair ZDLRAs when configuring replication
  11. Leverage password rollover for VPC users if you require password rotation



Monday, September 29, 2025

Oracle DB 23ai supports external Apache Iceberg tables

 One area that I have been spending a lot of time on is accessing data stored in object storage from within Oracle DB 23ai using external table definitions.

Parquet objects

I started by creating external tables on Parquet objects, and recently I have been testing accessing objects that are cataloged in Apache Iceberg.


Alexey Filanovskiy wrote a great blog post on Oracle tables and parquet objects, which you can find here.

Reading through that blog post gives you a good foundation for why parquet formatted objects make sense for external tables.  Oracle DB is able to leverage the metadata in parquet objects to optimize queries.

Defining external tables on parquet objects

Below is an example table definition for an external table accessing parquet objects. In this example, the parquet objects were stored on ZFSSA and accessed using the OCI API.

CREATE TABLE "DW"."CUSTOMER_ADDRESS" 
   (    "CA_ADDRESS_SK" NUMBER(*,0), 
    "CA_ADDRESS_ID" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_STREET_NUMBER" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_STREET_NAME" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_STREET_TYPE" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_SUITE_NUMBER" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_CITY" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_COUNTY" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_STATE" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_ZIP" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_COUNTRY" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP", 
    "CA_GMT_OFFSET" NUMBER, 
    "CA_LOCATION_TYPE" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP"
   )  DEFAULT COLLATION "USING_NLS_COMP" 
   ORGANIZATION EXTERNAL 
    ( TYPE ORACLE_BIGDATA
      DEFAULT DIRECTORY "DATA_PUMP_DIR"
      ACCESS PARAMETERS
      ( com.oracle.bigdata.credential.name=OCI_ZFS
        com.oracle.bigdata.fileformat=parquet
          com.oracle.bigdata.trimspaces=notrim
       )
      LOCATION
       ( 'oraclebmc://pmdemo-zs71-01-client.us.oracle.com/n/tcptest/b/tcpds_10000g_parquet/o/customer_address/*'
       )
    )
   REJECT LIMIT UNLIMITED 
  PARALLEL ;
What you will notice from this table definition is
  • It uses the "ORACLE_BIGDATA" type. This is necessary to create external tables using big data object types, which are currently [csv|textfile|avro|parquet|orc].
  • I am specifying a credential. I had already created a credential that contains the login credentials for the OCI API on my ZFSSA.
  • The object type for my objects is parquet.
  • The location of the object(s) is specified. In my case I am using an "*" as there are multiple objects within this path that make up the table.
  • The format of the URL in this case is "ORACLEBMC". The options are
    • ORACLEBMC - Specifies OCI API object storage.
    • S3 - Specifies Amazon S3 object storage.
    • S3h - Specifies a non-standard S3 object store.
You can find additional information here.

Notes:

  1. The format of the URL is not well documented; I also found that in some cases "s3a" can be used for non-AWS S3 object storage.
  2. DBMS_CLOUD does not allow you to use URLs other than "https". You will most likely have to create tables using the above syntax rather than using the DBMS_CLOUD package.

Apache Iceberg

Once I was able to successfully access parquet objects directly, I began testing cataloging parquet objects with Apache Iceberg using Spark.
I found the most difficult part of this was properly creating an Apache Iceberg manifest file that could be used to build an external table definition against Iceberg.

Environment to build manifest


My testing environment to build the Apache Iceberg manifest contains the following.

Python

  • Python 3.9.18
  • Below is a list of my python packages that are installed
Python 3.9.18
Package             Version
------------------- -----------
aiobotocore         2.23.0
aiohappyeyeballs    2.6.1
aiohttp             3.12.14
aioitertools        0.12.0
aiosignal           1.4.0
annotated-types     0.7.0
async-timeout       5.0.1
attrs               20.3.0
Babel               2.9.1
bcc                 0.28.0
boto3               1.39.7
botocore            1.38.27
cachetools          5.5.2
certifi             2020.12.5
cffi                1.14.5
chardet             4.0.0
circuitbreaker      1.3.2
click               8.1.8
cloud-init          23.4
cockpit             311.1
colorama            0.4.6
configobj           5.0.6
configshell-fb      1.1.30
cryptography        36.0.1
dasbus              1.4
dbus-python         1.2.18
distro              1.5.0
docutils            0.16
file-magic          0.4.0
frozenlist          1.7.0
fsspec              2025.7.0
gpg                 1.15.1
greenlet            3.2.3
halo                0.0.31
idna                2.10
Jinja2              2.11.3
jmespath            1.0.1
jsonpatch           1.21
jsonpointer         2.0
jsonschema          3.2.0
kmod                0.1
Ksplice Uptrack     1.2.84
libcomps            0.1.18
lockfile            0.12.2
log-symbols         0.0.14
markdown-it-py      3.0.0
MarkupSafe          1.1.1
mdurl               0.1.2
mmh3                5.1.0
multidict           6.6.3
netaddr             0.8.0
netifaces           0.10.6
nftables            0.1
numpy               2.0.2
nvmetcli            0.7
oauthlib            3.1.1
oci                 2.126.3
oci-utils           0.14.0
oracledb            3.2.0
pandas              2.3.1
parquet_tools       0.2.16
pciutils            2.3.6
pcp                 5.0
perf                0.1
pexpect             4.8.0
pip                 25.1.1
ply                 3.11
prettytable         0.7.2
propcache           0.3.2
psutil              5.8.0
ptyprocess          0.6.0
py4j                0.10.9.9
pyarrow             20.0.0
pycparser           2.20
pycurl              7.43.0.6
pydantic            2.11.7
pydantic_core       2.33.2
Pygments            2.19.2
PyGObject           3.40.1
pyiceberg           0.9.1
pyOpenSSL           19.0.0
pyparsing           3.2.3
pyrsistent          0.17.3
pyserial            3.4
PySocks             1.7.1
pyspark             4.0.0
python-daemon       2.2.4
python-dateutil     2.9.0.post0
python-dmidecode    3.12.2
python-linux-procfs 0.7.3
pytz                2021.1
pyudev              0.22.0
PyYAML              5.4.1
requests            2.25.1
rhnlib              2.8.6
rich                13.9.4
rpm                 4.16.1.3
s3fs                2025.7.0
s3transfer          0.13.0
sdnotify            0.3.2
selinux             3.6
sepolicy            3.6
setools             4.4.4
setroubleshoot      3.3.32
setuptools          53.0.0
six                 1.15.0
sortedcontainers    2.4.0
sos                 4.7.0
spinners            0.0.24
SQLAlchemy          2.0.41
strictyaml          1.7.3
systemd-python      234
tabulate            0.9.0
tenacity            9.1.2
termcolor           3.1.0
thrift              0.16.0
typing_extensions   4.14.0
typing-inspection   0.4.1
tzdata              2025.2
urllib3             1.26.5
urwid               2.1.2
wrapt               1.17.2
yarl                1.20.1

PySpark & Spark runtime

PySpark version: 4.0.0
Java version: 21.0.7
Scala version: version 2.13.16

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 4.0.0
      /_/

Using Scala version 2.13.16, OpenJDK 64-Bit Server VM, 21.0.7
Branch HEAD
Compiled by user wenchen on 2025-05-19T07:58:03Z
Revision fa33ea000a0bda9e5a3fa1af98e8e85b8cc5e4d4
Url https://github.com/apache/spark

 Environment variables

HADOOP_OPTS= -Djava.library.path=/usr/local/hadoop/hadoop-3.4.1/lib/ -Djava.library.path=/usr/local/hadoop/hadoop-3.4.1/lib/ -Djava.library.path=/usr/local/hadoop/hadoop-3.4.1/lib/
JAVA_HOME=/usr/lib/jvm/java-21-openjdk-21.0.7.0.6-1.0.1.el9.x86_64
HADOOP_HOME=/usr/local/hadoop/hadoop-3.4.1
SPARK_HOME=/usr/local/spark
HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/hadoop-3.4.1/lib/native/


Script to build manifest and store python files

I used the NY State Taxi data to build my objects, and it can be found here.
For my testing, I used the yellow taxi datasets.

Below is the section of my code that builds the spark catalog on OCI S3 object storage, and sets the environment using my credentials.

NOTE: It points to my endpoint and uses my access key and secret. I removed those from the script.

Python script to set the environment and hadoop catalog (Part #1)

from pyspark.sql import SparkSession

ICEBERG = "org.apache.iceberg:iceberg-spark-runtime-4.0_2.13:1.10.0"
HADOOP_CLOUD = "org.apache.spark:spark-hadoop-cloud_2.13:4.0.0"

spark = (
    SparkSession.builder
      .appName("iceberg-oci-s3")
      .config("spark.jars.packages", ",".join([ICEBERG, HADOOP_CLOUD]))
      # Iceberg catalog -> Hadoop catalog in an OCI S3 warehouse
      .config("spark.sql.catalog.oci", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.oci.type", "hadoop")
      .config("spark.sql.catalog.oci.warehouse", "s3a://icebergs3/iceberg_warehouse")
      # OCI S3-compatible endpoint
      .config("spark.hadoop.fs.s3a.endpoint", "https://{my tenancy/namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com")
      .config("spark.hadoop.fs.s3a.path.style.access", "true")
      .config("spark.hadoop.fs.s3a.endpoint.region", "us-ashburn-1")
      .config("spark.hadoop.fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
      .config("spark.hadoop.fs.s3a.list.version", "1")
      .getOrCreate()
)


hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key",    "{my access key}")            # OCI 'Customer Secret Key' ID
hconf.set("fs.s3a.secret.key",    "{my secret for that key}")   # the secret key
hconf.set("fs.s3a.connection.ssl.enabled", "true")

Python script to create namespace and create table (Part #2)

# Create the namespace for the table within the "oci" catalog
spark.sql("""
  CREATE NAMESPACE IF NOT EXISTS oci.tripdata
""")

spark.sql("""
  CREATE TABLE IF NOT EXISTS oci.tripdata.yellow (
  `VendorID` INT,
  `tpep_pickup_datetime` TIMESTAMP,
  `tpep_dropoff_datetime` TIMESTAMP,
  `passenger_count` BIGINT,
  `trip_distance` DOUBLE,
  `RatecodeID` BIGINT,
  `store_and_fwd_flag` STRING,
  `PULocationID` INT,
  `DOLocationID` INT,
  `payment_type` BIGINT,
  `fare_amount` DOUBLE,
  `extra` DOUBLE,
  `mta_tax` DOUBLE,
  `tip_amount` DOUBLE,
  `tolls_amount` DOUBLE,
  `improvement_surcharge` DOUBLE,
  `total_amount` DOUBLE,
  `congestion_surcharge` DOUBLE,
  `Airport_fee` DOUBLE
  )
  USING iceberg
  PARTITIONED BY  (months(tpep_pickup_datetime))
""")


Python script to read the parquet object, set types, and append to the Iceberg table (Part #3)

sdf = spark.read.parquet("{parquet object}")  # Location and name of parquet object to load

from pyspark.sql import functions as F
sdf_cast = (
    sdf
    .withColumn("VendorID", F.col("VendorID").cast("int"))
    .withColumn("tpep_pickup_datetime", F.col("tpep_pickup_datetime").cast("timestamp"))
    .withColumn("tpep_dropoff_datetime", F.col("tpep_dropoff_datetime").cast("timestamp"))
    .withColumn("passenger_count", F.col("passenger_count").cast("bigint"))
    .withColumn("trip_distance", F.col("trip_distance").cast("double"))
    .withColumn("RatecodeID", F.col("RatecodeID").cast("bigint"))
    .withColumn("store_and_fwd_flag", F.col("store_and_fwd_flag").cast("string"))
    .withColumn("PULocationID", F.col("PULocationID").cast("int"))
    .withColumn("DOLocationID", F.col("DOLocationID").cast("int"))
    .withColumn("payment_type", F.col("payment_type").cast("bigint"))
    .withColumn("fare_amount", F.col("fare_amount").cast("double"))
    .withColumn("extra", F.col("extra").cast("double"))
    .withColumn("mta_tax", F.col("mta_tax").cast("double"))
    .withColumn("tip_amount", F.col("tip_amount").cast("double"))
    .withColumn("tolls_amount", F.col("tolls_amount").cast("double"))
    .withColumn("improvement_surcharge", F.col("improvement_surcharge").cast("double"))
    .withColumn("total_amount", F.col("total_amount").cast("double"))
    .withColumn("congestion_surcharge", F.col("congestion_surcharge").cast("double"))
    .withColumn("Airport_fee", F.col("Airport_fee").cast("double"))
)

sdf_cast.writeTo("oci.tripdata.yellow").append()
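
To sanity-check the load, you can read the Iceberg table back through the same catalog. A minimal check:

# Confirm the rows were appended to the Iceberg table
spark.sql("SELECT COUNT(*) FROM oci.tripdata.yellow").show()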

Summary:

Combining the 3 parts of the script, filling in the credentials, endpoint, and bucket, and specifying the parquet object will load the data into object storage properly, including the manifest file.

Investigating the resulting manifest and objects

Below is a list of the objects created in the metadata directory.


You can see that I updated the data and added more data, and each time a new version of the manifest was created along with snapshot information.

When writing the data, I chose to partition it; below is the partitioned data. You can see that when Apache Iceberg wrote the data, it automatically created parquet objects in directories for each partition, and added new data to the current directory.


Creating an Oracle table on top of Iceberg

Now that I have created my iceberg table, and it is stored in object storage, I can create an Oracle table that will read the manifest file.

In my example, the most recent manifest object is "iceberg_warehouse/tripdata/yellow/metadata/v7.metadata.json".

My External Table definition is below.
CREATE TABLE CUSTOMERS_ICEBERG
(
VendorID        integer,
tpep_pickup_datetime    timestamp,
tpep_dropoff_datetime    timestamp,
passenger_count        number,
trip_distance        number,
RatecodeID        number,
store_and_fwd_flag    blob,
PULocationID        number,
DOLocationID        number,
payment_type        number,
fare_amount        number,
extra            number,
mta_tax            number,
tip_amount        number,
tolls_amount        number,
improvement_surcharge    number,
total_amount        number,
congestion_surcharge    number,
Airport_fee        number
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_BIGDATA
    DEFAULT DIRECTORY DATA_PUMP_DIR
    ACCESS PARAMETERS
    (
        com.oracle.bigdata.fileformat=parquet
        com.oracle.bigdata.credential.name=OCI_S3
        com.oracle.bigdata.access_protocol=iceberg
    )
   LOCATION ('iceberg:https://{namespace}.compat.objectstorage.us-ashburn-1.oci.customer-oci.com/icebergs3/iceberg_warehouse/tripdata/yellow/metadata/v7.metadata.json')
)
PARALLEL;

NOTE: my location is "iceberg:https".

I can now select from my new table and it will read the Iceberg manifest file.
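
For example, a quick smoke test that forces a read through the manifest:

SELECT COUNT(*) FROM CUSTOMERS_ICEBERG;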

Conclusion:

Oracle Database 23ai not only supports creating external tables on top of Parquet objects, but it also supports creating external tables on top of Apache Iceberg manifest objects.





Wednesday, September 17, 2025

Building a Cyber Vault? Don't forget your keys

When building a cyber vault, one of the most important items to manage is encryption keys.  Encrypting your data is a fundamental pillar of ransomware protection, but encryption key management is often forgotten.  
Ensuring your Cyber Vault has a current copy of your encryption keys associated with your Oracle Databases is critically important for a successful restore and recover after an attack.

Starting with the Oracle Key Vault (OKV) 21.11 release, Oracle Key Vault includes a preview of the Key Transfer Across Oracle Key Vault Clusters Integration Accelerator. You can use this feature to transfer security objects from one OKV cluster to another.

You can find more detail on this feature here, and I will describe its benefits in this post.

The diagram below shows a typical cyber vault architecture to protect Oracle Databases with the Zero Data Loss Recovery Appliance (ZDLRA) and OKV .

Transferring keys into a cyber vault with OKV

Encryption Key Architecture

Encryption key management is a critical piece of data protection, and it is important to properly manage your keys.  Good cyber protection begins with proper encryption key management.

Local Wallets

With Oracle databases, the default (and simplest) location for encryption keys is a locally managed wallet file. The keys are often stored in an auto-login wallet, which is automatically opened by the database at startup, making key management transparent and simple, but not very secure.

Why aren't wallets secure ?

  • They are often auto-login (cwallet.sso) which allows the database to open the wallet without requiring a password. This wallet file gives full access to the encryption keys.
  • The wallet file is stored with the database files. A privileged administrator, such as a DBA, has access to both the database and the keys to decrypt the data directly from the hosts. They also have the ability to delete the wallet file.
  • Often the wallet file is backed up with the database, which includes an auto-login wallet. This allows anyone who has access to backups to also be able to decrypt the data.
  • Securely backing up the wallet file separate from the database is often forgotten, especially when ASM is used as the wallet location.  Not having the wallet file when restoring the database makes restoration and recovery impossible.

Steps you can take

  • Create both a passworded wallet (ewallet.p12) and a local auto-login wallet (cwallet.sso). With a local auto-login wallet, the wallet can only be opened on the host where the wallet was created.
  • Back up the passworded wallet only (ewallet.p12). The auto-login wallet can always be recreated from the passworded wallet.
  • Properly store the password for your passworded encryption wallet. You will need the password to rotate your encryption keys and to create the auto-login wallet (see the orapki sketch after this list).
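
A minimal sketch of those steps with orapki (the wallet directory is a placeholder):

orapki wallet create -wallet {wallet dir}                    # prompts for a password and creates ewallet.p12
orapki wallet create -wallet {wallet dir} -auto_login_local  # creates cwallet.sso usable only on this host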

Oracle Key Vault (OKV)

 The best way to securely manage encryption keys is with OKV.

Why?

  • Keys are managed outside of the database and cannot be accessed locally outside of the database.
  • Access to keys is granted to a specific database instance.
  • OKV is clustered for High Availability, and the OKV cluster can be securely backed up.
  • Key access is audited to allow for early detection of access.

Encryption Keys in a cyber vault architecture

Below is a diagram of a typical cyber vault architecture using ZDLRA.  Because the backups are encrypted, either because the databases are using TDE and/or they are creating RMAN encrypted backups sent to the ZDLRA, the keys need to be available in the vault also.
Not only is the cyber vault a separate, independent database and backup architecture, the vault also contains a separate, independent OKV cluster.
This isolates the cyber vault from any attack to the primary datacenter, including any attack that could compromise encryption key availability.


Encryption Keys in an advanced cyber vault architecture

Below is a diagram of an advanced cyber vault architecture using ZDLRA. Not only are the backups replicated to a separate ZDLRA in the vault, they are internally replicated to an Isolated Recovery Environment (IRE). In this architecture, the recovery environment is further isolated, and the OKV cluster is even further isolated from the primary datacenter. This provides the highest level of protection.


OKV Encryption Key Transfer


This blog post highlights the benefit of the newly released (21.11) OKV feature to allow for the secure transfer of encryption keys.
Periodic rotation of encryption keys is a required practice to protect encryption keys, and ensuring you have the current key available in a cyber vault is challenging.
OKV solves this challenge by providing the ability to transfer any new or changed keys between clusters.

Implementing OKV in a cyber Vault

When building a cyber vault, it is recommended to build an independent OKV cluster. The OKV cluster in the vault is isolated from the primary datacenter and protected by an airgap. The nodes in this cluster are not able to communicate with the OKV cluster outside of the vault.
The OKV cluster in the vault can be created using a full, secure, backup from the OKV cluster in the primary datacenter. The backup can be transferred into the vault, and then restored to the new, independent OKV cluster providing a current copy of the encryption keys.

Keeping OKV managed keys updated in a cyber Vault

The challenge, once an isolated OKV cluster has been created, has been keeping the encryption keys within the cluster current when new keys are created. This was typically accomplished by transferring a full backup of OKV into the vault, and rebuilding the cluster using this backup.

OKV 21.11 provides the solution with secure Encryption Key Transfer. Leveraging this feature, you can securely transfer just the keys that have recently changed, allowing you to manage independent OKV clusters that are synchronized on a regular basis.

The diagram below shows the flow of the secure Encryption Key Transfer package from the primary OKV cluster into the vault when the air-gap is opened.

This new OKV feature provides a much better way to securely manage encryption keys in a Cyber Vault.



Summary

As ransomware attacks increase, it is critical to protect a backup copy of your critical databases in a cyber vault. It is also critical to protect a copy of your encryption keys to ensure you can recover those databases. OKV provides the architecture for key management in both your primary datacenter and in a cyber vault. The new secure key transfer feature within OKV allows you to synchronize keys across independent OKV clusters.

Friday, August 29, 2025

Oracle Master Encryption Key Management: Wallets, OKV, and OCI KMS Explained

Data Encryption Keys (DEK) are used by encryption algorithms (typically AES256) to protect data at rest in tablespaces, backups and even datapump exports. Since DEKs are stored with the data, they need to be protected.  Master Encryption Keys (MEK) are stored outside the database in local wallets or in an external keystore, and MEKs are used to encrypt the DEK which provides a separation between the data that is encrypted and an external key.

DEKs are not only found on Tablespaces (TDE), but they are also found in RMAN encrypted backups, Space Efficient Encrypted backups to ZDLRA/Recovery Service and encrypted datapump exports.

The diagram below shows the relationship between MEKs and DEKs.




You can see in the diagram that MEKs are not only used to encrypt Tablespace Encryption Keys to support TDE, they are also used to encrypt RMAN backups to disk, Tape or ZDLRA/Recovery Service, along with Datapump encryption keys.  

NOTE: When implementing encryption you should make sure you meet ASO licensing restrictions
  • Creating encrypted tablespaces or encrypted columns in tables requires ASO, unless you are leveraging an OCI Cloud Database environment (including multicloud).
  • Creating RMAN encrypted disk backups requires licensing.
  • Creating RMAN encrypted tape backups is license included and only supported when writing to Oracle products.
    • OSB - Oracle Secure Backup is license included for creating encrypted backups.
    • OCI Object Storage - The OCI cloud library requires encryption and can only be leveraged for writing backups directly to OCI object storage in OCI, or writing backups from a ZDLRA to OCI object storage in cloud or on ZFSSA
    • ZDLRA - Creating Space Efficient Encrypted backups is license included on ZDLRA and will create encrypted backups on ZDLRA.
  • Creating Datapump Encrypted backups requires licensing. 


MEKs are commonly called "TDE keys", but they are much more, and I will refer to them as MEKs throughout this post.

MEK with a Wallet File

In this first section I am going to talk about how the Oracle Database manages an MEK when using a Wallet file.

Wallet settings

With wallet-based MEKs, the database stores and reads keys from a wallet file on disk (or +ASM or Exascale).

There are 2 parameters in the database (spfile) that help guide the database to the wallet file.

WALLET_ROOT - This replaces the "ENCRYPTION_WALLET_LOCATION" parameter that was previously set in the sqlnet.ora file. Having it set at the database level allows multiple databases to independently share the same $ORACLE_HOME. It even allows PDBs to have individual wallet files. When using a wallet file, WALLET_ROOT points to the top directory where the encryption keys will be kept.

TDE_CONFIGURATION - This tells the database what kind of MEK will be used. For wallets you would specify "TDE_CONFIGURATION=FILE". This tells the database to find the encryption keys in an encryption wallet.
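
A minimal sketch of these two settings (the path is a placeholder; note that the full parameter value uses the KEYSTORE_CONFIGURATION syntax):

ALTER SYSTEM SET WALLET_ROOT='/u01/app/oracle/admin/MYDB/wallet' SCOPE=SPFILE;
-- WALLET_ROOT requires a restart to take effect; then:
ALTER SYSTEM SET TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=FILE' SCOPE=BOTH;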

Encryption key location

Within the WALLET_ROOT directory, the encryption wallet containing the MEKs must be stored in a subdirectory named "tde".

Within that directory you will find 2 wallet files

  • ewallet.p12 - The password-protected wallet (contains keys; required for updates).
  • cwallet.sso - The auto-login wallet (allowing no password prompt on open). The auto-login wallet can also be created as a "local" auto-login wallet bound to a single host.

On startup (or when you open the keystore), the database retrieves the CDB/PDB MEKs from the wallet file and caches them in memory.

Database interaction with MEK

When using a wallet file, the auto-login wallet file is opened when the database is opened, and the database constantly interacts with this file.  If no auto-login wallet file is created, you must open the wallet manually providing the password.

Viewing V$ENCRYPTION_WALLET 

What you’ll typically see for FILE based wallets:

PDB_NAME        WALLET_TYPE  STATUS  WRL_PARAMETER                KEYSTORE
--------------  -----------  ------  ----------------------------  --------
CDB$ROOT        AUTOLOGIN    OPEN    {WALLET_ROOT}/tde             NONE
DB23AI_PDB      AUTOLOGIN    OPEN                                  UNITED

MEK with OKV/OCI KMS

When using an external keystore, the database communicates with the OKV/OCI KMS client libraries installed on the host in order to retrieve the MEKs.

OKV/OCI KMS settings

There are 2 parameters in the database (spfile) that help guide the database to the OKV/OCI KMS installation.

WALLET_ROOT - When using OKV/OCI KMS, the WALLET_ROOT points to the top directory of the client install of the OKV/OCI KMS software.

TDE_CONFIGURATION - For OKV/OCI KMS, specify "TDE_CONFIGURATION=OKV" or "TDE_CONFIGURATION=HSM". It is possible to combine both wallets and OKV/OCI KMS if there are older keys that are still stored in a wallet file. In that case you would see "TDE_CONFIGURATION=OKV|FILE" for example, and the database would read older encryption keys from a wallet file along with reading the keys from OKV/OCI KMS.
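
As a sketch, the staged configuration would be set like this:

ALTER SYSTEM SET TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=OKV|FILE' SCOPE=BOTH;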

Encryption key location

Within the WALLET_ROOT directory, the software to communicate with the HSM is typically installed in the "okv" or "hsm" subdirectory.  The keys themselves are stored externally, and could be securely cached locally depending on your configuration.

The credential used to access OKV/OCI KMS is typically stored under the /tde directory in an auto-login wallet. This allows the database to connect to OKV/OCI KMS at startup. The external password store entries (if used) live under /tde_seps, which allows you to manage keys without entering a password.

Database interaction with MEK

The MEK is always permanently stored externally. The database interacts with the external keystore through a library named liborapkcs.so. The library itself is vendor specific and is used by the database to interact with different vendors' keystores. Because there is a single library, you cannot use multiple external keystore types on the same DB host.
When the database starts up, it reads the password for the external keystore out of the wallet, opens a connection to the keystore (after an authentication and encryption handshake), and caches the MEK. The database will periodically communicate with the external keystore through that library.


OKV/OCI KMS in Practice: What You'll See

When both FILE and OKV configurations exist (e.g., during migration), you may observe dual rows in V$ENCRYPTION_WALLET showing the FILE wallet (no active master key) and the active OKV keystore:

PDB_NAME        TYPE  STATUS             WALLET_TYPE  WRL_PARAMETER
--------------  ----  -----------------  -----------  ---------------------------
CDB$ROOT        FILE  OPEN_NO_MASTERKEY  AUTOLOGIN    {WALLET_ROOT}/tde
CDB$ROOT        OKV   OPEN               OKV
DB23AI_PDB      FILE  OPEN_NO_MASTERKEY  AUTOLOGIN
DB23AI_PDB      OKV   OPEN               OKV


Best Practices & Notes

  • When using a wallet file, back up only the passworded wallet (ewallet.p12) and store it securely, separate from database backups.
  • Use local autologin wallets if you require host binding for additional security.
  • When migrating to OKV/OCI KMS, consider a staged approach (OKV|FILE) so you can fall back if needed.
  • Always test wallet/keystore open/close operations and RMAN/Data Pump encryption workflows in lower environments before production.
  • Keep client libraries (liborapkcs.so and OKV client) patched and consistent across RAC nodes.

Last updated: August 29, 2025

Monday, August 25, 2025

ZDLRA - How to manually register databases

Steps to Manually Register a Database and Backup to ZDLRA


Section 1: User and Policy Setup

Step 1.1 – Create the VPC User Account

racli add db_user --user_name=USER_NAME --user_type=vpc [--insecure]

Notes:

  • If created with --insecure, the VPC password will expire, but can be reused. Password rollover is not supported in this mode.
  • Without --insecure, enable password rollover with the enable_stig option on alter db_user (a hypothetical example follows).
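
A hypothetical example of enabling rollover on an existing VPC user (flag names inferred from the note above; verify against your racli version):

racli alter db_user --user_name=USER_NAME --user_type=vpc --enable_stig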

Step 1.2 – Create the DB Admin User Account

racli add db_user --user_name=USER_NAME --user_type=admin

Step 1.3 – Create a New Protection Policy (optional)

You can use the default protection policies provided with ZDLRA, but you may want to create custom policies for more control over the protection window and other attributes. Creating a new policy also allows you to name the policy appropriately, to ensure you choose the correct policy.

racli create protection_policy --protection_policy_name=VALUE --storage_location_name=VALUE \
 [--recovery_window_goal=VALUE] [--description=VALUE] [--polling_policy_name=VALUE] \
 [--max_retention_window=VALUE] [--recovery_window_sbt=VALUE] [--unprotected_window=VALUE] \
 [--guaranteed_copy=VALUE] [--allow_backup_deletion=VALUE] [--store_and_forward=VALUE] \
 [--log_compression_algorithm=VALUE] [--autotune_reserved_space=VALUE] \
 [--recovery_window_compliance=VALUE] [--keep_compliance=VALUE] [--comments=VALUE]

Example:

racli create protection_policy --protection_policy_name='test' --storage_location_name='delta' \
 --recovery_window_goal=10days --unprotected_window=30minutes --autotune_reserved_space=yes \
 --log_compression_algorithm=low --recovery_window_compliance=1days --allow_backup_deletion=no



Section 2: Database Registration in ZDLRA

Step 2.1 – Add the Database to ZDLRA Metadata

racli add protected_db --db_unique_name={DB_NAME} --protection_policy_name={PROTECTION_POLICY} --reserved_space={VALUE}

Recommended reserved space = ~2x database size.

Step 2.2 – Grant VPC Users Access

racli grant db_access --db_unique_name={DB_NAME} --username={VPCUSER}



Section 3: Wallet and Network Configuration

Step 3.1 – Determine the Wallet Location

  • Check for existing wallets in $ORACLE_BASE/admin/<dbname>/wallet and sqlnet.ora.
  • Use WALLET_ROOT/server_seps (typically under $ORACLE_BASE/admin/{db_name}/wallet).
You should always store the SEPS wallet used by ZDLRA under the WALLET_ROOT location for your database. This will allow the wallet configuration for the ZDLRA to coincide with any other wallets.

NOTES:
  • Setting WALLET_ROOT requires a bounce of the database if it is not already set.  Setting WALLET_ROOT will better prepare the database for managing wallets for real-time redo, TLS, and any encryption keys in the future.
  • Having individual SEPS wallet files for each database, stored outside of $ORACLE_HOME will allow you to have different VPC users, and certificates, providing more flexibility. It also allows you to perform out-of-place upgrades without changing wallet locations.
  • Yes, you can store TDE wallets on ASM/ExaScale and customers sometimes set WALLET_ROOT to ASM/Exascale.  I don't recommend this because many of the additional wallets (SEPS, TLS, OKV) cannot be read from ASM/ExaScale.  I recommend ACFS as a shared file system.

Step 3.2 – Create or Update the Wallet

  • Create the wallet file (if it doesn't exist), or update the existing wallet file:
orapki wallet create -wallet $ORACLE_BASE/admin/{DB_NAME}/wallet/server_seps
mkstore -wrl $ORACLE_BASE/admin/{DB_NAME}/wallet/server_seps -createCredential {connect string} {VPC user} {password}

NOTE: Creating the wallet file with orapki allows you to later add TLS certificates
  • Add any certificates if TLS is configured on the ZDLRA (optional)
orapki wallet add -wallet $ORACLE_BASE/admin/{DB_NAME}/wallet/server_seps -trusted_cert -cert {trusted cert PEM file}
  • Create the auto-login wallet from the passworded wallet:
orapki wallet create -wallet $ORACLE_BASE/admin/{DB_NAME}/wallet/server_seps -auto_login


Copy wallet to all RAC nodes, or ensure it is on a shared mount point.

Step 3.3.A – Create a new sqlnet.ora for your database (Option A).

It is highly recommended to create a separate TNS_ADMIN directory for each database when multiple databases share the same host. This allows you to leverage different wallets for VPC users and TLS certificates.
This also ensures that setting WALLET_LOCATION does not affect any configurations that also use WALLET_LOCATION (e.g., TLS).

I would recommend storing the sqlnet.ora under $ORACLE_BASE/admin/{dbname}/tnsadmin and using TNS_ADMIN within your RMAN session to point to this location.
Within the sqlnet.ora make the following changes.  
SQLNET.WALLET_OVERRIDE=TRUE
WALLET_LOCATION=
 (SOURCE=
   (METHOD=FILE)
   (METHOD_DATA=
     (DIRECTORY={Wallet Location})
   )
 )

Create this directory on each RAC node (if it isn't shared) and copy the sqlnet.ora to all RAC nodes

Step 3.3.B – Update the general sqlnet.ora for your database (Option B).

If this is the only database on your host, you can update the general sqlnet.ora file within $ORACLE_HOME.  Be careful that other DB features, like TLS which may also use WALLET_LOCATION are not affected.

Within the sqlnet.ora make the following changes.  
SQLNET.WALLET_OVERRIDE=TRUE
WALLET_LOCATION=
 (SOURCE=
   (METHOD=FILE)
   (METHOD_DATA=
     (DIRECTORY={Wallet Location})
   )
 )

Copy this file to all nodes if implementing on a RAC cluster.

Step 3.4 – Test SEPS Connection

If using TNS_ADMIN, set it to appropriate location.

Execute the command below using the entry you created in the wallet with mkstore. This will ensure SEPS security is working correctly before continuing.
Check all nodes in a RAC environment.
sqlplus /@<db SCAN listener:PORT/Service:dedicated>




Section 4: Database Registration with RMAN

Step 4.1 – Register the Protected Database

Connect to the local database, and connect to the RMAN catalog using the entry you created in the SEPS wallet.
Once connected, you need to register the database.
RMAN> CONNECT TARGET /  CATALOG /@<db_SCAN:PORT/Service:dedicated>
RMAN> REGISTER DATABASE;



Section 5: Real-Time Redo (Optional)


Step 5.1 – Update Database Parameters

ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=shared;
ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='log_%d_%t_%s_%r.arc';

Step 5.2 – Set LOG_ARCHIVE_CONFIG 

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(<ZDLRA DB>,<DB_UNIQUE_NAME>)' SCOPE=BOTH;

Step 5.3 – Update the WALLET_ROOT in the database (if not set)

ALTER SYSTEM SET WALLET_ROOT='{ORACLE_BASE}/admin/{DB name}/wallet' SCOPE=SPFILE;

It is highly recommended to allow real-time redo to use the default wallet located in WALLET_ROOT/server_seps.
If there is no wallet in WALLET_ROOT/server_seps, then the database will use the setting from WALLET_LOCATION in the sqlnet.ora file. The sqlnet.ora can only point to a single WALLET_LOCATION, and this can cause conflicts.

NOTE: This requires a bounce of the instance to take effect.

Step 5.4 – Create the Redo Transport User (when using DG)

If you are using Data Guard, you need to ensure you add the VPC user in the database and grant it SYSOPER privileges.
You also need to ensure that your standby database(s) have a current copy of the password file.

For multi-tenant environments you need to execute "alter session set "_oracle_script"=true;" prior to creating the users.

Verify that the user exists in the password file on all systems by querying the V$PWFILE_USERS view.

Step 5.5 – Change the Redo Transport User to the VPC User

Change the redo transport user on all environments (primary and standby).
ALTER SYSTEM SET redo_transport_user={VPCUSER};

Step 5.6 – Add ZDLRA as an Archive Log Destination

ALTER SYSTEM SET LOG_ARCHIVE_DEST_<n>=
 'SERVICE=<string from wallet>
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) ASYNC
  DB_UNIQUE_NAME=<ZDLRA DB>' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_<n>=ENABLE SCOPE=BOTH;


NOTE: The ZDLRA determines the database version for real-time redo using a backup of the controlfile.
            Real-time redo will not start until you back up the controlfile for the first time.


NOTE: You can also configure the DG broker to manage the ZDLRA destination using the "ADD RECOVERY_APPLIANCE" command.

Step 5.7 – Test Archive Log Configuration

SELECT status, error FROM v$archive_dest WHERE dest_id=<destination>;

Section 6: RMAN Channel and Backup Configuration

Step 6.1 – Add Default Channel Configuration

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%U'
 PARMS 'SBT_LIBRARY=libra.so, ENV=(_RA_NO_SSL=TRUE,RA_WALLET="location=file:<ORACLE_BASE>/admin/<db_name>/wallet/server_seps credential_alias=<string from wallet>",RA_FORMAT=TRUE)';
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM <n> BACKUP TYPE TO BACKUPSET;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
NOTE:
  1. Set _RA_NO_SSL=TRUE to disable sending backups encrypted, unless you are already implementing TLS.
  2. Set RA_FORMAT=TRUE to enable space efficient encrypted backups.

Step 6.2 – Perform an Explicit Level 0 Backup

RMAN> CONNECT TARGET /
RMAN> CONNECT CATALOG <string from wallet>
RUN {
 BACKUP INCREMENTAL LEVEL 0 DATABASE FILESPERSET 1 SECTION SIZE 64G
 PLUS ARCHIVELOG NOT BACKED UP FILESPERSET 32;
}

Step 6.3 – Schedule Regular Incremental Backups

RMAN> CONNECT TARGET /
RMAN> CONNECT CATALOG <string from wallet>
RUN {
 BACKUP INCREMENTAL LEVEL 1 DATABASE FILESPERSET 1 SECTION SIZE 64G
 PLUS ARCHIVELOG NOT BACKED UP FILESPERSET 32;
}

Monday, May 12, 2025

Configuring TLS security on ZDLRA

TLS security is becoming more and more necessary to protect data, and the ZDLRA supports configuring TLS security both for communicating with the RMAN catalog and for sending/receiving backup pieces. In this blog post I will go through the steps to configure TLS security.


These steps are well documented and you can find the documentation below

Configuring Certificate Management 


As a DBA, some of the concepts were new to me, and I wanted to go through the steps in detail so even a DBA (like me) could understand the process.

  1. Determine the DNS entries 

In order to create the certificate request I first need to gather DNS network information about the ZDLRA.

This can be done by utilizing the "racli list san" command.

racli list san

Below is the example output from the documentation showing the DNS entries.


    Created log /opt/oracle.RecoveryAppliance/log/racli_list_san.log
    Thu May  6 16:18:33 2021: Start: List SAN
    CN = zdlra09ingest-scan1.yourdomain.com
    DNS.1 = zdlra09adm01.yourdomain.com
    DNS.2 = zdlra09adm02.yourdomain.com
    DNS.3 = zdlra09ingest-scan1.yourdomain.com
    DNS.4 = zdlra09ingest01-vip.yourdomain.com
    DNS.5 = zdlra09ingest01.yourdomain.com
    DNS.6 = zdlra09ingest02-vip.yourdomain.com
    DNS.7 = zdlra09ingest02.yourdomain.com
    Thu May  6 16:18:39 2021: End: List SAN

This returns the two pieces of information necessary to create the request.

CN (Common Name)

The common name is the name associated with the certificate.  For the ZDLRA this will be the scan listener's fully qualified DNS name.  You will request a certificate registered to this name.



Alternate Names

As you probably know the Protected Database client connects to the ZDLRA using DNS entries other than the Scan Listener.  The scan listener is used to balance traffic and the connection itself is handed to a DNS/IP address assigned to one of the hosts.
Because of this you can see in the example that there are 7 additional DNS entries that are associated with this certificate as alternate names.

 zdlra09adm01.yourdomain.com,zdlra09adm02.yourdomain.com,zdlra09ingest-scan1.yourdomain.com,zdlra09ingest01-vip.yourdomain.com,zdlra09ingest01.yourdomain.com,zdlra09ingest02-vip.yourdomain.com,zdlra09ingest02.yourdomain.com




2. Prepare Certificate Details

The next step is to create the signed certificate and add it to the ZDLRA.
There are 2 methods you can use to create the signed certificate, and they are outlined in the documentation.

NOTE: You need a signed certificate in order to implement TLS on the ZDLRA. If you are not familiar with the process of requesting a signed certificate you should contact someone from your security team, or talk to someone in your organization who has gone through the process of requesting a server certificate.

A) Certificate Creation Using Third Party Software.

Request Certificate


With this method you request a signed certificate from your company's certificate authority.
Along with the signed certificate, the private key is generated by the certificate authority and provided to you.
I used this method because my company's certificate authority process pre-filled most of the fields required for the request.


In order to acquire the certificate, I created a request (following my company's process) and entered the two items determined above in the request details, along with the company-specific information provided to me by the security team.
  • CN name
  • Alternate DNS names
From this, my company's certificate request process (once approved) returned to me a .ZIP file containing two files
  • {CN name}.crt
  • {CN name}.key
I was also given a link to download the Trusted Certificate associated with the certificate authority used by my company.
I stored the Trusted Certificate in a .pem file.
  • {CN name}.pem

NOTE: The PKCS12 command (used in the next step) requires a "Trusted Certificate" as input. When signing my certificate, I was also given a link to the CA Chain certificates, including the ROOT and intermediate CA. As a DBA, I did not immediately understand this concept. The "Trusted Certificate" contains the ROOT certificate and any intermediate certificates for the 3rd party Certificate Authority that will validate the signed certificate and ensure the certificate is valid and belongs to the DNS names provided with the certificate request.
Examples of Trusted Certificate Authorities are DigiCert, Entrust, GlobalSign, Symantec, etc.
You can find more detail on this concept here.

Once completed you can continue to Step #3 with the 3 files

  • {CN Name}.crt - Signed certificate request
  • {CN Name}.key - Private key
  • {CN name}.pem - Trusted Certificate (may be named differently)

B) Create the request and send the request to be signed

With this method, you create the request and generate the key on the ZDLRA.  The request is sent to the Certificate Authority, and signed. You are responsible for creating and storing the private key.

Manually create the certificate request


You begin by creating a ".crt" file with your company's certificate information.
You can see an example in the documentation. 

You need to replace the information in the request file with your company's details that were provided to you.

With your company specific detail, you also need to ensure that the request contains the proper
  • CN name from the first step
  • Alternate names from the first step

Upload your certificate request using your company's process

Upload your certificate request to get it signed.
Once signed you will be returned your certificate and you will be given a copy of (or a link to) your company's Trusted Certificate. (see explanation from the first method if this is a new concept).

You should have 2 files
  • {CN name}.crt
  • {CN name}.pem
NOTE: The .pem file for the trusted certificate might be named differently as it is associated with Certificate Authority rather than the Common Name.

Generate a private key

You need to generate a private key to be associated with your signed certificate.  Using the first method, the key was provided to me. With this method you will generate the key yourself. 

The command to generate this key is below, and you may assign a password to the private key.

openssl genrsa -passout pass:<your password> -out /dir/{CN name}.key 2048

Once completed you will have 3 files
  • {CN Name}.crt - Signed certificate request
  • {CN Name}.key - Private key
  • {CN name}.pem - Trusted Certificate

3. Add Certificate to ZDLRA

Sign the user Certificate with the Trusted Certificate

The next step is to create a signed user certificate. This step will create a {CN name}.p12 file that can be added to the wallet on the ZDLRA.

NOTE: A PKCS12 file, often also called a PFX file, is a standard format for storing digital certificates and private keys within a single, encrypted file. It's commonly used to bundle all the necessary cryptographic components for secure communication, especially for SSL/TLS certificates and code signing. This file combines the signed certificate, the private encryption key, and the Certificate Authority (Trusted Certificate) information into a single file.

The PKCS12 file is created by executing "openssl pkcs12 -export" and passing in all three files  from above.

openssl pkcs12 -export -in /<DIR>/<NAME>.crt \
 -inkey /<DIR>/<NAME2>.key -certfile /<DIR>/<NAME3>.pem \
 -passin pass:<YOURPASSWORD> -passout pass:<YOURPASSWORD> \
 -out /<DIR>/<NAME4>.p12

NOTE: The command includes a password. The -passin password is the password for the .key file, and the -passout password will be assigned to the .p12 file when it is created.

Import the user Certificate with the Trusted Certificate

The next step is to take the PKCS12 file (containing the signed certificate, the private key, and the Certificate Authority information) along with the trusted certificate and add them to the wallet on the ZDLRA.

In this step you execute the command below to add both files to the wallet.
 Ensure you give a fully qualified location for the files.

racli add certificate --signed_cert /dir/{CN name}.p12 --trusted_cert=/dir/{CN name}.pem

NOTE: If this is the first time executing this command, you will be asked to provide a password for the wallet. The password will be stored on the ZDLRA and you will not be asked for it again. You will also be asked for the password used to create the PKCS12 file in the previous step.

Validate both the user Certificate and Trusted Certificate were added

Execute the "racli list certificate" command.

This command should return BOTH the "trusted_cert" and "signed_cert" similar to the example in the documentation.

4. Enable the certificate

Once both the trusted_cert and signed_cert have been added to the wallet on the ZDLRA we can configure the ZDLRA to utilize TLS encryption for BOTH TNS (TCPS SQL*net) and backup (HTTPS) traffic.


NOTE: 
  • The ZDLRA will bounce when implementing this step

  • TLS will be the default for sending and receiving backups. ALL channel configurations should contain "_RA_NO_SSL=TRUE" in order to ensure they continue sending backups until they are configured with a client wallet.

  • Configuring TLS with "ONLY" will disable non-TLS connections and traffic. Ensure all of your clients are configured for TLS prior to enabling ONLY.

The command to enable TLS is specified in the documentation with the default settings.
When enabling TLS you need to decide if you want TLS to be mandatory, or optional.

You can use the simple command (accepting all of the defaults), or you can change the default ports.
If you want to use all of the defaults, you can execute the command below

racli alter network --service=ra_server --encrypt=enable | only




5. Validate the TLS configuration

You can validate the configuration in a couple of different ways.

  • Check the health of TLS with "racli run check --check_name=tls_health". This should give you a PASS status
  • As Oracle, execute a listener status "lsnrctl status". Below is the output, and I can see that TCPS traffic is configured to go over port 2484, and HTTPS traffic will go over port 8005, which has a certificate configured to ensure it is encrypted.


6. Configuring the Protected DB Client wallet

When using TLS there are 2 wallet entries that need to be referenced when communicating with the ZDLRA
  • Trusted Certificate - This is a new wallet entry and needs to be present so that the client can validate the certificate.
  • VPC credentials (SEPS) - These are the login credentials for the VPC user. If you are using an EZ connect entry, then you would need to change the entry format to use EZConnect Plus (see below) or a TNS alias. If you are currently using a TNS alias you would need to update the tnsnames.ora entry.
These two entries are independent and do not have to be in the same wallet file.

Update current wallet file to include the Trusted Certificate

In order to add certificates to the wallet file, the wallet file must have been created using the orapki command (rather than mkstore).
If your wallet file was created with mkstore, you will need to recreate it with the orapki command.

You will be loading the Trusted Cert into the wallet file used by the ZDLRA to authenticate the VPC user.
More discussion on where the wallet should be located, and how to set the wallet location is covered in my last blog post you can find here.

Assuming the Trusted Certificate is stored in a file named {CN name}.pem, you would execute the command below, also providing the location of the wallet you want to add the certificate to.
Remember it needs to be added to all nodes in a RAC cluster.

orapki wallet add -wallet <WRL> -trusted_cert -cert <dir>/{CN name}.pem

 

Update the SEPS wallet

If you have been using the EZConnect string in your SEPS wallet vs a tnsnames alias you are at a decision point. 

EZConnect - {host}:{port}/{service}

When using TCPS to encrypt communication, you need to indicate that you are using TCPS, and you need to tell EZConnect where to find the wallet file containing the Trusted Certificate for the communication.
Because EZConnect does not support TCPS, you need to use EZConnect Plus. EZConnect Plus was introduced with 18c, and is supported with release 19c and above.

EZConnect Plus - [Protocol]://{host}:{port}/{service}?wallet_location={wallet location on disk}

When adding the SEPS credential for the VPC user  you need to use the EZConnect Plus format in your wallet for the connection to succeed.

TNSNAMES Alias - As an alternative to using EZConnect, you can use a tnsnames.ora alias. If using an alias, you would add the entry to the tnsnames.ora file that points to the correct port and uses TCPS, and you would ensure the WALLET_LOCATION set in the associated sqlnet.ora file points to a wallet containing the Trusted Certificate. A hypothetical alias is shown below.
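
For illustration, a hypothetical tnsnames.ora alias using TCPS (host, port, and service borrowed from the EZConnect Plus example below):

RADB1_TLS =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCPS)(HOST=ra1-scan1.mycorp.com)(PORT=2484))
    (CONNECT_DATA=(SERVICE_NAME=radb1))
  )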


EZConnect Plus example :

TCPS://ra1-scan1.mycorp.com:2484/radb1?wallet_location=/u1/app/oracle/mydb/wallet/server_seps


7. Configuring the Client to utilize TLS

There are three places where you would change the settings to use TLS

  • RMAN catalog connections (/@)
  • RMAN channel configuration
  • Protected DB real-time redo destination

RMAN catalog connections

In order to connect to the RMAN catalog securely with TLS, you need to ensure that your connection is using the correct encrypted port with the TCPS protocol, and you need to ensure that the WALLET_LOCATION in the sqlnet.ora file points to a wallet containing the Trusted Certificate. You can connect to the catalog using a few different methods
  • Connect directly using the password and using a tnsnames.ora alias.
  • Connect directly using the password and using the EZConnect Plus format specifying TCPS and the location of the Trusted Certificate wallet.
  • Connect using a SEPS entry pointing to a tnsnames alias or the EZConnect Plus formatted string.

RMAN channel configuration

There are a couple of changes to the channel configuration.  The channel configuration will use the two different wallet entries.
  • ra_wallet - This points to the SEPS wallet location and you specify the credential you want to utilize which is in this SEPS wallet.
  • Trusted Certificate location - If you are using the EZConnect Plus format, you would specify the wallet_location which points to a wallet containing the Trusted Certificate. If you are using a tnsnames alias, the WALLET_LOCATION in the sqlnet.ora points to a wallet containing the Trusted Certificate.
NOTE: If your entry is in the EZConnect Plus format, you might have to enclose the credential in double quotes.

Real-time redo configuration


The real-time redo connection will use the two different wallet entries.
  • SEPS wallet - Real-time redo will determine if the parameter WALLET_ROOT is set and if there is a server_seps directory within the WALLET_ROOT. If so, that wallet is used to look up the SEPS connection. If WALLET_ROOT is not set, the connection will use the WALLET_LOCATION set in the sqlnet.ora file pointed to during DB startup.
  • Trusted Certificate location - Real-time redo will always use the WALLET_LOCATION from the sqlnet.ora file to locate the Trusted Certificate.
NOTE: If your entry is in the EZConnect Plus format, you might have to enclose the credential in double quotes.

8. Cloud Control (OEM) considerations

If you are using Cloud Control (OEM) to onboard databases, please follow MOS note