This post will go through the process of creating a few simple scripts to automate OKV installation using the REST API capability of OKV.
NOTE: This information was provided by the product manager Peter Wahl, who has a great blog on the features of OKV and some great "Ask Tom" sessions on OKV (and other security pieces) that you can watch to learn more.
Step #1 Configure RESTful Services and download client tool
First you need to configure the OKV server for RESTful Services. The instructions can be found here. This is done by navigating to the System tab and clicking on RESTful Services.
It is recommended that you only enable the RESTful services when you are actively onboarding new endpoints.
This brings up the window below.
There are three things you want to do from this window.
Click on the "Enable" box to enable RESTful services
Download okvrestclipackage.zip, which contains the client utilities.
Save this setting to enable RESTful services.
Now that we have this file, we need to download it to our client and start creating the scripts to automate this process.
I downloaded the zip file to my DB host to configure it. I unzipped it in /home/oracle/okv/rest
NOTE: you can also download it directly from the OKV hosts by using any of the commands below.
Step #2 Create a user for the restapi steps
I created a new user in OKV called "restapi", and you can see the permissions below. Before using the new user, you must log in and change the password, because the password is expired by default when a new user is created.
Step #3 unzip and configure the client tool
I unzipped the client tool into my home directory on a DB server so I can put together the automation scripts. In my case I unzipped it into /home/oracle/okv/rest. This creates 3 subdirectories. I am going to format the output using this command.
In order to configure OKV, I need some variables set in my environment. I could set them manually, but I decided to create a "setenv.sh" script that sets the variables and adds the OKV client to my path. I also included the ability to pass a parameter (ORACLE_SID) so that the script can be used in a loop across multiple instances on the same host; a sketch of the script follows the variable list below.
The 3 main variables I will be using are
OKV_RESTCLI_HOME - Location of the scripts that I am going to be installing. If I source the setenv.sh script, it will set the home to this location.
OKV_RESTCLI_CONFIG - Name of the configuration file that contains the rest CLI configuration.
OKV_HOME - Location to install OKV for the current instance. This location is $ORACLE_BASE/admin/${DB_UNIQUE_NAME}/okv_home. This follows the standard for ExaCC.
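A minimal sketch of such a setenv.sh, assuming the directory layout used in this post (the paths, and the way DB_UNIQUE_NAME is derived, may differ in your environment):

#!/bin/bash
# setenv.sh - source this, optionally passing an ORACLE_SID:  . ./setenv.sh TDETEST
# Paths are illustrative and assume the layout described in this post.

if [ -n "$1" ]; then
   export ORACLE_SID=$1
fi

# Derive DB_UNIQUE_NAME from the diag directory for this SID
export DB_UNIQUE_NAME=$(basename $(dirname $(ls -d $ORACLE_BASE/diag/rdbms/*/${ORACLE_SID})))

# REST CLI location (where okvrestclipackage.zip was unzipped) and its config file
export OKV_RESTCLI_HOME=/home/oracle/okv/rest
export OKV_RESTCLI_CONFIG=$OKV_RESTCLI_HOME/conf/okvrestcli.ini

# Endpoint install location for this instance (ExaCC standard)
export OKV_HOME=$ORACLE_BASE/admin/${DB_UNIQUE_NAME}/okv_home

# Add the REST CLI to the PATH
export PATH=$PATH:$OKV_RESTCLI_HOME/bin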
NOTE: If this is a NEW database
If you want to use these steps to configure OKV on a new database, you need to perform the following steps prior to executing ". ./setenv.sh" (a consolidated example follows the list).
Add the new $ORACLE_SID for the host in the "/etc/oratab" file.
Create the directory "mkdir $ORACLE_BASE/admin/{DB Unique Name}"
Create the directory "mkdir -p $ORACLE_BASE/diag/rdbms/${DB Unique Name}/${ORACLE_SID}
Use ". oraenv" to set the environment to this $ORACLE_SID
Step #5 - Set initialization parameters in okvrestcli.ini file
Next, I am going to configure the initialization parameters. These are found in the okvrestcli.ini file.
You can see that the file contains a "[Default]" profile and a few other example profiles. We will start with the default profile. In this we are going to set a few of the properties.
LOG_PROPERTY - Location of the logging properties. Default location is ./conf directory.
SERVER - IP address (or DNS) of one or more OKV hosts
OKV_CLIENT_CONFIG - location of the config file. Default location is ./conf directory
USER - OKV user that has authority to administer endpoints and wallets. In this case it will be the restapi user that I created.
PASSWORD - Password for the user, or location of wallet containing the password. I am NOT going to use this as I am going to use a wallet file.
CLIENT_WALLET - I am going to use a wallet to store the password, and this is the location of the wallet file. I will be creating the autologin wallet later.
Below is what my "[Default]" configuration file looks like after my changes which is located at $OKV_RESTCLI_HOME/conf/okvrestcli.ini . I am going to use the environmental variables I set in the setenv.sh script.
NOTE: I am choosing to store my password in a wallet rather than in clear text in the .ini file.
You need to change the server to either the server name or the IP address.
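As an illustration, a "[Default]" profile along these lines reflects the settings above (the server name and paths are placeholders for my values; check the sample okvrestcli.ini shipped in the conf directory for the exact key names):

[Default]
log_property=/home/oracle/okv/rest/conf/okvrestcli_logging.properties
server=okv01.example.com
okv_client_config=/home/oracle/okv/rest/conf/okvclient.ora
user=restapi
client_wallet=/home/oracle/okv/rest/wallet

Note that the password entry is omitted because the password is stored in the client wallet instead.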
Step #6 Create the wallet to save the password encrypted
Since I chose to put my password in a wallet, I now need to create that wallet. Using the instructions in the document (linked to at the beginning of this blog), I execute the command from the directory I installed into (/home/oracle/okv/rest).
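As a rough sketch (the exact okv subcommand and flag names should be verified against the 21c REST CLI documentation; these reflect my reading of it), the command looks something like:

cd /home/oracle/okv/rest
okv admin client-wallet update --client-wallet /home/oracle/okv/rest/wallet --wallet-user restapi
# prompts for the restapi user's OKV password and stores it in an auto-login client wallet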
Step #7 Create and execute the run-me.sh script
The last step is to create the script that will be executed on the host to create the provision script. In my script, I took the default and did some checking. This script will
Ensure the variable OKV_RESTCLI_HOME is set before it can be executed.
Determine the DB_UNIQUE_NAME from the $ORACLE_BASE/diag/rdbms/*/$ORACLE_SID directory. Solving for the * should give us the DB_UNIQUE_NAME
While executing, it tells you what it believes the DB_UNIQUE_NAME is, and gives you a chance to change it if incorrect.
It will validate if the endpoint group exists by accessing OKV. If the endpoint group already exists, it does not try to create it again. If it doesn't exist it will create the endpoint group.
It will validate if the wallet exists by accessing OKV. If the wallet already exists, it does not try to create it again. If it doesn't exist, it will create the wallet.
It will add the endpoint, add this endpoint to the endpoint group, and assign the wallet to this endpoint.
It will install the client software in $ORACLE_BASE/admin/$DB_UNIQUE_NAME/okv_home/okv
Below is the script I am using.
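As a condensed sketch of that logic (the okv subcommands and flags are illustrative and should be verified against the REST CLI documentation; the group and wallet names are made up, and my real script writes out an okv-install-${ORACLE_SID}.sh to run in the next step rather than performing the calls inline):

#!/bin/bash
# run-me.sh - sketch of the provisioning logic described above (okv syntax is illustrative)

# Make sure the environment was sourced
if [ -z "$OKV_RESTCLI_HOME" ]; then
   echo "OKV_RESTCLI_HOME is not set. Source setenv.sh first."
   exit 1
fi

# Derive DB_UNIQUE_NAME from the diag directory and let the user confirm or correct it
DB_UNIQUE_NAME=$(basename $(dirname $(ls -d $ORACLE_BASE/diag/rdbms/*/${ORACLE_SID})))
read -p "DB_UNIQUE_NAME appears to be ${DB_UNIQUE_NAME}. Press enter to accept or type the correct name: " ANSWER
[ -n "$ANSWER" ] && DB_UNIQUE_NAME=$ANSWER

EP_NAME=${DB_UNIQUE_NAME}_$(hostname -s)      # endpoint name (illustrative convention)
EP_GROUP=DB_ENDPOINT_GROUP                    # endpoint group name (illustrative)
WALLET_NAME=${DB_UNIQUE_NAME}_WALLET          # OKV wallet name (illustrative)
OKV_HOME=$ORACLE_BASE/admin/${DB_UNIQUE_NAME}/okv_home

# Create the endpoint group only if it does not already exist
if ! okv manage-access endpoint-group get --endpoint-group "$EP_GROUP" > /dev/null 2>&1; then
   okv manage-access endpoint-group create --endpoint-group "$EP_GROUP"
fi

# Create the wallet only if it does not already exist
if ! okv manage-access wallet get --wallet "$WALLET_NAME" > /dev/null 2>&1; then
   okv manage-access wallet create --wallet "$WALLET_NAME"
fi

# Add the endpoint, put it in the endpoint group, and make the wallet its default wallet
okv admin endpoint create --endpoint "$EP_NAME" --type ORACLE_DB --platform LINUX64
okv manage-access endpoint-group add-endpoint --endpoint-group "$EP_GROUP" --endpoint "$EP_NAME"
okv manage-access wallet set-default --wallet "$WALLET_NAME" --endpoint "$EP_NAME"

# Download and install the endpoint client software under OKV_HOME
mkdir -p $OKV_HOME/okv
okv admin endpoint provision --endpoint "$EP_NAME" --location $OKV_HOME/okv --auto-login TRUE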
Step #8 Execute okv-install-${ORACLE_SID}.sh to validate that enrollment works
Now I am ready to download this zip file to my Database Host and enroll a database.
NOTE: To change the scripts to work with another OKV server, I only had to make 3 changes:
Update the okvrestcli.ini file with OKV host IP
Update the okvrestcli.ini file with the user
Recreate the wallet file that contains the password for the OKV user
Step #10 Run root.sh if this is the first on a host
When a new endpoint is added, a script called root.sh is downloaded into the $OKV_HOME/okv/bin/ directory. This script, run as root, copies the PKCS library (liborapkcs.so) to a central location on the host. This library is needed for the database to access OKV, and its location differs by operating system.
NOTE: If you are already using a key management software on this host for Oracle databases, running root.sh could overwrite the library used.
How to migrate your local TDE wallet to Oracle Key Vault.
Here and here are the links to the 21c documentation that I used to go through this process.
Also, in order to keep up on OKV and new features, the product manager Peter Wahl has a great blog where you can learn about the latest additions to the product and new release information.
I am assuming that you have installed the OKV server cluster by this point.
NOTE: If you are using the RestAPI method of installing the client, you can go straight to step #6. Be sure you have changed the password using the command below.
Below are the steps.
1) Add the database/host to OKV as an endpoint.
Remember in OKV, each endpoint is unique, but a wallet is shared between endpoints.
I navigate to the endpoint tab and click on the "Add" button.
I fill in the information for my database (TDETEST from my previous post). This is the CDB, as I am using a UNITED wallet for all PDBs that are a member of my CDB. Once filled in I click on the "Register" button.
Once registered, I can see it on the Endpoint screen. Note the "Enroll Token" column. This is needed to enroll the endpoint itself. Save this token, as this will be needed by the person who actually enrolls each DB host/Database.
2) Create wallet in OKV and associate it with the endpoint(s)
Now that OKV is installed and configured on the client we can create a wallet in OKV to upload the keys into. I am going to start by logging back into OKV and navigating to the wallets tab and clicking on "Create" to create a new wallet.
The screen below comes up, and I enter the name of the new wallet that will hold the keys for my CDB. I then click on save to save the new wallet.
3) Make this wallet the default wallet for the endpoint
Now go to the endpoints tab and click on the "endpoint name"
At this point you want to make this wallet the default wallet for the endpoint. Click on the "Choose Wallet" button, and it will bring up the window in front. Select the correct wallet and click on save. This will set the default wallet for this endpoint when it is enrolled in the next step.
4) Download the OKV client install file
Now that the database/host is registered in OKV (the combination of the 2 is the endpoint), I need to download the jar file which will configure the settings on the database host.
The download is initiated by logging out of the OKV console, and clicking on the "Endpoint Enrollment and Software Download" link on the logon screen. I highlighted it below.
You might not have noticed this link before! Now click on the link; you don't need to log in for this step. It will bring up the window below, and in that window you will:
Click on the "Submit Token" button, and it will validate the token
Click on "Enroll" to begin the download of the install file. If SMTP was configured, you can also have the software install e-mailed to the endpoint administrator.
The download file is a jar file called okvclient.jar. It is highly recommended that you rename it because it is specific to this endpoint.
5) Transfer the .jar file to the database host and install it.
The prerequisites are in the install guide. The Oracle environment during the install must be set to the database you are configuring ($ORACLE_HOME, $ORACLE_BASE, $ORACLE_SID).
My target directory is going to be "/home/oracle/app/oracle/admin/tdetest/wallet/okv" and I copied my .jar file to /home/oracle/app/oracle/admin/tdetest (which I renamed to tdetest_okv.jar).
Execute java, passing the location of the jar file, followed by -d "install location".
[oracle@oracle-server okvtest]$ java -jar /home/oracle/app/oracle/admin/tdetest/tdetest_okv.jar -d /home/oracle/app/oracle/admin/tdetest/wallet/okv
Detected JAVA_HOME: /home/oracle/db_19c/jdk
Enter new Key Vault endpoint password (<enter> for auto-login):
Confirm new Key Vault endpoint password:
The endpoint software for Oracle Key Vault installed successfully.
Deleted the file : /home/oracle/app/oracle/okvfiles/okvtest/okvtest_install.jar
[oracle@oracle-server okvtest]$
If this is the first time OKV is being installed on the server, you need to execute the root.sh script (located in the /bin directory within the install location) as root. If it has already been executed, you can skip this step.
Finally, verify that we can connect to OKV by executing "okvutil list". If successful, you will see the default template for the endpoint. This utility is located in the /bin directory within the install.
[oracle@oracle-server bin]$ ./okvutil list
Enter Oracle Key Vault endpoint password:
Unique ID Type Identifier
E8C825BB-649C-451E-98B8-E128B675FD31 Template Default template for TDETEST_PRIMARY_NODE1
6) Review how OKV connects to the database.
WALLET_ROOT is set in the database, and within WALLET_ROOT there is an "/okv" directory where the endpoint software must be installed.
On the OS itself, a library is installed (as root if it's not already there) to take care of the encryption. The location on Linux is "/opt/oracle/extapi/64/hsm/oracle"
A link is created to a config file for this endpoint. This link is located in $ORACLE_BASE/okv/$DB_UNIQUE_NAME and links to 2 files that were part of the install. okvclient.lck, and okvclient.ora.
NOTE: okvclient.ora is the configuration file where parameters are set for the endpoint.
7) Upload the keys from the local wallet into OKV
Now we upload the keys from the local wallet into OKV.
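The upload itself is done with okvutil from the endpoint's bin directory. A sketch of the command (the -l directory is my local TDE wallet location, and the -g wallet name is illustrative and must match the wallet created in OKV):

cd /home/oracle/app/oracle/admin/tdetest/wallet/okv/bin
./okvutil upload -t WALLET -l /home/oracle/app/oracle/admin/tdetest/wallet/tde -g TDETEST_WALLET -v 2
# okvutil prompts for the source wallet password and the Key Vault endpoint password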
Within the upload, I can see where the TDE master keys are being uploaded for my PDBs by looking at the PDB guids.
SQL> column name format a40
SQL> select name,guid from v$pdbs;
NAME GUID
---------------------------------------- --------------------------------
PDB$SEED BF5039AF39966A70E0536800000A09E1
TDEPDB1 BF50708B8BEB0266E0536800000A7B90
TDEPDB2 BF5072A8540A032BE0536800000AB0DD
TDEPDB3 BF507489CE7703B4E0536800000A8180
And I can look in the wallet in OKV (filtering by Symmetric Key) and see the contents that were uploaded from the local wallet. In this screen I can identify the PDB keys because I used tags when I created the keys.
8) Add secret to allow use of "External Store".
1) I am going to add the OKV password to the keystore as a secret to allow me to use the "EXTERNAL STORE" instead of the password.
ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2021!' FOR CLIENT 'OKV_PASSWORD' TO LOCAL AUTO_LOGIN KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde_seps';
NOTE: As I pointed out in the last post
The keystore must be in a subdirectory of the WALLET_ROOT location called "tde_seps" in order to be found.
The "FOR CLIENT" entry must be 'OKV_PASSWORD' to be found.
It must be AUTO_LOGIN so that it can be opened and used.
2) I am going to add the OKV password to the keystore as a secret to allow me to auto logon to the OKV Keystore.
ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2021!' FOR CLIENT 'HSM_PASSWORD' TO AUTO_LOGIN KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde';
3) I need to change the TDE_CONFIGURATION (which is dynamic).
ALTER SYSTEM SET TDE_CONFIGURATION = "KEYSTORE_CONFIGURATION=OKV|FILE" SCOPE = BOTH;
4) I am going to bounce the database, and ensure that both my file and OKV wallets open up properly.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN AUTOLOGIN NONE YES
CDB$ROOT OKV OPEN_NO_MASTER_KEY OKV NONE UNDEFINED
PDB$SEED FILE OPEN AUTOLOGIN UNITED YES
PDB$SEED OKV OPEN_NO_MASTER_KEY OKV UNITED UNDEFINED
TDEPDB1 FILE OPEN AUTOLOGIN UNITED YES
TDEPDB1 OKV OPEN_NO_MASTER_KEY OKV UNITED UNDEFINED
TDEPDB2 FILE OPEN AUTOLOGIN UNITED YES
TDEPDB2 OKV OPEN_NO_MASTER_KEY OKV UNITED UNDEFINED
TDEPDB3 FILE OPEN AUTOLOGIN UNITED YES
TDEPDB3 OKV OPEN_NO_MASTER_KEY OKV UNITED UNDEFINED
10 rows selected.
9) Combine the local wallet File and OKV.
Next I need to migrate the keys using the local wallet. Note this will rekey the database.
ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "0KV2021!" MIGRATE USING "F1LE2021!" WITH BACKUP;
I am going to bounce the database and ensure it comes up with both Keystores opened.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN AUTOLOGIN NONE YES
CDB$ROOT OKV OPEN OKV NONE UNDEFINED
PDB$SEED FILE CLOSED UNKNOWN UNITED UNDEFINED
PDB$SEED OKV CLOSED UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE OPEN AUTOLOGIN UNITED YES
TDEPDB1 OKV OPEN OKV UNITED UNDEFINED
TDEPDB2 FILE OPEN AUTOLOGIN UNITED YES
TDEPDB2 OKV OPEN OKV UNITED UNDEFINED
TDEPDB3 FILE OPEN AUTOLOGIN UNITED YES
TDEPDB3 OKV OPEN OKV UNITED UNDEFINED
That's all there is to it !
The most important notes I found during this process
WALLET_ROOT and TDE_CONFIGURATION should be used in 19c.
The password for OKV can be stored as secrets so it does not have to be entered manually:
add secret to the wallet in WALLET_ROOT/tde_seps using client 'OKV_PASSWORD'
add secret to the wallet in WALLET_ROOT/tde using client 'HSM_PASSWORD'
OKV must be installed in WALLET_ROOT/okv
The local wallet for TDE must be in WALLET_ROOT/tde
This is post #1 in a series of posts explaining how to implement TDE (Transparent Data Encryption). In this first post I will take my Multitenant 19c database (remember Multitenant is mandatory with 21c) and configure TDE in my 3 pluggable databases (up to 3 PDBs requires no extra license).
The database I created for this example contains 3 PDBs as this will give me the flexibility to unplug and move PDBs around.
The names I used are
TDETEST - This is my CDB, and I will only be encrypting the USERS tablespace and the catalog.
TDEPDB1, TDEPDB2, TDEPDB3 - My 3 PDBs. I will be encrypting all user tablespaces that make up these 3 PDBs.
Since I have only a single instance, the location I chose to put the local wallet file is under $ORACLE_BASE/admin/$ORACLE_SID/wallet (/home/oracle/app/oracle/admin/tdetest/wallet).
In a RAC environment you have a few choices of where to put the wallet file. The most important thing, though, is to ensure each node has an identical copy.
In a RAC environment you can put the wallet file:
On the local file system on each node, but be sure they are all identical. It is best to put them within the $ORACLE_BASE to make it easier in an out-of-place upgrade.
On a shared filesystem. This can be ACFS, DBFS, or just NFS.
With ASM. It is possible to set the location to be an ASM file for example "+DATA/TDETEST/wallet"
NOTE: When deciding where to store your TDE wallet there are 2 items to keep in mind.
1) You need to back up your wallet. Without the wallet file you can't access your encrypted data, so once encryption is implemented the wallet always needs to be available.
2) The wallet needs to be backed up separately from the database. If the wallet and the database backup are stored together, anyone who obtains the backup also has the keys and can read it.
In later posts I will convert from using a local wallet file to store my encryption keys to using OKV, along with a local wallet that caches the auto-login information.
To migrate to TDE, I chose to perform the quickest method "Restore tablespace as encrypted". With my test database, I did not have a standby database.
NOTE: With a standby database the fastest way to convert to TDE would be with a "standby first" strategy. This is explained in this MAA document which includes an automated procedure. With this strategy you would convert your standby database to utilize TDE with a restore as encrypted, while the primary database stays untouched. Once the standby is converted, you would perform a switchover (to the standby) and encrypt the "current standby", which was the primary. Once both are encrypted, you would switch back and the process is completed.
Step 1 - Perform a full backup of the database immediately prior to starting the process. Since I am using "restore tablespace as encrypted" this will allow me to open the database with minimal recovery. Once backed up, you also should create a restore point to quickly identify the point after the full backup prior to the encryption.
create restore point pretde;
Step 2 - Set the location of the wallet_root, and the tde configuration. I chose to use the WALLET_ROOT parameter (new with 19 I believe) because it gives the most flexibility. Keep in mind in order to go through step 2 completely the database will need to be bounced.
alter system set WALLET_ROOT='/home/oracle/app/oracle/admin/tdetest/wallet/' scope=spfile;
startup force;
alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;
Step 3 - We are going to take a look at the database and the parameters that are set for the encryption wallet. Below is the formatted query I am going to be using throughout this post.
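The query is essentially v$encryption_wallet joined to v$containers, along these lines (column formatting trimmed):

set linesize 200
column "PDB Name" format a10
column wrl_parameter format a50
select c.name "PDB Name",
       w.wrl_type "Type",
       w.wrl_parameter,
       w.status,
       w.wallet_type,
       w.keystore_mode "KEYSTORE",
       w.fully_backed_up "Backed Up"
from   v$encryption_wallet w, v$containers c
where  w.con_id = c.con_id
order  by c.name, w.wrl_type;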
Below is the output of the query and the current settings as of this point. You can see that there are rows for all my PDBs, and that the status is "NOT_AVAILABLE" since I have not created a wallet or any master keys yet. You can also see that the keystore is UNITED, meaning that all the keys (both for the CDB and all the PDBs) are assumed to be contained in the same Wallet file.
Also note that the WRL_PARAMETER is set based on the WALLET_ROOT setting. The TDE wallet file is defaulted to be the WALLET_ROOT/tde directory for the CDB.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ NOT_AVAILABLE UNKNOWN NONE UNDEFINED
PDB$SEED FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
TDEPDB2 FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
TDEPDB3 FILE NOT_AVAILABLE UNKNOWN UNITED UNDEFINED
Step 4. Now I need to create the keystore and open it for the CDB, and all my individual PDBs. Note that each PDB shares the keystore with the CDB. In isolated mode, I would create an individual keystore for each PDB and they would be in subdirectories under the WALLET_ROOT location.
But first I need to create the directory to hold the keystore wallet.
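Since WALLET_ROOT points at the .../wallet directory, the file keystore lives in a tde subdirectory beneath it:

mkdir -p /home/oracle/app/oracle/admin/tdetest/wallet/tde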
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde' IDENTIFIED BY "F1LE2021!";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!";
alter session set container=tdepdb1;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
alter session set container=tdepdb2;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
alter session set container=tdepdb3;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
Now let's look at the encryption settings in v$encryption_wallet. Below you can see that there is a single wallet setting (UNITED keystore), and the status is "OPEN_NO_MASTER_KEY". The master key has not been set for CDB, or the PDBs.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN_NO_MASTER_KEY PASSWORD NONE UNDEFINED
PDB$SEED FILE CLOSED UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE OPEN_NO_MASTER_KEY PASSWORD UNITED UNDEFINED
TDEPDB2 FILE OPEN_NO_MASTER_KEY PASSWORD UNITED UNDEFINED
TDEPDB3 FILE OPEN_NO_MASTER_KEY PASSWORD UNITED UNDEFINED
Step 5. Now we create the master keys for the CDB and each PDB. A master key is needed to encrypt the tablespace encryption keys stored in the datafiles.
NOTE: I added a tag that identifies the key with the CDB or PDB it is created for. I highly recommend using tags to identify the keys within the wallet. Identifying the master encryption key for an individual PDB will be important when moving PDBs between CDBs.
ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'TDETEST MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDETEST_TDEKEY_APR1_backup';
alter session set container=tdepdb1;
ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'TDEPDB1 MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDEPDB1_TDEKEY_APR1_backup' container=current;
alter session set container=tdepdb2;
ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'TDEPDB2 MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDEPDB2_TDEKEY_APR1_backup' container=current;
alter session set container=tdepdb3;
ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'TDEPDB3 MASTERKEY_APRIL19' IDENTIFIED BY "F1LE2021!" WITH BACKUP USING 'TDEPDB3_TDEKEY_APR1_backup' container=current;
And once again let's look at the settings in v$encryption_wallet. This time you will see that the wallet is open for the CDB and all PDBs except PDB$SEED. The wallet type is "PASSWORD", which means that the wallet needs to be manually opened with a password.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN PASSWORD NONE NO
PDB$SEED FILE CLOSED UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE OPEN PASSWORD UNITED NO
TDEPDB2 FILE OPEN PASSWORD UNITED NO
TDEPDB3 FILE OPEN PASSWORD UNITED NO
Step 6 - We have the master keys set and the wallets are open. We now need to implement TDE by encrypting the tablespaces in my PDBs. As I said before, in my example, I used "restore tablespace as encrypted".
Another option is to encrypt online (as of 12c). In this process the database will encrypt each datafile sequentially while the database is online and active.
NOTE : If using online encryption be aware that
It takes much longer than performing a restore, as datafiles are encrypted sequentially. Using "restore tablespace as encrypted", you can parallelize the restore across multiple channels.
The process needs enough space for 2 copies of the largest datafile. If using bigfiles, this can be quite a bit of storage.
You need to monitor the process to ensure it completes successfully.
The next step is to startup mount, open the wallets, restore the USERS tablespaces in my 3 PDBs along with the USERS tablespace in my CDB, and then recover and open the database.
sqlplus> shutdown immediate;
sqlplus> startup mount;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!";
sqlplus> alter session set container=tdepdb1;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
sqlplus> alter session set container=tdepdb2;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
sqlplus> alter session set container=tdepdb3;
sqlplus> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "F1LE2021!" CONTAINER = CURRENT;
rman target / catalog rmancat/oracle@rmancat
rman> restore tablespace users as encrypted;
rman> restore tablespace tdepdb1:users as encrypted;
rman> restore tablespace tdepdb2:users as encrypted;
rman> restore tablespace tdepdb3:users as encrypted;
rman> recover database;
rman> alter database open;
Step 7 - Make sure all new tablespaces are encrypted by default
In order to ensure all new tablespaces are encrypted I am going to set the database parameter.
sql>alter system set encrypt_new_tablespaces = ALWAYS scope = both sid = '*';
Step 8 - Encrypt all credentials that are contained in the root container
In order to encrypt all credentials (like scheduler credentials and DB Link credentials) that are stored in the system catalogs, you need to log in as a user granted the "SYSKM" role and execute:
sql> alter database dictionary encrypt credentials container = current;
Step 9 - I am going to verify that the pluggable databases are encrypted, along with the catalog.
First let's look at the existing keys using the query below
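The query is along these lines against v$encryption_keys (the formatting commands are illustrative):

column "PDB Name" format a15
column tag format a40
select c.name "PDB Name",
       k.activation_time,
       k.key_id,
       k.tag
from   v$encryption_keys k, v$containers c
where  k.con_id = c.con_id
order  by c.name;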
I can see the keys that are created in each container, including the ROOT
PDB Name Activation Time Key ID Tag
--------------- -------------------------- ------------------------------------------------------- ----------------------------------------
CDB$ROOT 19-Apr-2021 05:12:41pm AbwcWGicr0+rvyhrSB+rKQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDETEST MASTERKEY_APRIL19
TDEPDB1 19-Apr-2021 05:19:11pm AX9pkB+zQE/Wv6Qek13TeLkAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDEPDB1 MASTERKEY_APRIL19
TDEPDB2 19-Apr-2021 05:19:11pm AUKkROD1TE8wv0jfJhN63JYAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDEPDB2 MASTERKEY_APRIL19
TDEPDB3 19-Apr-2021 05:19:13pm AZUWZDWpxk9sv0GrljDFr7sAAAAAAAAAAAAAAAAAAAAAAAAAAAAA TDEPDB3 MASTERKEY_APRIL19
I am going to use the query below to look at the encryption setting on each datafile.
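A simplified version of that query, reporting the encrypted flag per tablespace from cdb_tablespaces (my original also joined in the key views to show the Master Key ID column):

column "PDB Name" format a15
column tablespace_name format a30
select c.name "PDB Name",
       t.tablespace_name,
       t.encrypted
from   cdb_tablespaces t, v$containers c
where  t.con_id = c.con_id
order  by c.name, t.tablespace_name;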
Below is the output. I see that all the datafiles were properly encrypted and are available.
PDB Name Tablespace Name Encrypted Master Key ID
--------------- ------------------------------ ---------- ------------------------------------------------------------
CDB$ROOT SYSAUX NO AbwcWGicr0+rvyhrSB+rKQA=
SYSTEM NO AbwcWGicr0+rvyhrSB+rKQA=
TEMP NO AbwcWGicr0+rvyhrSB+rKQA=
UNDOTBS1 NO AbwcWGicr0+rvyhrSB+rKQA=
USERS YES AbwcWGicr0+rvyhrSB+rKQA=
TDEPDB1 SYSAUX NO AX9pkB+zQE/Wv6Qek13TeLk=
SYSTEM NO AX9pkB+zQE/Wv6Qek13TeLk=
TEMP NO AX9pkB+zQE/Wv6Qek13TeLk=
UNDOTBS1 YES AX9pkB+zQE/Wv6Qek13TeLk=
USERS YES AX9pkB+zQE/Wv6Qek13TeLk=
TDEPDB2 SYSAUX NO AUKkROD1TE8wv0jfJhN63JY=
SYSTEM NO AUKkROD1TE8wv0jfJhN63JY=
TEMP NO AUKkROD1TE8wv0jfJhN63JY=
UNDOTBS1 YES AUKkROD1TE8wv0jfJhN63JY=
USERS YES AUKkROD1TE8wv0jfJhN63JY=
TDEPDB3 SYSAUX NO AZUWZDWpxk9sv0GrljDFr7s=
SYSTEM NO AZUWZDWpxk9sv0GrljDFr7s=
TEMP NO AZUWZDWpxk9sv0GrljDFr7s=
UNDOTBS1 YES AZUWZDWpxk9sv0GrljDFr7s=
USERS YES AZUWZDWpxk9sv0GrljDFr7s=
And I am going to look at the catalog to be sure the credentials are encrypted.
select enforcement from DICTIONARY_CREDENTIALS_ENCRYPT;
ENFORCEM
--------
ENABLED
Step 10 - I am adding the ability to access the keystore without having to enter the password. This can make it much easier to open the keystore if you don't choose to make the keystore wallet AUTO_LOGIN.
I am putting my keystore password in an AUTO_LOGIN wallet as a secret. This wallet needs to be created in a directory called tde_seps. In my case, since I am using WALLET_ROOT, the directory containing the secret must be WALLET_ROOT/tde_seps.
The password portion of "ADMINISTER KEY MANAGEMENT" then becomes IDENTIFIED BY EXTERNAL STORE.
NOTE: I am using the phrase "FOR CLIENT 'TDE_WALLET'".
'TDE_WALLET' is the client identifier the database looks for in this wallet when opening the file keystore using the external store.
ADMINISTER KEY MANAGEMENT
ADD SECRET 'F1LE2021!' FOR CLIENT 'TDE_WALLET'
USING TAG 'TDE file keystore password'
TO LOCAL AUTO_LOGIN KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde_seps';
Now to verify it I am going to restart my database and open the wallet using the external store (rather than hardcoding in the password).
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE;
alter session set container=tdepdb1;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE CONTAINER = CURRENT;
alter pluggable database open;
alter session set container=tdepdb2;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE CONTAINER = CURRENT;
alter pluggable database open;
alter session set container=tdepdb3;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY EXTERNAL STORE CONTAINER = CURRENT;
alter pluggable database open;
And finally verify that the keystores are open (using the external store) for my CDB and PDBs.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN PASSWORD NONE NO
PDB$SEED FILE CLOSED UNKNOWN UNITED UNDEFINED
TDEPDB1 FILE OPEN PASSWORD UNITED NO
TDEPDB2 FILE OPEN PASSWORD UNITED NO
TDEPDB3 FILE OPEN PASSWORD UNITED NO
Step 11 - I am going to change the wallets to be AUTO_LOGIN, bounce the database and verify that the encryption settings are all correct.
sqlplus / as sysdba
sql> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY EXTERNAL STORE;
sql> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde/' IDENTIFIED BY "F1LE2021!";
sql> shutdown immediate
sql> startup
And v$encryption_wallet shows me that my wallets are all open, and that they are AUTOLOGIN.
PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
CDB$ROOT FILE /home/oracle/app/oracle/admin/tdetest/wallet//tde/ OPEN AUTOLOGIN NONE NO
PDB$SEED FILE OPEN AUTOLOGIN UNITED NO
TDEPDB1 FILE OPEN AUTOLOGIN UNITED NO
TDEPDB2 FILE OPEN AUTOLOGIN UNITED NO
TDEPDB3 FILE OPEN AUTOLOGIN UNITED NO
Now I am ready to perform a new FULL backup of the pluggable databases, and they are ready for use.
That's all there is to implementing TDE with a wallet file. Next post, I am going to convert my wallet to OKV managed wallets.
Recently version 19.1.1.2 of the ZDLRA software was released, and one of the features is something called "Smart Incremental". I will walk through how this feature works and help you understand why features like this are "ZDLRA only".
I am going to start by walking through how incremental backups become "virtual full backups", and that will give you a better picture of how "smart incremental" is possible.
The most important thing to understand about these features is that the RMAN catalog itself is within the ZDLRA AND the ZDLRA has the ability to update the RMAN catalog.
How does a normal backup strategy work ?
That is probably the best place to start. What DBAs typically do is perform a WFDI (Weekly Full Daily Incremental) backup. To keep my example simple, I will use the following assumptions.
My database contains 3 datafiles: SYSTEM, SYSAUX and USERS, but I will only use the example of backing up the USERS datafile.
Each of these 3 datafiles is 50 GB in size.
I am only performing a differential backup which creates a backup containing the changes since the last backup (full OR incremental).
My database is in archivelog mode *
* NOTE: With ZDLRA you can back up a nologging database, and still take advantage of virtual fulls. The database needs to be in a MOUNTED state when performing the incremental backup.
If placed in a table, the backups for datafile USERS would look like this. The Checkpoint SCN is the current SCN of the database at the start of the backup.
If I were to look at what is contained in the RMAN catalog (RC_BACKUP_DATAFILE), I would see the same backup information, but the SCN information would be in 2 columns.
Incremental Change # is the oldest SCN contained in the backup set. This is the starting SCN of the previous backup that this backup is based on.
Checkpoint Change # is the starting SCN of the backup. Everything newer than this SCN (including this SCN) needs to be defuzzied.
Normal backup progression (differential)
When performing an incremental RMAN backup of a datafile, the first thing that RMAN does is decide which blocks need to be backed up. Because you are performing an incremental backup, you may be backing up all of the blocks, only some of the blocks, or even none of the blocks if the file has not changed.
This is a decision RMAN makes by querying the RMAN catalog entries (or the controlfile entries if you are not using an RMAN catalog).
Now let's walk through this decision process. Each RMAN incremental differential's starting SCN is based on the beginning SCN of the previous backup (except for the full).
By looking at the RMAN catalog (or controlfile), RMAN determines which blocks need to be contained in each incremental backup.
Normal backup progression (cumulative differential)
Up to release 19.1.1.2, the recommendation was to perform a cumulative differential backup. The cumulative differential backup uses the starting SCN of the last full backup (rather than the last incremental backup) to determine the starting point of the incremental backup.
The advantage of cumulative over differential is that a single cumulative backup can be applied to the last full, taking the place of applying multiple differential backups. However, cumulative backups get bigger with every day that passes between full backups, because they contain all blocks changed since the last full.
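For reference, the only syntax difference between the two in RMAN is the CUMULATIVE keyword. Generic examples against the USERS tablespace used in this example:

Differential level 1 (changes since the most recent level 0 or level 1):
rman> backup incremental level 1 tablespace users;
Cumulative level 1 (all changes since the most recent level 0):
rman> backup incremental level 1 cumulative tablespace users;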
Below is what a cumulative schedule would look like and you can compare this to the differential above.
You can see that each cumulative backup starts with the Checkpoint SCN of the last full to ensure that all blocks changed since the full backup started are captured.
The RMAN catalog entries would look like this.
If you are astute, you will notice that a few things changed with the cumulative differential vs the differential:
The backup size got bigger every day
The time it took to perform the incremental backup got longer
The range of SCNs contained in the incremental is larger for a cumulative backup.
As you most likely know, one of the most important features of the ZDLRA is the ability to create a "virtual full" from an incremental backup.
If we look at what happens with a cumulative differential (from above), I will fill in the virtual full RMAN catalog entries by shading them light green.
The process of performing backups on the ZDLRA is exactly the same as it is for the above cumulative, but the RMAN catalog looks like this.
What you will notice by comparing this to the normal cumulative process is that:
For every cumulative incremental backup there is a matching virtual full backup. The virtual full backup appears (from the newly inserted catalog entry) to have been taken at the same time, and with the same starting SCN, as the cumulative incremental; as catalog entries, the virtual full and the incremental match in both time and SCN.
The size of the virtual full is 0 since it is virtual and does not take up any space.
The completion time for the cumulative incremental backup is the same as the differential backups. Because the RMAN logic can see the virtual full entry in the catalog, it executes the cumulative incremental EXACTLY as if it is the first differential incremental following a full backup.
Smart Incremental backups -
Now all of this led us to smart incremental backups. Sometimes the cumulative backup process doesn't work quite right. A few of the reasons this can happen are:
You perform a full backup to a backup location other than the ZDLRA. This could be because you are backing up to the ZDLRA for the first time, replacing a current backup strategy, or maybe you created a special backup to disk to seed a test environment (using a KEEP backup for this will alleviate this issue). The cumulative incremental backup will compare against the last full regardless of where it was taken (there are exceptions if you always use tags to compare).
You implement TDE or rekey the database. Implementing TDE (Transparent Data Encryption) changes the blocks, but does not change the SCN numbers of the blocks. A new full backup is required.
Previously, you would have to perform a special full backup to correct these issues. In the example below you can see what happens (without smart incremental) to the RMAN catalog if you perform a backup on Thursday at 12:00 to disk to refresh a development environment.
Since the cumulative backups are based on the last full backup, the Thursday - Saturday backups contain all the blocks that have changed since the disk backup started on Thursday at 12:00.
And, since it is cumulative, each day's backup is larger and takes longer.
This is when you would typically have to force a new level 0 backup of the datafile.
What the smart incremental does
Since the RMAN catalog is controlled by the ZDLRA it can correct the problem for you. You no longer need to perform cumulative backups as the ZDLRA can fill in any issues that occur.
In the case of the full backup to disk, it can "hide" that entry and continue to correctly perform differential backups. It would "hide" the disk backup that occurred, and inform the RMAN client that the last full backup as of Thursday night is NOT the disk backup, but the previous virtual full backup.
In the case of TDE, it can "hide" all of the level 0 virtual full backups and the level 1 differential backups (which will force a new level 0).
All of this is done without updating the DB client version. All the magic is done within the RMAN catalog on the ZDLRA.