Thursday, July 18, 2024

DBMS_CLOUD Debugging with ZFS Object Storage

In the course of testing the DBMS_CLOUD functionality against OCI object storage on ZFS, I wanted to perform debugging by looking at the packets sent to the Web Listener on my ZFS.

Unfortunately for debugging purposes, DBMS_CLOUD requires all calls to object storage to be HTTPS calls which are encrypted.

In this blog post, I will go through the architecture below to show you how I was able to use a Load Balancer in OCI on port 443 (HTTPS traffic) to send the requests to my ZFS using Port 80 (HTTP traffic).

By doing this I was able to see all the packets going to ZFS.

You can use this same process to debug network traffic, while leaving the application interface encrypted.


Below are the steps in the OCI console, but I am not going to include the policies that need to be configured.

1) Create a vault

You can find create Vault under "Identity & Security" --> "Key Management & Secret Management".

Click on "Create Vault" and all you need to do is to give the vault a name, and choose the compartment to store the vault.

Once you fill them in click on "Create Vault" to have the vault created.

2) Create a Master Encryption Key

Once the Vault is created, click on the vault name, and this will bring up the window where you can enter a Master Encryption Key to be created within the vault.

Click on "Create Key" and enter the information to create a new Key in this vault.  Note that 

  • The key MUST be an HSM key; you cannot use a software key
  • The key must be asymmetric. The default is symmetric and must be changed.

3) Create a Certificate Authority

Under "Identity & Security" --> "Certificates" you will see "Certificate Authorities". We need to create a new one.

Click on "Create Certificate Authority", and in this case we are creating a Root Certificate Authority. You need to give it a "Name" and "Description" and click on the 'Next" button in the lower left corner.

Then on the next window give it a "Common Name" and click on Next.


On the next window, you must choose a "not valid before" date. In my case, I chose today.

Then you must enter the Vault and the Encryption key that you had created previously.

Then click on "Next".


Then set the expiry rule and click on "next".  I left the defaults.


On the next window I changed "Revocation Configuration"  to "skip" and I clicked on "Next"


Then on the "Summary" window I clicked on "Create Certificate Authority" to create the Certificate Authority.


4) Create a Load Balancer

This can be found under "Networking" --> "Load balancers". Click on Load Balancer.

Once here, click on the "Create Load balancer" button.
Give the load balancer a name (if you want) to make it easier to find.
You then need to scroll down to the bottom of the screen to choose your network and subnet for the Load balancer.
Once you fill these in Click on "NEXT".





After clicking on Next, I left everything defaulted. This will do a health check on the ZFS using port 80.  Then I clicked on "Next" again.


In this window, I changed from HTTPS to HTTP. This allows me to create the Load Balancer without having a Certificate yet.  


I left the logging off, and clicked on "Submit" to create the Load Balancer.


5) Determine the Public IP for the Load Balancer

Once the load balancer is created, I go to the list of load balancers under Networking --> Load Balancers --> Load Balancer and it shows me the public IP for the Load Balancer that was created.  The overall Health is showing "incomplete" since I haven't added any backend hosts yet.



6) Create the certificate


Now that I know the Public IP address (129.146.220.252) I can create a certificate for it in my Certificate Authority.
I go back to "Identity & Security" --> Certificates and click on "certificates"

I click on "Create Certificate" and I enter the name and description and Click on "next"







I give the "Common Name" my IP address so that the Certificate Name matches the URL I am going to use to connect.  Then I click on "Next".


In the next window I fill in the "not valid before" and click on "next".





I leave the rules default for the certificate and click on "next"


Then when I get to the "Summary" window I click on "Create Certificate".

7) Create a Backend set for the load balancer

I now go back to Networking --> Load Balancers --> Load Balancer and choose the Load Balancer I had previously created.

On the left hand side of the window I click on "Backend Sets" to list the existing Backend sets.  By default a backend set was created for me, but it has no members.
I click on the default Backend set to bring up the window to add members.
This will bring up a window showing that the backend set is "incomplete"
From here I click on "Backends(0)" on left hand side of the window.


This brings up a window with an "Add backends" button. Click on this button to bring up the window to enter backends.



On the window above I entered the IP address of the HTTP interface I am using on ZFS, leaving the port as 80 so that the traffic will be unencrypted, and clicked on "ADD" to add it to the backend list.

8) Change the Health Check to TCP

On the Backends window I changed the "Update Health Check" from the HTTP protocol to the TCP protocol and clicked on "Save Changes".




9) Change the Load Balancer to HTTPS

I now go back to Networking --> Load Balancers --> Load Balancer and choose the Load Balancer I had previously created.

From the left hand side, I click on "Listeners" and then I click on "Create Listener".



In the window that comes up, I want to make this an HTTPS listener, so I change the protocol to HTTPS and choose the certificate I created in the previous step. This allows the load balancer to receive encrypted traffic with a registered certificate.
In this step, I also need to ensure it is using the Backend set I just updated. Once complete choose "Create Listener".




That's all there is to it.

Now I can access the object storage on ZFS with DBMS_CLOUD using the load balancer's public IP (encrypted HTTPS), and the traffic will be passed on to the ZFS as HTTP.
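
If you want to quickly verify the path end-to-end, a DBMS_CLOUD credential and object listing along these lines will exercise the load balancer. This is only a sketch: the credential name, share and bucket are placeholders, and the user/tenancy OCIDs follow the ZFSSA conventions I describe in my APEX post.

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'ZFS_CRED',                      -- placeholder name
    user_ocid       => 'ocid1.user.oc1..oracle',        -- ZFS object storage user
    tenancy_ocid    => 'ocid1.tenancy.oc1..nobody',     -- fixed value for ZFSSA
    private_key     => '<contents of the private key>',
    fingerprint     => '<fingerprint of the public key>');
END;
/

-- List a bucket through the load balancer's public IP (HTTPS on 443)
SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
         credential_name => 'ZFS_CRED',
         location_uri    => 'https://129.146.220.252/n/<share name>/b/<bucket name>/o/');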


Wednesday, July 10, 2024

Creating Archival Backups from ZDLRA using EM Cloud Control


The ability for the ZDLRA to create archive backups was added with release 21.1 and I wrote a blog post (here) on how to do this.  I recently noticed that the latest plugin for ZDLRA (13.5.1.0.0) allows you to dynamically schedule your archival jobs from EM Cloud Control.

Create Archival Backup


In this blog post I will go through how to use this new feature.

First the release that I am using for this is

  •  EM Cloud Control 13.5.0.19
  • Zero Data Loss Recovery Appliance Plugin Release 13.5.1.0.0

Where to find the feature:

If you have the correct plugin, you will notice that there is a new choice in the "Recovery Appliance"  pull down menu provided by the plugin.


There is an entry for "Archival Backups" that appears just below "replication".  When you chose this option it will bring up a new window that you can use to prepare to create an archival backup.


Notice that there is nothing listed here.  I did create an archival backup earlier, but it isn't listed.

In order to create an archival backup, click on the "Create Archival Backup" button and continue to one of the next sections.  You can either create a "one-time" archival backup, or schedule a recurring backup.  The default is to create a recurring scheduled backup.

Create a recurring scheduled Backup:

Protected Databases

I am going to create a recurring scheduled backup for my database "testdb".   I can choose only one database.

Recovery Point Time

  • This should be for every month.  I chose every month individually, and I ensure that I chose all 12 months.
  • This should occur on the "last" day of the month
  • The recovery point should be 11:00 PM based on the browser time (I can also choose the DB time, or UTC).
  • I want to set the restore point prefix to be "MONTHLY_KEEP_BACKUP_". The job will append the timestamp to the end of the prefix.

Retention Time

  • Keep this backup for 3 years (I can also choose a time period based on months or weeks).

Properties

  • Use the attribute set "TESTDB" that I created earlier.
  • Leave the default format of the backup pieces, but I can change the format if I'd like to.
  • I am not setting an encryption algorithm (I would need to for a copy-to-cloud job).
  • I am not setting a compression algorithm.
My screen for creating the recurring backup looks like the image below.


Once I complete everything I can click on OK, and it will submit my schedule to run.


Viewing recurring scheduled Backup  Procedures:

The recurring backups are not scheduled as jobs, they are scheduled as Procedures because they have a few steps to execute.
You can find these scheduled backups in EM Cloud Control under Enterprise --> Provisioning and Patching --> Procedure Activity.
At this point, I had scheduled 2 jobs (actually procedures) previously, and you can see them in this section.


In order to see more detail on these 2 procedures I can select one of them and click on the "Reschedule" button at the top of the list of procedures.
I know the first procedure is for executing scheduled archival backups for TESTDB because the name of the procedure contains TESTDB followed by the timestamp.

Below is what it shows when I choose to reschedule it.


You can see that during this test, I created a monthly schedule that creates a new backup at 7:00 AM PT on the 10th of the months listed.  During my test I did not include all months, and those that I included, I did not choose them in order.  
When I go back to the list of procedures, and drill into the procedure, I can see that there are just a couple of steps, and I can't see any detail as to what the steps do.


Viewing executed scheduled Backup Procedures:

In order to view any executed scheduled backup you would look in the same place as you do for scheduled procedures.  Along with the 2 scheduled procedures I had above, I also had one of them actually execute, and I see it in the list.


You can see that the first scheduled job had successfully executed. Now let's take a look at the executed steps and output.
If you click on the highlighted "Run" name, you can drill into the procedure and steps. Below is what I see for the step detail for this execution.


Below is what the output of the last step looks like.

You can see all of the attributes that were set when I created this procedure, and you can see the actual command that executed to create the archival backup.


Create a One-time only archival Backup:

Similar to creating a recurring backup, you go to the "Create Archival Backup" section within the ZDLRA plugin.

Protected Databases

I am going to create a One-time archival backup for my database "testdb".   I can choose only one database.

Create Archival Backup For


Within this section there are 3 choices

Point-in-Time : Using a date picker choose the point in time you want to create the archival backup as of. 


SCN : Enter the SCN you want to use. The text tells you the range of SCNs you can use.


Restore Point : Enter the restore point from the drop down menu.



Retention Time (same as recurring backups)

  • Keep this backup for 3 years (I can also choose a time period based on months or weeks).

Properties (same as recurring backups)

  • Use the attribute set "TESTDB" that I created earlier.
  • Leave the default format of the backup pieces, but I can change the format if I'd like to.
  • I am not setting an encryption algorithm (I would need to for a copy-to-cloud job).
  • I am not setting a compression algorithm.
Click "OK" after filling in all of the detail, and submit the job.


Viewing archival Backups:


In the window where you chose "Create Archival Backup", you can view existing backups.  In order to view the backups, you must first choose the "Protected Database" you want to view the backups for. Below is what you would see once a backup is initiated.



Summary:

You still might find it easier to create the archival backup yourself using the PL/SQL package. This can be done either manually or through scripting.  The GUI gives you a nice way to schedule individual database jobs, but for hundreds or thousands of databases with varying requirements, scripting can be more flexible.
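
For reference, a scripted archival backup boils down to a single PL/SQL call on the ZDLRA. The sketch below is only an outline from memory: the parameter names may differ by release, so verify them against the DBMS_RA.CREATE_ARCHIVAL_BACKUP documentation before using it.

BEGIN
  -- Parameter names are from memory and may differ by release;
  -- check the DBMS_RA.CREATE_ARCHIVAL_BACKUP documentation.
  DBMS_RA.CREATE_ARCHIVAL_BACKUP(
    db_unique_name     => 'TESTDB',
    attribute_set_name => 'TESTDB',
    restore_point      => 'MONTHLY_KEEP_BACKUP_20240731',
    keep_until_time    => SYSTIMESTAMP + INTERVAL '3' YEAR,
    comments           => 'Monthly archival backup for TESTDB');
END;
/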



Wednesday, June 26, 2024

Using APEX to upload objects to ZFSSA

 When working on my latest project, I wanted to be able to provide an easy web interface that can be used to upload images into OCI object storage on ZFSSA by choosing the file on my local file system.

In this blog post, I will go through the series of steps I used to create a page in my APEX application that allows a user to choose a local file on their PC, and upload that file (image in my case) to OCI object storage on ZFSSA.



Below are the series of steps I followed.


Configure ZFSSA as OCI object storage

First you need to configure your ZFSSA as OCI object storage.  Below are a couple of links to get you started.

During this step you will

  • Create a user on ZFSSA that will be the owner of the object storage
  • Add a share that is owned by the object storage user
  • Enable OCI API mode "Read/Write" as the protocol for this share
  • Under the HTTP service enable the service and enable OCI.
  • Set the default path as the share.
  • Add a public key for the object storage user under "Keys" within the OCI configuration.

NOTE: You can find an example of how to create public/private key pair here.
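
If you just need a quick key pair, OpenSSL commands like the following will generate one (the file names here are just examples):

# Generate a 2048-bit private key, then derive the public key to register on the ZFSSA
openssl genrsa -out ./oci_api_key.pem 2048
openssl rsa -pubout -in ./oci_api_key.pem -out ./oci_api_key_public.pem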

Create a bucket in the OCI object storage on ZFSSA

In order to create a bucket in the OCI object storage you need to use the "OCI cli" interface.
If you have not installed it already, you can use this link for instructions on how to install it.

Once installed, you need to configure the ~/.oci/config file and I explain the contents in my "OCI access to ZFS" section of this blog post.

Now you should have the oci cli installed, and the configuration file created, and we are ready for the command to create the bucket.

oci os bucket create --endpoint http://{ZFSSA name or IP address} --namespace-name {share name} --compartment-id {share name} --name {bucket name}

For my example below:

Parameter                   Value
--------------------------  ----------------------------------------------
ZFSSA name or IP address    zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com
share name                  objectstorage
bucket name                 newobjects

The command to create my bucket would be:
oci os bucket create --endpoint http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com --namespace-name objectstorage --compartment-id objectstorage --name newobjects


Ensure you have the authentication information for APEX

This step is to make sure you have what you need for APEX in order to configure and upload an object into object storage on ZFSSA.

If you successfully created a bucket in the last step, you should have everything you need in the configuration file that you used.  Looking at the contents of my config file (below) I have almost all the parameters I need for APEX.

From the step above I have the correct  URL to access the object storage and the bucket.

http://{ZFSSA name or IP address}/n/{share name}/b/{bucket name}/o/

which becomes

http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com/n/objectstorage/b/newobjects/o/

The rest of the information except for tenancy is in the configuration file.

Parameter in config file   Value
-------------------------  ----------------------------------------------------------------
user                       ocid1.user.oc1..{ZFS user} ==> ocid1.user.oc1..oracle
fingerprint                {my fingerprint} ==> 88:bf:b8:95:c0:0a:8c:a7:ed:55:dd:14:4f:c4:1b:3e
key_file                   This file contains the private key, and we will use this in APEX
region                     This is always us-phoenix-1
namespace                  share name ==> objectstorage
compartment                share name ==> objectstorage


NOTE: The tenancy ID for ZFSSA is always  "ocid1.tenancy.oc1..nobody"


In APEX configure web credentials

Now that we have all of the authentication information outlined in the previous step, we need to configure web credentials to access the OCI object storage on ZFSSA as a rest service.

In order to add the web credentials I log into my workspace in APEX. Note I am adding the credentials at the workspace level rather than at the application level.
Within your workspace make sure you are within the "App Builder" section and click on "Workspace Utilities". 



Within "Workspace Utilities" click on "web Credentials".



Now click on "Create >" to create new web credential



Enter the information below (also see screen shot)

  • Name of credential
  • Type is OCI
  • user Id from above
  • private key from above
  • Tenancy ID is always ocid1.tenancy.oc1..nobody for ZFSSA
  • Fingerprint that matches the public/private key
  • URL for the ZFS




In APEX create the upload region and file selector

I have an existing application, or you can create a new application in APEX. I am going to start by creating a blank page in my application.



After clicking on "Next >", I give the new page a name and create the page.






Then on the new page I created a new region by right clicking on "Body"


Once I created the region, I named the region "upload" by changing the identification on the right hand side of Page Designer.



Then on the left hand side of Page Designer, I right clicked on my region "upload" and chose to create a new "Page Item".


After creating the new page item I needed to give the item a better identification name and change the type to "file upload". See the screen shot below.


In APEX create the button to submit the file to be stored in object storage.


Next we need to add a button to upload the file to object storage.  Right click on the "upload" region, and this time choose "create button below".


I gave the button a clearer name to identify what it's there for.


And I scrolled down the attributes of the button on the right hand side, and made sure that the behavior for the button was "Submit Page"



In APEX add the upload process itself

Click on the Processing section in the top left corner of Page Designer and you will see the sections for page processes.  Right click on "Processing" and click on "Create process".


The next step is to give the process a better identifier, and I named mine "file_upload". I also need to include the PL/SQL code to execute as part of this process.

The items we need to customize for the code snippet are:

ITEM                     VALUE
-----------------------  ------------------------------------------------------------------
File Browse Page Item    ":" followed by the name of the file selector. Mine is ":FILE_NAME"
Object Storage URL       This is the whole URL including namespace and bucket name
Web Credentials          This is the name of the Web Credentials created for the workspace


My PL/SQL code is below with the values I've mentioned throughout this blog.



declare
    l_request_url           varchar2(32000);
    l_content_length        number;
    l_response              clob;
    upload_failed_exception exception;
    l_request_object        blob;
    l_request_filename      varchar2(500);
begin
    -- Retrieve the uploaded file from the APEX temporary files table
    select blob_content, filename
      into l_request_object, l_request_filename
      from apex_application_temp_files
     where name = :FILE_NAME;

    -- Build the object storage URL: {endpoint}/n/{namespace}/b/{bucket}/o/{object name}
    l_request_url := 'http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com/n/objectstorage/b/newobjects/o/'
                     || apex_util.url_encode(l_request_filename);

    -- PUT the file into the bucket using the workspace web credential
    l_response := apex_web_service.make_rest_request(
        p_url                  => l_request_url,
        p_http_method          => 'PUT',
        p_body_blob            => l_request_object,
        p_credential_static_id => 'ZFSAPI'
    );
end;
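
The upload_failed_exception declared above isn't actually used in my code. If you want the page process to fail when the PUT doesn't succeed, a check like this (my assumption on how you would use it) can be added just before the final end;:

    -- raise an error if object storage did not return an HTTP 2xx status
    if apex_web_service.g_status_code not between 200 and 299 then
        raise upload_failed_exception;
    end if;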


In the APEX database ensure you grant access to the URL

The final step before we test this is to add the ACL grant for the URL.
NOTE: This needs to be granted to the APEX application owner, in my case APEX_230200.

BEGIN
    DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
        host => '*',
        ace => xs$ace_type(privilege_list => xs$name_list('connect', 'resolve'),
            principal_name => 'APEX_230200',
            principal_type => xs_acl.ptype_db
        )
    );
END;
/


Upload the object and verify it was successful

After saving the page in Page Designer run the page to upload an object.
Choose an object from your local file system and click on the "Upload Object" button.

If there were no errors, it was successful and you can verify it was uploaded by listing the objects in the bucket.
Below is my statement to list the objects.

oci os object list --endpoint http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com  --namespace-name objectstorage --bucket-name newobjects


 That's all there is to it

Friday, May 31, 2024

ZDLRA's space efficient encrypted backups with TDE explained

In this post I will explain what typically happens when RMAN either compresses or encrypts backups, and how the new space efficient encrypted backup feature of the ZDLRA solves these issues.


TDE - What does a TDE encrypted block look like ?

Oracle Block contents

In the image above you can see that only the data is encrypted with TDE.  The header information (metadata) remains unencrypted.  The metadata is used by the database to determine the information about the block, and is used by the ZDLRA to create virtual full backups.


Normal backup of TDE encrypted datafiles

First let's go through what happens when TDE is utilized, and you perform a RMAN backup of the database.

In the image below, you can see that the blocks are written and are not changed in any way. 

NOTE: Because the blocks are encrypted, they cannot be compressed outside of the database.  


TDE backup no compression

Compressed backup of TDE encrypted datafiles

Next let's go through what happens if you perform an RMAN backup of the database AND tell RMAN to create compressed backupsets.  As I said previously, the encrypted data will not compress, and because the data is TDE the backup must remain encrypted.
Below you can see that RMAN handles this with a series of steps.

RMAN will
  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the whole block (including the headers) using a new encryption key generated by the RMAN job

You can see in the image below, after executing two RMAN backup jobs the blocks are encrypted with two different encryption keys. Each subsequent backup job will also have new encryption keys.
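
For reference, requesting this kind of backup in RMAN looks something like the generic sketch below (not ZDLRA specific, and it assumes the wallet is open):

RMAN> SET ENCRYPTION ON;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;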

Compressed TDE data



Compression or Deduplication

This leaves you with having to choose one or the other when performing RMAN backup jobs to a deduplication appliance.  If you execute a normal RMAN backup, there is no compression available, and if you utilize RMAN compression, it is not possible to dedupe the data. The ZDLRA, since it needs to read the header data, didn't support using RMAN compression.

How space efficient encrypted backups work with TDE

So how does the ZDLRA solve this problem to be able to provide both compression and the creation of virtual full backups?
The flow is similar to using RMAN compression, BUT instead of using RMAN encryption, the ZDLRA library encrypts the blocks in a special format that leaves the header data unencrypted.  The ZDLRA library only encrypts the data contents of blocks.

  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the data portion of the block (not the headers) using a new encryption key generated by the RMAN job
In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full backup.

This allows the ZDLRA to both compress the blocks AND provide space efficient virtual full backups




How space efficient encrypted backups work with non-TDE blocks


So how does the ZDLRA feature work with non-TDE data ?
The flow is similar to that of TDE data, but the data does not have to be unencrypted first.  The blocks are compressed using RMAN compression, and are then encrypted using the new ZDLRA library.


In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full.





I hope this helps to show you how space efficient encrypted backups work, and how it is a much more efficient way to both protect your backups with encryption and utilize compression.

NOTE: Using space efficient encrypted backups does not require the ACO (Advanced Compression Option) or ASO (Advanced Security Option).









Wednesday, April 17, 2024

Autonomous Recovery Service Prechecks

If you are configuring backups to utilize the Autonomous Recovery Service, there are some prerequisites that you need to be aware of.  If your Oracle Database was originally created in OCI and has always been in OCI, those prerequisites are already configured for your database.  But if you migrated a database to an OCI service, you might not realize that these items are required.


Prerequisites for Autonomous Recovery Service


1) WALLET_ROOT must be configured in the SPFILE.

WALLET_ROOT is a new parameter that was added in 19c, and its purpose is to replace the SQLNET.ENCRYPTION_WALLET_LOCATION in the sqlnet.ora file. Configuring the encryption wallet location in the sqlnet.ora file is deprecated.
WALLET_ROOT points to the directory path on the DB node(s) where the encryption wallet is stored for this database, and possibly the OKV endpoint client if you are using OKV to manage your encryption keys.
WALLET_ROOT allows each database to have its own configuration location specific to that database.

There is a second parameter that goes with WALLET_ROOT that tells the database what kind of wallet is used (file, HSM or OKV), and that parameter is tde_configuration.


Running the script below should return the WALLET_ROOT location, and the tde_configuration information.


Checking the WALLET_ROOT and tde_configuration
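
A minimal version of that check is just a select against V$PARAMETER (my sketch; formatting aside, it returns the same two rows):

set linesize 120
column Parameter format a20
column Value     format a60

select name  "Parameter",
       value "Value"
  from v$parameter
 where name in ('wallet_root', 'tde_configuration');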


Below you can see that both of these parameters are configured and I am using a wallet file.


Parameter            Value
-------------------- ------------------------------------------------------------
wallet_root          /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root
tde_configuration    keystore_configuration=FILE


If you need to migrate to WALLET_ROOT, use the tooling.

dbaascli tde enableWalletRoot - enable wallet_root spfile parameter for existing database.

           Usage: dbaascli tde enableWalletRoot --dbname <value> [--dbRestart <value>] [--executePrereqs] [--resume [--sessionID <value>]]
                     Where:
                          --dbname - Oracle database name.
                          [--dbRestart - database restart option. Valid values : full|rolling ]
                          [ --executePrereqs - run the prerequisite checks and report the results. ]
                          [--resume - to resume the previous operation]
                          [--sessionID - to resume a specific session id.]


2) Encryption keys must be configured and available

In order to leverage the Autonomous Recovery Service, you must have an encryption key set and available for the CDB and each PDB.  If you migrated a non-TDE database (or plugged in a non-TDE PDB) to OCI you might not have configured encryption for one or more PDBs.  The next step is to ensure that you have an encryption key set, and the wallet is open.  The query below should return "OPEN" for each CDB/PDB showing that the encryption key is available.
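
A query along these lines (my sketch against GV$ENCRYPTION_WALLET joined to GV$CONTAINERS) produces that output:

column "PDB Name"     format a10
column wrl_parameter  format a60

select ew.inst_id,
       c.name          "PDB Name",
       ew.wrl_type     "Type",
       ew.wrl_parameter,
       ew.status
  from gv$encryption_wallet ew
  join gv$containers c
    on c.con_id  = ew.con_id
   and c.inst_id = ew.inst_id
 order by ew.inst_id, c.name;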


Below is the output from the query showing that the wallet is open for the CDB and the PDBs. 



   INST_ID PDB Name   Type       WRL_PARAMETER                                                Status
---------- ---------- ---------- ------------------------------------------------------------ ---------------
         1 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN

         2 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN



3) All tablespaces are TDE encrypted

TDE encryption is mandatory in OCI, and the Autonomous Recovery Service cannot be used if all of your tablespaces are not encrypted.  Below is a query to run that will tell you if your tablespaces are all encrypted.
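
A simple count against DBA_TABLESPACES gives you the same picture (my sketch; the original script may summarize slightly differently):

select count(case when encrypted = 'YES' then 1 end) "Encrypted tablespaces",
       count(case when encrypted = 'NO'  then 1 end) "Unencrypted tablespaces",
       count(*)                                       "Total tablespaces"
  from dba_tablespaces;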


In my case I can see that all of the tablespaces are encrypted

Encrypted tablespace information
------------------------------------------------------------
Number of encrypted tablespaces   :      12
Number of unencrypted tablespaces :      0
                                         ----
Total Number of tablespaces       :      12



To find any tablespaces that are not encrypted you can run the query below.
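
For example (my sketch):

select tablespace_name, contents
  from dba_tablespaces
 where encrypted = 'NO';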



4) Turn Off Any Manual Operational Backups.


In some cases, OCI users perform manual operational backups. These backups are run outside the standard tooling and support point-in-time recovery (non-KEEP backups).
Performing incremental backups to multiple locations can cause integrity issues with recovery.
The original backups can be kept to support the original retention window, and ensure that you have operational backups for a point-in-time prior to onboarding to the Recovery Service.  
Choose an appropriate cutover time, and switch to the Recovery Service, and slowly remove older backups as they expire until they are all completely removed.



Monday, April 15, 2024

Restoring OCI object store backups onto Exadata Cloud Service

This blog post covers the steps necessary to restore backups made using the Oracle Database Backup Service onto Exadata Cloud Service in the event of a DR situation.


In this post, I am going to assume that you have already configured an ExaCS environment and have a VM defined to restore the database into.

The database I am going to use for testing has the characteristics below.

DBNAME:    bgrenndb

DB version:    19.19

DB_UNIQUE_NAME: BGRENNDB_HS7_IAD

NOTE: I have been creating "KEEP" backups for this database and I want to use one of them to restore from in OCI.  This may not be the case for you; you might be sending a weekly full backup and a daily incremental backup.


Prerequisites:

There are some prerequisites that I found are important to make the restoration go smoothly

  • Backup your TDE encryption wallet - It is important to make sure you have the encryption keys for your database.  When using the Oracle Database Backup Service, ALL backup pieces are encrypted, including the backups of the spfile and controlfile. It is critical to have the encryption wallet to restore the backups.  You want to backup just the "ewallet.p12" file. I recommend you DO NOT backup the cwallet.sso file, as this is the autologin wallet.  Best MSA (Maximum Security Architecture) practice is to store the wallet backup separately from the database backups, and recreate the autologin wallet using the password. This is much more secure than backing up the autologin wallet.
  • Store the backup logs in a bucket - When restoring from a database backup you need to determine the backup pieces that are needed, especially when restoring the controlfile.  If you store the log files, it will make it much easier to restore the database without an RMAN catalog.
  • Create a bucket for DB backups and Metadata - This is where the database backups will be stored, and I recommend adding a retention lock to the bucket.  Instructions on creating the retention lock can be found here.
PRO TIP : The easiest way to upload the RMAN backup log files, and backups of the wallets is to use Pre-Authenticated URLs (PARS). These make it secure (because they can only be used to drop the backup into a bucket), and they also make it easier to deal with authentication.
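
As an example, uploading a backup log with a PAR is a single curl call (the PAR URL below is only a placeholder):

# PUT the RMAN log into the bucket using a PAR; the PAR URL is a placeholder
curl -X PUT --data-binary @rman_backup_20240227.log \
  "https://objectstorage.us-ashburn-1.oraclecloud.com/p/<PAR token>/n/<namespace>/b/<bucket>/o/rman_backup_20240227.log"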

Steps to restore a database from object storage.

1) Create a stub database 

Because I want to use the tooling in OCI to manage my database, I am starting with a "stub" database with the same name as my backed up database, and it should be the same DB release  or higher. 

NOTE: When creating the stub database, you should use the same password as you are using for the original database.  In my case the SYS password, and the wallet password are the same.  If your wallet password is different from the SYS password, you can create the stub database with different passwords.

Stub database

DBNAME:    bgrenndb

DB version:    19.22

DB_UNIQUE_NAME: BGRENNDB_S39_IAD


PRO TIP  - In hindsight, I should have named the DB_UNIQUE_NAME the same as my production database to make it easier to restore.

2) Backup a copy of the stub SPFILE


In sqlplus I backed up the SPFILE to a PFILE that I will use later to ensure my parameters which are local to this VM are correct when I restore my database.

SQL> create pfile='/tmp/bgrenndb.origpfile' from spfile;

3) Shutdown the database and delete all files.

I shut down the database in srvctl since this is a RAC instance

#> srvctl stop database -d bgrenndb_s39_iad

I deleted all the files on ASM from both +DataC1 and +RecoC1 for this database


4) Download and configure the Oracle Database Backup Service

You need to download the Oracle Database backup service installation jar file.  Once this is downloaded, you need to run the installation which will download the library, create a wallet file, and create the configuration file used by the library.

Instructions on how to do this are documented in my last blog post you can find here.

Pro Tip: Since I am restoring the database to a RAC cluster, it is easier to install the Backup Service configuration in a shared location across all nodes.  In my environment, I am going to install the Backup Service configuration in "/acfs01/dbaas_acfs/bgrenndb" in a directory called opc.


Once I go through the installation, I will have the following directories

/acfs01/dbaas_acfs/bgrenndb/opc/lib        --> contains libopc.so used during restore

/acfs01/dbaas_acfs/bgrenndb/opc/config    --> backupconfig.ora containing the library parameters

/acfs01/dbaas_acfs/bgrenndb/opc/wallet     --> contains the authentication information


5) Download and configure the TDE Wallet from my backup

The easiest way to download the most current wallet from OCI object storage is by using a Pre-authenticated URL (PAR).  I created a PAR on the object and then used curl to download my wallet file.

curl -o {name of the restored file } {PAR which is a long URL pointing to the object}

Once I download the wallet, I am going to :
  • Go to the wallet directory (under WALLET_ROOT/tde) and delete the original wallet files (ewallet.p12 and cwallet.sso).
  • Replace the ewallet.p12 with my downloaded wallet from my source database.
Now that I have the wallet downloaded, I need to create the autologin wallet.

NOTE: it is not recommended to backup the autologin wallet, just the passworded wallet

To create the autologin wallet from the passworded wallet I execute

>mkstore -wrl {wallet_location} -createSSO

I enter the password for the wallet, and it creates the autologin wallet for me.

6) Startup the database nomount and validate wallet


Now that I have the wallet in the correct location, I created a basic pfile.  I only need the following parameters.  You can look at the backup of the stub spfile to get the appropriate settings for "control_files", "db_unique_name", and the proper disk groups for DATA and RECO.

*.control_files='+DATAC1/BGRENNDB_S39_IAD/CONTROLFILE/current.327.1166013711'
*.db_name='bgrenndb'
*.enable_pluggable_database=true
*.db_recovery_file_dest='+RECOC1'
*.db_recovery_file_dest_size=6538932518912
*.db_unique_name='bgrenndb_s39_iad'
*.diagnostic_dest='/u02/app/oracle'
*.pga_aggregate_target=5000m
*.processes=2048
*.sga_target=7600m
*.tde_configuration='keystore_configuration=FILE'
*.wallet_root='/var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root'


NOTE: I am going to restore the spfile, so this is only temporary.

I started the database nomount with this small pfile

SQL> startup nomount pfile=bgrenndb.ora;

Once the database started, I used the first TDE query from my blog to check the status of the wallet.  You want to make sure the encryption wallet is OPEN before proceeding.

 INST_ID PDB Name   Type       WRL_PARAMETER                                      Status                         WALLET_TYPE          KEYSTORE Backed Up
---------- ---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
         1            FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/td OPEN                           UNKNOWN              NONE     NO
                                 e/


7) Locate the name of the SPFILE and Controlfile backup pieces

As part of my backup script, I also uploaded the log file associated with the backup. This gave me
  • The DBID
  • The name of the spfile backup piece associated with the backup I am going to restore
  • The name of the controlfile backup piece associated with the backup I am going to restore

8) Restore the spfile and update it.

Using the backup piece name, I restored my spfile to the file system, and created a pfile copy of it so that I can make a few changes.

RMAN>
 run {
 allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
 set dbid=367184428;
 restore spfile to '/tmp/bgrenndb.spfile' from 'BGRENNDB_KEEP_20240227_3776_1' ;
}

RMAN> 2> 3> 4> 5>
allocated channel: c1
channel c1: SID=2142 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=19.0.0.1

executing command: SET DBID

Starting restore at 12-APR-24

channel c1: restoring spfile from AUTOBACKUP BGRENNDB_KEEP_20240227_3776_1
channel c1: SPFILE restore from AUTOBACKUP complete
Finished restore at 12-APR-24
released channel: c1

RMAN> create pfile='/tmp/bgrenndb.pfile' from spfile='/tmp/bgrenndb.spfile';

Statement processed


I then edited my pfile, "/tmp/bgrenndb.pfile" and made the following changes.
  • I changed cluster_interconnects to match the entries in the original spfile from the stub.
  • I changed entries that were pointing to DATAC6 and RECOC6 to DATAC1 and RECOC1 to match the VM I am restoring to.
  • I changed the REMOTE_LISTENER to match the original spfile.
  • I changed bgrenndb_hs7_iad to bgrenndb_s39_iad since that will be the new db_unique_name.
I then bounced the database and started it up NOMOUNT again with the new pfile

9) Restore the controlfile

Now I am going to identify the backup location of the controlfile I want, and restore the control file 

RMAN>

 run {
  allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
  set dbid=367184428;
 restore controlfile from 'BGRENNDB_KEEP_20240227_3777_1' ;
}
4> 5>
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=9 instance=bgrenndb1 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=19.0.0.1

executing command: SET DBID

Starting restore at 12-APR-24

channel c1: restoring control file
channel c1: restore complete, elapsed time: 00:00:04
output file name=+DATAC1/BGRENNDB_S39_IAD/CONTROLFILE/current.332.1166124375
Finished restore at 12-APR-24
released channel: c1

Once the controlfile was restored, I updated the pfile with the location the controlfile was restored to.
Then I created the spfile from pfile.

SQL> create spfile from pfile='/tmp/bgrenndb.pfile';

I then shutdown the instance and started it mount and ensured the parameters were correct, and once again ensured the wallet was open.

10) Change the channel configuration in RMAN and restore

I changed the channel configuration to match the backup service settings, and restored the database using the TAG
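
The configuration was along these lines, pointing at the same library and config file used in the earlier allocate channel commands:

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;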

 restore database from tag=KEEP_BGRENNDB_HS7_IAD_20240227;
 recover database from tag=KEEP_BGRENNDB_HS7_IAD_20240227;

11) I opened the database with resetlogs



RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 04/12/2024 19:59:12
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109'
ORA-17502: ksfdcre:4 Failed to create file +DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109
ORA-15046: ASM file name '+DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109' is not in single-file creation form
ORA-17503: ksfdopn:2 Failed to open file +DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109
ORA-15001: diskgroup "DATAC6" does not ex



Oops, I then disabled block change tracking.


RMAN> alter database disable block change tracking;

RMAN> alter database open resetlogs;

Statement processed
PL/SQL package SYS.DBMS_BACKUP_RESTORE version 19.19.00.00 in TARGET database is not current
PL/SQL package SYS.DBMS_RCVMAN version 19.19.00.00 in TARGET database is not current

Now it was successful, and I see I have to upgrade the database.


12) Patch the database from 19.19 to 19.22

I ran through the patch upgrade process 

> cd $ORACLE_HOME/OPatch
>./datapatch -verbose


Summary:

Once I patched the database, I turned on automatic backups, which was successful. This was a great sign that I had everything correct and my new database was ready to go!