Wednesday, June 26, 2024

Using APEX to upload objects to ZFSSA

When working on my latest project, I wanted to provide an easy web interface for uploading images into OCI object storage on ZFSSA by choosing a file on my local file system.

In this blog post, I will go through the series of steps I used to create a page in my APEX application that allows a user to choose a local file on their PC, and upload that file (image in my case) to OCI object storage on ZFSSA.



Below are the series of steps I followed.


Configure ZFSSA as OCI object storage

First you need to configure your ZFSSA as OCI object storage.  Below are a couple of links to get you started.

During this step you will

  • Create a user on ZFSSA that will be the owner of the object storage
  • Add a share that is owned by the object storage user
  • Enable OCI API mode "Read/Write" as the protocol for this share
  • Under the HTTP service, enable the service and enable OCI
  • Set the default path as the share
  • Add a public key for the object storage user under "Keys" within the OCI configuration

NOTE: You can find an example of how to create public/private key pair here.
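As a quick sketch, a key pair suitable for the ZFSSA OCI configuration can be generated with openssl (the file names here are my own choice):

```shell
# Generate a 2048-bit RSA private key; keep this file safe, it is
# what the OCI CLI and APEX will authenticate with.
openssl genrsa -out oci_api_key.pem 2048
chmod 600 oci_api_key.pem

# Extract the matching public key; this is the key you paste under
# "Keys" in the ZFSSA OCI configuration.
openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem
```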

Create a bucket in the OCI object storage on ZFSSA

In order to create a bucket in the OCI object storage you need to use the "OCI cli" interface.
If you have not installed it already, you can use this link for instructions on how to install it.

Once installed, you need to configure the ~/.oci/config file; I explain its contents in the "OCI access to ZFS" section of this blog post.

Now that the oci cli is installed and the configuration file is created, we are ready to run the command to create the bucket.

oci os bucket create --endpoint http://{ZFSSA name or IP address} --namespace-name {share name} --compartment-id {share name} --name {bucket name}

For my example below:

Parameter                  Value
ZFSSA name or IP address   zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com
share name                 objectstorage
bucket name                newobjects

The command to create my bucket is:
oci os bucket create --endpoint http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com --namespace-name objectstorage --compartment-id objectstorage --name newobjects


Ensure you have the authentication information for APEX

This step is to make sure you have what you need for APEX in order to configure and upload an object into object storage on ZFSSA.

If you successfully created a bucket in the last step, you should have everything you need in the configuration file that you used.  Looking at the contents of my config file (below) I have almost all the parameters I need for APEX.
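For reference, a ~/.oci/config for ZFSSA looks roughly like this (the key_file path is illustrative; the other values are the ones used throughout this post):

```
[DEFAULT]
user=ocid1.user.oc1..oracle
fingerprint=88:bf:b8:95:c0:0a:8c:a7:ed:55:dd:14:4f:c4:1b:3e
key_file=/home/opc/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..nobody
region=us-phoenix-1
```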

From the step above I have the correct  URL to access the object storage and the bucket.

http://{ZFSSA name or IP address}/n/{share name}/b/{bucket name}/o/

which becomes

http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com/n/objectstorage/b/newobjects/o/

The rest of the information except for tenancy is in the configuration file.

Parameter in config file   Value
user                       ocid1.user.oc1..{ZFS user} ==> ocid1.user.oc1..oracle
fingerprint                {my fingerprint} ==> 88:bf:b8:95:c0:0a:8c:a7:ed:55:dd:14:4f:c4:1b:3e
key_file                   This file contains the private key, and we will use this in APEX
region                     This is always us-phoenix-1
namespace                  share name ==> objectstorage
compartment                share name ==> objectstorage


NOTE: The tenancy ID for ZFSSA is always  "ocid1.tenancy.oc1..nobody"


In APEX configure web credentials

Now that we have all of the authentication information outlined in the previous step, we need to configure web credentials to access the OCI object storage on ZFSSA as a rest service.

In order to add the web credentials I log into my workspace in APEX. Note that I am adding the credentials at the workspace level rather than at the application level.
Within your workspace make sure you are within the "App Builder" section and click on "Workspace Utilities". 



Within "Workspace Utilities" click on "Web Credentials".



Now click on "Create >" to create a new web credential.



Enter the information below (also see screen shot)

  • Name of credential
  • Type is OCI
  • User ID from above
  • Private key from above
  • Tenancy ID, which is always ocid1.tenancy.oc1..nobody for ZFSSA
  • Fingerprint that matches the public/private key
  • URL for the ZFSSA




In APEX create the upload region and file selector

I have an existing application, but you can also create a new application in APEX. I am going to start by creating a blank page in my application.



After clicking on "Next >", I give the new page a name and create the page.






Then on the new page I created a new region by right clicking on "Body"


Once I created the region, I named the region "upload" by changing the identification on the right hand side of Page Designer.



Then on the left hand side of Page Designer, I right clicked on my region "upload" and chose to create a new "Page Item".


After creating the new page item I needed to give the item a better identification name and change the type to "file upload". See the screen shot below.


In APEX create the button to submit the file to be stored in object storage


Next we need to add a button to upload the file to object storage.  Right click on the "upload" region, and this time choose "create button below".


I gave the button a clearer name to identify what it's there for


And I scrolled down the attributes of the button on the right hand side, and made sure that the behavior for the button was "Submit Page"



In APEX add the upload process itself

Click on the processing section in the top left corner of Page Designer and you will see the sections for page processes.  Right click on "Processing" and click on "Create process".


The next step is to give the process a better identifier (I named mine "file_upload"), and to include the PL/SQL code to execute as part of this process.

The items we need to customize for the code snippet are:

Item                     Value
File Browse Page Item    ":" followed by the name of the file selector. Mine is ":FILE_NAME"
Object Storage URL       The whole URL including namespace and bucket name
Web Credentials          The name of the Web Credentials created for the workspace


My PL/SQL code is below with the values I've mentioned throughout this blog.



declare
    l_request_url           varchar2(32000);
    l_response              clob;
    upload_failed_exception exception;
    l_request_object        blob;
    l_request_filename      varchar2(500);
begin
    -- Fetch the uploaded file from the APEX temporary files table.
    select blob_content, filename
      into l_request_object, l_request_filename
      from apex_application_temp_files
     where name = :FILE_NAME;

    l_request_url := 'http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com/n/objectstorage/b/newobjects/o/'
                     || apex_util.url_encode(l_request_filename);

    -- PUT the file to object storage using the workspace web credential.
    l_response := apex_web_service.make_rest_request(
        p_url                  => l_request_url,
        p_http_method          => 'PUT',
        p_body_blob            => l_request_object,
        p_credential_static_id => 'ZFSAPI'
    );

    -- Raise an error if the PUT did not return HTTP 200.
    if apex_web_service.g_status_code != 200 then
        raise upload_failed_exception;
    end if;
end;


In the APEX database ensure you grant access to the URL

The final step before we test this is to add the ACL grant for the URL.
NOTE: This needs to be granted to the APEX application owner, in my case APEX_230200

BEGIN
    DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
        host => '*',
        ace => xs$ace_type(privilege_list => xs$name_list('connect', 'resolve'),
            principal_name => 'APEX_230200',
            principal_type => xs_acl.ptype_db
        )
    );
END;
/


Upload the object and verify it was successful

After saving the page in Page Designer run the page to upload an object.
Choose an object from your local file system and click on the "Upload Object" button.

If there were no errors, it was successful and you can verify it was uploaded by listing the objects in the bucket.
Below is my statement to list the objects.

oci os object list --endpoint http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com  --namespace-name objectstorage --bucket-name newobjects


That's all there is to it.

Friday, May 31, 2024

ZDLRA's space efficient encrypted backups with TDE explained

In this post I will explain what typically happens when RMAN either compresses or encrypts backups, and how the new space efficient encrypted backup feature of the ZDLRA solves these issues.


TDE - What does a TDE encrypted block look like ?

Oracle Block contents

In the image above you can see that only the data is encrypted with TDE.  The header information (metadata) remains unencrypted.  The metadata is used by the database to determine the information about the block, and is used by the ZDLRA to create virtual full backups.


Normal backup of TDE encrypted datafiles

First let's go through what happens when TDE is utilized, and you perform a RMAN backup of the database.

In the image below, you can see that the blocks are written and are not changed in any way. 

NOTE: Because the blocks are encrypted, they cannot be compressed outside of the database.  


TDE backup no compression

Compressed backup of TDE encrypted datafiles

Next let's go through what happens if you perform an RMAN backup of the database AND tell RMAN to create compressed backupsets.  As I said previously, the encrypted data will not compress, and because the data is TDE the backup must remain encrypted.
Below you can see that RMAN handles this with a series of steps.

RMAN will
  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the whole block (including the headers) using a new encryption key generated by the RMAN job.
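For context, the kind of RMAN job that takes this path is simply an encrypted, compressed backupset (a minimal sketch; channel and wallet configuration omitted):

```
RMAN> SET ENCRYPTION ON;
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
```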

You can see in the image below, after executing two RMAN backup jobs the blocks are encrypted with two different encryption keys. Each subsequent backup job will also have new encryption keys.

Compressed TDE data



Compression or Deduplication

This leaves you having to choose one or the other when performing RMAN backup jobs to a deduplication appliance.  If you execute a normal RMAN backup, there is no compression available, and if you utilize RMAN compression, it is not possible to dedupe the data. The ZDLRA, since it needs to read the header data, didn't support using RMAN compression.

How space efficient encrypted backups work with TDE

So how does the ZDLRA solve this problem to be able to provide both compression and the creation of virtual full backups?
The flow is similar to using RMAN compression, BUT instead of using RMAN encryption, the ZDLRA library encrypts the blocks in a special format that leaves the header data unencrypted.  The ZDLRA library only encrypts the data contents of blocks.

  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in the block (it is unencrypted in memory).
  3. Re-encrypt the data portion of the block (not the headers) using a new encryption key generated by the RMAN job.
In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full backup.

This allows the ZDLRA to both compress the blocks AND provide space efficient virtual full backups




How space efficient encrypted backups work with non-TDE blocks


So how does the ZDLRA feature work with non-TDE data ?
The flow is similar to that of TDE data, but the data does not have to be decrypted first.  The blocks are compressed using RMAN compression, and are then encrypted using the new ZDLRA library.


In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full.





I hope this helps to show you how space efficient encrypted backups work, and how they are a much more efficient way to both protect your backups with encryption and utilize compression.

NOTE: Using space efficient encrypted backups does not require the ACO or ASO options.









Wednesday, April 17, 2024

Autonomous Recovery Service Prechecks

If you are configuring backups to utilize the Autonomous Recovery Service, there are some prerequisites that you need to be aware of.  If your Oracle Database was originally created in OCI and has always been in OCI, those prerequisites are already configured for your database.  But if you migrated a database to an OCI service, you might not realize that these items are required.


Prerequisites for Autonomous Recovery Service


1) WALLET_ROOT must be configured in the SPFILE.

WALLET_ROOT is a new parameter that was added in 19c, and its purpose is to replace SQLNET.ENCRYPTION_WALLET_LOCATION in the sqlnet.ora file. Configuring the encryption wallet location in the sqlnet.ora file is deprecated.
WALLET_ROOT points to the directory path on the DB node(s) where the encryption wallet is stored for this database, and possibly the OKV endpoint client if you are using OKV to manage your encryption keys.
WALLET_ROOT allows each database to have its own configuration location specific to that database.

There is a second parameter that goes with WALLET_ROOT that tells the database what kind of wallet is used (file, HSM or OKV), and that parameter is tde_configuration.


Running the script below should return the WALLET_ROOT location, and the tde_configuration information.


Checking the WALLET_ROOT and tde_configuration
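A query along these lines returns both settings (a sketch; the original script was shown as an image):

```sql
-- Show the wallet location and the keystore type for this database.
select name  as "Parameter",
       value as "Value"
  from v$parameter
 where name in ('wallet_root', 'tde_configuration');
```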


Below you can see that both of these parameters are configured and I am using a wallet file.


Parameter            Value
-------------------- ------------------------------------------------------------
wallet_root          /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root
tde_configuration    keystore_configuration=FILE


If you need to migrate to WALLET_ROOT, use the tooling.

dbaascli tde enableWalletRoot - enable wallet_root spfile parameter for existing database.

           Usage: dbaascli tde enableWalletRoot --dbname <value> [--dbRestart <value>] [--executePrereqs] [--resume [--sessionID <value>]]
                     Where:
                          --dbname - Oracle database name.
                          [--dbRestart - database restart option. Valid values : full|rolling ]
                          [ --executePrereqs - run the prerequisite checks and report the results. ]
                          [--resume - to resume the previous operation]
                          [--sessionID - to resume a specific session id.]


2) Encryption keys must be configured and available

In order to leverage the Autonomous Recovery Service, you must have an encryption key set and available for the CDB and each PDB.  If you migrated a non-TDE database (or plugged in a non-TDE PDB) to OCI, you might not have configured encryption for one or more PDBs.  The next step is to ensure that you have an encryption key set, and the wallet is open.  The query below should return "OPEN" for each CDB/PDB, showing that the encryption key is available.
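A query along these lines shows the wallet status per instance and container (a sketch matching the output below; the original query was shown as an image):

```sql
-- Wallet status for the CDB and each PDB, across RAC instances.
select w.inst_id,
       c.name     as "PDB Name",
       w.wrl_type as "Type",
       w.wrl_parameter,
       w.status
  from gv$encryption_wallet w
  join gv$containers c
    on c.con_id  = w.con_id
   and c.inst_id = w.inst_id
 order by w.inst_id, c.name;
```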


Below is the output from the query showing that the wallet is open for the CDB and the PDBs. 



   INST_ID PDB Name   Type       WRL_PARAMETER                                                Status
---------- ---------- ---------- ------------------------------------------------------------ ---------------
         1 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN

         2 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN



3) All tablespaces are TDE encrypted

TDE encryption is mandatory in OCI, and the Autonomous Recovery Service cannot be used unless all of your tablespaces are encrypted.  Below is a query to run that will tell you if your tablespaces are all encrypted.
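A sketch of such a query, matching the summary output below (the original was shown as an image):

```sql
-- Count encrypted vs. unencrypted tablespaces.
select sum(case when encrypted = 'YES' then 1 else 0 end) as "Encrypted",
       sum(case when encrypted = 'NO'  then 1 else 0 end) as "Unencrypted",
       count(*)                                           as "Total"
  from dba_tablespaces;
```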


In my case I can see that all of the tablespaces are encrypted

Encrypted tablespace information
------------------------------------------------------------
Number of encrypted tablespaces   :      12
Number of unencrypted tablespaces :      0
                                         ----
Total Number of tablespaces       :      12



To find any tablespaces that are not encrypted you can run the query below.
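A minimal version of that query (assuming the same dba_tablespaces view as above):

```sql
-- List any tablespaces that are not TDE encrypted.
select tablespace_name
  from dba_tablespaces
 where encrypted = 'NO';
```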



4) Turn Off Any Manual Operational Backups.


In some cases, OCI users perform manual operational backups. These backups are run outside the standard tooling and support point-in-time recovery (non-KEEP backups).
Performing incremental backups to multiple locations can cause integrity issues with recovery.
The original backups can be kept to support the original retention window, ensuring that you have operational backups for points in time prior to onboarding to the Recovery Service.
Choose an appropriate cutover time, switch to the Recovery Service, and slowly remove the older backups as they expire until they are completely removed.



Monday, April 15, 2024

Restoring OCI object store backups onto Exadata Cloud Service

This blog post covers the steps necessary to restore backups made using the Oracle Database Cloud Backup Service onto Exadata Cloud Service in the event of a DR situation.


In this post, I am going to assume that you have already configured an ExaCS environment and have a VM defined to restore the database into.

The database I am going to use for testing has the characteristics below.

DBNAME:    bgrenndb

DB version:    19.19

DB_UNIQUE_NAME: BGRENNDB_HS7_IAD

NOTE: I have been creating "KEEP" backups for this database and I want to use one of them to restore from in OCI.  This may not be the case for you; you might be sending a weekly full backup and a daily incremental backup.


Prerequisites:

There are some prerequisites that I found are important to make the restoration go smoothly

  • Backup your TDE encryption wallet - It is important to make sure you have the encryption keys for your database.  When using the Oracle Database Backup Service, ALL backup pieces are encrypted, including the backups of the spfile and controlfile, so it is critical to have the encryption wallet to restore the backups.  You only want to back up the "ewallet.p12" file. I recommend you DO NOT back up the cwallet.sso file, as this is the autologin wallet.  Best MSA (Maximum Security Architecture) practice is to store the wallet backup separate from the database backups, and recreate the autologin wallet using the password. This is much more secure than backing up the autologin wallet.
  • Store the backup logs in a bucket - When restoring from a database backup you need to determine the backup pieces that are needed, especially when restoring the controlfile.  If you store the log files, it will make it much easier to restore the database without an RMAN catalog.
  • Create a bucket for DB backups and Metadata - This is where the database backups will be stored, and I recommend adding a retention lock to the bucket.  Instructions on creating the retention lock can be found here.
PRO TIP: The easiest way to upload the RMAN backup log files and backups of the wallets is to use Pre-Authenticated URLs (PARs). These are secure (they can only be used to drop the backup into a bucket), and they also make it easier to deal with authentication.

Steps to restore a database from object storage.

1) Create a stub database 

Because I want to use the tooling in OCI to manage my database, I am starting with a "stub" database with the same name as my backed up database, and it should be the same DB release  or higher. 

NOTE: When creating the stub database, you should use the same password as you are using for the original database.  In my case the SYS password, and the wallet password are the same.  If your wallet password is different from the SYS password, you can create the stub database with different passwords.

Stub database

DBNAME:    bgrenndb

DB version:    19.22

DB_UNIQUE_NAME: BGRENNDB_S39_IAD


PRO TIP  - In hindsight, I should have named the DB_UNIQUE_NAME the same as my production database to make it easier to restore.

2) Backup a copy of the stub SPFILE


In sqlplus I backed up the SPFILE to a PFILE that I will use later to ensure my parameters which are local to this VM are correct when I restore my database.

SQL> create pfile='/tmp/bgrenndb.origpfile' from spfile;

3) Shutdown the database and delete all files.

I shut down the database in srvctl since this is a RAC instance

#> srvctl stop database -d bgrenndb_s39_iad

I deleted all the files on ASM from both +DataC1 and +RecoC1 for this database


4) Download and configure the Oracle Database Backup Service

You need to download the Oracle Database backup service installation jar file.  Once this is downloaded, you need to run the installation which will download the library, create a wallet file, and create the configuration file used by the library.

Instructions on how to do this are documented in my last blog post you can find here.

Pro Tip: Since I am restoring the database to a RAC cluster, it is easier to install the Backup Service configuration in a shared location across all nodes.  In my environment, I am going to install the Backup Service configuration in "/acfs01/dbaas_acfs/bgrenndb" in a directory called opc.


Once I go through the installation, I will have the following directories

/acfs01/dbaas_acfs/bgrenndb/opc/lib        --> contains libopc.so used during restore

/acfs01/dbaas_acfs/bgrenndb/opc/config    --> backupconfig.ora containing the library parameters

/acfs01/dbaas_acfs/bgrenndb/opc/wallet     --> contains the authentication information


5) Download and configure the TDE Wallet from my backup

The easiest way to download the most current wallet from OCI object storage is by using a Pre-authenticated URL (PAR).  I created a PAR on the object and then used curl to download my wallet file.

curl -o {name of the restored file } {PAR which is a long URL pointing to the object}

Once I download the wallet, I am going to:
  • Go to the wallet directory (under WALLET_ROOT/tde) and delete the original wallet files (ewallet.p12 and cwallet.sso).
  • Replace the ewallet.p12 with my downloaded wallet from my source database.
Now that I have the wallet downloaded, I need to create the autologin wallet.

NOTE: It is not recommended to back up the autologin wallet, just the passworded wallet.

To create the autologin wallet from the passworded wallet I execute

>mkstore -wrl {wallet_location} -createSSO

I enter the password for the wallet, and it creates the autologin wallet for me.

6) Startup the database nomount and validate wallet


Now that I have the wallet in the correct location, I created a basic pfile.  I only need the following parameters.  You can look at the backup of the stub spfile to get the appropriate settings for "control_files", "db_unique_name", and the proper disk groups for DATA and RECO.

*.control_files='+DATAC1/BGRENNDB_S39_IAD/CONTROLFILE/current.327.1166013711'
*.db_name='bgrenndb'
*.enable_pluggable_database=true
*.db_recovery_file_dest='+RECOC1'
*.db_recovery_file_dest_size=6538932518912
*.db_unique_name='bgrenndb_s39_iad'
*.diagnostic_dest='/u02/app/oracle'
*.pga_aggregate_target=5000m
*.processes=2048
*.sga_target=7600m
*.tde_configuration='keystore_configuration=FILE'
*.wallet_root='/var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root'


NOTE: I am going to restore the spfile, so this is only temporary.

I started the database nomount with this small pfile

SQL> startup nomount pfile=bgrenndb.ora;

Once the database started, I used the first TDE query from my blog to check the status of the wallet.  You want to make sure the encryption wallet is OPEN before proceeding.

 INST_ID PDB Name   Type       WRL_PARAMETER                                      Status                         WALLET_TYPE          KEYSTORE Backed Up
---------- ---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
         1            FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/td OPEN                           UNKNOWN              NONE     NO
                                 e/


7) Locate the name of the SPFILE and Controlfile backup pieces

As part of my backup script, I also uploaded the log file associated with the backup. This gave me
  • The DBID
  • The name of the spfile backup piece associated with the backup I am going to restore
  • The name of the controlfile backup piece associated with the backup I am going to restore

8) Restore the spfile and update it.

Using the backup piece name, I restored my spfile to the file system, and created a pfile copy of it so that I can make a few changes.

RMAN>
 run {
 allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
 set dbid=367184428;
 restore spfile to '/tmp/bgrenndb.spfile' from 'BGRENNDB_KEEP_20240227_3776_1' ;
}

RMAN> 2> 3> 4> 5>
allocated channel: c1
channel c1: SID=2142 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=19.0.0.1

executing command: SET DBID

Starting restore at 12-APR-24

channel c1: restoring spfile from AUTOBACKUP BGRENNDB_KEEP_20240227_3776_1
channel c1: SPFILE restore from AUTOBACKUP complete
Finished restore at 12-APR-24
released channel: c1

RMAN> create pfile='/tmp/bgrenndb.pfile' from spfile='/tmp/bgrenndb.spfile';

Statement processed


I then edited my pfile, "/tmp/bgrenndb.pfile" and made the following changes.
  • I changed cluster_interconnects to match the entries in the original spfile from the stub.
  • I changed entries that were pointing to DATAC6 and RECOC6 to DATAC1 and RECOC1 to match the VM I am restoring to.
  • I changed the REMOTE_LISTENER to match the original spfile.
  • I changed bgrenndb_hs7_iad to bgrenndb_s39_iad since that will be the new db_unique_name.
I then bounced the database and started it up NOMOUNT again with the new pfile

9) Restore the controlfile

Now I am going to identify the backup location of the controlfile I want, and restore the control file 

RMAN>

 run {
  allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
  set dbid=367184428;
 restore controlfile from 'BGRENNDB_KEEP_20240227_3777_1' ;
}
4> 5>
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=9 instance=bgrenndb1 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=19.0.0.1

executing command: SET DBID

Starting restore at 12-APR-24

channel c1: restoring control file
channel c1: restore complete, elapsed time: 00:00:04
output file name=+DATAC1/BGRENNDB_S39_IAD/CONTROLFILE/current.332.1166124375
Finished restore at 12-APR-24
released channel: c1

Once the controlfile was restored, I updated the pfile with the location the controlfile was restored to.
Then I created the spfile from pfile.

SQL> create spfile from pfile='/tmp/bgrenndb.pfile';

I then shutdown the instance and started it mount and ensured the parameters were correct, and once again ensured the wallet was open.

10) Change the channel configuration in RMAN and restore

I changed the channel configuration to match the backup service settings, and restored the database using the TAG
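The persistent channel configuration can be set along these lines, mirroring the ALLOCATE CHANNEL commands used earlier (a sketch):

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
```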

 restore database from tag=KEEP_BGRENNDB_HS7_IAD_20240227;
 recover database from tag=KEEP_BGRENNDB_HS7_IAD_20240227;

11) I opened the database with resetlogs



RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 04/12/2024 19:59:12
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109'
ORA-17502: ksfdcre:4 Failed to create file +DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109
ORA-15046: ASM file name '+DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109' is not in single-file creation form
ORA-17503: ksfdopn:2 Failed to open file +DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109
ORA-15001: diskgroup "DATAC6" does not exist



Oops, I then disabled block change tracking.


RMAN> alter database disable block change tracking;

RMAN> alter database open resetlogs;

Statement processed
PL/SQL package SYS.DBMS_BACKUP_RESTORE version 19.19.00.00 in TARGET database is not current
PL/SQL package SYS.DBMS_RCVMAN version 19.19.00.00 in TARGET database is not current

Now it was successful, and I see I have to upgrade the database.


12) Patch the database from 19.19 to 19.22

I ran through the patch upgrade process 

> cd $ORACLE_HOME/OPatch
>./datapatch -verbose


Summary :

Once I patched the database, I turned on automatic backups, which was successful. This was a great sign that I had everything correct and my new database was ready to go!




Friday, March 8, 2024

Autonomous Recovery Service Checklist

Utilizing the Autonomous Recovery Service (ARS) is the best method for backing up your Oracle databases in OCI.  In this post I will go through the steps required to successfully implement this service.  To learn more about this service you can find the documentation here.


1. Ensure your tenancy's resource limits are adequate

Before implementing the ARS, you first must see what the resource settings are in your tenancy. You want to make sure that the "Space Used for Recovery Window (GB)" and "Protected Database Count" allow for the number of databases, and backup size of the databases you want to utilize the service for.

Below is what you would see for the ARS. This is a screen shot from my free tenancy.  In your tenancy you should see what the current limits are.  When looking at the root compartment, this will show you the Limits and usage for the whole tenancy.


If you need to increase the limits for your tenancy click on the 3 dots to the right of the limit you want to increase. It will bring up a choice to "Open Support Request".  After choosing "Open Support Request" you will see a window that allows you to request a limit increase for your tenancy.

NOTE: There is a second choice when clicking on the 3 dots: "Create Quota Policy Stub". Using the stub displayed, you can limit the quota of specific compartments.  This can be used to limit the usage for your "dev" compartment, for example, ensuring there is space within your limits for production.


2. Verify the policies for the tenancy

A) Set root compartment policies for service



Tenancy Policies for ARS

Policy Statement: Allow service database to manage recovery-service-family in tenancy
Purpose: Enables the OCI Database Service to access protected databases, protection policies, and Recovery Service subnets within your tenancy.

Policy Statement: Allow service database to manage tagnamespace in tenancy
Purpose: Enables the OCI Database Service to access the tag namespace in a tenancy.

Policy Statement: Allow service rcs to manage recovery-service-family in tenancy
Purpose: Enables Recovery Service to access and manage protected databases, Recovery Service subnets, and protection policies within your tenancy.

Policy Statement: Allow service rcs to manage virtual-network-family in tenancy
Purpose: Enables Recovery Service to access and manage the private subnet in each database VCN within your tenancy. The private subnet defines the network path for backups between a database and Recovery Service.

Policy Statement: Allow group admin to manage recovery-service-family in tenancy
Purpose: Enables users in a specified group to access all Recovery Service resources. Users belonging to the specified group can manage protected databases, protection policies, and Recovery Service subnets.


B) Allow users (in group) to manage the Recovery Service


Group Policy Statement by Compartment

Policy Statement: Allow group {group name} to manage recovery-service-policy in compartment {location}
Create In: Compartment that owns the protection policies
Purpose: Enables all users in a specified group to create, update, and delete protection policies in Recovery Service.


C) Allow users (in group) to manage the required subnet for the Recovery Service


Group Policy Statement by Compartment

Policy Statement: Allow group {group name} to manage recovery-service-subnet in compartment {location}
Create In: Compartment that owns the Recovery Service subnets
Purpose: Enables all users in a specified group to create, update, and delete Recovery Service subnets.
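Putting sections B) and C) together, the statements an administrator might actually create could look like this (the group and compartment names here are hypothetical):

```
Allow group recovery-admins to manage recovery-service-policy in compartment backup-admin
Allow group recovery-admins to manage recovery-service-subnet in compartment network-admin
```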


3. Configure Network Resources for Recovery Service

The Recovery Service uses private endpoints to control backup traffic between your database and the Recovery Service.  Below is the architecture.



Each Recovery Service subnet needs to be created within the VCN where your database resides.

The minimum size of the subnet is /24 (256 IP addresses).  You can create a new subnet or use a preexisting subnet in your database VCN.  This subnet must be IPv4.

Security rules (implemented through Security Lists, or Network Security Groups) for the private subnet must include stateful ingress rules to allow destination ports 8005 and 2484.

NOTE: You can use a public subnet, but it is not recommended for security reasons.

This private subnet must be registered as a Recovery Service Subnet.

Checklist for Security rules (Security List or NSG)

1. Rule 1 - Ingress. Allow HTTPS traffic from the database VCN

  • Stateless: No (All rules must be stateful)
  • Source Type: CIDR
  • Source CIDR : CIDR of the VCN where the database resides
  • IP Protocol: TCP
  • Source Port Range: All
  • Destination Port Range: 8005

2. Rule 2 - Ingress. Allow SQLNet traffic from the database VCN

  • Stateless: No (All rules must be stateful)
  • Source Type: CIDR
  • Source CIDR : CIDR of the VCN where the database resides
  • IP Protocol: TCP
  • Source Port Range: All
  • Destination Port Range: 2484
NOTE: If your VCN restricts network traffic between subnets, be sure to add an egress rule for ports 2484 and 8005 from the database subnet to the Recovery Service subnet that you create.
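As a sketch, the two ingress rules above could be expressed as the JSON accepted by `oci network security-list create --ingress-security-rules` (the source CIDR 10.0.0.0/16 is a placeholder for your database VCN's CIDR; protocol "6" is TCP):

```json
[
  { "isStateless": false, "protocol": "6", "source": "10.0.0.0/16",
    "tcpOptions": { "destinationPortRange": { "min": 8005, "max": 8005 } } },
  { "isStateless": false, "protocol": "6", "source": "10.0.0.0/16",
    "tcpOptions": { "destinationPortRange": { "min": 2484, "max": 2484 } } }
]
```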

3. Register the subnet with recovery service


Under Oracle Database --> Database Backups you need to click on "Recovery Service Subnets" and register the Recovery Service Subnet.



4. Ensure the Recovery Service Subnet can communicate with Oracle services.

The Recovery Service Subnet that you registered needs to communicate with the Recovery Service. In order to access the service, the route table for this subnet needs to include "All IAD Services In Oracle Services Network".
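In the route table for the Recovery Service subnet, that typically translates to a rule like the following, using a Service Gateway as the target (shown for the Ashburn/IAD region; your region's service label will differ):

```
Destination:  All IAD Services In Oracle Services Network
Target Type:  Service Gateway
Target:       {your service gateway}
```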


If all these pieces are in place, you should be ready to successfully configure your database backups to go to the Recovery Service.


Short checklist

  1. Check your limits and quotas for the recovery service
  2. Create the policies for the Recovery Service, and groups (users) to manage the recovery service
  3. Create the subnet for the Recovery Service making sure you have the correct security settings, and the subnet has access to Oracle services
  4. Register the subnet as the Recovery Service Subnet.


Wednesday, February 14, 2024

DB Script management through pre-authenticated URLs

 Pre-authenticated URLs in OCI are fast becoming one of my favorite features of using object storage.  In this blog post I will explain how I am using them for both:

  • Storing the current copy of my backup scripts, and dynamically pulling them from my central repository
  • Uploading all my log files to a central location
Pre-authenticated URL creation

PROBLEM:


The problem I was trying to solve is that I wanted to create a script to run on all my database nodes to create a weekly archival backup.
Since I have both Base DB and ExaCS databases, I found that I was continuously making changes to my backup script.  Sometimes it was checking something different in my environment, and sometimes it was improving the error checking.
Each time I made a change, I had to go out to every DB host and copy the new version of my script.
Eventually I realized that Pre-authenticated URLs could not only ensure all my DB hosts are running the current copy of my backup script, they could also provide the central log location.

Solution:


Solution #1 - Script repository


The first problem I wanted to solve, was that I wanted to configure a script repository that I could dynamically pull the most current copy of my scripts from. Since I am running in OCI, I was looking for a "Cloud Native" solution rather than using NFS mounts that are shared across all my DB hosts.
To complicate things, I have databases that are running in different tenancies.

Step #1 - Store scripts in a bucket

The first step was to create a bucket in OCI to store both the scripts and logs.  Within that bucket, under "More Actions" I chose "Create New Folder" and I created 2 new folders, "logs" and "scripts".
Then within the scripts folder I uploaded my current scripts:
  • rman.sh - My executable script that will set the environment and call RMAN
  • backup.rman - My RMAN script that contains the RMAN command to backup my database.

Step #2 - Create a Pre-Authenticated Request

The next step was to create a Pre-Authenticated Request on the "scripts" folder.  Next to the scripts folder I clicked on the 3 dots and chose "Create Pre-Authenticated Request".
On the window that came up, I changed the expiration to be 1 year in the future (the default is 7 days).  I chose the "Objects with prefix" box so that I could download any scripts that I put in this folder to the DB hosts.  I also made sure the "Access Type" is "Permit object reads on those with specified prefix".
I did not choose "Enable Object Listing".
These settings allow the scripts to be downloaded from this bucket using the Pre-Authenticated URL only.  With this URL you will not be able to list the objects or upload any changes.
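The same Pre-Authenticated Request can also be created from the OCI CLI. A sketch (the bucket and request names are placeholders; my understanding is that --access-type AnyObjectRead together with an --object-name prefix gives the "reads on objects with the specified prefix" behavior, but verify against your CLI version):

```shell
# Create a read-only, prefix-scoped PAR on the scripts/ folder (names are placeholders)
oci os preauth-request create \
    --bucket-name bgrenn \
    --name scripts-read \
    --object-name scripts/ \
    --access-type AnyObjectRead \
    --time-expires "2025-02-14T00:00:00Z"
```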


Step #3 - Create wrapper script to download current scripts

Then, using the Pre-Authenticated URL in a wrapper script, I downloaded the current copies of the scripts to the host and executed my execution script (rman.sh) with a parameter.

Below you can see that I am using curl to download my script (rman.sh) and storing it in my local script directory (/home/oracle/archive_backups/scripts).  I am doing the same thing for the RMAN command file.
Once I have downloaded the current scripts, I execute the shell script (rman.sh).


curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/rman.sh --output /home/oracle/archive_backups/scripts/rman.sh
curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/backup.rman --output /home/oracle/archive_backups/scripts/backup.rman


/home/oracle/archive_backups/scripts/rman.sh $1
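A slightly hardened version of that wrapper can be sketched as shell functions (the PAR base URL and directories are placeholders; `curl -f` makes the download fail loudly instead of silently executing a stale or partial script):

```shell
#!/bin/bash
# Sketch of a hardened wrapper; PAR_BASE, SCRIPT_DIR and script names are placeholders.
PAR_BASE="${PAR_BASE:-https://mytenancy.objectstorage.us-ashburn-1.oci.customer-oci.com/p/TOKEN/n/mynamespace/b/mybucket/o}"
SCRIPT_DIR="${SCRIPT_DIR:-/home/oracle/archive_backups/scripts}"
CURL="${CURL:-curl}"          # set CURL=echo to dry-run the download commands

# Download one object from the scripts/ prefix via the read-only PAR
fetch_script() {
  "$CURL" -sf "${PAR_BASE}/scripts/$1" --output "${SCRIPT_DIR}/$1"
}

# Refresh both scripts, then run the backup with the passed parameter
run_backup() {
  fetch_script rman.sh     || { echo "download of rman.sh failed"     >&2; return 1; }
  fetch_script backup.rman || { echo "download of backup.rman failed" >&2; return 1; }
  chmod +x "${SCRIPT_DIR}/rman.sh"
  "${SCRIPT_DIR}/rman.sh" "$1"
}
```

Calling `run_backup ARCH` would then refresh both scripts from the bucket and invoke rman.sh with the ARCH parameter.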


Solution #2 - Log repository

The second problem I wanted to solve was to make it easy to review the execution of my scripts.  I don't want to go to each DB host and look at the log file; I want the logs stored in a central location that I can check.  Again, Pre-Authenticated URLs to the rescue!

Step #1 - Create the Pre-Authenticated URL

In the previous steps I already created a "logs" folder within the bucket. In this step I want to create a Pre-Authenticated URL like I did for the scripts, but in this case I want to use it to store the logs.
Like before I chose "Create Pre-Authenticated Request" for the "logs" folder.
This time, I am choosing "Permit object writes to those with the specified prefix". This will allow me to write my log files to this folder in the bucket, but not list the logs, or download any logs.


Step #2 - Upload the log output from my script

The nice thing was that once I implemented Solution #1, all of my DB nodes were already downloading the current script.  I updated the script to add an upload of the log file to object storage, and every node picked up the new version automatically.
In my script I already had 2 variables set:
  • NOW - The current date in "yyyymmdd" format
  • LOGFILE - The name of the output log file from my RMAN backup.
Now all I had to do was to add a curl command to upload my log file to the bucket.

Note I am using the NOW variable to create a new folder under "logs" with the date, so that my script executions are organized by date.

curl --request PUT --upload-file /home/oracle/archive_backups/logs/${LOGFILE} https://{My tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{URL removed}/n/id20skavsofo/b/bgrenn/o/logs/${NOW}/${LOGFILE}
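The script assumes NOW and LOGFILE are already set earlier; a minimal sketch of how they might be defined (the log file name is hypothetical):

```shell
#!/bin/bash
NOW=$(date +%Y%m%d)                   # current date in yyyymmdd format, e.g. 20240214
LOGFILE="archive_backup_${NOW}.log"   # hypothetical name of the RMAN output log
OBJECT_PATH="logs/${NOW}/${LOGFILE}"  # date-organized object name used in the curl PUT
echo "$OBJECT_PATH"
```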

BONUS


If I wanted to get fancy, I could have put my logs in a new bucket and configured a lifecycle management rule to automatically delete logs from the bucket after a period of time.

Tuesday, January 9, 2024

RMAN create standby database - Restore or Duplicate ?

RMAN create standby database - Are you like me and use "restore database" for large databases, or like most people (based on my Linkedin poll) and use "duplicate for standby"? 

The table below shows you the 3 main differences between the 2 methods.


This post started with a discussion within my team around which method you use. I, being of the "restore database" camp, didn't realize how commonly used "duplicate for standby" is. 
I have also dug through the documentation, and no one method is recommended over the other. Even the 21c documentation for creating a standby database doesn't mention using the duplicate command.
I also was pointed to a MOS note that goes through creating a standby directly from the primary across multiple nodes, and with encryption.  Creating a Physical Standby database using RMAN restore database from service (Doc ID 2283978.1)

Well in this post, I will explain why "restore database" has been my preference. 

NOTE : If you are creating a standby database that is encrypted and the source database is not (creating a standby database in OCI for example) then I have instructions at the end of this post for how to use "Restore Database" to create a hybrid standby database.

Duplicate database for standby


From the poll I ran, this is the most common way to create a standby database.  It is probably also the simplest, because much of the configuration of the standby database is done automatically as part of the automated process.
Below are the simplified steps to perform this process.

PRE work

  1. Create simple initfile on the standby host.  The real SPFILE will be brought over as part of the duplication process.  This may contain location parameters for datafiles and redo logs if different from the primary.
  2. Create directories on the standby host.  This includes the audit directory, and possibly the database file directories if they are different from the host.
  3. Startup nomount.

Duplicate 

The duplicate process automatically performs these major steps using the standby as an auxiliary instance.

  1.  Create an SPFILE. The process creates an SPFILE for the standby and sets parameters for the standby.
  2. Shutdown/startup the standby database. This will use the newly created SPFILE during the rest of the processing.
  3. Restore the backup controlfile for the standby database. The controlfile for the standby database is put in place, and the SPFILE is updated with its location.
  4. Mount controlfile . Mount the controlfile that was restored
  5. Restore database . Restore the datafiles files for the CDB and PDBs to their new location on the standby
  6. Switch datafile . Uses the new location of the datafiles that were restored.
  7. Create standby redo logs.
  8. Set parameters for standby database. The parameters necessary to communicate with the primary database are set.
  9. Put standby in recover mode . By this time, you should have set the primary database to communicate with the standby database.
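The steps above can be sketched as a single RMAN session run from the standby host (the connect strings are placeholders; FROM ACTIVE DATABASE pulls the files directly over the network instead of from backups):

```shell
# Sketch only - connect strings are placeholders for your TNS entries
rman target sys@primary auxiliary sys@standby <<'EOF'
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
EOF
```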

NOTES

If you noticed above, I highlighted the second step which forces a shutdown/startup of the standby database. Because of this step, it is not possible to use this method and restore across nodes in a RAC database.  This can cause the duplicate operation to take much longer for larger databases.
Then in step #5 you can see that the "Restore Database" step is automatic in the processing, and it is not possible to perform a "restore as encrypted" if you are migrating to OCI from a non-TDE database.  The duplicate process does support "restore as encrypted", but only for creating a new database, not a standby database.

Restore Database


This is the method that I've always used.  There is no automation, but it gives you much more control over the steps.  

PRE work

  1. Restore a copy of the prod SPFILE to the standby host.  For this process, it doesn't matter if it is an initfile or an SPFILE.  In this file you set all the parameters that are needed for the standby database to communicate with the primary and store datafiles/logfiles in the correct location.
  2. Create directories on the standby host.  This includes the audit directory, and possibly the database file directories if they are different from the host.
  3. Startup nomount.
  4. Create copy of primary controlfile for standby. This will be used for the standby database, and should contain the backup catalog  of the primary database, and the RMAN settings including the  channel definitions.
  5. Copy standby controlfile to standby host. The controlfile is copied to the standby host, and may be put in ASM at this point. Ensure the spfile points to the controlfile (and/or srvctl).
  6. Alter database mount.  Mount the controlfile. 
  7. Start up ALL nodes in the RAC cluster in mount mode.  This will allow you to restore the database across ALL nodes in the RAC cluster, and include all the networking from these nodes.  For a large database hosted on multiple DB nodes this can make a HUGE difference when restoring the database.
  8. Create (or copy) TDE wallet.  If the standby database is going to be TDE, then include the wallet if the primary is TDE, or create a new wallet and key if the standby database is going to be TDE.

Restore Database 

The restore process is a manual process

  1.  RMAN Connect to database (and possibly RMAN catalog). Connect to the database and make sure you have access to the backups. For ZDLRA this may mean connecting to the RMAN catalog.
  2. Restore Database (as encrypted). This will restore the database to the new location.  With Restore Database, the database can be encrypted during the restore operation.  With 19c it is supported to have the standby database encrypted without the primary database being encrypted (hybrid Data Guard).
  3. Switch datafile . Uses the new location of the datafiles that were restored.
  4. Recover database. This will use the archive logs that are cataloged to bring the standby database forward
  5. Create standby redo logs.
  6. Set parameters for standby database. The parameters necessary to communicate with the primary database are set.
  7. Put standby in recover mode . By this time, you should have set the primary database to communicate with the standby database.
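Steps 2 through 4 above can be sketched as an RMAN session against the mounted standby (the catalog connect string is a placeholder, e.g. for a ZDLRA RMAN catalog; AS ENCRYPTED is the 19c clause that encrypts the datafiles as they are restored):

```shell
# Sketch only - catalog connect string is a placeholder
rman target / catalog rman_user@zdlra_catalog <<'EOF'
RESTORE DATABASE AS ENCRYPTED;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
EOF
```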


NOTES

With the restore database, there are 2 sections I highlighted and these are the advantages that I love about using this method.
  • RMAN is restoring across multiple nodes in a RAC cluster which can make the restore operation much faster.
  • Restore as encrypted allows you take a database that may have TDE partially implemented, or not implemented and create a new standby database that is encrypted. With the duplicate method, TDE would have to be implemented separately.
If you are restoring a VERY large database (200 TB for example) that was not TDE from object storage to the Exadata Cloud Service, both of these advantages can make a HUGE difference when creating a standby database.

Comparison

The chart below compares the differences between "Duplicate Database" and "Restore Database".

WARNING: When using a ZDLRA for backups, it is NOT recommended to use the "Restore Database" to clone a database as a new copy. Registering the restored copy can cause issues with the RMAN catalog because the "restore database" leaves entries in the RC_SITE table.



Data Guard Hybrid Cloud Configuration

The ability to create a hybrid cloud configuration was introduced in Version 19.16 and there is a great blog post from Glen Hawkins explaining this feature.
This feature allows you to have your primary database remain unencrypted (no ASO license), but still have the standby database in OCI be TDE encrypted.

In this section I want to talk about how you can use "Restore Database as Encrypted" to implement this configuration quickly.

If you want to implement this feature using "Duplicate for standby", you have to separately encrypt the datafiles once they are restored in OCI.  This can be done online or offline, but it is still a time-consuming task.

Prepare the primary and future standby databases

The first step is to prepare the primary database and future standby database by creating a wallet file and setting encryption keys.  There is a great video put together by Peter Wahl (PM for TDE and OKV) that goes through a lot of the steps.

Below is a summary of the steps you need to perform.  You can follow along the steps in Peter's video and I will point out where in the video you will find each step.

  • Create the directories on the primary (3:40) -  Directories are specified in the video and need to be created on all nodes in a RAC cluster.
  • Create the directories on the standby database (4:18) - Directories are specified in the video and need to be created on all nodes in a RAC cluster.
  • Set the wallet_root in the primary (4:25) - This is set in the SPFILE only
  • Set tablespace_encryption to decrypt_only on primary (4:40) -  This is set in the SPFILE only
  • Set the default algorithm to AES256 on primary (4:50) - This is set in the SPFILE only
  • Set wallet_root on standby, tablespace_encryption to auto_enable, and default algorithm on standby --  This is set in the initfile that you create prior to performing the restore.  This step is different from the video because there is no standby at this point.
  • Bounce the primary database (5:50) - This can be in a rolling manner.
  • Create a password protected wallet on the primary (7:25) - This gets created in the default location specified from WALLET_ROOT
  • Create an auto open wallet on the primary (7:30) - This will make it automatically open for the database.
  • Set the encryption keys in the primary (7:40) - The keys will be created so that they can be used when restoring the datafiles on the standby database.
  • Copy the wallets from the primary to the standby (7:55) - This provides the standby database with the keys to encrypt.