Sunday, October 17, 2021

ZFS now supports Object Store Pre-Authenticated Requests

ZFS now supports Pre-Authenticated Requests, which can be useful for loading data into your data warehouse.


Version OS8.8.36 of the ZFS software was recently released, and one of the most interesting features is the support for Pre-Authenticated Requests. 

Here is the note "On-premise Object Storage Best Practices and Recommended Use Cases". This document outlines how to use the new Object Store features, some of which I will cover in future posts.

Here is my post on configuring ZFS as an object store. It is necessary to configure ZFS as an object store if you want to do the same testing.

Here is my post on configuring your database to access ZFS as an object store.  This post walks through how to configure DBMS_CLOUD in a 19c+ database.

By going through these notes you can reach the same point that I am at with my sample database and ZFS.  Below is the environment that I will be using for this demo.

Environment:

  • Oracle Database 21c (though 19c would work just as well)
    • I have updated the DATABASE_PROPERTIES to point to my wallet for SSL certificates.
    • I added the unsigned SSL certificate for my ZFS simulator HTTPS server to the wallet.
    • I have updated the C##CLOUD$SERVICE.dbms_cloud_store table to point to my ZFS appliance using the ORACLE_BMC authentication method.
    • I installed the sales schema into my account.
  • I am running the ZFS simulator
    • I updated the release of the simulator to OS8.8.36 (2013.06.05.8.36 of the ZFS software).
    • I created a user "salesdata" as a local user
    • I created a share named "salesupload" that is owned by salesdata.
    • The share "salesupload" is configured read/write using the OCI API within HTTP
    • I added the user "salesdata" and its public SSH keys for authentication with the OCI protocol within HTTP.
  • I have the OCI client tool installed
    • I configured an entry for my object store in the ~/.oci/config file called salesdata

Pre-Authenticated Requests for uploading files


The first part of this post will go through creating a request for uploading files to an object store.
In my example, I will be uploading the sales.dat file which comes with the sample sales history schema.

Step 1: Create the bucket to store the files.


The first step is to create a bucket that I am going to use to store the files being uploaded to the object store.  The beauty of using an object store is that I can have many buckets on the same share, isolated from each other.

I am going to call my bucket "salesdrop".

Below is the OCI client call I am going to use to create my bucket "salesdrop".


[oracle@oracle-server]$ oci os bucket create --config-file ~/.oci/config --profile SALESDATA --compartment-id salesupload --namespace-name salesupload --endpoint http://10.0.0.231/oci   --name salesdrop
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "compartment-id": "salesupload",
    "created-by": "salesdata",
    "defined-tags": null,
    "etag": "b7ced3b97859a3cc22a23670fc59a535",
    "freeform-tags": null,
    "id": null,
    "is-read-only": null,
    "kms-key-id": null,
    "metadata": null,
    "name": "salesdrop",
    "namespace": "salesupload",
    "object-events-enabled": null,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": null,
    "storage-tier": "Standard",
    "time-created": "2021-10-17T19:06:47+00:00",
    "versioning": "Disabled"
  },
  "etag": "b7ced3b97859a3cc22a23670fc59a535"
}


Step 2: Create a Pre-Authenticated URL for this bucket


Below is my OCI client call, and what the parameters mean.


oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop --name upload_sales_data --access-type AnyObjectWrite --time-expires="2022-11-21T23:00:00+00:00"

To go through the parameters used:
  • config-file: Location of the configuration file
  • profile: Entry to use within the configuration file (if not the default)
  • namespace-name: For ZFS this is the share name
  • endpoint: This is the URL for the ZFS http server + "/oci" to use the OCI API
  • bucket-name: Bucket to create the Pre-Authenticated Request for.
  • name: Identifying name given to this request
  • access-type: What type of Pre-Authenticated request to create
  • time-expires: When will this URL expire? This is mandatory.
Now to execute my request and create the URL.

[oracle@oracle-server ~]$ oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop  --name upload_sales_data --access-type AnyObjectWrite --time-expires="2022-11-21T23:00:00+00:00"
{
  "data": {
    "access-type": "AnyObjectWrite",
    "access-uri": "/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/",
    "id": "11c01b5c-92d8-4c2d-8cba-d9ec4e2649c5",
    "name": "upload_sales_data",
    "object-name": null,
    "time-created": "2021-10-17T19:15:32+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
  }
}


My request was successful, and I can see the URL that was created.  I just need to add the access-uri to the end of the HTTP host URL.

http://10.0.0.231/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/


Step 3: Upload my file

Now I am going to upload the file from my Windows PC using curl.
The file "sh_sales.dat" is on my d: drive.

d:\> curl -X PUT --data-binary '@d:\sh_sales.dat' http://10.0.0.231/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat
d:\> 

No errors. Let's check and make sure the file got uploaded, using the OCI client tool.

[oracle@oracle-server ~]$ oci os object list --endpoint http://10.0.0.231/oci --namespace-name salesupload   --config-file ~/.oci/config --profile SALESDATA     --bucket-name salesdrop --fields name,size,timeCreated
{
  "data": [
    {
      "etag": null,
      "md5": null,
      "name": "loadfiles/sales_history_05012021.dat",
      "size": 55180902,
      "time-created": "2021-10-17T19:35:34+00:00",
      "time-modified": null
    }
  ],
  "prefixes": []
}

I can see the file is there, and the size is 55MB.

Now where can you go with this? Below is a diagram of how the Oracle IoT cloud can be used as a hub for data files from IoT devices. You can do the same thing by having all your IoT devices "drop" their data onto a central object store (hosted on ZFS), then filter and load the data into a database.


Pre-Authenticated Requests for loading files

This part of the post is going to show you how to use Pre-Authenticated Requests to load data into your database.

First I want to give a shout out to @thatjeffsmith. Jeff Smith is the product manager for SQL Developer, and he has a blog, http://www.thatjeffsmith.com, where he constantly blogs about SQL Developer and all the great work his team is doing.
I saw one of his posts on Pre-Authenticated Requests  to load data (which you can find here), and I realized that you can do almost the same things on any version of 19c+ with the object store on ZFS.

I am going to go through most of the same steps Jeff did in his post.

Step 1: Create the Pre-Authenticated Request to read the object.

Jeff does this in the Console, but I am going to do it with the OCI Client tool.

The command is similar to the one I used to create the "upload" request, but with a different access-type.
I am going to use "ObjectRead" and create a request that points to the object that was uploaded.

[oracle@oracle-server]$ oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop  --name upload_sales_data --access-type ObjectRead --time-expires="2022-11-21T23:00:00+00:00" --object-name loadfiles/sales_history_05012021.dat
{
  "data": {
    "access-type": "ObjectRead",
    "access-uri": "/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat",
    "id": "547227b4-73b0-4980-bb94-ab5ee87d4c81",
    "name": "upload_sales_data",
    "object-name": "loadfiles/sales_history_05012021.dat",
    "time-created": "2021-10-17T19:56:45+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
  }
}


Now I have my request URL.

http://10.0.0.231/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat


Step 2: Load the data

Now back to Jeff's example, I am going to log onto my database and load the data.

First I am going to count the rows in my table, then check again after.

SQL> select count(1) from sales;

  COUNT(1)
----------
         0

SQL> BEGIN
  DBMS_CLOUD.COPY_DATA(
     table_name =>'SALES',
     file_uri_list =>'https://10.0.0.231/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat',
     format => json_object('delimiter' VALUE '|') );
END;
/


PL/SQL procedure successfully completed.


SQL> select count(1) from sales;

  COUNT(1)
----------
   1016271

SQL>


I can see that just over 1 million rows were successfully loaded into the table.


Step 3: Verify through USER_LOAD_OPERATIONS

Now, like Jeff did with his example, I am going to look at the view USER_LOAD_OPERATIONS to see the information about my load job.


col id format 999
col type format a8
col status format a10
col start_time format a15
col owner_name format a10
col table_name format a20
col file_uri_list format a70
set linesize 160


select
        id,
        type,
        to_char(update_time,'mm/dd/yy hh24:mi:ss') update_time,
        status,
        owner_name,
        table_name,
        substr(file_uri_list,60,160) File_uri_list,
        rows_loaded
from
        user_load_operations
where status='COMPLETED';

SQL> 

  ID TYPE     UPDATE_TIME       STATUS     OWNER_NAME TABLE_NAME           FILE_URI_LIST                                                          ROWS_LOADED
---- -------- ----------------- ---------- ---------- -------------------- ---------------------------------------------------------------------- -----------
   3 COPY     10/17/21 16:13:21 COMPLETED  BGRENN     SALES                salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat             1016271


Other Pre-Authenticated Requests 


There are 2 other "access-types" for Pre-Authenticated requests.

  • ObjectReadWrite: This will give both read and write access to a specific object.
  • ObjectWrite: This gives write-only access to a specific object (rather than having write access to the whole bucket).
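A command like the following would create the write-only request that shows up as "write_sales_data" in the listing later in this post (the object name is just an example):

[oracle@oracle-server ~]$ oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop  --name write_sales_data --access-type ObjectWrite --time-expires="2022-11-21T23:00:00+00:00" --object-name loadfiles/sales_history_07012021.dat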
If you try to use a Pre-Authenticated Request for anything other than the access it grants, you get an error message.

{"code": "NotAuthenticated", "message": "The required information to complete authentication was not provided"}

List Pre-Authenticated Requests 

You can list all of your Pre-Authenticated Requests to keep a handle on them.


[oracle@oracle-server ~]$ oci os preauth-request list --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop
{
  "data": [
    {
      "access-type": "AnyObjectWrite",
      "id": "11c01b5c-92d8-4c2d-8cba-d9ec4e2649c5",
      "name": "upload_sales_data",
      "object-name": null,
      "time-created": "2021-10-17T19:15:32+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
      "access-type": "ObjectRead",
      "id": "547227b4-73b0-4980-bb94-ab5ee87d4c81",
      "name": "load_sales_data",
      "object-name": "loadfiles/sales_history_05012021.dat",
      "time-created": "2021-10-17T19:56:45+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
      "access-type": "ObjectReadWrite",
      "id": "87b4fe97-3e2e-4b22-96aa-a7e3b566dc59",
      "name": "read_write_sales_data",
      "object-name": "loadfiles/sales_history_06012021.dat",
      "time-created": "2021-10-17T20:37:23+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
      "access-type": "ObjectWrite",
      "id": "828a0651-60f7-4d2a-998c-b3518e1bfa92",
      "name": "write_sales_data",
      "object-name": "loadfiles/sales_history_07012021.dat",
      "time-created": "2021-10-17T20:40:08+00:00",
      "time-expires": "2022-11-21T22:59:59+00:00"
    }
  ]
}


Get Detail on a Pre-Authenticated Request


If you want the detail of a specific Pre-Authenticated Request you can use the "get" option and include the --par-id (which is the ID from the list request command).

[oracle@oracle-server ~]$ oci os preauth-request get --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop   --par-id 828a0651-60f7-4d2a-998c-b3518e1bfa92
{
  "data": {
    "access-type": "ObjectWrite",
    "id": "828a0651-60f7-4d2a-998c-b3518e1bfa92",
    "name": "upload_sales_data",
    "object-name": "loadfiles/sales_history_07012021.dat",
    "time-created": "2021-10-17T20:40:08+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
  }
}


NOTE: this does not give you the URL.

Delete a Pre-Authenticated Request


Finally, you can delete a Pre-Authenticated Request that is no longer needed, using the par-id of the request.


[oracle@oracle-server ~]$ oci os preauth-request delete --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop   --par-id 828a0651-60f7-4d2a-998c-b3518e1bfa92
Are you sure you want to delete this resource? [y/N]: y





Hopefully this gives you an idea of all the things you can do with Pre-Authenticated URLs.




Tuesday, August 31, 2021

TDE–How to implement TDE in your database and what to think about (part 4)

 In this post, I am going to include some lessons learned from implementing "Restore as encrypted" of a large database with over 500,000 objects.

 


The error we were receiving when trying to open our database was:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00604: error occurred at recursive SQL level 1
ORA-25153: Temporary Tablespace is Empty
Process ID: 133196
Session ID: 1769 Serial number: 6805

    

And in the alert log we saw:

      Parallel first-pass transaction recovery timed out. Switching to serial recovery.
Undo initialization recovery: Parallel FPTR failed: start:685625075 end:685692452 diff:67377 ms (67.4 seconds)
2021-08-27T10:02:39.567998-04:00
Undo initialization recovery: err:0 start: 685625075 end: 685693406 diff: 68331 ms (68.3 seconds)
2021-08-27T10:02:43.015891-04:00
[339055] Successfully onlined Undo Tablespace 17.
Undo initialization online undo segments: err:0 start: 685693406 end: 685696854 diff: 3448 ms (3.4 seconds)
Undo initialization finished serial:0 start:685625075 end:685697235 diff:72160 ms (72.2 seconds)
Dictionary check beginning
2021-08-27T10:02:44.819881-04:00
TT03 (PID:360221): Sleep 80 seconds and then try to clear SRLs in 6 time(s)
2021-08-27T10:02:54.759120-04:00
Tablespace 'PSAPTEMP' #3 found in data dictionary,
but not in the controlfile. Adding to controlfile.
2021-08-27T10:02:55.826700-04:00
Errors in file /u02/app/oracle/diag/rdbms/bsg/BSG1/trace/BSG1_ora_339055.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-25153: Temporary Tablespace is Empty
2021-08-27T10:02:55.826827-04:00
Errors in file /u02/app/oracle/diag/rdbms/bsg/BSG1/trace/BSG1_ora_339055.trc:
ORA-00604: error occurred at recursive SQL level 1

    


What we found is that there is some work the database has to do when opening for the first time after encrypting tablespaces offline.

Background:

Any data written to disk for an object that resides in an encrypted tablespace is also encrypted. This means that if an object resides in an encrypted tablespace, the following data is encrypted as well.

  • TEMP - If an object resides in an encrypted tablespace, any sort information written to the TEMP tablespace is encrypted. This includes joins to other tables.  Any piece of encrypted data in an on-disk sort operation causes the whole set of sort data to be encrypted.
  • UNDO - If an object resides in an encrypted tablespace, the blocks stored in UNDO are encrypted.
  • REDO/Archive - If an object resides in an encrypted tablespace, the changes to that object are encrypted in the redo stream (including redo sent through the network to a standby database).

How this happens:


The way the database manages encryption is to internally mark an object as encrypted, which ensures the object's data stays encrypted on disk.
Now back to "restore as encrypted".  Since we restored the database and then encrypted the tablespaces, the database needs to mark all the objects in the "newly encrypted" tablespaces as encrypted.
This is part of the database open operation.  The open operation sorts through the internal object metadata to determine which objects now reside in "newly encrypted" tablespaces.
There are a few things to be aware of around this process.
  1. It requires a sort of the object metadata.  Because of this you may need a much bigger SORT_AREA_SIZE or PGA_AGGREGATE_TARGET. This is only needed to open the database for the first time after encrypting, but it was the cause of the issue I was seeing.
  2. It may take some time, possibly a lot of time, depending on the # of objects.

How to mitigate it:


Since we know this is going to happen, there are a few ways to mitigate it.

  1. Empty out your recycle bin to limit the # of objects to update.
  2. Proactively increase your PGA (or sort_area_size) for opening the database for the first time after encrypting.
  3. Encrypt the database in sections. Do not encrypt every tablespace at once to decrease the # of objects that will be marked encrypted. After encrypting a tablespace, open the database, shut it and do the next tablespace. NOTE: this may not be practical.
  4. Encrypt the tablespaces online, as this marks the objects as encrypted as the processing of each tablespace completes.
  5. Check the number of objects that will need to be updated. This can be done by looking at the TAB$ internal table, matching TS# to the tablespaces that will be encrypted (a query sketch follows this list).
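Here is a minimal sketch of that check, assuming you run it as SYS and substitute your own tablespace names (TAB$ and TS$ are internal dictionary tables, so treat the count as an estimate):

-- Count the tables whose segments live in the tablespaces you plan to encrypt.
-- 'USERS' and 'APPDATA' are example names; replace them with your own list.
select ts.name tablespace_name, count(*) table_count
  from sys.tab$ t, sys.ts$ ts
 where ts.ts# = t.ts#
   and ts.name in ('USERS','APPDATA')
 group by ts.name;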

NOTE:


Remember that a standby database may also hit this same issue when it is opened read only.
Also, it is possible to have the primary database encrypted and the standby database unencrypted, or the opposite if you encrypt your standby database first.  Restoring from encrypted --> unencrypted or unencrypted --> encrypted and then opening the database will cause this update of metadata to occur.

Saturday, August 14, 2021

Using rclone to download Objects from OCI

 I previously created a post that walked through how to configure rclone to easily access objects within the Oracle Cloud Object Store.


Object Store access with rclone


This post is going to go a little deeper on how to quickly download objects from the OCI object store onto your host.

In my example, I needed to download RMAN disk backup files that were copied to the Object Store in OCI.

I have over 10 TB of RMAN backup pieces, so I am going to create an ACFS mount point to store them on.


1) Create ACFS mount point

Creating the mount point is made up of multiple small steps that are documented here. This is a link to the 19c documentation so note it is subject to change over time.

  • Use ASMCMD to create a 20 GB volume on the "data" disk group
    - Start ASMCMD connected to the Oracle ASM instance. You must be a user in the OSASM operating system group.
    - Create the volume "volume1" on the "data" disk group:

                    ASMCMD [+] > volcreate -G data -s 20G volume1

  • Use ASMCMD to list the volume information.  NOTE: my volume device is volume1-123
                             
ASMCMD [+] > volinfo -G data volume1
Diskgroup Name: DATA

         Volume Name: VOLUME1
         Volume Device: /dev/asm/volume1-123
         State: ENABLED
     ... 

SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME 
     WHERE volume_name ='VOLUME1';

VOLUME_NAME        VOLUME_DEVICE
-----------------  --------------------------------------
VOLUME1            /dev/asm/volume1-123


  • Create the file system with mkfs from the volume "/dev/asm/volume1-123"
$ /sbin/mkfs -t acfs /dev/asm/volume1-123
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/volume1-123
mkfs.acfs: volume size               = 21474836480  (   20.00 GB )
mkfs.acfs: Format complete.
  • Register the file system with srvctl
# srvctl add filesystem -device /dev/asm/volume1-123 -path /acfsmounts/acfs2
       -user oracle -mountowner oracle -mountgroup dba -mountperm 755
NOTE: This will mount the filesystem on /acfsmounts/acfs2

  • Start the filesystem with srvctl
$ srvctl start filesystem -device /dev/asm/volume1-123

  • Change the ownership to oracle

chown -R oracle:dba /acfsmounts/acfs2

2) Use rclone to view objects

The next step is to look at the objects I want to copy to my new ACFS file system. The format of accessing the object store in the commands is
 "rclone {command} [connection name]:{bucket/partial object name - optional}.


NOTE: For all examples my connection name is oci_s3 
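For reference, a minimal sketch of what the [oci_s3] entry in ~/.config/rclone/rclone.conf can look like when pointing at OCI's S3-compatible endpoint (all values below are placeholders; see my earlier post for the full configuration steps):

[oci_s3]
type = s3
provider = Other
access_key_id = <customer secret key id>
secret_access_key = <customer secret key>
endpoint = https://<namespace>.compat.objectstorage.<region>.oraclecloud.com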

I am going to start with the simplest command list buckets (lsd).

NOTE: We are using the S3 interface to view the objects in the namespace.  There is a single namespace for the entire tenancy.  With OCI there is the concept of "compartments", which can be used to separate applications and users.  The S3 interface does not have this concept, which means that all buckets are visible.
  • rclone lsd - This is the simplest command to list the buckets, and as I noted previously, it lists all buckets, not just my bucket.
       ./rclone lsd oci_s3:
          -1 2021-02-22 15:33:06        -1 Backups
          -1 2021-02-16 21:31:05        -1 MyCloudBucket
          -1 2020-09-23 22:21:36        -1 Test-20200923-1719
          -1 2021-07-20 20:03:27        -1 ZDM_bucket
          -1 2020-11-23 23:47:03        -1 archive
          -1 2021-01-21 13:03:33        -1 bsgbucket
          -1 2021-02-02 15:35:18        -1 bsgbuckets3
          -1 2021-03-03 11:42:13        -1 osctransfer
          -1 2021-03-19 19:57:16        -1 repo
          -1 2021-01-21 19:35:24        -1 short_retention
          -1 2020-11-12 13:41:48        -1 jsmithPublicBucket
          -1 2020-11-04 14:10:33        -1 jsmith_top_bucket
          -1 2020-11-04 11:43:55        -1 zfsrepl
          -1 2020-09-25 16:56:01        -1 zs-oci-bucket

If I want to list what is within my bucket (bsgbucket) I can list that bucket. In this case it treats the flat structure of the object name as if it is a file system, and lists only the top "directories" within my bucket.

./rclone lsd oci_s3:bsgbucket
           0 2021-08-14 23:58:02        -1 file_chunk
           0 2021-08-14 23:58:02        -1 sbt_catalog


  • rclone tree - This command will list what is within my bucket as a tree structure.
[opc@rlcone-test rclone]$ ./rclone tree oci_s3:bsgbucket
/
├── expdat.dmp
├── file_chunk
│   └── 2985366474
│       └── MYDB
│           └── backuppiece
│               └── 2021-06-14
│                   ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
│                   │   └── yHqtjSE51L3B
│                   │       ├── 0000000001
│                   │       └── metadata.xml
│                   └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
│                       └── lS9Sdnka2nD0
│                           ├── 0000000001
│                           └── metadata.xml
└── sbt_catalog
    ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
    │   └── metadata.xml
    └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
        └── metadata.xml


  • rclone lsl - This command will list what is within my bucket as a long listing with more detail.
[opc@rlcone-test rclone]$ ./rclone lsl oci_s3:bsgbucket
   311296 2021-01-21 13:04:05.000000000 expdat.dmp
337379328 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001
     1841 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml
 36175872 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001
     1840 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml
     1841 2021-06-14 19:48:46.000000000 sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml
     1840 2021-06-14 19:49:10.000000000 sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml


3) Use rclone to copy the objects to my local file system.


There are 2 commands you can use to copy the files from the object store to the local file system.
  • copy - This is as you expect. It copies the files to the local file system and overwrites the local copy.
  • sync - This synchronizes the local file system with the objects in the object store, and will not copy down an object that already has a local copy.

In my case I am going to use the sync command. This will allow me to restart copying the objects, and it will ignore any objects that were previously copied successfully.

Below is the command I am using to copy (synchronize) the objects from my bucket in the object store (oci_s3:bsgbucket) to the local filesystem (/home/opc/acfs).

./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs

To break down the command.

  • -vv  This option to rclone gives me "verbose" output so I can see more of what is being copied as the command is executed.
  • -P  This option to rclone gives me feedback on how much of the object has downloaded so far to help me monitor it.
  • --multi-thread-streams 12 - This option to rclone breaks larger objects into chunks to increase the concurrency.
  • --transfers 64 - This option to rclone allows for 64 concurrent transfers to occur. This increases the download throughput.
  • oci_s3:bsgbucket - This is the source to copy/sync from.
  • /home/opc/acfs - This is the destination to copy/sync to.

Finally, this is what the command looks like when it is executing.

[opc@rlcone-test rclone]$  ./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs
2021/08/15 00:15:32 DEBUG : rclone: Version "v1.56.0" starting with parameters ["./rclone" "-vv" "sync" "-P" "--multi-thread-streams" "12" "--transfers" "64" "oci_s3:bsgbucket" "/home/opc/acfs"]
2021/08/15 00:15:32 DEBUG : Creating backend with remote "oci_s3:bsgbucket"
2021/08/15 00:15:32 DEBUG : Using config file from "/home/opc/.config/rclone/rclone.conf"
2021/08/15 00:15:32 DEBUG : Creating backend with remote "/home/opc/acfs"
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO  : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : expdat.dmp: md5 = f97060f5cebcbcea3ad6fadbda136f4e OK
2021-08-15 00:15:33 INFO  : expdat.dmp: Copied (new)
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for checks to finish
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for transfers to finish
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: Starting multi-thread copy with 2 parts of size 160.875Mi
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 2/2 (168689664-337379328) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO  : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 INFO  : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001: Copied (new)
2021-08-15 00:15:34 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi finished
Transferred:      333.398Mi / 356.554 MiByte, 94%, 194.424 MiByte/s, ETA 0s
Transferred:            6 / 7, 86%
Elapsed time:         2.0s
Transferring:

NOTE: it broke up the larger object into chunks, and you can see that it downloaded 2 chunks simultaneously.  At the end you can see the file it was in the middle of transferring.

Conclusion.

rclone is a great alternative to the OCI CLI for managing and downloading your objects.  It has more intuitive commands (like "rclone ls").  And the best part is that it doesn't require Python and special privileges to install.

Thursday, August 5, 2021

Adding immutability to buckets in the Oracle Cloud Object Store

 I am going to demonstrate a new feature of the object store that you might not have known about.  The feature is "Retention Lock" and is used to protect the objects in a bucket.



Let me first start with a few links to get you started, and then I will demonstrate how to use this feature.


In order to add a retention lock to a bucket you create a rule for the individual bucket.

Below is a screen shot of where you will find the retention rules, and the "Create Rule" button. Also note that I highlighted the "Object Versioning" attribute of the bucket.

NOTE: You cannot add a retention lock to a bucket that has "Object Versioning" enabled, and you cannot disable "Object Versioning" once it has been enabled. You MUST suspend "Object Versioning" before adding any retention rules to your bucket.



 There are 3 types of retention locks and below I will describe them and show you how to implement them. They are listed from least restrictive to most restrictive.


DATA GOVERNANCE

Data Governance is a time based lock based on the modified time of EACH OBJECT in the bucket.

The Retention can be set in "days" or "years".

Below is what the settings look like for data governance. You choose "Time-Bound" for the rule type and ensure that you do not "enable retention rule lock".



With Data Governance you can both increase and decrease the duration of the retention lock.

Below you can see after the lock was created, the rule is not locked.



REGULATORY COMPLIANCE

Regulatory Compliance is similar to Data Governance with the exception that the duration can only be increased.
The retention lock of the individual objects, just like Data Governance is based on when the individual object was last modified.
Another key difference is that when you "enable retention rule lock", you also set when the rule itself becomes locked. The default is 14 days, and it cannot be set to less than 14 days.
The delay of 14 days is a "cooling off period" that gives you 14 days to test before the rule takes effect. This is because once the cooling off period ends, the retention time cannot be shortened.


Below is the screen shot of creating a retention rule for regulatory compliance and note that the retention rule lock MUST be enabled to ensure the duration is not shortened.


It also asked me to confirm the "lock date" before the rule is created.




Below are the rules that are set after both of these steps.


NOTE: I now have 2 rules. I have the original rule that will lock the objects for 30 days (this can be changed as needed), and I also have a Regulatory Compliance rule that will lock the objects for 1 day. The Regulatory Compliance rule will not take effect for 14 days from today.


LEGAL HOLD

The final type of retention is a legal hold.  A legal hold will put a retention lock on the WHOLE bucket. All objects in the bucket are locked and cannot be modified/deleted until the hold is removed. There is no ending time period for a legal hold.

Below is how you create a legal hold.



SUMMARY

You can create the 3 types of retention locks, and you can even layer them. Below you can see that I have 3 locks. The Legal Hold rule will lock everything, but that can be removed, leaving the 2 remaining rules.  I can remove the Data Governance rule, but the Regulatory Compliance rule is the most restrictive. Once the 14-day cooling off period (or whatever you set) has passed, this rule cannot be changed.


Now when I go to delete an object that is protected by a retention rule, I get an error. Below is an example of what you will see.




Wednesday, July 28, 2021

A New ZDLRA feature can help you migrate to a new ZDLRA

 A new feature was included in the 19.2.1.1.2 ZDLRA software release to help you migrate your backup strategy when moving to a new ZDLRA.


This feature allows you to continue to access your older backups during the cutover period directly from the new ZDLRA.  You point your database restore to the new ZDLRA and it will automagically access the older backups if necessary. Once the cutover period has passed, the old ZDLRA can be retired.

I am going to walk through the steps.

1. Configure new ZDLRA

  • Add the new ZDLRA to OEM - The first step is to ensure that the new ZDLRA has been registered within your OEM environment. This will allow it to be managed, and of course monitored.
  • Add a replication VPC user to the new ZDLRA. This will be used to connect from the old ZDLRA.
  • Add the VPC users on the new ZDLRA that match the old ZDLRA
  • Configure policies on new ZDLRA to match old ZDLRA.
          This can be done by dynamically executing DBMS_RA.CREATE_PROTECTION_POLICY.
           Current protection policy information can be read from the RA_PROTECTION_POLICY view.
  • Add databases to proper protection policies on new ZDLRA.
        This can be done by dynamically executing DBMS_RA.ADD_DB. 
        Current database information can be read from the RA_DATABASE view.

  • Grant the replication VPC user access to all databases for replication.
        This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS (a sketch of generating these calls is shown after this list).
        The current list of databases can be read from the RA_DATABASE view.

  • Grant the VPC users access to the database for backups/restores
        This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS
        The current list of grants can be read from the RA_DB_ACCESS view
  • Create a replication server on the old ZDLRA that points to the new ZDLRA
  • Add the protection policies on the old ZDLRA to the replication server created previously.

NOTE: When these steps are completed, the old ZDLRA will replicate the most recent L0 to the new ZDLRA, and will then replicate all new incremental backups and archive logs.
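As an example of the "dynamically executing" pattern, here is a minimal sketch run against the new ZDLRA's recovery catalog that generates the GRANT_DB_ACCESS calls for the replication VPC user (REPL_USER is an example name; the same approach works for DBMS_RA.ADD_DB and the other views):

-- Generate one grant per protected database for the replication VPC user.
select 'exec dbms_ra.grant_db_access(username => ''REPL_USER'', db_unique_name => '''
       || db_unique_name || ''');' cmd
  from ra_database
 order by db_unique_name;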




2. Switch to new ZDLRA for backups

  • Update the wallet on all clients to include the VPC user/scan listener of the new ZDLRA (a sketch of the wallet update follows this list).
  • Update the real-time redo configuration (if using real-time redo) to point to the new ZDLRA.
  • Update backup jobs to open channels to the new ZDLRA
  • Remove the VPC replication user from the new ZDLRA  
  • Drop the replication server on the old ZDLRA
NOTE: The backups will begin with an incremental backup based on the contents of the new ZDLRA and will properly create a "virtual full". Archive logs will automatically pick up with the sequence number following the last log replicated from the old ZDLRA.
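Here is a sketch of the client wallet update, assuming an existing client wallet at /u01/app/oracle/wallet and example names for the VPC user, password, and the new ZDLRA's scan listener and service (adjust everything for your environment):

$ mkstore -wrl /u01/app/oracle/wallet \
    -createCredential "new-zdlra-scan.example.com:1521/zdlra_new:dedicated" vpcuser <password>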



3 . Configure "Read-Only Mode" replication to old ZDLRA

  • Add a replication VPC user on the old ZDLRA. This will be used to connect from the new ZDLRA.
  • Create a replication server from new ZDLRA to the old ZDLRA
  • Grant the replication VPC user on the old ZDLRA access to all databases for replication.
        This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS
        The current list of databases can be read from the RA_DATABASE view.
  • Add a replication server for each policy that includes the "Read-Only" flag set to "YES".
NOTE: this will allow the new ZDLRA to pull backups from the old ZDLRA that only exist on the old ZDLRA.


4 . Retire old ZDLRA after cutover period

  • Remove replication server from new ZDLRA that points to old ZDLRA
NOTE: The old ZDLRA can now be decommissioned.



That's all there is to it. This will allow you to restore from the new ZDLRA, and not have to keep track of which backups are on which appliance during the cutover window!

Tuesday, May 18, 2021

TDE queries to view your configuration

 This post contains some of the scripts I have been using on my TDE encrypted database to see the big picture of what is being encrypted by what key.



1) Wallet information


 The first script I put together will list the status of the wallets for all containers on all nodes. This will give you the wallet location, the type of wallet, whether the keystore is united, etc.
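If you want to build a similar query yourself, a minimal sketch based on GV$ENCRYPTION_WALLET and GV$CONTAINERS looks like this (the exact script that produced the output below may differ in formatting):

select w.inst_id,
       c.name            "PDB Name",
       w.wrl_type        "Type",
       w.wrl_parameter,
       w.status,
       w.wallet_type,
       w.keystore_mode   "KEYSTORE",
       w.fully_backed_up "Backed Up"
  from gv$encryption_wallet w
  join gv$containers c
    on c.con_id  = w.con_id
   and c.inst_id = w.inst_id
 order by w.inst_id, c.name;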



Below is the output of this script for my single node, local wallet database.

  INST_ID PDB Name   Type	 WRL_PARAMETER					    Status			   WALLET_TYPE		KEYSTORE Backed Up
---------- ---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
	 1 CDB$ROOT   FILE	 /home/oracle/app/oracle/admin/tdecdb/wallet/tde/   OPEN			   AUTOLOGIN		NONE	 NO
	   PDB$SEED   FILE							    OPEN			   AUTOLOGIN		UNITED	 NO
	   PDBTDE1    FILE							    OPEN			   AUTOLOGIN		UNITED	 NO
	   PDBTDE2    FILE							    OPEN			   AUTOLOGIN		UNITED	 NO
	   PDBTDE3    FILE							    OPEN			   AUTOLOGIN		UNITED	 NO



Below is the output from a 4-node cluster with OKV configured.



INST_ID PDB Name   Type       WRL_PARAMETER                                 Status               WALLET_TYPE      KEYSTORE Backed Up
------ ---------- ---------- ------------------------------------         --------------      ----------------    ------------- -------------------- -------- ----------
     1 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED

     2 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED

     3 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED

     4 CDB$ROOT   FILE       /u02/app/oracle/admin/jckey/wallet/tde/      OPEN_NO_MASTER_KEY  AUTOLOGIN            NONE     UNDEFINED
       CDB$ROOT   OKV                                                     OPEN                OKV                  NONE     UNDEFINED
       JCKPDB     FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       JCKPDB     OKV                                                     OPEN                OKV                  UNITED   UNDEFINED
       PDB$SEED   FILE                                                    OPEN_NO_MASTER_KEY  AUTOLOGIN            UNITED   UNDEFINED
       PDB$SEED   OKV                                                     OPEN                OKV                  UNITED   UNDEFINED





2) Tablespace information

This script will list the tablespaces, whether each tablespace is encrypted, and what the key is.
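A simplified sketch of this kind of query, using V$TABLESPACE, V$ENCRYPTED_TABLESPACES and V$CONTAINERS (the output below also shows the container's master key for the unencrypted tablespaces and adds some formatting, so treat this only as a starting point):

select c.name                  "PDB Name",
       t.name                  "Tablespace Name",
       nvl2(e.ts#,'YES','NO')  "Enc.",
       e.masterkeyid           "Master Key ID"
  from v$tablespace t
  join v$containers c
    on c.con_id = t.con_id
  left join v$encrypted_tablespaces e
    on e.ts#    = t.ts#
   and e.con_id = t.con_id
 order by c.name, t.name;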


Below is the output from my database.

PDB Name   Tablespace Name Enc.          Master Key ID              Key ID                             tablespace Encryt key (trunc)
---------- --------------- -----         ------------------------- ----------------------------------- ------------------------------
CDB$ROOT   SYSAUX	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   SYSTEM	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   TEMP 	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   UNDOTBS1	       NO	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D
	   USERS	       YES	 AQbOELhZAk9Dv8A2mADBKQQ=  06CE10B859024F43BFC0369800C12904    9C21DCFF8CB7DCC6E038239DD07D3D

PDBTDE1    SYSAUX	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   SYSTEM	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   TEMP 	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   UNDOTBS1	       NO	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A
    	   USERS	       YES	 AYQysCoXXk+Nv/Q//9sUAV4=  8432B02A175E4F8DBFF43FFFDB14015E    4D7007D0FFFCB3F2702233BDD2702A

PDBTDE2    SYSAUX	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   SYSTEM	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   TEMP 	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   UNDOTBS1	       NO	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8
    	   USERS	       YES	 AegHs2QPk09xv0HVO3B1alQ=  E807B3640F934F71BF41D53B70756A54    C3F9A04600AFE07F023589C0DE0ED8

PDBTDE3    SYSAUX	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
	   SYSTEM	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
    	   TEMP 	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
    	   UNDOTBS1	       NO	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6
    	   USERS	       YES	 AW5TJ43d8E+ZvxD8A1YhdcM=  6E53278DDDF04F99BF10FC03562175C3    6911A4106D914681528706E03202E6




3) Wallet Contents

Now let's take a look at what's in my wallet.
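The master keys in the keystore can be listed from V$ENCRYPTION_KEYS. A minimal sketch of the query (the script behind the output below may format things differently):

select k.key_id          "Master Key ID",
       k.tag             "Tag",
       c.name            "PDB Name",
       k.keystore_type,
       k.origin,
       to_char(k.creation_time,  'mm/dd/yyyy hh24:mi') "Key Creation Time",
       to_char(k.activation_time,'mm/dd/yyyy hh24:mi') "Key Act. Time"
  from v$encryption_keys k
  join v$containers c
    on c.con_id = k.con_id
 order by c.name, k.creation_time;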



Below you can see the master key ID for each CDB/PDB and information about when it was created.

Master Key ID                                           Tag                  PDB Name        KEYSTORE_TYPE     Origin     Key Creation Time  Key Act. Time
------------------------------------------------------- -------------------- --------------- ----------------- ---------- ------------------ ------------------
ASd1jY/loU8Bv6HuSfZZFqAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	cdbroot_first_key    CDB$ROOT	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:46   06/28/2021 17:46
AQbOELhZAk9Dv8A2mADBKQQAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	cdbroot_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:46   06/28/2021 18:46

AfhjvV/z/U9ev5bICBLYV1MAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde1_firstkey     PDBTDE1	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:53   06/28/2021 17:53
AYQysCoXXk+Nv/Q//9sUAV4AAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde1_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:50   06/28/2021 18:50

AVXCNjl3f0+Av+/osXobX2sAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde2_firstkey     PDBTDE2	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:54   06/28/2021 17:54
AegHs2QPk09xv0HVO3B1alQAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde2_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:50   06/28/2021 18:50

Ab1/+jaPck+Ev6rhmBKtxXEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde3_firstkey     PDBTDE3	     SOFTWARE KEYSTORE LOCAL	  06/28/2021 17:54   06/28/2021 17:54
AW5TJ43d8E+ZvxD8A1YhdcMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA	pdbtde1_second_key		     SOFTWARE KEYSTORE LOCAL	  06/28/2021 18:50   06/28/2021 18:50


NOTE: I rotated my master key, and you can see both keys. Adding a tag to the key also helps identify it.


4) Control file Contents

This query looks at the x$jcbdbk table to determine the master key(s) currently in use.



PDB Name        Key ID                              Master Key ID
--------------- ----------------------------------- -------------------------
CDB$ROOT        06CE10B859024F43BFC0369800C12904    AQbOELhZAk9Dv8A2mADBKQQ=

PDB$SEED        00000000000000000000000000000000    AQAAAAAAAAAAAAAAAAAAAAA=

PDBTDE1         8432B02A175E4F8DBFF43FFFDB14015E    AYQysCoXXk+Nv/Q//9sUAV4=

PDBTDE2         E807B3640F934F71BF41D53B70756A54    AegHs2QPk09xv0HVO3B1alQ=

PDBTDE3         6E53278DDDF04F99BF10FC03562175C3    AW5TJ43d8E+ZvxD8A1YhdcM=



Conclusion :

 By looking at the queries above you should have a better idea of how the master encryption key ties to the tablespace encryption.

 You can also see what happens when you rotate the master key, and how it affects the tablespaces.



Monday, May 10, 2021

Configuring ExaCC backups of an Oracle Database

This post covers how to configure your backups of an ExaCC database beyond the web interface. 


First off, the documentation can be found below; you can also use the "--help" option at the command line with "bkup_api".

Configuration - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/customize-backup-configuration-using-bkup_api.html

Backup execution - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/create-demand-backup.html#GUID-2370EA04-3141-4D02-B328-5EE9A10F66F2



    Step #1 - Configure backup settings in ExaCC

    The first step is to configure my database to be backed up using the tooling. This is pretty straightforward. I click on the "edit backup" button, fill in the information for my database, and save it.  In my case I am using ZFS, and I need to make sure that I change my container to the container where the ZFS share is configured.

    NOTE : The backup strategy is a Weekly L0 (full) backup every Sunday, and a daily L1 (differential incremental backup) on all other days. The time the backup is scheduled can be found in either the backup settings, or by looking at the crontab file.



    Then I just wait until I see it is complete. If I click on the work requests, I can see the progress until it's finished.



    Step #2 - Update the settings to use my RMAN catalog.

    First I need to get what the current settings are for my database (dbsg2) and save them in a config file so I can update them.

    I log into the first node, and su to root.
    Once there I execute "get config --all" and save all the settings to a file that I can update.

    NOTE: I am creating a new file under the bkup_api/cfg directory to make it easy to find.

    $ sudo su -
    Last login: Thu May  6 11:43:46 PDT 2021 on pts/0
    [root@ecc ~]## /var/opt/oracle/bkup_api/bkup_api get config --all --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : get_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_92303612_20210506125612.006275.log
    File /var/opt/oracle/bkup_api/cfg/dbsg2.cfg created
    
    

    Now I am going to edit it and make some changes.

    I changed the RMAN catalog settings to use my catalog.
    NOTE: The entry has to be the connect string, not a tnsnames.ora entry.

    #### This section is applicable when using a rman catalog ####
    # Enables RMAN catalog. Can be set to yes or no.
    bkup_use_rcat=yes
    
    ## Below parameters are required if rman catalog is enabled
    # RMAN catalog user
    bkup_rcat_user=rco
    
    
    # RMAN catalog password
    #bkup_rcat_passwd=RMan19c#_
    
    # RMAN catalog conn string
    bkup_rcat_conn=ecc-scan.bgrenn.com:1521:rmanpdb.bgrenn.com
    
    
    

    Now I am going to commit (set) the changes using the "set config" command
    # /var/opt/oracle/bkup_api/bkup_api set config --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2 
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : set_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b800281f_20210506130824.084259.log
    cfgfile : /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    Using configuration file: /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    API::Parameters validated.
    UUID d0845ea0aea611eb98fb52540068a695 for this set_config(configure-backup)
    ** process started with PID: 86143
    ** see log file for monitor progress
    -------------------------------------
    
    


    And after a few minutes, I am going to check and make sure it was successful by using the configure_status command

    
    /var/opt/oracle/bkup_api/bkup_api configure_status --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : configure_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_fa81558e_20210507060019.504831.log
    * Last registered operation: 2021-05-07 12:58:41 UTC 
    * Configure backup status: finished
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 120531
    *
    * RETURN CODE:0
    ##################################################
    
    
    Everything looks good!  It removed my configuration file (which is good because it had the password in it).
    I found that 2 things happened as part of adding an RMAN catalog:
    1. The password for the RMAN catalog user is now stored in the wallet file.
    2. There is an entry in my tnsnames file on all nodes for "CATALOG" which points to the rman catalog.

    NOTE: Part of this process is to register the database with the RMAN catalog. You do not have to manually register the database in the catalog.

    Step #3 - Take a manual backup

    Now, logged in as opc and becoming root, I can run an on-demand backup using bkup_api.


    # /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_9458c30f_20210510084341.430481.log
    UUID 7f6622f8b1a611eb865552540068a695 for this backup
    ** process started with PID: 336757
    ** see log file for monitor progress
    -------------------------------------
    
    

    I can see the status while it's running

    /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_46545e6f_20210510084812.014419.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 336757
      API:: Starting dbaas backup process
    *
    * RETURN CODE:0
    ##################################################
    
    

    And I waited a few minutes, and I can see it was successful.

    # /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_8acd03e3_20210510085129.207757.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
      API:: NEW PROCESS 336757
      API:: Starting dbaas backup process
    *************************************************
    * Backup steps
     -> 2021-05-10 08:44:20.651787 - API:: invoked with args : -dbname=dbsg2 -uuid=7f6622f8b1a611eb865552540068a695 -level1 
     -> 2021-05-10 08:44:23.458698 - API:: Wallet is in open AUTOLOGIN state
     -> 2021-05-10 08:44:24.204793 - API:: Oracle database state is up and running
     -> 2021-05-10 08:44:25.686134 - API:: CATALOG SETTINGS 
     -> 2021-05-10 08:45:19.767284 - API:: DB instance: dbsg2
     -> 2021-05-10 08:45:19.767424 - API:: Validating the backup repository ...... 
     -> 2021-05-10 08:46:38.263401 - API::      All backup pieces are ok
     -> 2021-05-10 08:46:38.263584 - API:: Validating the TDE wallet ...... 
     -> 2021-05-10 08:46:41.842706 - API:: TDE check successful.
     -> 2021-05-10 08:46:42.446560 - API:: Performing incremental backup to shared storage
     -> 2021-05-10 08:46:42.448228 - API:: Executing rman instructions
     -> 2021-05-10 08:49:21.161884 - API:: ....... OK
     -> 2021-05-10 08:49:21.162089 - API:: Incremental backup to shared storage is Completed
     -> 2021-05-10 08:49:21.163822 - API:: Starting backup of config files
     -> 2021-05-10 08:49:21.699197 - API:: Determining the oracle database id
     -> 2021-05-10 08:49:21.726308 - API::  DBID: 2005517379
     -> 2021-05-10 08:49:22.040891 - API:: Creating directories to store config files
     -> 2021-05-10 08:49:22.085476 - API:: Enabling RAC exclusions for config files.
     -> 2021-05-10 08:49:22.114211 - API:: Compressing config files into tar files
     -> 2021-05-10 08:49:22.173842 - API:: Uploading config files to NFS location
     -> 2021-05-10 08:49:22.222493 - API:: Removing temporary location /var/opt/oracle/log/dbsg2/obkup/7f6622f8b1a611eb865552540068a695.
     -> 2021-05-10 08:49:22.224071 - API:: Config files backup ended successfully
     -> 2021-05-10 08:49:26.052494 - API:: All requested tasks are completed
    *
    * RETURN CODE:0
    ##################################################
    
    


    Step #4 - Check my periodic backups


    Now it's been a few days (I started on Thursday and it's now Monday).
    I am going to check on the incremental backups, and the archive log backups.

    There are 2 ways I can do this.

    Using the bkup_api command to list the backups that have run.

    # /var/opt/oracle/bkup_api/bkup_api list --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_eddcd4e1_20210510064145.497707.log
    -> Listing all backups
      Backup Tag             Completion Date (UTC)            Type          keep    
    ----------------------   -----------------------      -----------    --------
       TAG20210506T123203     05/06/2021 19:32:03       full        False
       TAG20210506T131438     05/06/2021 20:14:38       incremental        False
       TAG20210507T012240     05/07/2021 08:22:40       incremental        False
       TAG20210508T012315     05/08/2021 08:23:15       incremental        False
       TAG20210509T012438     05/09/2021 08:24:38       full        False
       TAG20210510T012322     05/10/2021 08:23:22       incremental        False
    
    

    Using the RMAN catalog
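
    The report below could be generated with a SQL*Plus query along these lines. This is only a sketch of the kind of query I use, built on the standard V$BACKUP_PIECE and V$BACKUP_SET views (or the RC_ equivalents if you query the recovery catalog schema directly); the column formatting is mine and the exact query is not part of the tooling.

    $ sqlplus -s / as sysdba

    set linesize 200 pagesize 100
    col "Backup Type"  format a20
    col "Encrypted"    format a9
    col "Tag"          format a33
    col "Backup Piece" format a60
    -- classify each backup set by its incremental level and list its pieces
    select decode(s.incremental_level, 0, 'Full L0',
                                       1, 'Differential L1',
                                          s.backup_type)      "Backup Type",
           p.encrypted                                        "Encrypted",
           p.tag                                              "Tag",
           p.handle                                           "Backup Piece",
           to_char(p.completion_time,'MM/DD/YY HH24:MI:SS')   "Backup Time",
           to_char(p.completion_time,'DAY')                   "Day Of Week"
      from v$backup_piece p
      join v$backup_set   s
        on s.set_stamp = p.set_stamp
       and s.set_count = p.set_count
     where s.backup_type in ('D','I')   -- datafile backups only, no archive logs
     order by p.completion_time;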

    Backup Type         Encrypted Tag                                Backup Piece                                                 Backup Time           Day Of Week
    -------------------- --------- --------------------------------- ------------------------------------------------------------ -------------------- --------------------
    Full L0              YES       DBAAS_FULL_BACKUP20210506122626     /backup/dbaas_bkup_DBSG2_2005517379_0dvu5rp2_13_1          05/06/21 12:29:32    THURSDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210506131110     /backup/dbaas_bkup_DBSG2_2005517379_2avu5ud1_74_1          05/06/21 13:14:18    THURSDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210507011926     /backup/dbaas_bkup_DBSG2_2005517379_72vu792b_226_1         05/07/21 01:22:27    FRIDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210508011939     /backup/dbaas_bkup_DBSG2_2005517379_lbvu9tf3_683_1         05/08/21 01:22:51    SATURDAY
    Full L0              YES       DBAAS_FULL_BACKUP20210509011940     /backup/dbaas_bkup_DBSG2_2005517379_u3vuchr8_963_1         05/09/21 01:22:59    SUNDAY
    Differential L1      YES       DBAAS_INCR_BACKUP20210510011940     /backup/dbaas_bkup_DBSG2_2005517379_6rvuf672_1243_1        05/10/21 01:22:49    MONDAY
    
    
    

    NOTE: I can see that a periodic L1 (differential) backup is executed at 1:22 AM every day, except Sunday, when a full backup is executed instead.

    Now to look at archive log backups -- I am going to show a subset.

    Again I can use the bkup_api "list_jobs" command and see all the backup jobs that have been run (which include archive logs).


    # /var/opt/oracle/bkup_api/bkup_api list_jobs --dbname dbsg2 | more
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list_jobs
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b2532724_20210510070545.552300.log
    UUID                             | DATE                | STATUS  | TAG                 | ACTION              
    e7ad1ef6aea011eb9c8252540068a695 | 2021-05-06 19:26:23 | success | TAG20210506T123203  | create-backup-full  
    03616d68aea211eba5aa52540068a695 | 2021-05-06 19:34:12 | success | TAG20210506T123516  | archivelog-backup   
    33fae162aea611eba0ed52540068a695 | 2021-05-06 20:04:12 | success | TAG20210506T130518  | archivelog-backup   
    267c21daaea711eb9d3852540068a695 | 2021-05-06 20:11:07 | success | TAG20210506T131438  | create-backup-incremental
    650fd222aeaa11ebb58652540068a695 | 2021-05-06 20:34:12 | success | TAG20210506T133516  | archivelog-backup   
    961831e4aeae11ebb0d452540068a695 | 2021-05-06 21:04:11 | success | TAG20210506T140517  | archivelog-backup   
    c6919f28aeb211eb957e52540068a695 | 2021-05-06 21:34:12 | success | TAG20210506T143518  | archivelog-backup   
    f7ce0d0caeb611eb97c552540068a695 | 2021-05-06 22:04:12 | success | TAG20210506T150522  | archivelog-backup   
    286e8ea6aebb11eb864c52540068a695 | 2021-05-06 22:34:11 | success | TAG20210506T153516  | archivelog-backup   
    598f77eeaebf11eb92c052540068a695 | 2021-05-06 23:04:11 | success | TAG20210506T160518  | archivelog-backup   
    89f4919aaec311eb9a9452540068a695 | 2021-05-06 23:34:11 | success | TAG20210506T163516  | archivelog-backup   
    bb5ba95eaec711ebb1ed52540068a695 | 2021-05-07 00:04:11 | success | TAG20210506T170518  | archivelog-backup   
     
    

    Step #5 - On demand backups 

    Now that I have my database configured, I am going to demonstrate some of the options you can add to your backup.

    I am going to create a keep backup and give it a tag using bkup_start

    $ /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2 --keep --tag=Maymonthlybackup
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_7d923417_20210507113940.052080.log
    UUID 958a58beaf6311eba98a52540068a695 for this backup
    ** process started with PID: 262102
    ** see log file for monitor progress
    -------------------------------------
    
    

    Now to list it.

    $ /var/opt/oracle/bkup_api/bkup_api list --dbname dbsg2 --keep
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_19714a18_20210507114254.007083.log
    -> Listing all backups
      Backup Tag                           Completion Date (UTC)      Type          keep    
    ----------------------                 -----------------------   -----------    --------
       Maymonthlybackup20210507T113125     05/07/2021 18:31:25       keep-forever   True
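
    Another option from the table in Step #8 below is to force a full level 0 backup on demand instead of a keep backup. Below is a sketch of that call for my database; the --level0 flag comes from the option table, and the rest of the syntax mirrors the keep backup above.

    $ /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2 --level0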
    
    

    Step #6 - Restore my database


    The last step I'm going to do in my database is to restore it to a previous point in time.

    Below is what you see in the console.
    NOTE - If you choose a specific time, it will be in UTC.


    I pick a time to restore to, and click on the 'Restore Database' option. I can follow the process by looking at 'Workload Requests'.




    Step #7 - Validating backups


    A great feature of the command-line tool is the ability to validate backups that have been taken.  This is easy to do with the 'bkup_api reval_start' command.

    I started a restore validation for my database dbbsg and saved the UUID so I can monitor it.

    # /var/opt/oracle/bkup_api/bkup_api reval_start --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : reval_start
    -> logfile: /var/opt/oracle/log/dbbsg/bkup_api_log/bkup_api_d0647aa8_20210511032638.300613.log
    UUID 5f204c4cb24311eb887252540068a695 for restore validation
    ** process started with PID: 15281
    ** Backup Request uuid     : 5f204c4cb24311eb887252540068a695
    
    

    Now I monitor it using the UUID until it's done, and I can see that it completed successfully.

    # /var/opt/oracle/bkup_api/bkup_api --uuid=5f204c4cb24311eb887252540068a695 --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    @ STARTING CHECK STATUS 5f204c4cb24311eb887252540068a695
    [ REQUEST TICKET ]
    [UUID    ->  5f204c4cb24311eb887252540068a695 
    [DBNAME  ->  dbbsg 
    [STATE   ->  success 
    [ACTION  ->  start-restore-validate 
    [STARTED ->  2021-05-11 10:26:39 UTC 
    [ENDED   ->  2021-05-11 10:28:00 UTC 
    [PID     ->  15281 
    [TAG     ->  None 
    [PCT     ->  0.0 
    [LOG     ->  2021-05-11 03:26:39.780830 - API:: invoked with args : -dbname=dbbsg -reval=default  
    [LOG     ->  2021-05-11 03:26:42.324669 - API:: Wallet is in open AUTOLOGIN state 
    [LOG     ->  2021-05-11 03:26:42.996885 - API:: Oracle database state is up and running 
    [LOG     ->  2021-05-11 03:28:00.857565 - API:: ....... OK 
    [LOG     ->  2021-05-11 03:28:00.857645 - API:: Restore Validation is Completed 
    [ END TICKET ]
    
    

    Step #8 - Restoring/listing backups with the API

    There are many options for restoring with the API, either for the whole "database" (which consists of the CDB and all PDBs) or for just a specific PDB.

    Below are some of the commands that help with this.
    NOTE: All commands are executed using "bkup_api" from /var/opt/oracle/bkup_api as "oracle"


    Command           Options     Description
    ----------------  ----------  ----------------------------------------------------------------
    bkup_start                    Start a new special backup now
    bkup_start        --keep      Create a keep backup
    bkup_start        --level0    Perform a new FULL level 0 backup
    bkup_start        --level1    Perform a new level 1 incremental backup
    bkup_start        --cron      Creates an incremental backup through cron
    bkup_chkcfg                   Verifies that backups have been configured
    bkup_status                   Shows the status of the most recent backup
    list                          Shows the list of the most recent backups
    reval_start                   Starts a restore validation of datafiles
    archreval_start               Starts a restore validation of archive logs
    recover_start     --latest    Recover from the latest backup
    recover_start     --scn       Recover to an SCN #
    recover_start     --b         Recover using a specific backup tag and defuzzy to the following archive log
    recover_start     -t          Recover to a point in time. Specify --nonutc to use a non-UTC timestamp
    recover_status                Show the status of the most recent recover of this database


    With recovery you can also recover just a single PDB:
    • --pdb={pdbname} - Recover just a single PDB
    You can also specify whether the config files should be restored:
    • --cfgfiles - restore the configuration files (controlfiles, spfiles, etc.) along with the database files (see the sketch below).
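
    For example, combining the options above, a recovery of just one PDB from the latest backup, including the config files, could be requested and then monitored with something along these lines. This is only a sketch assembled from the option table; "pdb1" is a made-up PDB name, and the exact argument syntax should be verified against your version of the tooling.

    # /var/opt/oracle/bkup_api/bkup_api recover_start --latest --pdb=pdb1 --cfgfiles --dbname=dbsg2
    # /var/opt/oracle/bkup_api/bkup_api recover_status --dbname=dbsg2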

    Step #9 - Configuration changes

    You can execute "bkup_api get config --dbname={dbname}" to create a file containing the current configuration.  In that file you can see some of the other changes you can make.
    Below is what I see using the version at the time of writing.

    Config Parameter            Settings    Description
    --------------------------  ----------  ----------------------------------------------------------------
    bkup_cron_entry             yes/no      Enable/disable automatic backups
    bkup_archlog_cron_entry     yes/no      Enable automatic archive log cleanup when not using tooling
    bkup_cfg_files              yes/no      Enable backup of config files
    bkup_daily_time             hh24:mi     Time to execute the daily backup
    bkup_archlog_frequency      15,20,30…   How many minutes apart to execute archive log backups
    bkup_disk                   yes/no      Backups to the FRA
    bkup_disk_recovery_window   1-14        Recovery window of the FRA
    bkup_oss_xxx                            Backup settings when backing up to Object Store in the Public Cloud
    bkup_zdlra_xx                           Backup settings when backing up to a ZDLRA
    bkup_nfs_xxx                            Backup settings when backing up to NFS
    bkup_set_section_size       yes/no      Set to yes to override the default section size setting
    bkup_section_size                       Value overriding the default section size
    bkup_channels_node          xx          Number of channels to be used by RMAN
    bkup_use_rcat               yes/no      If you are using an RMAN catalog
    bkup_rcat_xxx                           RMAN catalog settings
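
    For my database the call is simply the one below; I can then review the generated file for the parameters listed above, such as bkup_daily_time or bkup_channels_node.

    # /var/opt/oracle/bkup_api/bkup_api get config --dbname=dbsg2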

    Step #10 - Scheduled backups


    Backups are scheduled in the crontab on the first node of a cluster. You can view the schedule by executing "sudo su -" to become root and looking at the /etc/crontab file.
    Below is what is there for my database (dbsg2).

    # Example of job definition:
    # .---------------- minute (0 - 59)
    # |  .------------- hour (0 - 23)
    # |  |  .---------- day of month (1 - 31)
    # |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
    # |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
    # |  |  |  |  |
    # *  *  *  *  * user-name  command to be executed
    
    15 * * * * oracle /var/opt/oracle/misc/backup_db_wallets.pl
    15 * * * * oracle /var/opt/oracle/dbaascli/dbaascli tde backup --alldb
    19 1 * * * oracle /var/opt/oracle/bkup_api/bkup_api bkup_start --cron --dbname=dbsg2
    4,34 * * * * oracle /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --cron --dbname=dbsg2
    
    
    The jobs that are scheduled to execute are:

    1. backup_db_wallets.pl - This script executes from the crontab at 15 minutes past every hour.  It goes through the list of databases (regardless of whether database backups have been turned on) and makes a copy of the SEPS wallet file in the current wallet location, adding the current date/time. The old copy is removed so that only one backup exists.

    The following two settings in my configuration file (/var/opt/oracle/creg/dbsg2.ini) are used as the source location of the wallet and the location for the backup:

     wallet_loc=/var/opt/oracle/dbaas_acfs/dbsg2/db_wallet
     wallet_loc_bak=/u02/app/oracle/admin/dbsg2/db_wallet

    NOTE: This wallet is used for storing user credentials and is an autologin wallet.
    I can see the credentials stored; in my case it is both the "sys" password and the password for "rco". If I were using OSS (object store), my login credential would be stored in this wallet, and if I backed up to a ZDLRA, this wallet would contain my connection to the ZDLRA(s) I was backing up to.
    2: CATALOG rco
    1: dbsg2 sys
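
    A listing like the one above can be pulled from the SEPS wallet with the mkstore utility, pointing at the wallet_loc directory from the .ini file shown earlier. This is a sketch rather than the exact command from my environment, and mkstore may prompt for the wallet password:

    $ mkstore -wrl /var/opt/oracle/dbaas_acfs/dbsg2/db_wallet -listCredential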

    2. /var/opt/oracle/dbaascli/dbaascli tde backup --alldb - This script executes from the crontab at 15 minutes past every hour.  It goes through the list of databases (regardless of whether database backups have been turned on) and makes a copy of the TDE wallet file under the $ORACLE_BASE directory.
    The location is $ORACLE_BASE/{db_name}/tde_wallet/tde/

    The output from this script is in /var/opt/oracle/log/misc/backup
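
    To confirm these copies are being made, I can simply list the backup location and log directory described above (a sketch, with dbsg2 substituted for {db_name}):

    $ ls -l $ORACLE_BASE/dbsg2/tde_wallet/tde/
    $ ls -l /var/opt/oracle/log/misc/backup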

    3. /var/opt/oracle/bkup_api/bkup_api bkup_start --cron --dbname={mydb} - Executes every day at 1:19 AM.  This time is determined in the bkup_api configuration using the parameter "bkup_daily_time".  This is the same API that is called to perform an on-demand backup from the command line, but with the '--cron' parameter added.

    4. /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --dbname={mydb} - Executes every 30 minutes, based on the 'bkup_archlog_frequency' bkup_api configuration setting. This script backs up my archive logs to the backup location.
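
    If I want an archive log backup outside of the regular schedule, the same call from the crontab could presumably be run by hand without the '--cron' flag; a sketch:

    # /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --dbname=dbsg2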