
Friday, July 15, 2022

File Retention Lock on ZFSSA

File Retention Lock was recently released on ZFSSA and I wanted to take the time to explain how to set the retention time and view the retention of locked files. Below is an example of what happens. You can see that the files are locked until January 1st 2025

ZFS Retention Lock


The best place to start for information on how this works is by looking at my last blog post on authorizations.

First I will go through the settings that are available at the share/project level.


Grace period

The grace period is used to automatically lock a file when there have been no updates to the file for this period of time.
If the automatic file retention grace period is "0" seconds, then the default retention is NOT in effect.




NOTE: Even with a grace period of "0" seconds, files can be locked by manually setting a retention period.
 Also, once a grace period is set (> "0") it cannot be increased or disabled.
Finally, if you set the grace period to a long period (to ensure all writes to a file are completed), you can lock the file by removing the write bit. This does the same thing as expiring the grace period.

Below is an example

chmod ugo-w *

Running the "chmod" will  remove the write bit, and immediate cause all files to lock.

Default retention

The most common method to implement file retention is by using the default retention period. This causes the file to be locked for the default retention when the grace period expires for a file.
Note that the file is locked as of the time the grace period expires. For example, if I have a grace period of 1 day (because I want the ability to clean up a failed backup) and a default file retention period of 14 days, the file will be locked for 14 days AFTER the 1 day grace period. The lock on the file will expire 15 days after the file was last written to.

zfs file retention lock


In the example above you can see that all files created on this share are created with a default retention of 1 day (24 hours).

NOTE: If the grace period is not > "0" these settings will be ignored and files will not be locked by default.

Minimum/Maximum File retention

The next settings you see in the image above are the "minimum file retention period" and the "maximum file retention period".

These control the retention settings on files, which follow the rules below.

  • The default retention period for files MUST be at least the minimum file retention period, and not greater than the maximum file retention period.

  • If the retention date is set manually on a file, the retention period must fall within the minimum and maximum retention period.

Display current Lock Expirations.

In order to display the lock expiration on Linux, the first thing you need to do is change the share/project setting "Update access time on read" to off. Through the CLI this is "set atime=false" on the share.


zfssa file retention lock

Once this setting is made, the client will display the lock time as the "atime". In my example at the top of the blog, you can see by executing "ls -lu" that the file lock time is displayed.

NOTE: You can also use the find command to search for files using the "atime". This allows you to find all the locked files.

Below is an example of using the find command to list files that have a lock expiration time in the future.


export CURRENT_DATE=`date +"%Y-%m-%d %H:%M:%S"`
find . -type f -newerat "$CURRENT_DATE" -printf '%h\t%AD%AH:%AM:%AS\t%s \n'



Manually setting a retention date


It is possible to set a specific date/time that a file is locked until. You can even set the retention date on a file that is currently locked (it must be a date beyond the current lock date).

NOTE: If you try to change the retention date on a specific file, the new retention date has to be greater than the current retention date (and less than or equal to the maximum file retention period). This makes sense: you cannot lower the retention period for a locked file.

Now how do you manually set the retention date? Below is an example of how it is set for a file.

Setting File retention lock

There are 3 steps needed to lock the file with a specific lock expiration date.

1. Touch the file and set the access date. This can be done with
    • "-a" to change the access date/time
    • "-d" or "-t" to specify the date format
2. Remove the write bit with "chmod ugo-w".

3. Execute a chmod to make the file read only ("chmod a=r").

Below is an example where I am taking a file that does not contain retention, and setting the date to January 1, 2025.


First I am going to create a file and touch it, setting the atime to a future date.

$echo 'xxxx' > myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$rm myfile3.txt
$ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


You can see that I set the "atime" and it displays a future date, but I was still able to delete the file.

Now I am going to also remove the write bit before deleting.

$echo 'xxxx' > myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$chmod ugo-w  myfile3.txt
$rm myfile3.txt
$ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


Still, I am able to delete the file. Finally, I am going to do all three steps.

$echo 'xxxx' > myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan  1  2025 myfile3.txt
$chmod ugo-w  myfile3.txt
$chmod a=r  myfile3.txt
$rm myfile3.txt
rm: remove write-protected regular file ‘myfile3.txt’? y
rm: cannot remove ‘myfile3.txt’: Operation not permitted


Summary to manually set the lock on a file

If the file is NOT currently locked (the grace period is "0" or the grace period has not expired).


The commands below will lock the file "myfile.txt" until 01/01/25 12:00.

touch -a -t "2501011200" myfile.txt
chmod ugo-w  myfile.txt
chmod a=r  myfile.txt


If the file is already locked 

The command below will extend the lock on the file "myfile.txt" to 01/01/25 12:00.


touch -a -t "2501011200" myfile.txt
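Putting the two cases together, a small helper script can decide which commands to run. This is a minimal sketch based on the commands above; the script name and argument handling are my own illustration, and the timestamp uses the same format as "touch -t".

#!/bin/bash
# lock_until.sh <timestamp in [CC]YYMMDDhhmm format> <file>
# Example: ./lock_until.sh 2501011200 myfile.txt
TS="$1"
FILE="$2"

# Set the desired lock expiration as the access time (atime).
touch -a -t "$TS" "$FILE"

# If the file is still writable it is not locked yet, so remove the
# write bit and make it read-only to lock it immediately.
if [ -w "$FILE" ]; then
    chmod ugo-w "$FILE"
    chmod a=r "$FILE"
fi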


Tuesday, June 21, 2022

Migrate a large oracle database to OCI from disk backup

Migrating an Oracle database from on-premises to OCI is especially challenging when the database is quite large. In this blog post I will walk through the steps to migrate to OCI leveraging an on-disk local backup copied to object storage.

migrate Oracle database to OCI


The basic steps to perform this task are shown in the image above.

Step #1 - Upload backup pieces to object storage.

The first step to migrate my database (acmedb) is to copy the RMAN backup pieces to the OCI object storage using the OCI Client tool.

In order to make this easier, I am breaking this step into a few smaller steps.

Step #1A - Take a full backup to a separate location on disk 


This can also be done by moving the backup pieces, or creating them with a different backup format.  By creating the backup pieces in a separate directory, I am able to take advantage of the bulk upload feature of the OCI client tool. The alternative is to create an upload statement for each backup piece.

For my RMAN backup example (acmedb) I am going to change the location of the disk backup and perform a disk backup.  I am also going to compress my backup using medium compression (this requires the ACO license).  Compressing the backup sets allows me to make the backup pieces as small as possible when transferring to the OCI object store.

Below is the output from my RMAN configuration that I am using for the backup.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ACMEDBP are:


CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/acmedb/ocimigrate/backup_%d_%U';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
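
With that configuration in place, the backup command itself is simple. Below is a sketch of the type of command I ran (your backup script may differ):

RMAN> backup incremental level 0 database plus archivelog;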

I created a new level 0 backup including archive logs and below is the "list backup summary" output showing the backup pieces.

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4125    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141019
4151    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141201
4167    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4168    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4169    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4170    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4171    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4172    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4173    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4174    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4175    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4176    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4208    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141309
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310



From the output you can see that there are a total of 14 backup pieces
  • 3 Archive log backup sets (two created before the backup of datafiles, and one after).
    • TAG20220621T141019
    • TAG20220621T141201
    • TAG20220621T141309
  • 10 Level 0 datafile backups
    • TAG20220621T141202
  • 1 controlfile backup 
    • TAG20220621T141310

Step #1B - Create the bucket in OCI and configure OCI Client

Now we need a bucket to upload the 14 RMAN backup pieces to. 

Before I can upload the objects, I need to download and configure the OCI Client tool. You can find the instructions to do this here.

Once the client tool is installed I can create the bucket and verify that the OCI Client tool is configured correctly.

The command to create the bucket is "oci os bucket create", passing the namespace, bucket name, and compartment OCID.

Below is the output when I ran it for my compartment and created the bucket "acmedb_migrate".

 oci os bucket create --namespace id2avsofo --name acmedb_migrate --compartment-id ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "auto-tiering": null,
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "created-by": "ocid1.user.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "defined-tags": {
      "Oracle-Tags": {
        "CreatedBy": "oracleidentitycloudservice/john.smith@oracle.com",
        "CreatedOn": "2022-06-21T14:36:19.680Z"
      }
    },
    "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893",
    "freeform-tags": {},
    "id": "ocid1.bucket.oc1.iad.aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "is-read-only": false,
    "kms-key-id": null,
    "metadata": {},
    "name": "acmedb_migrate",
    "namespace": "id2avsofo",
    "object-events-enabled": false,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": false,
    "storage-tier": "Standard",
    "time-created": "2022-06-21T14:36:19.763000+00:00",
    "versioning": "Disabled"
  },
  "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893"
}

Step #1C - Upload the backup pieces to Object Storage in OCI


The next step is to upload all the backup pieces that are in the directory "/acmedb/ocimigrate" to OCI using the bulk upload feature.



Below is the output of the upload. Notice I set --parallel-upload-count to ensure a quick upload.

 oci os object bulk-upload --namespace-name id20skavsofo    --bucket-name acmedb_migrate --src-dir /acmedb/ocimigrate/ --parallel-upload-count 10

Uploaded backup_RADB_3u10k6hj_126_1_1  [####################################]  100%
Uploaded backup_RADB_4710k6jl_135_1_1  [####################################]  100%
Uploaded backup_RADB_4610k6jh_134_1_1  [####################################]  100%
Uploaded backup_RADB_3n10k6b0_119_1_1  [####################################]  100%
Uploaded backup_RADB_3m10k6b0_118_1_1  [####################################]  100%
Uploaded backup_RADB_3r10k6ec_123_1_1  [####################################]  100%
Uploaded backup_RADB_4510k6jh_133_1_1  [####################################]  100%
Uploaded backup_RADB_4010k6hj_128_1_1  [####################################]  100%
Uploaded backup_RADB_3v10k6hj_127_1_1  [####################################]  100%
Uploaded backup_RADB_4110k6hk_129_1_1  [####################################]  100%
Uploaded backup_RADB_4210k6id_130_1_1  [####################################]  100%
Uploaded backup_RADB_4310k6ie_131_1_1  [####################################]  100%
Uploaded backup_RADB_3l10k6b0_117_1_1  [####################################]  100%
Uploaded backup_RADB_4410k6ie_132_1_1  [####################################]  100%
Uploaded backup_RADB_3k10k6b0_116_1_1  [####################################]  100%
Uploaded backup_RADB_3t10k6hj_125_1_1  [####################################]  100%

{
  "skipped-objects": [],
  "upload-failures": {},
  "uploaded-objects": {
    "backup_RADB_3k10k6b0_116_1_1": {
      "etag": "ab4a1017-3ba7-46e2-a2ee-3f4cd9a82ad3",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "W0hYIzfAWUVzACWNudcQDg==-3"
    },
    "backup_RADB_3l10k6b0_117_1_1": {
      "etag": "a620076e-975f-4d8c-87e8-394c4cf966cd",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-multipart-md5": "zapGBx8Imcdk91JM2+gORQ==-3"
    },
    "backup_RADB_3m10k6b0_118_1_1": {
      "etag": "a96c35c0-4c0b-4646-ae38-723f92c8496e",
      "last-modified": "Tue, 21 Jun 2022 14:57:32 GMT",
      "opc-content-md5": "vNAsU3vLcjzp6OwEeLXGgA=="
    },
    "backup_RADB_3n10k6b0_119_1_1": {
      "etag": "8f565894-5097-4ebb-9569-fdd31cc0c22d",
      "last-modified": "Tue, 21 Jun 2022 14:57:31 GMT",
      "opc-content-md5": "aSUSQWv5b+EfoLy9L9UBYQ=="
    },
    "backup_RADB_3r10k6ec_123_1_1": {
      "etag": "120dead4-c8ae-44de-9d27-39e1c28a2c48",
      "last-modified": "Tue, 21 Jun 2022 14:57:33 GMT",
      "opc-content-md5": "4wHBrgZXuIMlYWriBbs1ng=="
    },
    "backup_RADB_3s10k6hh_124_1_1": {
      "etag": "07d74b7f-68d6-4a77-9c4d-42f78c51c692",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "uzRd51bAKvFjhbbsfL1YAg=="
    },
    "backup_RADB_3t10k6hj_125_1_1": {
      "etag": "e5d3225b-a687-47e1-ad31-f4270ce31ddd",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "aZIirf98ZNqwBAlIeWzuhQ==-3"
    },
    "backup_RADB_3u10k6hj_126_1_1": {
      "etag": "5f5cc5ad-4aa3-4c3a-8848-16b3442a1e2c",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "dT6EYLv1yzf6LZCn1/Dsvw=="
    },
    "backup_RADB_3v10k6hj_127_1_1": {
      "etag": "297daece-be72-475f-b40d-982fb7115cd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "Zt3h5YfHU6F771ahltYhDQ=="
    },
    "backup_RADB_4010k6hj_128_1_1": {
      "etag": "9d723f2a-962e-4d03-9283-fc8a68f53af8",
      "last-modified": "Tue, 21 Jun 2022 14:57:35 GMT",
      "opc-content-md5": "KuNzVyUQrrSsA/kgioq9oA=="
    },
    "backup_RADB_4110k6hk_129_1_1": {
      "etag": "16f7f02a-e5ae-48a2-a7d2-b6d1dedc82ad",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "24SzzZwg7iu7PV8TBpMXEg=="
    },
    "backup_RADB_4210k6id_130_1_1": {
      "etag": "0584e14f-53dc-4251-8bad-907f357a283e",
      "last-modified": "Tue, 21 Jun 2022 14:57:37 GMT",
      "opc-content-md5": "sjPsmoeFsMhZISAmaVN0vQ=="
    },
    "backup_RADB_4310k6ie_131_1_1": {
      "etag": "176aea41-dd31-4404-99f4-ffd59c521fd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:40 GMT",
      "opc-content-md5": "2ksAQ2UuU/75YyRKujlLXg=="
    },
    "backup_RADB_4410k6ie_132_1_1": {
      "etag": "766c7585-3837-490b-8563-f3be3d24c98e",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-content-md5": "sh4CFUC/vnxjmMZ5mfgT3Q=="
    },
    "backup_RADB_4510k6jh_133_1_1": {
      "etag": "2de62d73-e44c-4f25-a41d-d45c556054dd",
      "last-modified": "Tue, 21 Jun 2022 14:57:34 GMT",
      "opc-content-md5": "4tVrHqwYG57STn9W6c2Mqw=="
    },
    "backup_RADB_4610k6jh_134_1_1": {
      "etag": "4667419d-9555-4edb-bd6d-749a1ee7660b",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "/MVdDn/vA2IXUcCmtdgKnw=="
    },
    "backup_RADB_4710k6jl_135_1_1": {
      "etag": "d467810a-d62e-42b3-bf7b-019913707312",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "hq8PEQ3PUwyTMWyUBfW4ew=="
    }
  }
}


Step #2 - Create the manifest for the backup pieces.


The next step covers creating the "metadata.xml" for each object, which is the manifest that the RMAN library uses to read the backup pieces.

Again this is broken down into a few different steps.

Step #2A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

I executed the jar file, which downloads/creates the following files.
  • libopc.so - This is the library used by the Cloud Backup module, and I downloaded it into  "/home/oracle/ociconfig/lib/" on my host
  • acmedb.ora - This is the configuration file for my database backup. This was created in "/home/oracle/ociconfig/config/" on my host
This information is used to allocate the channel in RMAN for the manifest.

Step #2b - Generate the manifest creation command for each backup piece.

The next step is to dynamically create the script to build the manifest for each backup piece. This needs to be done for each backup piece, and the command is:

send channel t1 'export backuppiece <object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
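
My script isn't included here, but the idea can be sketched with a simple SQL*Plus query against V$BACKUP_PIECE that spools one "send" command per piece. The spool file name is just an illustration, and the directory filter matches the backup location I used above; the spooled commands then get pasted into the run block shown in the next step.

set pagesize 0 linesize 200 feedback off trimspool on
spool /tmp/export_backuppieces.rman
-- one "send" command per backup piece in the migration directory
select 'send channel t1 ''export backuppiece '
       || substr(handle, instr(handle, '/', -1) + 1) || ''';'
  from v$backup_piece
 where handle like '/acmedb/ocimigrate/%'
   and status = 'A';
spool off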



Step #2c - Execute the script with an allocated channel.

The next step is to execute the script in RMAN within a run block after allocating a channel to the bucket in object storage. This needs to be done for each backup piece. You create a run block with one channel allocation followed by "send" commands.

NOTE: This does not have to be executed on the host that generated the backups. In the example below, I set my ORACLE_SID to "dummy" and created the manifests with the "dummy" instance started up nomount.


Below is an example of allocating a channel to the object storage and creating the manifest for one of the backup pieces.



export ORACLE_SID=dummy
 rman target /
RMAN> startup nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/dbhome_1/dbs/initdummy.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1073737792 bytes

Fixed Size                     8904768 bytes
Variable Size                276824064 bytes
Database Buffers             780140544 bytes
Redo Buffers                   7868416 bytes

RMAN> run {
          allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
       send channel t1 'export backuppiece backup_RADB_3r10k6ec_123_1_1';
        }
2> 3> 4>
allocated channel: t1
channel t1: SID=19 device type=SBT_TAPE
channel t1: Oracle Database Backup Service Library VER=23.0.0.1

sent command to channel: t1
released channel: t1


Step #2d - Validate the manifest is created.

I logged into the OCI console, and I can see that there is a directory called "sbt_catalog". This is the directory containing the manifest files. Within this directory you will find a subdirectory for each backup piece. And within those subdirectories you will find a "metadata.xml" object containing the manifest.

Step #3 - Catalog the backup pieces.


The next step covers cataloging the backup pieces in OCI. You need to download the controlfile backup from OCI and start up the database in mount mode.

Again this is broken down into a few different steps.

Step #3A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

Again, you need to configure the backup module (or you can copy the files from your on-premise host).

Step #3b - Catalog each backup piece.

The next step is to dynamically create the script to catalog each backup piece. This needs to be done for each backup piece, and the command is:

catalog device type 'sbt_tape' backuppiece '<object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
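
As in step #2b, my script isn't included, but a sketch of the same approach generating "catalog" commands is below (again, the spool file name is illustrative).

set pagesize 0 linesize 200 feedback off trimspool on
spool /tmp/catalog_backuppieces.rman
-- one "catalog" command per backup piece in the migration directory
select 'catalog device type ''sbt_tape'' backuppiece '''
       || substr(handle, instr(handle, '/', -1) + 1) || ''';'
  from v$backup_piece
 where handle like '/acmedb/ocimigrate/%'
   and status = 'A';
spool off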



Step #3c - Execute the script with a configured channel.

I created a configure channel command, and cataloged the backup pieces that are in the object store.


RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';


  run {
           catalog device type 'sbt_tape' backuppiece 'backup_RADB_3r10k6ec_123_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3s10k6hh_124_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3t10k6hj_125_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3u10k6hj_126_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_3v10k6hj_127_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4010k6hj_128_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4110k6hk_129_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4210k6id_130_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4310k6ie_131_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4410k6ie_132_1_1';
          catalog device type 'sbt_tape' backuppiece 'backup_RADB_4510k6jh_133_1_1';
        }

old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN>
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=406 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=22 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=407 device type=SBT_TAPE
...
...
...
channel ORA_SBT_TAPE_4: SID=23 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: cataloged backup piece
backup piece handle=backup_RADB_4510k6jh_133_1_1 RECID=212 STAMP=1107964867

RMAN>


Step #3d - List the backup pieces cataloged

I performed a list backup summary to view the newly cataloged tape backup pieces.


RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310
4258    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141019
4270    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141201
4282    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4292    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4303    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4315    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4446    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4468    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4490    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4514    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4539    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202

RMAN>


Step #4 - Restore the database.


The last step is to restore the cataloged backup pieces. Remember you might have to change the location of the datafiles.
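
Below is a sketch of what the restore could look like. The datafile destination and the use of SET NEWNAME are my assumptions for illustration; adjust them for your target file layout (or OMF/ASM settings).

RMAN> run {
        # relocate datafiles to the new file system (path is an assumption)
        set newname for database to '/u02/oradata/ACMEDB/%b';
        restore database;
        switch datafile all;
        recover database;
      }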



The process above can be used to upload and catalog both additional archive logs (to bring the files forward) and incremental backups to bring the database forward.



Thursday, May 26, 2022

ZFSSA offers versatile data protection

The latest release of ZFSSA software, OS8.8.45, includes file retention locking, joining object retention lock and snapshot retention lock, providing both versatility and protection for your data.

Retention Lock on ZFSSA


 

3 types of retention lock


Legal Hold


You might need to preserve certain business data in response to potential or on-going lawsuits. A legal hold does not have a defined retention period and remains in effect until removed.  Once the legal hold is removed, all protected data is immediately eligible for deletion unless other retention rules still apply.



NOTE: Both Data Governance and Regulatory Compliance can be used to protect against any kind of cyber/ransomware attack.


Data Governance


Data Governance locks data sets (snapshot, object, or file) for a period of time, protecting the data from deletion. You might need to protect certain data sets as a part of internal business process requirements or protect data sets as part of your cyber protection strategy. Data Governance allows privileged users to adjust the retention strategy.



Regulatory Compliance


Your industry might require you to retain a certain class of data for a defined length of time. Your data retention regulations might also require that you lock the retention settings. Regulatory Compliance only allows you to increase the retention time, if changes are allowed at all. Regulatory Compliance is the most restrictive locking strategy and often does not allow anyone, even an administrator, to make changes affecting retention.



 

3 implementations of retention lock


Object storage

Object storage retention is managed through the OCI client tool, and object retention is enforced through the API. Current retention settings are applied to all objects when they are accessed. Adding a rule immediately takes effect for all objects.

Administration of retention rules can be managed through the use of RSA certificates. It is recommended to create a separation of duties between a security administrator and the object owner.

Retention on object storage is implemented in the following way based on the retention lock type.


Legal hold


Legal holds are implemented by placing an indefinite retention rule on a bucket. Creating this rule ensures that all objects within the bucket cannot be deleted or changed. Only new objects can be stored.



 

Data Governance


Data Governance is implemented by placing a time-bound retention rule on a bucket. The rule sets a lock on all objects for a set length of time. The rule can later be deleted. For cyber protection it is recommended to implement this with a separation of duties.



 

Regulatory Compliance


Regulatory Compliance is implemented by placing a locked time-bound retention rule on a bucket with a grace period. When a locked time-bound retention rule is created it immediately takes effect, but there is a grace period of at least 14 days before the rule becomes permanent, which allows you to test the rule. Once the grace period expires (defined by a specific date and time) the rule cannot be deleted, even by an administrator.



 

Snapshots


Snapshot locking is managed through the BUI or CLI. Individual snapshots can be locked, and scheduled snapshots can be created and automatically locked. Permission for controlling snapshot locking can be assigned to ZFSSA users, allowing you to create a separation of duties. Shares or projects cannot be removed if they contain locked snapshots.

Retention on snapshots is implemented in the following way based on the retention lock type.



Legal hold


Because snapshots only affect data that is on the project/share when the snapshot is taken, it is not possible to lock all new data as it is written. Manual snapshots can be used to provide a mechanism to capture the content of a share as of the current time, which could suffice for a Legal Hold. A manual snapshot can be created with a "retention lock" of UNLOCKED, creating a snapshot that cannot be removed. The only way to remove the snapshot is by changing the "retention lock" to OFF, unlocking it for deletion. This creates a hold on the current data for an indefinite period of time. Permission for releasing the hold on the snapshot can be assigned to a specific individual account, allowing for a separation of duties.

 

Data Governance


Data governance of snapshots is handled through the use of scheduled locked snapshots and enabling the retention policy for scheduled snapshots. A LOCKED schedule is created with both a retention and a "keep at most" setting. This allows you to keep a locked number of snapshots while automatically cleaning up snapshots that are past the retention number. The snapshots within the retention number cannot be unlocked, and the schedule cannot be removed as long as there is data contained in the snapshots.

 


Regulatory Compliance


Regulatory compliance of snapshots is handled through the same method as Data Governance. Snapshots cannot be removed when they are locked, and the schedule remains locked.

 

File Retention


File retention is set at the share or project level and controls updating and deletion of all data contained on the share/project.  A default file retention length is set and all new files will inherit the default setting in effect when the file is created. It is also possible to manually set the retention on a file increasing the default setting inherited by the file.
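
For example, from an NFS client the retention on a single file can be extended using the same mechanism shown in my earlier File Retention Lock post: set the new unlock timestamp as the file's atime (the date here is just an example).

touch -a -t "2501011200" myfile.txt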

 


Legal Hold


Legal Holds on files are implemented by manually increasing the retention on individual files. Because a Legal Hold may be required for an indefinite period of time, it is recommended to periodically extend the retention on files needed within the legal hold. This allows the files' retention to expire once the need for the Legal Hold has passed.

 

Data Governance

Data governance is implemented by creating a NEW project and share with a file retention policy of privileged.  Privileged mode allows you to create a default retention setting for all new files, and change that setting (longer or shorter) going forward.  Files created inherit the retention setting in effect when they are created.  Retention can also be adjusted manually to be longer by changing the unlock timestamp.  Projects/shares cannot be deleted as long as they have locked files remaining on them.

 

Regulatory Compliance

Regulatory compliance  is implemented by creating a NEW project and share with a file retention policy of mandatory (no override).  Mandatory mode does not allow you to decrease the default file retention. Retention can also be adjusted manually to be longer by changing the unlock timestamp. Regulatory Compliance uses the same mechanisms as Data Governance but is much more restrictive.  The project/share cannot be removed when locked files exist, and the storage pool cannot be removed when locked files exist within the pool. This mode also requires an NTP server be utilized, and root is locked out of any remote access.

 

The best way to explore these new features is by using the ZFSSA image in OCI to test different scenarios.

Wednesday, May 18, 2022

File Retention Lock now available in ZFSSA OS8.8.45

 File Retention Lock is introduced today in the much-awaited release of ZFSSA AK Software OS8.8.45 (aka 2013.1 Update 8.45) 

ZFSSA retention lock



OS8.8.45 introduces File Retention to ZFSSA.

 File retention is controlled by a new system attribute timestamp for files that, once set, makes the file read-only and unable to be deleted. Once the date/time specified by that timestamp has passed and the retention has expired, the file may be deleted. No other modification is allowed, even after expiration.

In a filesystem with retention enabled, rename of directories is blocked unless the directory is empty. This is done to preserve the name of a file, including its path, so that its location cannot be hidden or any meaning conveyed by a changed path.

File retention enforces one of two policies, set at filesystem creation:

  • Privileged mode: Allows a process with the FILE_RETENTION_OVERRIDE privilege to override retention and delete files. This privilege does not allow files to be modified once retained.

  • Mandatory mode: No privilege or authorization allows deletion of a retained file until the retention timestamp has been surpassed. Mandatory mode's protection extends to the filesystem and pool in that they may not be destroyed until all retention on all files therein has expired. A mandatory-mode-protected filesystem also protects its ancestors and clone descendants from destruction.


NOTE: File retention must be enabled in the filesystem during creation before files can be retained because in most settings, taking away the ability to modify or delete a file would be undesirable behavior.

Saturday, February 13, 2021

Oracle Database 19c now supports DBMS_CLOUD.

 If you have been wondering why I've spent so much time blogging about how to configure ZFS as an object store, today is the day you get the answer.




Today MOS note 2748362.1 - How To Setup And Use DBMS_CLOUD Package was published.


You are probably saying, "So what? 21c supports DBMS_CLOUD, and that's a long way off for me."

This note goes through the steps to configure DBMS_CLOUD for 19c. Yes, it's backported!

NOTE: it is only supported for multitenant

If you were very astute, you might have noticed that the 19.9 release of the software contained scripts in the $ORACLE_HOME/rdbms/admin directory with names like dbms_cloud.sql.

Well, today this published note explains how to install the DBMS_CLOUD packages so that you can use them with your 19.9+ database.

I'm going to take this a step further and show you how to use these scripts and connect to the ZFS appliance. Keep in mind, there is a ZFS simulator you can use and do the same steps.

Here is some information on how to do this if you don't know where to start

Configuring ZFS as an object store

Step 1. Install DBMS_CLOUD in the CDB

I am going through the MOS note and following the same series of steps, creating a script in my /home/oracle/dbc directory so that I can run this again for all my databases.

Just as in the note, I ran a perl script, and looked at the logs. Everything was successful.

I then ran the 2 queries. First against the CDB, then against the PDB.
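
The note has the exact queries; they essentially verify that the DBMS_CLOUD objects owned by C##CLOUD$SERVICE are valid, along the lines of this sketch (run in the CDB and then the PDB):

select owner, object_name, object_type, status
  from dba_objects
 where owner = 'C##CLOUD$SERVICE'
   and status != 'VALID';

No rows returned means the install is clean.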


So far so good.


Step 2 Create SSL Wallet with Certificates.

The next step is to create a wallet, and download the certificates using the link.  The certificates come in a zip file containing all 3 certificates.

VeriSign.cer
BaltimoreCyberTrust.cer
DigiCert.cer

These are the Certificate Authorities that will be used to authenticate the SSL certificates.  DBMS_CLOUD uses HTTPS, and requires that a valid certificate is used.
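
If you haven't created a wallet before, the orapki commands look like this. The wallet directory and password are placeholders I chose for illustration:

cd /u01/app/oracle/admin/wallets/ssl_wallet
orapki wallet create -wallet . -pwd "MyWalletPwd#1" -auto_login
orapki wallet add -wallet . -trusted_cert -cert VeriSign.cer -pwd "MyWalletPwd#1"
orapki wallet add -wallet . -trusted_cert -cert BaltimoreCyberTrust.cer -pwd "MyWalletPwd#1"
orapki wallet add -wallet . -trusted_cert -cert DigiCert.cer -pwd "MyWalletPwd#1"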



ZFS NOTE BEGIN : *************************************

At this point there is an additional step for using ZFS (or your own object store). You need to add the certificate to the wallet if it is a self-signed certificate (which is what ZFS will use normally).

In order to get the certificate you need to display it with the following command (filling in your IP address).

openssl s_client -showcerts -connect 10.136.64.85:443

From the output I want to grab the certificate, which is between the BEGIN and END lines.


-----BEGIN CERTIFICATE-----
MIIEWTCCA0GgAwIBAgIIXJAYBgAAAAIwDQYJKoZIhvcNAQELBQAwcDEtMCsGA1UE
AwwkenM3LTJjYXAtMjAwZi12bTAyLnVzLm9zYy5vcmFjbGUuY29tMT8wPQYDVQQN
DDZodHRwczovL3pzNy0yY2FwLTIwMGYtdm0wMi51cy5vc2Mub3JhY2xlLmNvbToy

..

oH4pa4Hv4/s0GKJcjQDTlhyyAQXHD+EDfa0KSqP6+Rcwv9+pzXhTJu6IYJLanKo

uM6RxG2XAIH82blU+A==
-----END CERTIFICATE-----

Once I put it in a file, I perform the same orapki command to load this certificate from my file.

ZFS NOTE END : *****************************************

Once added I display what is in the wallet.

orapki wallet display -wallet .
Oracle PKI Tool Release 21.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2020, Oracle and/or its affiliates. All rights reserved.

Requested Certificates:
User Certificates:
Trusted Certificates:

Subject:        2.5.4.13=https://zs7-2cap-200f-vm02.bgrenn.com:215/\#cert,CN=zs7-2cap-200f-vm02.bgrenn.com
Subject:        2.5.4.13=https://zs7-2cap-200f-vm01.bgrenn.com:215/\#cert,CN=zs7-2cap-200f-vm01.bgrenn.com
Subject:        CN=DigiCert Global Root CA,OU=www.digicert.com,O=DigiCert Inc,C=US
Subject:        CN=Baltimore CyberTrust Root,OU=CyberTrust,O=Baltimore,C=IE
Subject:        CN=VeriSign Class 3 Public Primary Certification Authority - G5,OU=(c) 2006 VeriSign\, Inc. - For authorized use only,OU=VeriSign Trust Network,O=VeriSign\, Inc.,C=US


You can see that it captured the server names of the ZFS ports I am using.


Step 3 Configure the Oracle environment for the Wallet.


I followed the next set of instructions to update the sqlnet.ora file with the location of the wallet.

A few items to note on this step:
  • I am in a RAC environment, so I need to make the change on ALL nodes in my RAC cluster, and I also need to copy the wallet to the same location on all hosts.
  • The WALLET_LOCATION is also used by ZDLRA. If you are using a ZDLRA for backups, you need to add the certificates to the wallet that is used by the ZDLRA.
  • If you are using Single Sign-On, which may use the WALLET_LOCATION, be especially careful since it often defaults to $ORACLE_BASE but will get overridden when this is set.
I completed these steps, and I now have the same sqlnet.ora file on all nodes, and my wallet is on all nodes in the same location under $ORACLE_BASE.
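
For reference, the sqlnet.ora entry takes this form (the directory shown is my wallet location and will differ in your environment):

WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)
    (METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/wallets/ssl_wallet)))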

Step 4 Configure the Database with ACEs for DBMS_CLOUD.


The next step is to create Access Control Entries (ACEs) to allow communication. This only needs to be in CDB$ROOT.

I stored the script in a file and changed the script in 2 ways

  • define sslwalletdir= {my wallet location}  --> I set this.
  • I removed all lines around proxy. I didn't need a proxy since I am only using a ZFS internal to my datacenter.
I verified with the query and it returned the location of SSL_WALLET.
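
For reference, the heart of the ACE script is a call like the one below. I am showing my ZFS hostname and the C##CLOUD$SERVICE principal purely as an illustration of the pattern; the note's actual script may grant different privileges or principals, so use it as published.

begin
  -- grant the http privilege against the object store host
  dbms_network_acl_admin.append_host_ace(
    host => 'zs7-2cap-200f-vm02.bgrenn.com',
    ace  => xs$ace_type(privilege_list => xs$name_list('http'),
                        principal_name => 'C##CLOUD$SERVICE',
                        principal_type => xs_acl.ptype_db));
end;
/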

Step 5 Verify the configuration of the DBMS_CLOUD

I put the script in a file and made a few changes.

  • wallet_path => {my wallet path}
  • wallet_password => {my wallet password}
  • get_page('https://zs7-2cap-200f-vm02.bgrenn.com') --> I put in the URL for my first ZFS network name (from the certificate) followed by the second name from the certificate.
I got a "valid response" backup.. 
I can also check the ACLs with the script below
SELECT host, lower_port, upper_port, acl
FROM   dba_network_acls;

Step 6 Configure users or roles to use  DBMS_CLOUD

I changed the script to use my username, which I created in my PDB to create tables and utilize DBMS_CLOUD, and ran it in my PDB.

I took the second script, removed the proxy information, entered the wallet path, and executed it in my PDB.



Step 7 Configure ACEs for a user role to use DBMS_CLOUD


Again I removed the proxy information since there was no proxy. I also entered the SSL_wallet directory.

Step 8  Configure the credential for OCI (or S3 if you prefer).


Using the create credential procedure and the parameters I have pointed out in previous posts, I create a credential to point to my OCI bucket on ZFS.


exec DBMS_CLOUD.CREATE_CREDENTIAL ( -
    CREDENTIAL_NAME => 'ZFS', -
    USER_OCID => 'ocid1.user.oc1..oracle', -
    TENANCY_OCID => 'ocid1.tenancy.oc1..nobody', -
    PRIVATE_KEY => 'MIIEogIBAAKCAQEAnLe/u2YjNVac5z1j/Ce7YRSd6wpwaK8elS+TxucaLz32jUaDCUfMbzfSBP0WK00uxbdnRdUAss1F1sRUm+GqyEEvT2c1LRJ0FnfSFEXrJnDZfEVe/dFi90fctbx4BUSqRroh0RQbQyk24710zO2C3tev66eHEvfxxXGUqI+jrDKOJ7sFdGE42R9uRhhWxaWS4e43OEZk41gq2ykdVFlNp...mXU6w6blGpxWkzfPMJKuOhXYoEXM41uxykDX3nq/wPWxKJ7TnShGLyiFMWiuuQF+s29AbwtlAkQRcHnnkvDFHwE=', -
    FINGERPRINT => '1e:6e:0e:79:38:f5:08:ee:7d:87:86:01:13:54:46:c6');

Note the parameters for ZFS:

  • CREDENTIAL_NAME - Name of the credential
  • USER_OCID - 'ocid1.user.oc1..' || {ZFS user id}
  • TENANCY_OCID - 'ocid1.tenancy.oc1..nobody' - hardcoded
  • PRIVATE_KEY - Private key matching the public key on the ZFS
  • FINGERPRINT - Fingerprint of the public key on the ZFS

Step 9  Load raw data to the object store.


First I am going to create a file and put some data into it, upload the file to my OCI bucket, and then create an external table on it.

Below is the input file.

 16TS$                           TABLE                    1904172019041720190417VALID
        20ICOL$                         TABLE                    1904172019041720190417VALID
         8C_FILE#_BLOCK#                CLUSTER                  1904172019041720190417VALID
        37I_OBJ2                        INDEX                    1904172019041720190417VALID
        22USER$                         TABLE                    1904172019041720190417VALID
        33I_TAB1                        INDEX                    1904172019041720190417VALID
        40I_OBJ5                        INDEX                    1904172019041720190417VALID
        31CDEF$                         TABLE                    1904172019041720190417VALID
        41I_IND1                        INDEX                    1904172019041720190417VALID
         3I_OBJ#                        INDEX                    1904172019041720190417VALID
         6C_TS#                         CLUSTER                  1904172019041720190417VALID
        51I_CON1                        INDEX                    1904172019041720190417VALID
        34I_UNDO1                       INDEX                    1904172019041720190417VALID
        11I_USER#                       INDEX                    1904172019041720190417VALID
        29C_COBJ#                       CLUSTER                  1904172019041720190417VALID
        49I_COL2                        INDEX                    1904172019041720190417VALID
        32CCOL$                         TABLE                    1904172019041720190417VALID
        14SEG$                          TABLE                    1904172019041720190417VALID
        23PROXY_DATA$                   TABLE                    1904172019041720190417VALID
        44I_FILE2                       INDEX                    1904172019041720190417VALID
        46I_USER1                       INDEX                    1904172019041720190417VALID
        56I_CDEF4                       INDEX                    1904172019041720190417VALID
        21COL$                          TABLE                    1904172019041720190417VALID
        47I_USER2                       INDEX                    1904172019041720190417VALID
        26I_PROXY_ROLE_DATA$_1          INDEX                    1904172019041720190417VALID
        18OBJ$                          TABLE                    1904172019041720190417VALID
        42I_ICOL1                       INDEX                    1904172019041720190417VALID
        19IND$                          TABLE                    1904172019041720190417VALID
        39I_OBJ4                        INDEX                    1904172019041720190417VALID
        59BOOTSTRAP$                    TABLE                    1904172019041720190417VALID
        36I_OBJ1                        INDEX                    1904172019041720190417VALID
        15UNDO$                         TABLE                    1904172019041720190417VALID
        10C_USER#                       CLUSTER                  1904172019041720190417VALID
         4TAB$                          TABLE                    1904172019041720190417VALID
         2C_OBJ#                        CLUSTER                  1904172019041720190417VALID
        28CON$                          TABLE                    1904172019041720190417VALID
         5CLU$                          TABLE                    1904172019041720190417VALID
        27I_PROXY_ROLE_DATA$_2          INDEX                    1904172019041720190417VALID
        24I_PROXY_DATA$                 INDEX                    1904172019041720190417VALID
        45I_TS1                         INDEX                    1904172019041720190417VALID
        13UET$                          TABLE                    1904172019041720190417VALID
        12FET$                          TABLE                    1904172019041720190417VALID
        17FILE$                         TABLE                    1904172019041720190417VALID

I created a file locally (/tmp/objects.csv), created a bucket (using the OCI CLI tool) and uploaded the file.

Create the bucket on zfs

oci os bucket create --endpoint http://zs7-2cap-200f-vm02.bgrenn.com/oci --namespace-name export/objectstoreoci --compartment-id export/objectstoreoci --name bucketoci  


And copy my file to my bucket.

oci os object put --endpoint http://zs7-2cap-200f-vm02.bgrenn.com/oci --namespace-name export/objectstoreoci --bucket-name bucketoci --file /tmp/objects.csv --name objects.csv


Step 10  Create an external table on the object.


Now that we have the file in the bucket, we are ready to create the external table.

ZFS NOTE BEGIN : ***************************************

There is an additional step to access the ZFS. There is a table owned by C##CLOUD$SERVICE which contains the object stores that can be accessed and how to authenticate. By looking at the current entries you can see the types for OCI and S3.

Until I do this, you will get an error like this:


ERROR at line 1:
ORA-20006: Unsupported object store URI -
https://zs7-2cap-200f-vm01.bgrenn.oracle.com/export/objectstoreoci/bucketoci
/objects.csv
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 917
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 2411
ORA-06512: at line 1

Here is the table that we need to change. You can see that it contains 
  • CLOUD_TYPE - authentication to use
  • BASE_URI_PATTERN - URI pattern to identify and allow
  • VERSION - This is used if different authentication versions exist for an object store
  • STATUS - Not sure, but they are all '1'


 desc C##CLOUD$SERVICE.dbms_cloud_store;
 Name                       Null?    Type
 ----------------------------------------- -------- ----------------------------
 CLOUD_TYPE                        VARCHAR2(128)
 BASE_URI_PATTERN                    VARCHAR2(4000)
 VERSION                        VARCHAR2(128)
 STATUS                         NUMBER

I add a row to this table for my object store. ORACLE_BMC is the OCI authentication type.

SQL> insert into C##CLOUD$SERVICE.dbms_cloud_store values ('ORACLE_BMC','%.bgrenn.com',null,1);

1 row created.

SQL> commit;

Commit complete.


ZFS NOTE END : *****************************************


We are ready. Now let's create the table and give it a go!

Create the external table on the object
exec DBMS_CLOUD.CREATE_EXTERNAL_TABLE( -
    table_name      =>'CHANNELS_EXT_ZFS', -
    credential_name =>'ZFS', -
    file_uri_list   =>'https://zs7-2cap-200f-vm01.bgrenn.com/oci/n/export/objectstoreoci/b/bucketoci/o/objects.csv', -
    format          => json_object('trimspaces' value 'rtrim', 'skipheaders' value '1', 'dateformat' value 'YYYYMMDD'), -
    field_list      => 'object_id      (1:10)   char' || -
                      ', object_name    (11:40)  char' || -
                      ', object_type    (41:65)  char' || -
                      ', created_date1  (66:71)  date mask "YYMMDD"' || -
                      ', created_date2  (72:79)  date' || -
                      ', last_ddl_time  (80:87)  date' || -
                      ', status         (88:97)', -
   column_list     => 'object_id      number' || -
                      ', object_name    varchar2(30)' || -
                      ', object_type    varchar2(25)' || -
                      ', status         varchar2(10)' || -
                      ', created_date1  date' || -
                      ', created_date2  date' || -
                      ', last_ddl_time  date');

Select from the table.

OBJECT_ID OBJECT_NAME              OBJECT_TYPE            STATUS
---------- ------------------------------ ------------------------- ----------
CREATED_D CREATED_D LAST_DDL_
--------- --------- ---------
    20 ICOL$              TABLE             VALID
17-APR-19 17-APR-19 17-APR-19

     8 C_FILE#_BLOCK#          CLUSTER            VALID
17-APR-19 17-APR-19 17-APR-19

    37 I_OBJ2              INDEX             VALID
17-APR-19 17-APR-19 17-APR-19



That's all there is to it.

Enjoy !