Tuesday, June 21, 2022

Migrate a large oracle database to OCI from disk backup

Migrating an Oracle database from on-premises to OCI is especially challenging when the database is large. In this blog post I will walk through the steps to migrate to OCI leveraging an on-disk local backup copied to object storage.

migrate Oracle database to OCI


The basic steps to perform this task are shown in the image above.

Step #1 - Upload backup pieces to object storage.

The first step to migrate my database (acmedb) is to copy the RMAN backup pieces to the OCI object storage using the OCI Client tool.

In order to make this easier, I am breaking this step into a few smaller steps.

Step #1A - Take a full backup to a separate location on disk 


This can also be done by moving the backup pieces, or creating them with a different backup format.  By creating the backup pieces in a separate directory, I am able to take advantage of the bulk upload feature of the OCI client tool. The alternative is to create an upload statement for each backup piece.

For my example database (acmedb) I am going to change the location of the disk backup and perform a new disk backup. I am also going to compress my backup using medium compression (this requires the ACO license). Compressing the backup sets makes the backup pieces as small as possible for the transfer to the OCI object store.

Below is the output from my RMAN configuration that I am using for the backup.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ACMEDBP are:


CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/acmedb/ocimigrate/backup_%d_%U';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
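With this configuration in place, the level 0 backup and the archive logs can be taken in one step; a minimal sketch of the command would be:

RMAN> backup incremental level 0 database plus archivelog;

Because the channel format points at /acmedb/ocimigrate, all of the resulting backup pieces land in that directory, ready for the bulk upload.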

I created a new level 0 backup including the archive logs; below is the "list backup summary" output showing the backup pieces.

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4125    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141019
4151    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141201
4167    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4168    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4169    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4170    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4171    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4172    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4173    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4174    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4175    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4176    B  0  A DISK        21-JUN-22       1       1       YES        TAG20220621T141202
4208    B  A  A DISK        21-JUN-22       1       1       YES        TAG20220621T141309
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310



From the output you can see that there are a total of 14 backup pieces:
  • 3 archive log backup sets (two created before the backup of datafiles, and one after).
    • TAG20220621T141019
    • TAG20220621T141201
    • TAG20220621T141309
  • 10 level 0 datafile backups.
    • TAG20220621T141202
  • 1 controlfile backup.
    • TAG20220621T141310

Step #1B - Create the bucket in OCI and configure OCI Client

Now we need a bucket to upload the 14 RMAN backup pieces to. 

Before I can upload the objects, I need to download and configure the OCI Client tool. You can find the instructions to do this here.

Once the client tool is installed I can create the bucket and verify that the OCI Client tool is configured correctly.

The command to create the bucket is "oci os bucket create", passing the namespace, bucket name, and compartment OCID.

Below is the output when I ran it for my compartment and created the bucket "acmedb_migrate".

 oci os bucket create --namespace id2avsofo --name acmedb_migrate --compartment-id ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "auto-tiering": null,
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "created-by": "ocid1.user.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "defined-tags": {
      "Oracle-Tags": {
        "CreatedBy": "oracleidentitycloudservice/john.smith@oracle.com",
        "CreatedOn": "2022-06-21T14:36:19.680Z"
      }
    },
    "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893",
    "freeform-tags": {},
    "id": "ocid1.bucket.oc1.iad.aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
    "is-read-only": false,
    "kms-key-id": null,
    "metadata": {},
    "name": "acmedb_migrate",
    "namespace": "id2avsofo",
    "object-events-enabled": false,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": false,
    "storage-tier": "Standard",
    "time-created": "2022-06-21T14:36:19.763000+00:00",
    "versioning": "Disabled"
  },
  "etag": "e0f028ac-d80d-4e09-8e60-876d90f57893"
}

Step #1C - Upload the backup pieces to Object Storage in OCI


The next step is to upload all the backup pieces that are in the directory "/acmedb/ocimigrate" to OCI using the bulk upload feature.



Below is the output of the upload. Notice I used a parallel upload count of 10 to ensure a quick upload.

 oci os object bulk-upload --namespace-name id20skavsofo    --bucket-name acmedb_migrate --src-dir /acmedb/ocimigrate/ --parallel-upload-count 10

Uploaded backup_RADB_3u10k6hj_126_1_1  [####################################]  100%
Uploaded backup_RADB_4710k6jl_135_1_1  [####################################]  100%
Uploaded backup_RADB_4610k6jh_134_1_1  [####################################]  100%
Uploaded backup_RADB_3n10k6b0_119_1_1  [####################################]  100%
Uploaded backup_RADB_3m10k6b0_118_1_1  [####################################]  100%
Uploaded backup_RADB_3r10k6ec_123_1_1  [####################################]  100%
Uploaded backup_RADB_4510k6jh_133_1_1  [####################################]  100%
Uploaded backup_RADB_4010k6hj_128_1_1  [####################################]  100%
Uploaded backup_RADB_3v10k6hj_127_1_1  [####################################]  100%
Uploaded backup_RADB_4110k6hk_129_1_1  [####################################]  100%
Uploaded backup_RADB_4210k6id_130_1_1  [####################################]  100%
Uploaded backup_RADB_4310k6ie_131_1_1  [####################################]  100%
Uploaded backup_RADB_3l10k6b0_117_1_1  [####################################]  100%
Uploaded backup_RADB_4410k6ie_132_1_1  [####################################]  100%
Uploaded backup_RADB_3k10k6b0_116_1_1  [####################################]  100%
Uploaded backup_RADB_3t10k6hj_125_1_1  [####################################]  100%

{
  "skipped-objects": [],
  "upload-failures": {},
  "uploaded-objects": {
    "backup_RADB_3k10k6b0_116_1_1": {
      "etag": "ab4a1017-3ba7-46e2-a2ee-3f4cd9a82ad3",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "W0hYIzfAWUVzACWNudcQDg==-3"
    },
    "backup_RADB_3l10k6b0_117_1_1": {
      "etag": "a620076e-975f-4d8c-87e8-394c4cf966cd",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-multipart-md5": "zapGBx8Imcdk91JM2+gORQ==-3"
    },
    "backup_RADB_3m10k6b0_118_1_1": {
      "etag": "a96c35c0-4c0b-4646-ae38-723f92c8496e",
      "last-modified": "Tue, 21 Jun 2022 14:57:32 GMT",
      "opc-content-md5": "vNAsU3vLcjzp6OwEeLXGgA=="
    },
    "backup_RADB_3n10k6b0_119_1_1": {
      "etag": "8f565894-5097-4ebb-9569-fdd31cc0c22d",
      "last-modified": "Tue, 21 Jun 2022 14:57:31 GMT",
      "opc-content-md5": "aSUSQWv5b+EfoLy9L9UBYQ=="
    },
    "backup_RADB_3r10k6ec_123_1_1": {
      "etag": "120dead4-c8ae-44de-9d27-39e1c28a2c48",
      "last-modified": "Tue, 21 Jun 2022 14:57:33 GMT",
      "opc-content-md5": "4wHBrgZXuIMlYWriBbs1ng=="
    },
    "backup_RADB_3s10k6hh_124_1_1": {
      "etag": "07d74b7f-68d6-4a77-9c4d-42f78c51c692",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "uzRd51bAKvFjhbbsfL1YAg=="
    },
    "backup_RADB_3t10k6hj_125_1_1": {
      "etag": "e5d3225b-a687-47e1-ad31-f4270ce31ddd",
      "last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
      "opc-multipart-md5": "aZIirf98ZNqwBAlIeWzuhQ==-3"
    },
    "backup_RADB_3u10k6hj_126_1_1": {
      "etag": "5f5cc5ad-4aa3-4c3a-8848-16b3442a1e2c",
      "last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
      "opc-content-md5": "dT6EYLv1yzf6LZCn1/Dsvw=="
    },
    "backup_RADB_3v10k6hj_127_1_1": {
      "etag": "297daece-be72-475f-b40d-982fb7115cd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "Zt3h5YfHU6F771ahltYhDQ=="
    },
    "backup_RADB_4010k6hj_128_1_1": {
      "etag": "9d723f2a-962e-4d03-9283-fc8a68f53af8",
      "last-modified": "Tue, 21 Jun 2022 14:57:35 GMT",
      "opc-content-md5": "KuNzVyUQrrSsA/kgioq9oA=="
    },
    "backup_RADB_4110k6hk_129_1_1": {
      "etag": "16f7f02a-e5ae-48a2-a7d2-b6d1dedc82ad",
      "last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
      "opc-content-md5": "24SzzZwg7iu7PV8TBpMXEg=="
    },
    "backup_RADB_4210k6id_130_1_1": {
      "etag": "0584e14f-53dc-4251-8bad-907f357a283e",
      "last-modified": "Tue, 21 Jun 2022 14:57:37 GMT",
      "opc-content-md5": "sjPsmoeFsMhZISAmaVN0vQ=="
    },
    "backup_RADB_4310k6ie_131_1_1": {
      "etag": "176aea41-dd31-4404-99f4-ffd59c521fd3",
      "last-modified": "Tue, 21 Jun 2022 14:57:40 GMT",
      "opc-content-md5": "2ksAQ2UuU/75YyRKujlLXg=="
    },
    "backup_RADB_4410k6ie_132_1_1": {
      "etag": "766c7585-3837-490b-8563-f3be3d24c98e",
      "last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
      "opc-content-md5": "sh4CFUC/vnxjmMZ5mfgT3Q=="
    },
    "backup_RADB_4510k6jh_133_1_1": {
      "etag": "2de62d73-e44c-4f25-a41d-d45c556054dd",
      "last-modified": "Tue, 21 Jun 2022 14:57:34 GMT",
      "opc-content-md5": "4tVrHqwYG57STn9W6c2Mqw=="
    },
    "backup_RADB_4610k6jh_134_1_1": {
      "etag": "4667419d-9555-4edb-bd6d-749a1ee7660b",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "/MVdDn/vA2IXUcCmtdgKnw=="
    },
    "backup_RADB_4710k6jl_135_1_1": {
      "etag": "d467810a-d62e-42b3-bf7b-019913707312",
      "last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
      "opc-content-md5": "hq8PEQ3PUwyTMWyUBfW4ew=="
    }
  }
}


Step #2 - Create the manifest for the backup pieces.


The next step covers creating the "metadata.xml" for each object, which is the manifest the RMAN library uses to read the backup pieces.

Again this is broken down into a few different steps.

Step #2A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

I executed the jar file, which downloaded/created the following files:
  • libopc.so - the library used by the Cloud Backup Module; I downloaded it into "/home/oracle/ociconfig/lib/" on my host.
  • acmedb.ora - the configuration file for my database backup; this was created in "/home/oracle/ociconfig/config/" on my host.
This information is used to allocate the channel in RMAN for the manifest.

Step #2b - Generate the manifest creation command for each backup piece.

The next step is to dynamically create the script that builds the manifest for each backup piece. This needs to be done for each backup piece, and the command is:

send channel t1 'export backuppiece <object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
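A minimal sketch of such a script, run in SQL*Plus against the source database (assuming the backup pieces still reside under /acmedb/ocimigrate and that the object names are the bare file names, as uploaded by the bulk upload):

set pagesize 0 linesize 200 feedback off
select 'send channel t1 ''export backuppiece '
       || substr(handle, instr(handle, '/', -1) + 1) || ''';'
from   v$backup_piece
where  handle like '/acmedb/ocimigrate/%'
and    status = 'A';

Spooling this output to a file produces one "send" command per backup piece, ready to paste into the run block in the next step.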



Step #2c - Execute the script with an allocated channel.

The next step is to execute the script in RMAN within a run block after allocating a channel to the bucket in object storage. This needs to be done for each backup piece. You create a run block with one channel allocation followed by "send" commands.

NOTE: This does not have to be executed on the host that generated the backups. In the example below, I set my ORACLE_SID to "dummy" and created the manifests with the "dummy" instance started nomount.


Below is an example of allocating a channel to the object storage and creating the manifest for one of the backup pieces.



export ORACLE_SID=dummy
 rman target /
RMAN> startup nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/dbhome_1/dbs/initdummy.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1073737792 bytes

Fixed Size                     8904768 bytes
Variable Size                276824064 bytes
Database Buffers             780140544 bytes
Redo Buffers                   7868416 bytes

RMAN> run {
          allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
       send channel t1 'export backuppiece backup_RADB_3r10k6ec_123_1_1';
        }
2> 3> 4>
allocated channel: t1
channel t1: SID=19 device type=SBT_TAPE
channel t1: Oracle Database Backup Service Library VER=23.0.0.1

sent command to channel: t1
released channel: t1


Step #2d - Validate the manifest is created.

I logged into the OCI console, and I can see that there is a directory called "sbt_catalog". This is the directory containing the manifest files. Within this directory you will find a subdirectory for each backup piece. And within those subdirectories you will find a "metadata.xml" object containing the manifest.
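This can also be verified from the command line; for example (assuming the namespace and bucket created in Step #1B):

oci os object list --namespace-name id2avsofo --bucket-name acmedb_migrate --prefix sbt_catalog/ --fields name

This lists one "metadata.xml" object per backup piece under "sbt_catalog/<backup piece name>/".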

Step #3 - Catalog the backup pieces.


The next step covers cataloging the backup pieces in OCI. You need to download the controlfile backup from OCI and start up the database in mount mode.

Again this is broken down into a few different steps.

Step #3A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

Again, you need to configure the backup module (or you can copy the files from your on-premises host).
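Before cataloging, the controlfile has to be restored from the object store and the database mounted. A minimal sketch, using the same channel parameters as the manifest step (the backup piece name is a placeholder; identify the actual controlfile autobackup piece from the sbt_catalog directory):

RMAN> startup nomount;
RMAN> run {
        allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
        restore controlfile from '<controlfile backup piece name>';
      }
RMAN> alter database mount;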

Step #3b - Catalog each backup piece.

The next step is to dynamically create the script that catalogs each backup piece. This needs to be done for each backup piece, and the command is:

catalog device type 'sbt_tape' backuppiece '<object name>';

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
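As in Step #2b, a minimal sketch of the script would emit one catalog command per piece (same assumptions about the handle paths and object names):

set pagesize 0 linesize 200 feedback off
select 'catalog device type ''sbt_tape'' backuppiece '''
       || substr(handle, instr(handle, '/', -1) + 1) || ''';'
from   v$backup_piece
where  handle like '/acmedb/ocimigrate/%'
and    status = 'A';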



Step #3c - Execute the script with a configured channel.

I created a configure channel command, and cataloged the backup pieces that are in the object store.


RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';


run {
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3r10k6ec_123_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3s10k6hh_124_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3t10k6hj_125_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3u10k6hj_126_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_3v10k6hj_127_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4010k6hj_128_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4110k6hk_129_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4210k6id_130_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4310k6ie_131_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4410k6ie_132_1_1';
  catalog device type 'sbt_tape' backuppiece 'backup_RADB_4510k6jh_133_1_1';
}

old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN>
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=406 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=22 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=407 device type=SBT_TAPE
...
...
...
channel ORA_SBT_TAPE_4: SID=23 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: cataloged backup piece
backup piece handle=backup_RADB_4510k6jh_133_1_1 RECID=212 STAMP=1107964867

RMAN>


Step #3d - List the backup pieces cataloged

I performed a list backup summary to view the newly cataloged tape backup pieces.


RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4220    B  F  A DISK        21-JUN-22       1       1       YES        TAG20220621T141310
4258    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141019
4270    B  A  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141201
4282    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4292    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4303    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4315    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4446    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4468    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4490    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4514    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202
4539    B  0  A SBT_TAPE    21-JUN-22       1       1       YES        TAG20220621T141202

RMAN>


Step #4 - Restore the database.


The last step is to restore the cataloged backup pieces. Remember you might have to change the location of the datafiles.
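A minimal sketch of the restore, assuming the datafiles are being relocated to /u01/oradata/ACMEDB:

RMAN> run {
        set newname for database to '/u01/oradata/ACMEDB/%b';
        restore database;
        switch datafile all;
        recover database;
      }

The recovery applies the cataloged archive log backups; once it has gone as far as the available logs allow, the database can be opened with resetlogs.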



The process above can be used to upload and catalog both additional archive logs and incremental backups to bring the database forward.



Thursday, May 26, 2022

ZFSSA offers versatile data protection

The latest release of ZFSSA software, OS8.8.45, includes file retention locking, joining object retention lock and snapshot retention lock to provide both versatility and protection for your data.

Retention Lock on ZFSSA


 

3 types of retention lock


Legal Hold


You might need to preserve certain business data in response to potential or ongoing lawsuits. A legal hold does not have a defined retention period and remains in effect until removed. Once the legal hold is removed, all protected data is immediately eligible for deletion unless other retention rules still apply.



NOTE: Both Data Governance and Regulatory Compliance can be used to protect against any kind of cyber/ransomware attack.


Data Governance


Data Governance locks data sets (snapshot, object, or file) for a period of time, protecting the data from deletion. You might need to protect certain data sets as part of internal business process requirements, or as part of your cyber protection strategy. Data Governance allows privileged users to adjust the retention strategy.



Regulatory Compliance


Your industry might require you to retain a certain class of data for a defined length of time. Your data retention regulations might also require that you lock the retention settings. Regulatory Compliance allows you to increase the retention time, if changes are allowed at all. It is the most restrictive locking strategy and often does not allow anyone, even an administrator, to make changes affecting retention.



 

3 implementations of retention lock


Object storage

Object storage retention is managed through the OCI client tool, and object retention is enforced through the API. Current retention settings are applied to all objects when they are accessed. Adding a rule immediately takes effect for all objects.

Administration of retention rules can be managed through the use of RSA certificates. It is recommended to create a separation of duties between a security administrator and the object owner.

Retention on object storage is implemented in the following way based on the retention lock type.


Legal hold


Legal holds are implemented by placing an indefinite retention rule on a bucket. Creating this rule ensures that all objects within the bucket cannot be deleted and cannot be changed; only new objects can be stored.
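A minimal sketch with the OCI client (the endpoint, namespace, and bucket name below are placeholders); omitting the duration makes the rule indefinite:

oci os retention-rule create --endpoint http://<zfssa-address> --namespace-name <namespace> --bucket-name <bucket> --display-name legal-hold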



 

Data Governance


Data Governance is implemented by placing a time-bound retention rule on a bucket. The rule sets a lock on all objects for a set length of time, and the rule can later be deleted. For cyber protection it is recommended to implement this with a separation of duties.



 

Regulatory Compliance


Regulatory Compliance is implemented by placing a locked, time-bound retention rule on a bucket with a grace period. When a locked time-bound retention rule is created it immediately takes effect, but there is a grace period of at least 14 days before the rule becomes permanent, which allows you to test the rule. Once the grace period expires (defined by a specific date and time) the rule cannot be deleted, even by an administrator.
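A minimal sketch (values are placeholders); the --time-rule-locked parameter sets the date and time at which the rule becomes permanent:

oci os retention-rule create --endpoint http://<zfssa-address> --namespace-name <namespace> --bucket-name <bucket> --display-name compliance-7yr --time-amount 7 --time-unit years --time-rule-locked 2022-07-15T00:00:00Z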



 

Snapshots


Snapshot locking is managed through the BUI or CLI. Individual snapshots can be locked, and scheduled snapshots can be created and automatically locked. Permission for controlling snapshot locking can be assigned to ZFSSA users, allowing you to create a separation of duties. Shares or projects cannot be removed if they contain locked snapshots.

Retention on snapshots is implemented in the following way based on the retention lock type.



Legal hold


Because snapshots only affect data that is on the project/share when the snapshot is taken, it is not possible to lock all new data as it is written. Manual snapshots can be used to capture the content of a share as of the current time, which could suffice for a Legal Hold. A manual snapshot can be created with a "retention lock" of UNLOCKED, creating a snapshot that cannot be removed; the only way to remove the snapshot is by changing the "retention lock" to OFF, unlocking it for deletion. This creates a hold on the current data for an indefinite period of time. Permission for releasing the hold on the snapshot can be assigned to a specific individual account, allowing for a separation of duties.

 

Data Governance


Data governance of snapshots is handled through the use of scheduled locked snapshots and enabling the retention policy for scheduled snapshots. A LOCKED schedule is created with both a retention setting and a "keep at most" setting. This allows you to keep a locked number of snapshots while automatically cleaning up snapshots that are past the retention number. The snapshots within the retention number cannot be unlocked, and the schedule cannot be removed as long as there is data contained in the snapshots.

 


Regulatory Compliance


Regulatory compliance of snapshots is handled through the same method as Data Governance. Snapshots cannot be removed when they are locked, and the schedule remains locked.

 

File Retention


File retention is set at the share or project level and controls updating and deletion of all data contained on the share/project. A default file retention length is set, and all new files inherit the default setting in effect when the file is created. It is also possible to manually set the retention on a file, increasing the default setting inherited by the file.

 


Legal Hold


Legal Holds on files are implemented by manually increasing the retention on individual files. Because a Legal Hold may be required for an indefinite period of time, it is recommended to periodically extend the retention on the files under the hold. This allows the files' retention to expire once the need for the Legal Hold has passed.

 

Data Governance

Data governance is implemented by creating a NEW project and share with a file retention policy of privileged. Privileged mode allows you to create a default retention setting for all new files and change that setting (longer or shorter) going forward. Files created inherit the retention setting in effect when they are created. Retention can also be manually increased on a file by changing its unlock timestamp. Projects/shares cannot be deleted as long as they have locked files remaining on them.

 

Regulatory Compliance

Regulatory compliance is implemented by creating a NEW project and share with a file retention policy of mandatory (no override). Mandatory mode does not allow you to decrease the default file retention. Retention can also be manually increased on a file by changing its unlock timestamp. Regulatory Compliance uses the same mechanisms as Data Governance but is much more restrictive. The project/share cannot be removed when locked files exist, and the storage pool cannot be removed when locked files exist within the pool. This mode also requires that an NTP server be utilized, and root is locked out of any remote access.

 

The best way to explore these new features is by using the ZFSSA image in OCI to test different scenarios.

Wednesday, May 18, 2022

File Retention Lock now available in ZFSSA OS8.8.45

File Retention Lock is introduced today in the much-awaited release of ZFSSA AK Software OS8.8.45 (aka 2013.1 Update 8.45).

ZFSSA retention lock



OS8.8.45 introduces File Retention to ZFSSA.

 File retention is controlled by a new system attribute timestamp for files that, once set, makes the file read-only and unable to be deleted. Once the date/time specified by that timestamp has passed and the retention has expired, the file may be deleted. No other modification is allowed, even after expiration.

In a filesystem with retention enabled, rename of directories is blocked unless the directory is empty. This is done to preserve the name of a file, including its path, so that its location cannot be hidden or any meaning conveyed by a changed path.

File retention enforces one of two policies, set at filesystem creation:

  • Privileged mode: Allows a process with the FILE_RETENTION_OVERRIDE privilege to override retention and delete files. This privilege does not allow files to be modified once retained.

  • Mandatory mode: No privilege or authorization allows deletion of a retained file until the retention timestamp has been surpassed. Mandatory mode's protection extends to the filesystem and pool in that they may not be destroyed until all retention on all files therein has expired. A mandatory-mode-protected filesystem also protects its ancestors and clone descendants from destruction.


NOTE: File retention must be enabled in the filesystem during creation before files can be retained because in most settings, taking away the ability to modify or delete a file would be undesirable behavior.

Tuesday, May 17, 2022

ZFS Object Store now offers detailed access control policies

Object Retention Rules is one of the new features that was released in ZFSSA version 8.8.36. Before I talk about Object Retention Rules on ZFSSA buckets, I am going to go over how to leverage the new access control policies that go along with managing objects, buckets, and retention.

User Architecture


If you have followed my previous postings on configuring ZFS as an object store, you found that one of the options available is to configure ZFSSA as an OCI Gen 2 (sometimes called OCI native) object store. When configuring this API interface on ZFSSA, the authentication utilizes the same public/private key concept that is used in most of the Oracle Cloud.

If you want to read my post on configuring authentication you can find it here.

What I want to go through in this post is how you can configure a set of user roles on ZFSSA with different permissions based on public/private keys.

This will help you isolate and secure backups that were sent from multiple sources, and allow you to define both a security administrator (to apply retention policies), and an auditor to view the existence of backups without having the ability to delete or update backups.

In the "User Architecture" diagram at the beginning of this post you see that I have defined 5 user roles  that will be used to manage the object store security for the backups.

Users:

  • SECADMIN - This user role is the security administrator for all 3 object store backups, and all three buckets.  This user role is responsible for creating, deleting and assigning retention rules to the buckets.
  • AUDITOR - This user reviews the backups and has a read only view of all 3 backups. The auditor cannot delete or update any objects, but they can view the existence of the backup pieces.
  • GLUSER  - This user controls the backups for GLDB only
  • APUSER  - This user controls the backups for APDB only
  • DWUSER - This user controls the backups for DWDB only
NOTE: Because the Object Store API controls the access to objects in the bucket, all access to objects in a bucket is through the bucket owner. I can have multiple buckets on the same share, managed by different users, but access WITHIN the bucket is only granted to the bucket owner.

Based on the above note, I am going to create 3 users to manage the buckets for the 3 database backups.
The 2 additional user roles, SECADMIN and AUDITOR are going to control their access through the use of RSA keys.

Because I am not going to use pre-authenticated URLs for my backups (which require a login), all 3 users are going to be created as "no-login" users. Below is an example of creating the APUSER.





I created all 3 users as no-login users




Project/Share for Object Storage


Now I am going to create a project and share to store the backup pieces for all 3 databases.  The project is going to be "dbbackups" and the share is going to be "dbbackups".  I am going to set the default user for the share to "oracle" and I am also going to grant the other 3 users "Full Control" of the share. I will later limit the permissions for these users.


Share User Access


User certificates:


Authentication to the object store is through the use of RSA public/private certificates.
For each user/role I created a certificate that will be used for authentication. 
The following table shows the users/roles and the fingerprint that identifies them.




Authentication:


Within the OCI service on the ZFSSA I combine the user and key (fingerprint) to provide the role.


First I will add the SECADMIN role. Notice that I am adding this user's access to all 3 database backup "users". This will allow the SECADMIN role to manage bucket creation/deletion and retention on the individual buckets. The SECADMIN role is accessed through its key.

I will start by adding the key owned by this role (SECADMIN) to the 3 users APUSER, GLUSER and DWUSER.



Now that I have the SECADMIN role assigned to the 3 users, I want to set the proper capabilities for this role. I click on the pencil to edit the key configuration, and I can see the permissions assigned to this user/key combination. I want to allow SECADMIN the ability to create buckets, delete buckets, and control the retention within the 3 users' buckets. This role will need the ability to read the bucket. Notice that this role does not have the ability to read any of the objects within the bucket.





Now I am going to move on to the AUDITOR role.  This role will be configured using the AUDITOR key assigned to all 3 users.  Within each user the AUDITOR will be granted the ability to read the bucket and the objects but not make any changes.


I now have both the SECADMIN role and the AUDITOR role defined for all 3 users. Below is what is configured within the OCI service. Notice that there are 2 keys set for each user, and that there are 2 unique keys (one for SECADMIN and one for AUDITOR).


Finally I am going to add the 3 users that own the buckets and grant them access to create objects, but not control the retention or be able to add/remove buckets.



Once I have finished adding users/keys, I have my 2 roles defined and assigned to each user, plus an individual key for each user/backup.

When completed, the chart below shows the permissions for each user/role.



OCI CLI configuration:


I added entries to the ~/.oci/config file for each of the users/roles configured for the service.
Below is an example entry for the SECADMIN role with the APDB bucket.

[SECADMIN_APDB]
user=ocid1.user.oc1..apuser
fingerprint=0a:35:21:1b:5c:eb:09:8c:e9:44:42:f2:7c:b5:bc:f6
key_file=~/keys/secadmin.ppk
tenancy=ocid1.tenancy.oc1..nobody
region=us-phoenix-1
endpoint=http://150.136.215.19
os.object.bucket-name=apdb
namespace-name=dbbackups
compartment-id=dbbackups


Below is a table of the entries that I added to the config file.




Creating buckets:


Now I am going to create my 3 buckets using the SECADMIN role. Below is an example of adding the bucket for APDB

[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile SECADMIN_APDB    --name apdb  --compartment-id dbbackups
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "auto-tiering": null,
    "compartment-id": "dbbackups",
    "created-by": "apuser",
    "defined-tags": null,
    "etag": "2f0b55dbbb925ebbaabbc37e3ce342fa",
    "freeform-tags": null,
    "id": "2f0b55dbbb925ebbaabbc37e3ce342fa",
    "is-read-only": null,
    "kms-key-id": null,
    "metadata": null,
    "name": "apdb",
    "namespace": "dbbackups",
    "object-events-enabled": null,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": null,
    "storage-tier": "Standard",
    "time-created": "2022-05-17T17:55:49+00:00",
    "versioning": "Disabled"
  },
  "etag": "2f0b55dbbb925ebbaabbc37e3ce342fa"
}


I then did the same thing for the GLDB bucket using SECADMIN_GLDB, and the DWDB bucket using SECADMIN_DWDB.

Once the buckets were created, I attempted to create buckets with both the AUDITOR role, and the DB role.  You can see below that both of these configurations did not have the correct privileges.

[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile AUDITOR_APDB    --name apdb  --compartment-id dbbackups
ServiceError:
{
    "code": "BucketNotFound",
    "message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
    "opc-request-id": "tx3a37f1dee0cc4778a1201-006283e2a1",
    "status": 404
}
[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile APDB    --name apdb  --compartment-id dbbackups
ServiceError:
{
    "code": "BucketNotFound",
    "message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
    "opc-request-id": "tx46435ae6b8234982b3fbd-006283e2a9",
    "status": 404
}



Listing buckets:

All of the entries I created have access to view the buckets.  Below is an example of SECADMIN_APDB listing buckets. You can see that I have 3 buckets each owned by the correct user.
[oracle@oracle-19c-test-tde keys]$ oci os bucket list --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile SECADMIN_APDB    --compartment-id dbbackups

{
  "data": [
    {
      "compartment-id": "dbbackups",
      "created-by": "apuser",
      "defined-tags": null,
      "etag": "2f0b55dbbb925ebbaabbc37e3ce342fa",
      "freeform-tags": null,
      "name": "apdb",
      "namespace": "dbbackups",
      "time-created": "2022-05-17T17:55:49+00:00"
    },
    {
      "compartment-id": "dbbackups",
      "created-by": "dwuser",
      "defined-tags": null,
      "etag": "866ded83e5ea2a29c66dca0d01036f0e",
      "freeform-tags": null,
      "name": "dwdb",
      "namespace": "dbbackups",
      "time-created": "2022-05-17T17:58:32+00:00"
    },
    {
      "compartment-id": "dbbackups",
      "created-by": "gluser",
      "defined-tags": null,
      "etag": "2169cf94f86009f66ca8770c1c58febb",
      "freeform-tags": null,
      "name": "gldb",
      "namespace": "dbbackups",
      "time-created": "2022-05-17T17:58:17+00:00"
    }
  ]
}


Configuring retention lock:


Here is the documentation on how to configure retention lock for the objects within a bucket. For my example, I am going to lock all objects for 30 days. I am going to use the SECADMIN_APDB account to lock the objects in the apdb bucket.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile SECADMIN_APDB --bucket-name apdb --time-amount 30  --time-unit days --display-name APDB-30-day-Bound-backups
{
  "data": {
    "display-name": "APDB-30-day-Bound-backups",
    "duration": {
      "time-amount": 30,
      "time-unit": "DAYS"
    },
    "etag": "2c9ab8ff9c4743392d308365d9f72e05",
    "id": "2c9ab8ff9c4743392d308365d9f72e05",
    "time-created": "2022-05-17T18:49:24+00:00",
    "time-modified": "2022-05-17T18:49:24+00:00",
    "time-rule-locked": null
  }
}


Now I am going to make sure my AUDITOR role and my BACKUP role do not have privileges to manage retention. For both of these I get an error.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb --time-amount 30  --time-unit days --display-name APDB-30-day-Bound-backups
ServiceError:
{
    "code": "BucketNotFound",
    "message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
    "opc-request-id": "tx52e8849aa6444c639d59b-006283ee99",
    "status": 404
}

I set the retention rule for the other buckets, and now I can use the AUDITOR accounts to list the retention rules.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_APDB --bucket-name apdb


{
  "data": {
    "items": [
      {
        "display-name": "APDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "2c9ab8ff9c4743392d308365d9f72e05",
        "id": "2c9ab8ff9c4743392d308365d9f72e05",
        "time-created": "2022-05-17T18:49:24+00:00",
        "time-modified": "2022-05-17T18:49:24+00:00",
        "time-rule-locked": null
      }
    ]
  }
}
[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb
{
  "data": {
    "items": [
      {
        "display-name": "GLDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "ee0d6114310a9971f5a464b428916e48",
        "id": "ee0d6114310a9971f5a464b428916e48",
        "time-created": "2022-05-17T18:56:45+00:00",
        "time-modified": "2022-05-17T18:56:45+00:00",
        "time-rule-locked": null
      }
    ]
  }
}
[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_DWDB --bucket-name dwdb
{
  "data": {
    "items": [
      {
        "display-name": "DWDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "96cc109a7308d5f849541be72d87757a",
        "id": "96cc109a7308d5f849541be72d87757a",
        "time-created": "2022-05-17T18:57:42+00:00",
        "time-modified": "2022-05-17T18:57:42+00:00",
        "time-rule-locked": null
      }
    ]
  }
}


Sending backups to buckets:

Here is the link to the "archive to cloud" section of the latest ZDLRA documentation. The buckets are added as cloud locations. Since I am going to be using an immutable bucket, I also need to add a metadata bucket to match the normal backup bucket. The metadata bucket holds temporary objects that get removed as the backup is written. I created 3 additional buckets: "apdb_meta", "gldb_meta" and "dwdb_meta".
When I configure the Cloud Location I want to use the keys I created to send the backups.

The backup pieces were sent by the keys for apuser, gluser, and dwuser.

I used the process in the documentation to send the backup pieces from the ZDLRA.

Audit Backups:


Now that I have backups created for my database, I am going to use the AUDITOR role to report on what's available within the apdb bucket.

First I am going to look at the Retention Settings.


[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_APDB --bucket-name apdb
{
  "data": {
    "items": [
      {
        "display-name": "APDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "2c9ab8ff9c4743392d308365d9f72e05",
        "id": "2c9ab8ff9c4743392d308365d9f72e05",
        "time-created": "2022-05-17T18:49:24+00:00",
        "time-modified": "2022-05-17T18:49:24+00:00",
        "time-rule-locked": null
      }
    ]
  }
}


Now I am going to print out all the backups that exist for the APDB database.
I am using the python script that comes with the Cloud Backup Library; instructions for how to use it can be found in my blog here.
 
Below I am running the script. Notice I am running it using the AUDITOR role.

[oracle@oracle-19c-test-tde ~]$ python2  /home/oracle/ociconfig/lib/odbsrmt.py --mode report --ocitype bmc  --host http://150.136.215.19 --dir /home/oracle/keys/reports --base apdbreport --pvtkeyfile  /home/oracle/keys/auditor.ppk --pubfingerprint a8:31:78:c2:b4:4f:44:93:bd:4f:f1:72:1c:37:c8:86 --tocid ocid1.tenancy.oc1..nobody --uocid ocid1.user.oc1..apuser --container apdb  --dbid 2867715978
odbsrmt.py: ALL outputs will be written to [/home/oracle/keys/reports/apdbreport12193.lst]
odbsrmt.py: Processing container apdb...
cloud_slave_processors: Thread Thread_0 starting to download metadata XML files...
cloud_slave_processors: Thread Thread_0 successfully done
odbsrmt.py: ALL outputs have been written to [/home/oracle/keys/reports/apdbreport12193.lst]

And finally I can see the report created by this script.


FileName                  Container  Dbname   Dbid        FileSize    LastModified         BackupType          Incremental  Compressed  Encrypted
870toeq3_263_1_1          apdb       ORCLCDB  2867715978  1285029888  2022-05-17 19:09:45  Datafile            true         false       true
890toetk_265_1_1          apdb       ORCLCDB  2867715978  2217476096  2022-05-17 19:12:17  ArchivedLog         false        false       true
8a0tof0j_266_1_1          apdb       ORCLCDB  2867715978  2790260736  2022-05-17 19:14:15  Datafile            true         false       true
8b0tof4g_267_1_1          apdb       ORCLCDB  2867715978  2124677120  2022-05-17 19:15:52  Datafile            true         false       true
8c0tof7f_268_1_1          apdb       ORCLCDB  2867715978  536346624   2022-05-17 19:16:21  Datafile            true         false       true
8d0tof89_269_1_1          apdb       ORCLCDB  2867715978  262144      2022-05-17 19:16:25  ArchivedLog         false        false       true
c-2867715978-20220517-00  apdb       ORCLCDB  2867715978  18874368    2022-05-17 19:09:47  ControlFile SPFILE  false        false       true
c-2867715978-20220517-01  apdb       ORCLCDB  2867715978  18874368    2022-05-17 19:16:26  ControlFile SPFILE  false        false       true

Total Storage: 8.37 GB



Conclusion:

By creating 3 different roles through the use of separate keys, I am able to provide a separation of duties on the OCI object store:

SECADMIN - This role creates/deletes buckets and controls retention. It cannot see any backup pieces and cannot delete any objects from the buckets, so it is isolated from the backup pieces themselves.

AUDITOR - This role is used to create reporting on the backups to ensure there are backup pieces available.

DBA - These roles are used to manage the individual backup pieces within the bucket, but they do not have the ability to delete the bucket or change the retention.

This provides a true separation of duties.