Thursday, May 26, 2022

ZFSSA offers versatile data protection

The latest release of ZFSSA software, OS8.8.45, includes file retention locking, which joins object retention lock and snapshot retention lock to provide both versatility and protection for your data.

Retention Lock on ZFSSA


 

3 types of retention lock


Legal Hold


You might need to preserve certain business data in response to potential or ongoing lawsuits. A legal hold does not have a defined retention period and remains in effect until removed.  Once the legal hold is removed, all protected data is immediately eligible for deletion unless other retention rules still apply.



NOTE: Both Data Governance and Regulatory Compliance can be used to help protect your data from cyber/ransomware attacks.


Data Governance


Data Governance locks data sets (snapshot, object, or file) for a period of time, protecting the data from deletion.  You might need to protect certain data sets as part of internal business process requirements, or protect data sets as part of your cyber protection strategy. Data Governance allows privileged users to adjust the retention strategy.



Regulatory Compliance


Your industry might require you to retain a certain class of data for a defined length of time. Your data retention regulations might also require that you lock the retention settings. Regulatory Compliance allows you only to increase the retention time, if it allows changes at all.  Regulatory Compliance is the most restrictive locking strategy and often does not allow anyone, even an administrator, to make changes affecting retention.



 

3 implementations of retention lock


Object storage

Object storage retention is managed through the OCI client tool, and object retention is enforced through the API. Current retention settings are applied to all objects when they are accessed.  Adding a rule takes effect immediately for all objects.

Administration of retention rules can be managed through the use of RSA certificates.  It is recommended to create a separation of duties between a security administrator and the object owner.

Retention on object storage is implemented in the following way based on the retention lock type.


Legal hold


Legal holds are implemented by placing an indefinite retention rule on a bucket.  Creating this rule ensures that all objects within the bucket cannot be deleted or changed; only new objects can be stored.
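
A minimal sketch of such a rule using the OCI CLI (the namespace, bucket, and rule names here are illustrative; omitting --time-amount is what makes the rule indefinite):

oci os retention-rule create --namespace-name mynamespace --bucket-name mybucket --display-name mybucket-legal-hold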



 

Data Governance


Data Governance is implemented by placing a time-bound retention rule on a bucket.  The rule sets a lock on all objects for a set length of time, and the rule can later be deleted. For cyber protection, it is recommended to implement this with a separation of duties.



 

Regulatory Compliance


Regulatory Compliance is implemented by placing a locked time-bound retention rule on a bucket with a grace period.  When a locked time-bound retention rule is created it immediately takes effect, but there is a grace period of at least 14 days before the rule becomes permanent, which allows you to test the rule. Once the grace period expires (defined by a specific date and time), the rule cannot be deleted, even by an administrator.
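
A sketch of creating such a rule with the OCI CLI (names and dates are illustrative; --time-rule-locked is the date the rule becomes permanent, and it must be at least 14 days in the future):

oci os retention-rule create --namespace-name mynamespace --bucket-name mybucket --time-amount 30 --time-unit days --time-rule-locked 2022-06-15T00:00:00Z --display-name mybucket-30-day-locked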



 

Snapshots


Snapshot locking is managed through the BUI or CLI.  Individual snapshots can be locked, and scheduled snapshots can be created and automatically locked.  Permission for controlling snapshot locking can be assigned to ZFSSA users, allowing you to create a separation of duties. Shares or projects cannot be removed if they contain locked snapshots.

Retention on snapshots is implemented in the following way based on the retention lock type.



Legal hold


Because snapshots only affect data that is on the project/share when the snapshot is taken, it is not possible to lock all new data as it is written.  Manual snapshots can be used to capture the content of a share as of the current time, which could suffice for a Legal Hold.  A manual snapshot can be created with a "retention lock" of UNLOCKED, creating a snapshot that cannot be removed. The only way to remove the snapshot is by changing the "retention lock" to OFF, unlocking it for deletion. This creates a hold on the current data for an indefinite period of time.  Permission for releasing the hold on the snapshot can be assigned to a specific individual account, allowing for a separation of duties.

 

Data Governance


Data governance of snapshots is handled through the use of scheduled locked snapshots and enabling the retention policy for scheduled snapshots.  A LOCKED schedule is created with both a retention setting and a "keep at most" setting. This allows you to keep a locked number of snapshots while automatically cleaning up snapshots that are past the retention number.  The snapshots within the retention number cannot be unlocked, and the schedule cannot be removed as long as there is data contained in the snapshots.

 


Regulatory Compliance


Regulatory compliance of snapshots is handled through the same method as Data Governance.  Snapshots cannot be removed while they are locked, and the schedule remains locked.

 

File Retention


File retention is set at the share or project level and controls updating and deletion of all data contained on the share/project.  A default file retention length is set, and all new files inherit the default setting in effect when the file is created. It is also possible to manually set the retention on a file, increasing the default setting inherited by the file.

 


Legal Hold


Legal Holds on files are implemented by manually increasing the retention on individual files.  Because a Legal Hold may be required for an indefinite period of time, it is recommended to periodically extend the retention on the files covered by the legal hold. This allows the files' retention to expire once the need for the Legal Hold has passed.
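
As a sketch of what extending retention on a single file can look like from an NFS client, assuming the appliance expresses a file's unlock timestamp through its access time (a common convention on retention-capable filers; the path and date here are illustrative):

# set the unlock timestamp (atime) out to the new hold date
touch -a -t 202512310000 /export/legalhold/contract.pdf
# on the initial lock, removing write permission commits the retention
chmod a-w /export/legalhold/contract.pdf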

 

Data Governance

Data governance is implemented by creating a NEW project and share with a file retention policy of privileged.  Privileged mode allows you to create a default retention setting for all new files and change that setting (longer or shorter) going forward.  Files inherit the retention setting in effect when they are created.  Retention can also be manually extended by changing the unlock timestamp.  Projects/shares cannot be deleted as long as they have locked files remaining on them.

 

Regulatory Compliance

Regulatory compliance is implemented by creating a NEW project and share with a file retention policy of mandatory (no override).  Mandatory mode does not allow you to decrease the default file retention. Retention can also be manually extended by changing the unlock timestamp. Regulatory Compliance uses the same mechanisms as Data Governance but is much more restrictive.  The project/share cannot be removed when locked files exist, and the storage pool cannot be removed when locked files exist within the pool. This mode also requires that an NTP server be utilized, and root is locked out of any remote access.

 

The best way to explore these new features is by using the ZFSSA image in OCI to test different scenarios.

Wednesday, May 18, 2022

File Retention Lock now available in ZFSSA OS8.8.45

File Retention Lock is introduced today in the much-awaited release of ZFSSA AK Software OS8.8.45 (aka 2013.1 Update 8.45).

ZFSSA retention lock



OS8.8.45 introduces File Retention to ZFSSA.

 File retention is controlled by a new system attribute timestamp for files that, once set, makes the file read-only and unable to be deleted. Once the date/time specified by that timestamp has passed and the retention has expired, the file may be deleted. No other modification is allowed, even after expiration.

In a filesystem with retention enabled, rename of directories is blocked unless the directory is empty. This is done to preserve the name of a file, including its path, so that its location cannot be hidden or any meaning conveyed by a changed path.

File retention enforces one of two policies, set at filesystem creation:

  • Privileged mode: Allows a process with the FILE_RETENTION_OVERRIDE privilege to override retention and delete files. This privilege does not allow files to be modified once retained.
  • Mandatory mode: No privilege or authorization allows deletion of a retained file until the retention timestamp has been surpassed. Mandatory mode's protection extends to the filesystem and pool in that they may not be destroyed until all retention on all files therein has expired. A mandatory-mode-protected filesystem also protects its ancestors and clone descendants from destruction.


NOTE: File retention must be enabled on the filesystem at creation before files can be retained, because in most settings taking away the ability to modify or delete a file would be undesirable behavior.

Tuesday, May 17, 2022

ZFS Object Store now offers detailed access control policies

Object Retention Rules is one of the new features that was released in ZFSSA version 8.8.36.  Before I talk about Object Retention Rules on buckets on ZFSSA, I am going to go over how to leverage the new access control policies that go along with managing objects, buckets, and retention.

User Architecture


If you have followed my previous postings on configuring ZFSSA as an object store, you found that one of the options available is to configure ZFSSA as an OCI Gen 2 (sometimes called OCI native) object store.
When configuring this API interface on ZFSSA, the authentication utilizes the same public/private key concept that is used in most of the Oracle Cloud.

If you want to read my post on configuring authentication you can find it here.

What I want to go through in this post is how you can configure a set of user roles on ZFSSA with different permissions based on public/private keys.

This will help you isolate and secure backups that were sent from multiple sources, and allow you to define both a security administrator (to apply retention policies) and an auditor (to view the existence of backups without the ability to delete or update them).

In the "User Architecture" diagram at the beginning of this post you see that I have defined 5 user roles  that will be used to manage the object store security for the backups.

Users:

  • SECADMIN - This user role is the security administrator for all 3 object store backups, and all 3 buckets.  This user role is responsible for creating, deleting, and assigning retention rules to the buckets.
  • AUDITOR - This user role reviews the backups and has a read-only view of all 3 backups. The auditor cannot delete or update any objects, but can view the existence of the backup pieces.
  • GLUSER - This user controls the backups for GLDB only.
  • APUSER - This user controls the backups for APDB only.
  • DWUSER - This user controls the backups for DWDB only.
NOTE: Because the Object Store API controls the access to objects in the bucket, all access to objects in a bucket is through the bucket owner. I can have multiple buckets on the same share, managed by different users, but access WITHIN a bucket is only granted to the bucket owner.

Based on the above note, I am going to create 3 users to manage the buckets for the 3 database backups.
The 2 additional user roles, SECADMIN and AUDITOR, are going to control their access through the use of RSA keys.

Because I am not going to use pre-authenticated URLs for my backups (which require a login), all 3 users are going to be created as "no-login" users.  Below is an example of creating the APUSER.





I created all 3 users as no-login users




Project/Share for Object Storage


Now I am going to create a project and share to store the backup pieces for all 3 databases.  The project is going to be "dbbackups" and the share is going to be "dbbackups".  I am going to set the default user for the share to "oracle" and I am also going to grant the other 3 users "Full Control" of the share. I will later limit the permissions for these users.


Share User Access


User certificates:


Authentication to the object store is through the use of RSA public/private certificates.
For each user/role I created a certificate that will be used for authentication. 
The following table shows the users/roles and the fingerprint that identifies them.
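
For reference, here is one common way to generate such a key pair and compute its fingerprint (file names are illustrative):

# generate the private key and extract the public key to upload
openssl genrsa -out secadmin.ppk 2048
openssl rsa -pubout -in secadmin.ppk -out secadmin_public.pem
# compute the colon-separated fingerprint that identifies the key
openssl rsa -pubout -outform DER -in secadmin.ppk | openssl md5 -c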




Authentication:


Within the OCI service on the ZFSSA I combine the user and key (fingerprint) to provide the role.


First I will add the SECADMIN role.  Notice that I am adding this user's access to all 3 database backup "users".  This will allow the SECADMIN role to manage bucket creation/deletion and retention for the individual buckets.  The role SECADMIN is accessed through the key.

I will start by adding the key owned by this role (SECADMIN) to the 3 users APUSER, GLUSER and DWUSER.



Now that I have the SECADMIN role assigned to the 3 users, I want to set the proper capabilities for this role.  I click on the pencil to edit the key configuration, and I can see the permissions assigned to this user/key combination.  I want to allow SECADMIN the ability to create buckets, delete buckets, and control the retention within the 3 users' buckets.  This role will need the ability to read the bucket.  Notice that this role does not have the ability to read any of the objects within the bucket.





Now I am going to move on to the AUDITOR role.  This role will be configured using the AUDITOR key assigned to all 3 users.  Within each user, the AUDITOR will be granted the ability to read the bucket and the objects, but not to make any changes.


I now have both the SECADMIN role and the AUDITOR role defined for all 3 users. Below is what is configured within the OCI service. Notice that there are 2 keys set for each user, and there are 2 unique keys (one for SECADMIN and one for AUDITOR).


Finally, I am going to add the 3 users that own the buckets and grant them access to create objects, but not to control the retention or add/remove buckets.



Once completed with adding users/keys I have my 2 roles defined and assigned to each user, and I have an individual key for each user/backup.

When completed, the chart below shows the permissions for each user/role.



OCI CLI configuration:


I added entries to the ~/.oci/config file for each of the users/roles configured for the service.
Below is an example entry for the SECADMIN role with the APDB bucket.

[SECADMIN_APDB]
user=ocid1.user.oc1..apuser
fingerprint=0a:35:21:1b:5c:eb:09:8c:e9:44:42:f2:7c:b5:bc:f6
key_file=~/keys/secadmin.ppk
tenancy=ocid1.tenancy.oc1..nobody
region=us-phoenix-1
endpoint=http://150.136.215.19
os.object.bucket-name=apdb
namespace-name=dbbackups
compartment-id=dbbackups


Below is a table of the entries that I added to the config file.




Creating buckets:


Now I am going to create my 3 buckets using the SECADMIN role. Below is an example of adding the bucket for APDB

[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile SECADMIN_APDB    --name apdb  --compartment-id dbbackups
{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "auto-tiering": null,
    "compartment-id": "dbbackups",
    "created-by": "apuser",
    "defined-tags": null,
    "etag": "2f0b55dbbb925ebbaabbc37e3ce342fa",
    "freeform-tags": null,
    "id": "2f0b55dbbb925ebbaabbc37e3ce342fa",
    "is-read-only": null,
    "kms-key-id": null,
    "metadata": null,
    "name": "apdb",
    "namespace": "dbbackups",
    "object-events-enabled": null,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": null,
    "storage-tier": "Standard",
    "time-created": "2022-05-17T17:55:49+00:00",
    "versioning": "Disabled"
  },
  "etag": "2f0b55dbbb925ebbaabbc37e3ce342fa"
}


I then did the same thing for the GLDB bucket using SECADMIN_GLDB, and the DWDB bucket using SECADMIN_DWDB.

Once the buckets were created, I attempted to create buckets with both the AUDITOR role and the DB role.  You can see below that neither of these configurations had the correct privileges.

[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile AUDITOR_APDB    --name apdb  --compartment-id dbbackups
ServiceError:
{
    "code": "BucketNotFound",
    "message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
    "opc-request-id": "tx3a37f1dee0cc4778a1201-006283e2a1",
    "status": 404
}
[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile APDB    --name apdb  --compartment-id dbbackups
ServiceError:
{
    "code": "BucketNotFound",
    "message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
    "opc-request-id": "tx46435ae6b8234982b3fbd-006283e2a9",
    "status": 404
}



Listing buckets:

All of the entries I created have access to view the buckets.  Below is an example of SECADMIN_APDB listing buckets. You can see that I have 3 buckets, each owned by the correct user.
[oracle@oracle-19c-test-tde keys]$ oci os bucket list --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile SECADMIN_APDB    --compartment-id dbbackups

{
  "data": [
    {
      "compartment-id": "dbbackups",
      "created-by": "apuser",
      "defined-tags": null,
      "etag": "2f0b55dbbb925ebbaabbc37e3ce342fa",
      "freeform-tags": null,
      "name": "apdb",
      "namespace": "dbbackups",
      "time-created": "2022-05-17T17:55:49+00:00"
    },
    {
      "compartment-id": "dbbackups",
      "created-by": "dwuser",
      "defined-tags": null,
      "etag": "866ded83e5ea2a29c66dca0d01036f0e",
      "freeform-tags": null,
      "name": "dwdb",
      "namespace": "dbbackups",
      "time-created": "2022-05-17T17:58:32+00:00"
    },
    {
      "compartment-id": "dbbackups",
      "created-by": "gluser",
      "defined-tags": null,
      "etag": "2169cf94f86009f66ca8770c1c58febb",
      "freeform-tags": null,
      "name": "gldb",
      "namespace": "dbbackups",
      "time-created": "2022-05-17T17:58:17+00:00"
    }
  ]
}


Configuring retention lock:


Here is the documentation on how to configure retention lock for the objects within a bucket. For my example, I am going to lock all objects for 30 days.  I am going to use the SECADMIN_APDB account to lock the objects in the apdb bucket.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile SECADMIN_APDB --bucket-name apdb --time-amount 30  --time-unit days --display-name APDB-30-day-Bound-backups
{
  "data": {
    "display-name": "APDB-30-day-Bound-backups",
    "duration": {
      "time-amount": 30,
      "time-unit": "DAYS"
    },
    "etag": "2c9ab8ff9c4743392d308365d9f72e05",
    "id": "2c9ab8ff9c4743392d308365d9f72e05",
    "time-created": "2022-05-17T18:49:24+00:00",
    "time-modified": "2022-05-17T18:49:24+00:00",
    "time-rule-locked": null
  }
}


Now I am going to make sure my AUDITOR role and my BACKUP role do not have privileges to manage retention. For both of these I get an error.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb --time-amount 30  --time-unit days --display-name APDB-30-day-Bound-backups
ServiceError:
{
    "code": "BucketNotFound",
    "message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
    "opc-request-id": "tx52e8849aa6444c639d59b-006283ee99",
    "status": 404
}

I set the retention rule for the other buckets, and now I can use the AUDITOR accounts to list the retention rules.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_APDB --bucket-name apdb
oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb
oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_DWDB --bucket-name dwdb


{
  "data": {
    "items": [
      {
        "display-name": "APDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "2c9ab8ff9c4743392d308365d9f72e05",
        "id": "2c9ab8ff9c4743392d308365d9f72e05",
        "time-created": "2022-05-17T18:49:24+00:00",
        "time-modified": "2022-05-17T18:49:24+00:00",
        "time-rule-locked": null
      }
    ]
  }
}
[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb
{
  "data": {
    "items": [
      {
        "display-name": "GLDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "ee0d6114310a9971f5a464b428916e48",
        "id": "ee0d6114310a9971f5a464b428916e48",
        "time-created": "2022-05-17T18:56:45+00:00",
        "time-modified": "2022-05-17T18:56:45+00:00",
        "time-rule-locked": null
      }
    ]
  }
}
[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_DWDB --bucket-name dwdb
{
  "data": {
    "items": [
      {
        "display-name": "DWDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "96cc109a7308d5f849541be72d87757a",
        "id": "96cc109a7308d5f849541be72d87757a",
        "time-created": "2022-05-17T18:57:42+00:00",
        "time-modified": "2022-05-17T18:57:42+00:00",
        "time-rule-locked": null
      }
    ]
  }
}


Sending backups to buckets:

Here is the link to the "archive to cloud" section of the latest ZDLRA documentation.  The buckets are added as cloud locations.  Since I am going to be using an immutable bucket, I also need to add a metadata bucket to match the normal backup bucket. The metadata bucket holds temporary objects that get removed as the backup is written.
I created 3 additional buckets: "apdb_meta", "gldb_meta", and "dwdb_meta".
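
For example, the apdb metadata bucket can be created following the same pattern as the earlier bucket creation (a sketch re-using the SECADMIN_APDB profile):

oci os bucket create --namespace-name dbbackups --endpoint http://150.136.215.19 --config-file ~/.oci/config --profile SECADMIN_APDB --name apdb_meta --compartment-id dbbackups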
When I configure the Cloud Location I want to use the keys I created to send the backups.

The backup pieces were sent using the keys for apuser, gluser, and dwuser.

I used the process in the documentation to send the backup pieces from the ZDLRA.

Audit Backups:


Now that I have backups created for my database, I am going to use the AUDITOR role to report on what's available within the apdb bucket.

First I am going to look at the Retention Settings.


[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_APDB --bucket-name apdb
{
  "data": {
    "items": [
      {
        "display-name": "APDB-30-day-Bound-backups",
        "duration": {
          "time-amount": 30,
          "time-unit": "DAYS"
        },
        "etag": "2c9ab8ff9c4743392d308365d9f72e05",
        "id": "2c9ab8ff9c4743392d308365d9f72e05",
        "time-created": "2022-05-17T18:49:24+00:00",
        "time-modified": "2022-05-17T18:49:24+00:00",
        "time-rule-locked": null
      }
    ]
  }
}


Now I am going to print out all the backups that exist for the APDB database.
I am using the Python script that comes with the Cloud Backup Library; instructions for how to use it can be found in my blog here.
 
Below I am running the script. Notice I am running it using the AUDITOR role.

[oracle@oracle-19c-test-tde ~]$ python2  /home/oracle/ociconfig/lib/odbsrmt.py --mode report --ocitype bmc  --host http://150.136.215.19 --dir /home/oracle/keys/reports --base apdbreport --pvtkeyfile  /home/oracle/keys/auditor.ppk --pubfingerprint a8:31:78:c2:b4:4f:44:93:bd:4f:f1:72:1c:37:c8:86 --tocid ocid1.tenancy.oc1..nobody --uocid ocid1.user.oc1..apuser --container apdb  --dbid 2867715978
odbsrmt.py: ALL outputs will be written to [/home/oracle/keys/reports/apdbreport12193.lst]
odbsrmt.py: Processing container apdb...
cloud_slave_processors: Thread Thread_0 starting to download metadata XML files...
cloud_slave_processors: Thread Thread_0 successfully done
odbsrmt.py: ALL outputs have been written to [/home/oracle/keys/reports/apdbreport12193.lst]

And finally I can see the report that is created by this script.


FileName
Container                Dbname         Dbid        FileSize          LastModified                BackupType                  Incremental  Compressed   Encrypted
870toeq3_263_1_1
apdb                     ORCLCDB        2867715978  1285029888        2022-05-17 19:09:45         Datafile                    true         false        true   
890toetk_265_1_1
apdb                     ORCLCDB        2867715978  2217476096        2022-05-17 19:12:17         ArchivedLog                 false        false        true   
8a0tof0j_266_1_1
apdb                     ORCLCDB        2867715978  2790260736        2022-05-17 19:14:15         Datafile                    true         false        true   
8b0tof4g_267_1_1
apdb                     ORCLCDB        2867715978  2124677120        2022-05-17 19:15:52         Datafile                    true         false        true   
8c0tof7f_268_1_1
apdb                     ORCLCDB        2867715978  536346624         2022-05-17 19:16:21         Datafile                    true         false        true   
8d0tof89_269_1_1
apdb                     ORCLCDB        2867715978  262144            2022-05-17 19:16:25         ArchivedLog                 false        false        true   
c-2867715978-20220517-00
apdb                     ORCLCDB        2867715978  18874368          2022-05-17 19:09:47         ControlFile SPFILE          false        false        true   
c-2867715978-20220517-01
apdb                     ORCLCDB        2867715978  18874368          2022-05-17 19:16:26         ControlFile SPFILE          false        false        true   
Total Storage: 8.37 GB



Conclusion:

By creating 3 different roles through the use of separate keys, I am able to provide a separation of duties on the OCI object store.

SECADMIN - This user role creates/deletes buckets and controls retention. It cannot see any backup pieces, and it cannot delete any objects from the buckets. This role is isolated from the backup pieces themselves.

AUDITOR - This user role is used to create reporting on the backups to ensure there are backup pieces available.

DBA - These user roles are used to manage the individual backup pieces within the bucket, but they do not have the ability to delete the bucket or change the retention.

This provides a true separation of duties.



Wednesday, April 27, 2022

Recovery Continuity with Multitenant

Recovery Continuity with Multitenant is something you need to understand as you migrate databases from one CDB to another.






Above is from a presentation that I recently gave to my internal Oracle team. It was such a big hit (and very eye-opening) that I wanted to make sure I shared the information on my blog.


The first thing to point out is that before we (Oracle) moved to the multitenant architecture, life was simple. Below is my slide showing how databases moved around as they were upgraded.  Regardless of whether it was an out-of-place upgrade or a migration to a different host, the DB name stayed the same, and the backups stayed contiguous.


But, like many things in life, new ideas came along that changed the way we do things.  Multitenant is one of those things.  Don't get me wrong, multitenant is a great feature giving DBAs a lot more flexibility.  Below are a couple of pictures that show all the wonderful things that multitenant can do.




Above are the 2 slides from my presentation.  These slides are often used to show the benefits of multitenant.  On the last slide I did point out the encryption keys that are used to secure the database with TDE.

The use of encryption keys is an important point to think about.  With multitenant (if you think about how it works), the CDB has a different encryption key from the PDB.  If I create an encrypted backup of my CDB, it is encrypted with the CDB key. The backup (and the actual datafiles) for my PDB is encrypted with the PDB key.
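
This also means the PDB's keys have to travel with it. A minimal sketch of exporting a PDB's keys before the unplug and importing them after the plug (the PDB name, file path, and passwords are illustrative):

############################ in the source PDB, before unplugging
SQL> alter session set container=MYPDB;
SQL> administer key management export encryption keys with secret "transport_secret" to '/tmp/mypdb_keys.p12' force keystore identified by "keystore_pwd";

############################ in the target PDB, after plugging in
SQL> alter session set container=MYPDB;
SQL> administer key management import encryption keys with secret "transport_secret" from '/tmp/mypdb_keys.p12' force keystore identified by "keystore_pwd" with backup;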

Below is the next slide I used.  All the information on multitenant talks about how easy it is to unplug/plug (which it is), but ensuring you maintain your recovery window is the hard part.



Database backup and recovery in a multitenant environment

Here are some things to keep in mind in a multitenant environment:

  • Pluggable database backup pieces are ALWAYS kept independent of the CDB and other PDBs.  Even with filesperset=1000 and a single channel, each PDB and the CDB will be in separate backup sets.
  • Pluggable databases can be backed up independently of each other, and of the CDB: "backup pluggable database xxx".
  • You can perform a point-in-time recovery of a pluggable database independent of other PDBs. This requires local undo: "recover pluggable database until".
  • Recovering a pluggable database requires a backup of the CDB (for metadata) and backups of the archive logs.
  • All redo transactions for all PDBs are intertwined into a single redo stream. This will not change in the near future.
  • Flashback can be set at the PDB level.
  • You can create restore points within a PDB.
When backing up a multitenant environment, the key thing to keep in mind is that the RMAN catalog information is stored at the CDB level.  Pluggable databases are part of the CDB, and registration is done at the CDB.
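
To make the quoted commands concrete, here is a minimal sketch of backing up a single PDB and then recovering it to a point in time (the PDB name apdb and the timestamp are illustrative, and the point-in-time recovery assumes local undo is enabled):

RMAN> backup pluggable database apdb;

RMAN> alter pluggable database apdb close;
RMAN> run {
        set until time "to_date('2022-05-17 12:00:00','YYYY-MM-DD HH24:MI:SS')";
        restore pluggable database apdb;
        recover pluggable database apdb;
      }
RMAN> alter pluggable database apdb open resetlogs;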


The next image shows what a recovery of the Pluggable database looks like. Keep in mind that the datafiles for the  pluggable database get restored using the pluggable database backup, but to defuzzy them, the archive logs get restored from the CDB.  Remember that in a multitenant environment the redo/archive logs are intertwined at the CDB level.


The next image shows what is typically done to perform a PDB upgrade with unplug/plug. The pluggable database is migrated from 12c to 19c.


Now that the database is migrated, let's look at what happens to the RMAN catalog after the migration to ensure that we have a backup of the pluggable database.



You can see in the image above that the pluggable database is now associated with the CDB it is plugged into.

Now to go back to the image at the beginning of this post, you can see what it takes to restore and recover the database throughout its lifecycle.

  • Backups that were taken through previous CDBs (for example, an archival backup) need to be restored through the CDB they were backed up through.
  • Backups that were taken in the original CDB can only be restored back to the original CDB.
  • Pre-plugin backups provide a gateway between plugging in and when the first backup is taken.
  • Backups taken in the new CDB will be restored back to the new CDB.



Finally some parting thoughts on backups of pluggable databases when migrating.

  • Perform a full backup if possible (ZDLRA makes this easy) with the PDB mounted prior to unplugging. This is the best possible restore point after migrating.
  • Keep the RMAN catalog entries for the old CDB as long as there are valid backup pieces. This could be years for keep backups.
  • NOTE – On the ZDLRA you can execute "Pause Database"; this will remove all backups but leave the RMAN catalog entries.
  • Ensure you have the encryption keys for both CDBs and PDBs for the needed recovery window, which may be years.
  • Keep track of CDB backups, as a PDB might be migrated between multiple CDBs throughout its backup cycle.
  • NEVER delete a CDB backup that contains needed backups.
  • NEVER delete any TDE keys or wallets that support needed backups.

Friday, April 15, 2022

Recovery Continuity of your Oracle Database

"Recovery Continuity" should be a critical part of your Oracle Database support plan.
As multitenant Oracle Databases becomes the standard for database implementations, you need to ensure that you maintain your recovery window even as your pluggable moves around your environment.

Above is the recommended practice we have all been hearing about to make upgrades of your Oracle Database easier: unplug from your current CDB (CDBPROD122) and plug into a new CDB (CDB19C) that has the new release.  What you need to think about, however, is how you are going to ensure that you can recover your pluggable database to any point in time, all the way through this migration, without a huge amount of downtime.

This is where preplugin backups, and some planning, come into play.
You can find out more about preplugin backups with some of the links below.
Let's take a look at what I am doing for my pluggable database PDBDWPROD before I migrate it from OLDCDB to NEWCDB.

Pre-unplug


In the picture above, PDBDWPROD is plugged into CDBPROD122.

In my test environment, my PDB (PDBDWPROD) is plugged into OLDCDB and will be migrating to NEWCDB.

To ensure that I have a good restore point, I am going to perform a full backup of my pluggable database prior to unplugging, and I will also include an archive log backup.

RMAN> backup incremental level 0 pluggable database PDBDWPROD plus archivelog delete input;


Starting backup at 15-APR-22
current log archived
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=426 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=185 RECID=186 STAMP=1102096068
input archived log thread=1 sequence=186 RECID=187 STAMP=1102096071
input archived log thread=1 sequence=187 RECID=188 STAMP=1102096147
input archived log thread=1 sequence=188 RECID=189 STAMP=1102096166
input archived log thread=1 sequence=189 RECID=190 STAMP=1102096288
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6l0r19t1_213_1_1 tag=TAG20220415T175129 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:03
channel ORA_SBT_TAPE_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_185_k5mcy4w8_.arc RECID=186 STAMP=1102096068
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_186_k5mcy7xv_.arc RECID=187 STAMP=1102096071
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_187_k5md0m8l_.arc RECID=188 STAMP=1102096147
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_188_k5md16rt_.arc RECID=189 STAMP=1102096166
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_189_k5md50qy_.arc RECID=190 STAMP=1102096288
Finished backup at 15-APR-22

Starting backup at 15-APR-22
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting compressed incremental level 0 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00066 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/sysaux01.dbf
input datafile file number=00065 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/system01.dbf
input datafile file number=00068 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/PDBDWPROD.dbf
input datafile file number=00067 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/undotbs01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6m0r19t5_214_1_1 tag=TAG20220415T175132 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:15
Finished backup at 15-APR-22

Starting backup at 15-APR-22
current log archived
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=190 RECID=191 STAMP=1102096309
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6n0r19tm_215_1_1 tag=TAG20220415T175150 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:03
channel ORA_SBT_TAPE_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_190_k5md5osn_.arc RECID=191 STAMP=1102096309
Finished backup at 15-APR-22

Starting Control File and SPFILE Autobackup at 15-APR-22
piece handle=c-1180802953-20220415-07 comment=API Version 2.0,MMS Version 23.0.0.1
Finished Control File and SPFILE Autobackup at 15-APR-22

RMAN>


Then, right before the unplug, I am going to execute another archive log backup, immediately followed by the unplug.

RMAN> backup archivelog all delete input;

Starting backup at 15-APR-22
current log archived
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=442 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=191 RECID=192 STAMP=1102096412
input archived log thread=1 sequence=192 RECID=193 STAMP=1102096418
input archived log thread=1 sequence=193 RECID=194 STAMP=1102096424
input archived log thread=1 sequence=194 RECID=195 STAMP=1102096502
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6p0r1a3n_217_1_1 tag=TAG20220415T175503 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:07
channel ORA_SBT_TAPE_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_191_k5md8w57_.arc RECID=192 STAMP=1102096412
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_192_k5md926n_.arc RECID=193 STAMP=1102096418
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_193_k5md9893_.arc RECID=194 STAMP=1102096424
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_194_k5mdcpko_.arc RECID=195 STAMP=1102096502
Finished backup at 15-APR-22

Starting Control File and SPFILE Autobackup at 15-APR-22
piece handle=c-1180802953-20220415-08 comment=API Version 2.0,MMS Version 23.0.0.1
Finished Control File and SPFILE Autobackup at 15-APR-22



Then the unplug

SQL>  alter pluggable database PDBDWPROD  close immediate;

Pluggable database altered.

SQL> ALTER PLUGGABLE DATABASE PDBDWPROD  UNPLUG INTO '/tmp/PDBDWPROD.xml';

Pluggable database altered.

SQL>


Plug


SQL>  create pluggable database PDBDWPROD using  '/tmp/PDBDWPROD.xml' nocopy tempfile reuse KEYSTORE IDENTIFIED BY "change-on-install" ;
Pluggable database created.

SQL> alter pluggable database PDBDWPROD open;
Pluggable database altered.

Update database and set restore point


Now I am going to create some objects in my PDB, set a restore point, and then create a few more objects to ensure I am restoring to a point in time.
SQL>  alter session set container=PDBDWPROD;

Session altered.

SQL> create table bgrenn.postmove as select * from dba_objects ;

Table created.

############################ perform a couple of log switches

SQL>  alter session set container=CDB$ROOT;
Session altered.

SQL> alter system archive log current;
System altered.

SQL> alter system archive log current;
System altered.

SQL>  alter session set container=PDBDWPROD;
Session altered.

############################ create a restore point

SQL> create restore point PDBDWPROD_restore;
Restore point created.

############################  create a second table

SQL> create table bgrenn.postrestorepoint as select * from dba_objects ;
Table created.

############################ perform a couple of log switches

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> alter system archive log current;
System altered.

SQL> alter system archive log current;
System altered.

SQL> alter system archive log current;
System altered.



Backups available post plugin

Now, using the preplugin commands, I can see the backups that were taken before the migration.

RMAN> SET PREPLUGIN CONTAINER=PDBDWPROD;
RMAN> list preplugin backup of pluggable database PDBDWPROD;

 

RMAN>  list preplugin backup of pluggable database PDBDWPROD;

starting full resync of recovery catalog
full resync complete

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
209     Incr 0  284.50M    SBT_TAPE    00:00:07     15-APR-22
        BP Key: 209   Status: AVAILABLE  Compressed: YES  Tag: TAG20220415T175132
        Handle: 6m0r19t5_214_1_1   Media: objectstorage.us-ashburn-1.oraclecloud.com/n/xxx/oldcdb
  List of Datafiles in backup set 209
  Container ID: 5, PDB Name: PDBDWPROD
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  59   0  Incr 6346380    15-APR-22              NO    /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/system01.dbf
  60   0  Incr 6346380    15-APR-22              NO    /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/sysaux01.dbf
  61   0  Incr 6346380    15-APR-22              NO    /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/undotbs01.dbf
  62   0  Incr 6346380    15-APR-22              NO    /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/PDBDWPROD.dbf


list preplugin backup of archivelog all;

List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
208     2.25M      SBT_TAPE    00:00:01     15-APR-22
        BP Key: 208   Status: AVAILABLE  Compressed: YES  Tag: TAG20220415T175129
        Handle: 6l0r19t1_213_1_1   Media: objectstorage.us-ashburn-1.oraclecloud.com/n/xxx/oldcdb

  List of Archived Logs in backup set 208
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    185     6345022    15-APR-22 6345387    15-APR-22
  1    186     6345387    15-APR-22 6345399    15-APR-22
  1    187     6345399    15-APR-22 6345803    15-APR-22
  1    188     6345803    15-APR-22 6345912    15-APR-22
  1    189     6345912    15-APR-22 6346322    15-APR-22

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
210     256.00K    SBT_TAPE    00:00:00     15-APR-22
        BP Key: 210   Status: AVAILABLE  Compressed: YES  Tag: TAG20220415T175150
        Handle: 6n0r19tm_215_1_1   Media: objectstorage.us-ashburn-1.oraclecloud.com/n/xxx/oldcdb

  List of Archived Logs in backup set 210
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    190     6346322    15-APR-22 6346391    15-APR-22

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
212     512.00K    SBT_TAPE    00:00:02     15-APR-22
        BP Key: 212   Status: AVAILABLE  Compressed: YES  Tag: TAG20220415T175503
        Handle: 6p0r1a3n_217_1_1   Media: objectstorage.us-ashburn-1.oraclecloud.com/n/id20skavsofo/oldcdb

  List of Archived Logs in backup set 212
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    191     6346391    15-APR-22 6346585    15-APR-22
  1    192     6346585    15-APR-22 6346593    15-APR-22
  1    193     6346593    15-APR-22 6346601    15-APR-22
  1    194     6346601    15-APR-22 6346663    15-APR-22



Restore from preplugin

I shut down my pluggable database and include "from preplugin" in the commands in my RMAN session.

RMAN> alter pluggable database PDBDWPROD close;
RMAN> restore pluggable database PDBDWPROD from preplugin;

RMAN> alter pluggable database PDBDWPROD close;

Statement processed
starting full resync of recovery catalog
full resync complete

RMAN> restore pluggable database PDBDWPROD   from preplugin;

Starting restore at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00059 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/system01.dbf
channel ORA_SBT_TAPE_1: restoring datafile 00060 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/sysaux01.dbf
channel ORA_SBT_TAPE_1: restoring datafile 00061 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/undotbs01.dbf
channel ORA_SBT_TAPE_1: restoring datafile 00062 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/PDBDWPROD.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 6m0r19t5_214_1_1
channel ORA_SBT_TAPE_1: piece handle=6m0r19t5_214_1_1 tag=TAG20220415T175132
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
Finished restore at 15-APR-22


Recover from preplugin

Now I am going to run the recover from preplugin.

RMAN> recover pluggable database PDBDWPROD from preplugin;

Starting recover at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=190
channel ORA_SBT_TAPE_1: reading from backup piece 6n0r19tm_215_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/15/2022 18:23:58
ORA-19870: error while restoring backup piece 6n0r19tm_215_1_1
ORA-19827: Restoring preplugin files to a recovery area is not supported.

RMAN>


You can see that it is not going to let me apply the archive logs by restoring them from backup to the local recovery area of my new CDB.

I need to catalog the archive logs themselves by restoring them.

By looking at the backup piece name, I can see it is looking for sequence 190, so I restored it from my original CDB.


RMAN> restore archivelog sequence 190;

Starting restore at 15-APR-22
starting full resync of recovery catalog
full resync complete
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=449 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=71 device type=DISK

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=190
channel ORA_SBT_TAPE_1: reading from backup piece 6n0r19tm_215_1_1
channel ORA_SBT_TAPE_1: piece handle=6n0r19tm_215_1_1 tag=TAG20220415T175150
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
Finished restore at 15-APR-22

RMAN> list archivelog sequence 190;

List of Archived Log Copies for database with db_unique_name OLDCDB
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
8134    1    190     A 15-APR-22
        Name: /u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_190_k5mgd1h2_.arc




Now I need to catalog it as a preplugin backup to continue the recovery.
I am able to copy the restored archive log to /tmp and catalog it, but I am still missing some pieces. I will continue restoring the rest of the archive logs in the listing, up to sequence 194.
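
For reference, the copy-and-catalog step looks like this (a sketch; the /tmp copy of the restored sequence 190 log uses the file name from the listing above):

cp /u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_190_k5mgd1h2_.arc /tmp/
RMAN> catalog preplugin archivelog '/tmp/o1_mf_1_190_k5mgd1h2_.arc';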





Now that I have restored and cataloged all the backup pieces up to sequence 194, I will continue the recovery.
RMAN>  recover pluggable database PDBDWPROD   from preplugin;

Starting recover at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 191 is already on disk as file /tmp/o1_mf_1_191_k5mgw2fr_.arc
archived log for thread 1 with sequence 192 is already on disk as file /tmp/o1_mf_1_192_k5mgw83q_.arc
archived log for thread 1 with sequence 193 is already on disk as file /tmp/o1_mf_1_193_k5mgwlf8_.arc
archived log for thread 1 with sequence 194 is already on disk as file /tmp/o1_mf_1_194_k5mgx0t1_.arc
unable to find archived log
archived log thread=1 sequence=195
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/15/2022 18:40:31
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 195 and starting SCN of 6346663



I am finding that there is still one last archive log (hopefully). This was the redo log that was active when I unplugged my database.
In fact, I can see on the source CDB that it is still the active redo log, so I am going to have to do a log switch to grab a copy of the archive log and catalog it.

SQL> select sequence#,status from v$log;

 SEQUENCE# STATUS
---------- ----------------
       195 CURRENT
       193 INACTIVE
       194 INACTIVE
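
The steps I used are sketched below (the /tmp file name matches the one the recovery finds next; generated names will differ in your environment):

############################ on the source CDB (OLDCDB), archive the current log so sequence 195 exists
SQL> alter system archive log current;

############################ back up and restore sequence 195, connected to OLDCDB
RMAN> backup archivelog sequence 195;
RMAN> restore archivelog sequence 195;

############################ copy the restored log to /tmp and catalog it in the new CDB
RMAN> catalog preplugin archivelog '/tmp/o1_mf_1_195_k5mhf79h_.arc';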


Now that I have the last archive log, my preplugin recovery is complete up to the time the database was unplugged.

RMAN> recover pluggable database PDBDWPROD   from preplugin;

Starting recover at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 195 is already on disk as file /tmp/o1_mf_1_195_k5mhf79h_.arc
media recovery complete, elapsed time: 00:00:01
Finished recover at 15-APR-22



Recover post plugin

Now I can recover to my restore point and open the pluggable database.


RMAN>

RMAN> recover pluggable database PDBDWPROD  until restore point PDBDWPROD_restore;

Starting recover at 15-APR-22
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=4 device type=DISK


starting media recovery

archived log for thread 1 with sequence 101 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_101_k5mf0rr5_.arc
archived log for thread 1 with sequence 102 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_102_k5mf0s8b_.arc
archived log for thread 1 with sequence 103 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_103_k5mf1wof_.arc
archived log for thread 1 with sequence 104 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_104_k5mf1zqm_.arc
archived log for thread 1 with sequence 105 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_105_k5mf22sk_.arc
archived log for thread 1 with sequence 106 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_106_k5mf91fn_.arc
archived log for thread 1 with sequence 107 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_107_k5mf94g2_.arc
archived log for thread 1 with sequence 108 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_108_k5mf9wk2_.arc
media recovery complete, elapsed time: 00:00:01
Finished recover at 15-APR-22

RMAN> alter pluggable database PDBDWPROD open resetlogs;

Statement processed
starting full resync of recovery catalog
full resync complete



And let's make sure it was recovered to my restore point: only the table created before the restore point should exist, and I can still see the data in it.

SQL>  alter session set container=PDBDWPROD;

Session altered.

SQL> select table_name from dba_tables where owner='BGRENN';

TABLE_NAME
--------------------------------------------------------------------------------
POSTMOVE

SQL> select count(1) from bgrenn.postmove;

  COUNT(1)
----------
     73610




Conclusion:


Preplugin backups provide you with Recovery Continuity, ensuring you can recover your pluggable database after migrating to a new CDB, even before you take your first backup.  As you can tell from my example, you want to take the backup as close as possible to the point in time of the unplug, to lessen the work of cataloging and applying the archive logs.  I would also recommend taking a backup on the new CDB as soon as possible.