Monday, December 23, 2019

Oracle - Alternate Destinations, FRA, and balancing 2 destinations


My previous post was on using Groups/Priorities when specifying an alternate destination for sending Redo to a remote destination. You can find it here.

This post covers 2 topics that come up whenever I talk about alternate destinations.

  • Does it balance 2 destinations with the same group/priority? And once it chooses a destination, does it remain sticky to that destination and ignore other destinations with the same "group/priority"?
  • Does my FRA know both destinations can be considered the same for "shipped" status?


First I will show you how I configured my tests.

In my example, I actually have both a standby database and a Far Sync instance for my primary database.

bsg18   --> Primary database
bsg18d --> Dataguard database
bsg18f  --> Far Sync database

This is what my 2 remote destinations look like in my configuration.


log_archive_dest_2 ='service="bsg18d" ASYNC NOAFFIRM max_failure=0 db_unique_name="bsg18d" group=1 priority=1 valid_for=(online_logfile,all_roles)';

log_archive_dest_3 ='Service="bsg18f" SYNC AFFIRM  max_failure=1 db_unique_name="bsg18f" group=1 priority=1 valid_for=(online_logfile,all_roles)';


You can see that both Dataguard (bsg18d) and Far Sync (bsg18f) are in "group=1 priority=1"

Multiple Destinations with the same Group and Priority - Does it balance them ?


First I'm going to look at where the logs are going.
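The post doesn't show the query that produced the output below; the columns look like V$ARCHIVED_LOG with shortened aliases, so something along these lines (the aliases are my guess) would reproduce it:

select dest_id,
       sequence#,
       status s,
       archived shi,
       applied,
       deleted del,
       fal
  from v$archived_log
 order by sequence#, dest_id;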

   DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES NO        NO  NO
         2       1752 A YES NO        NO  NO
         2       1753 A YES NO        NO  NO


I can see that they are going to DEST_2, which is my dataguard instance.  I am going to shut it down, do a few log switches and see what happens.
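(The exact commands aren't shown; a log switch can be forced on the primary with either of the following.)

alter system switch logfile;
alter system archive log current;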

   DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES YES       NO  NO
         1       1751 A YES NO        NO  NO
         1       1752 A YES NO        NO  NO
         2       1752 A YES YES       NO  NO
         1       1753 A YES NO        NO  NO
         2       1753 A YES YES       NO  NO
         1       1754 A YES NO        NO  NO
         2       1754 A YES NO        NO  NO
         1       1755 A YES NO        NO  NO
         1       1756 A YES NO        NO  NO
         1       1757 A YES NO        NO  NO
         1       1758 A YES NO        NO  NO
         1       1759 A YES NO        NO  NO
         1       1760 A YES NO        NO  NO


I've included DEST_1 this time to show that my sequence# has moved forward, but the sending of redo to my standby has stopped.

Now I started it back up, and did a few log switches.

  DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES YES       NO  NO
         2       1752 A YES YES       NO  NO
         2       1753 A YES YES       NO  NO
         2       1754 A YES YES       NO  NO
         2       1761 A YES NO        NO  NO
         3       1755 A YES YES       NO  YES
         3       1758 A YES NO        NO  YES
 .....
         3       1754 A YES NO        NO  YES
         3       1752 A YES NO        NO  YES
         3       1762 A YES NO        NO  NO
         3       1763 A YES NO        NO  NO


After restarting the standby database, I can see that redo is now going to DEST_3 (my Far Sync instance) instead of DEST_2 (dataguard).

Now for a final test on this, I am going to STOP my Far Sync instance.

   DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES YES       NO  NO
         2       1752 A YES YES       NO  NO
         2       1753 A YES YES       NO  NO
         2       1754 A YES YES       NO  NO
         2       1761 A YES YES       NO  NO
         3       1755 A YES YES       NO  YES
         3       1758 A YES NO        NO  YES
 ....
         3       1754 A YES YES       NO  YES
         3       1752 A YES YES       NO  YES
         3       1762 A YES NO        NO  NO
         3       1763 A YES NO        NO  NO
         2       1764 A YES NO        NO  YES
         2       1765 A YES NO        NO  NO
         2       1766 A YES NO        NO  NO


I stopped bsg18f (Far Sync) when it was on sequence 1764.  I can see that the database automatically switched to sending logs to bsg18d (dataguard) when DEST_3 became unavailable.

This is exactly what I would expect to happen when I have 2 destinations with the same Group/Priority.
  • It chooses one of the 2 destinations and becomes "sticky" once chosen.
  • If the "sticky" destination becomes unavailable, it then automatically switches to the second destination.

FRA - If you use an FRA (Fast Recovery Area), how does it understand that my redo can go to an alternate location?

Now after all this, you can see from my previous tests that some logs were sent to DEST_2 and some were sent to DEST_3.  Both of these are members of the same group.
Let's see if the FRA sees them as shipped (my retention policy).
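The space usage below comes from the standard FRA view; a simple query like this one reproduces it:

select * from v$recovery_area_usage;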

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         21.78                     21.78             555          0
BACKUP PIECE                          7.54                       .45              38          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0



Perfect !  All the archive log space is reclaimable.  When I have different remote destinations that are members of the same group, as long as the Redo Log was successfully shipped to a destination in the group, the log is eligible for deletion.

Thursday, December 19, 2019

ZDLRA alternate redo destination replaced with Groups/Priority in 12.2

Alternate destination is deprecated in 12.2 for remote locations.

This diagram shows the use of an alternate destination for sending logs.
An alternate destination can be local to the database (to be used if the disk location fills up), but they can also be used to change the remote destination that the logs get sent to.

Configuring an "ALTERNATE" for a local destination is still supported

Below is what this looks like from the 18C manual.

LOG_ARCHIVE_DEST_1='LOCATION=/disk1 MANDATORY MAX_FAILURE=1 
ALTERNATE=LOG_ARCHIVE_DEST_2'
 
LOG_ARCHIVE_DEST_STATE_1=ENABLE
 
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE

LOG_ARCHIVE_DEST_2='LOCATION=/disk2 MANDATORY'

In this example, if /disk1 is not available (fills up, etc.), the database will switch to sending logs to /disk2 so that the database does not hang.

HOWEVER, for remote destinations (i.e., destinations using "SERVICE"), this changes with 12.2.

12.2 introduces the idea of "GROUP" and "PRIORITY" in place of a simple "ALTERNATE".
I'm sure this was added for Far Sync but it also comes into play with the ZDLRA.

Using only "ALTERNATE" gives you the ability to switch to a single remote destination. Using GROUP and PRIORITY lets you set multiple destinations and decide the order in which they are used.

Now let's start with a simple example with FAR SYNC.
I have 2 locations. San Francisco (SF) is Primary with 2 local farsync servers, and my remote destination is New York (NYC).


I can group them together and set the priority that they use. I can even switch from sync to async.  Below is what the new syntax would look like for the destinations.

LOG_ARCHIVE_DEST_2='SERVICE=FARSYNC1 SYNC GROUP=1 PRIORITY=1'
LOG_ARCHIVE_DEST_3='SERVICE=FARSYNC2 SYNC GROUP=1 PRIORITY=1'
LOG_ARCHIVE_DEST_4='SERVICE=NYC ASYNC GROUP=1 PRIORITY=2'

This behaves in the following manner.


  • My database in San Francisco sends archive logs to EITHER of the two services specified as PRIORITY=1.  The log could go to FARSYNC1 or FARSYNC2 and get passed on to NYC.
  • If either of these 2 PRIORITY=1 destinations becomes unavailable the other will continue to work.
  • If both of these 2 PRIORITY=1 destinations become unavailable, the redo will be sent to NYC as ASYNC.
  • If NYC is the destination (because of a double failure), it will switch back to either of the 2 FARSYNC services when they become available.
You can add up to 8 layers of PRIORITY and include a remote FARSYNC server if you would like.

Now, what does this have to do with ZDLRA ?

You can also use an alternate destination for redo traffic on the ZDLRA.  The alternate can be the downstream ZDLRA if replication is used, or it can be the upstream ZDLRA if store-and-forward is being used.  These are outlined in the ZDLRA admin guide under "Implementing Additional High Availability Strategies".

In the case of the ZDLRA you would use 2 destinations.
The first destination is the primary ZDLRA, and the second destination is the alternate ZDLRA.
The configuration would look like this.


LOG_ARCHIVE_DEST_2='SERVICE="ZDLRA_SF_SCAN:1521/ZDLRASF:dedicated" ASYNC GROUP=1 PRIORITY=1'
LOG_ARCHIVE_DEST_3='SERVICE="ZDLRA_NYC_SCAN:1521/ZDLRANYC:dedicated" ASYNC GROUP=1 PRIORITY=2'



This is an interesting change to the ALTERNATE setting on remote destinations.



Tuesday, November 26, 2019

Dataguard and LOG_ARCHIVE_MAX_PROCESSES


Any DBA who has set up Dataguard has seen this diagram of how the sending of logs to a standby database works.

What I wanted to go through is the use of LOG_ARCHIVE_MAX_PROCESSES and how it affects the sending of redo logs.

If you have set this parameter in your environment it is typically because you are seeing issues with the database cleaning out redo logs.
The primary reason this gets raised from the default (4) is to add more processes capable of clearing out redo logs and turning them into archive logs.  The more processes running in parallel to clear out redo logs, the faster the database can move on to start a new redo log.

But did you know the LOG_ARCHIVE_MAX_PROCESSES parameter also affects how many processes are used on your standby database to fill any gaps ?

In my example I have a primary database (BSG18) which is on 18.3.0.  I also have a standby database (BSG18d).

In my primary database I set the parameter to 10.
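The commands themselves aren't shown in the post, but setting and then displaying the parameter would look something like this (the show parameter output follows below):

alter system set log_archive_max_processes=10;
show parameter log_archive_max_processes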

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_max_processes            integer     10


I then shut down my dataguard database and performed many log switches. I needed at least 10 log switches to demonstrate what happens.

Now once I start up my standby database, LOG_ARCHIVE_MAX_PROCESSES is used to determine how many processes are used on my standby database to fill the gap.

Here is my query from V$DATAGUARD_PROCESS

select name,
           pid "standby PID",
           role, client_pid "Primary DB PID",
           client_role
    from V$DATAGUARD_PROCESS
    where type='NET'
order by 2;


NAME  standby PID              ROLE                     Primary DB PID CLIENT_ROLE     
----- ------------------------ ------------------------ -------------- ----------------
rfs   18116                    RFS async                          4165 async ORL multi 
rfs   26337                    RFS ping                           4153 gap manager     
rfs   26401                    RFS archive                        4159 archive gap     
rfs   26404                    RFS archive                        8752 archive gap     
rfs   26408                    RFS archive                        8738 archive gap     
rfs   26411                    RFS archive                        8750 archive gap     
rfs   26413                    RFS archive                        8740 archive gap     
rfs   26415                    RFS archive                        8748 archive gap     
rfs   26417                    RFS archive                        8742 archive gap     
rfs   26419                    RFS archive                        8744 archive gap     
rfs   26421                    RFS archive                        8746 archive gap     


If you look at the output, you can see all the pieces fit together.


Real Time redo process


The first process sends the redo information to the standby database in async mode, and is written to the standby redo logs.


oracle    4165     1  0 Nov05 ?        00:05:57 ora_tt02_bsg18


We can see on the primary database it is a "tt" process and starts up with the database.

oracle   18116     1  0 Nov12 ?        00:07:58 oraclebsg18d (LOCAL=NO)


and for the standby database, it is a sqlnet process.

Gap Manager process


Now let's look at the second process.

oracle    4153     1  0 Nov05 ?        00:00:26 ora_tt00_bsg18


We can see on the primary database it is also a "tt" process and also starts up with the database.

oracle   26337     1  0 Nov12 ?        00:00:18 oraclebsg18d (LOCAL=NO)

and for the standby database, it is a sqlnet process.


Gap fill processes


Now let's look at the remaining processes.


oracle    4159     1  0 Nov05 ?        00:00:14 ora_arc1_bsg18
oracle    8738     1  0 Nov05 ?        00:00:13 ora_arc2_bsg18
oracle    8740     1  0 Nov05 ?        00:00:14 ora_arc3_bsg18
oracle    8742     1  0 Nov05 ?        00:00:13 ora_arc4_bsg18
oracle    8744     1  0 Nov05 ?        00:00:13 ora_arc5_bsg18
oracle    8746     1  0 Nov05 ?        00:00:13 ora_arc6_bsg18
oracle    8748     1  0 Nov05 ?        00:00:13 ora_arc7_bsg18
oracle    8750     1  0 Nov05 ?        00:00:13 ora_arc8_bsg18
oracle    8752     1  0 Nov05 ?        00:00:13 ora_arc9_bsg18


When I look at the primary database, I can see 9 "arc" processes.
Why 9 processes? Because LOG_ARCHIVE_MAX_PROCESSES of 10 tells the primary database that it can use 10 processes to fill the gap on the standby.
The 1st process is the gap manager "tt", and the remaining 9 "arc" processes are used to send over the archive logs.

Increasing the LOG_ARCHIVE_MAX_PROCESSES can be used to increase the parallelization of sending archive logs to fill any gaps.



Monday, October 21, 2019

Oracle Active Dataguard - More than just a read-only copy.


NOTE: I updated this on 1/16/20 with additional information.

Anyone that knows me, knows I'm a stickler for properly explaining technical topics and ensuring I use the correct term.

Dataguard  vs Active Dataguard is a topic that drives me crazy sometimes as people tend to use the two options interchangeably.

The differences appear to be subtle on the surface, but there are some major differences (other than the obvious) that you might not know about.

What I am hoping you get out of this blog post is... If you have the license for Active Dataguard, then turn on portions of it, even if the application isn't using it for queries. There are more benefits from Active Dataguard than just having a read-only database copy.


Dataguard  -  First let's talk about normal Dataguard.


Basically, a Dataguard database is an exact copy of the primary/protected database that is constantly in recovery.  Redo log information (used for recovery) is automatically sent from the primary/protected database by specifying the name/location of the Dataguard copy as an additional destination for the redo logs.
This is a simple explanation, but Dataguard concepts are that simple.

Dataguard - What Dataguard does for me to protect my primary database.

Dataguard is used for a number of purposes.  The most common of which is to have a "Disaster Recovery" copy of the primary database available in the event of the loss of the primary copy.
For a "Disaster Recovery" copy, best practice is to have this copy be geographically isolated from the primary.  This ensures a disaster to the primary datacenter (flood, earthquake, etc.) doesn't affect the Dataguard copy.

When moving the application to use the Dataguard copy, there are 2 different ways to bring up the dataguard copy as the primary.

1) Switchover - In the case of a switchover, all transactions on the primary are applied to the dataguard copy before it is opened up.  This ensures no data loss. This isn't always possible since transactions on the primary database need to be "drained" and transferred to the Dataguard copy.

2) Failover - In the case of a failover, it is not possible to "drain" transactions from the primary database.  All outstanding transactions that have been received from the primary database are applied on the Dataguard copy, it is opened with "resetlogs", and there may be data loss.

Other uses for Dataguard

  • Dataguard can be used to create an up-to-date copy of production for testing/QA, etc.
  • A Dataguard database can be opened for write (snapshot standby) to test code releases etc. and then flashed back to being a Dataguard copy again.
  • A Dataguard database can have a delay in log apply, essentially providing a time gap, allowing data to be recovered within the time gap in case of user error.

Active Dataguard Option (Licensable option) - 

This option now contains many features.

Original feature  -- The Dataguard copy is open as read only.

  That is as simple as it is to use Active Dataguard (if you have the license).  Before starting the apply of redo log information, the database is opened read only.

  The main advantage of Active Dataguard is that you can now use the DR copy of the database for queries.  This not only offloads activity from the primary to the mostly idle Dataguard copy, it also ensures that there is a readable copy of the data even while the primary is not available (patching, etc.).

What I wanted to go through in this post is all the other features that come with Active Dataguard that you might not realize.

Additional Active Dataguard Features.

First I am going to separate the features into 2 requirements.

Features that are available when the database is in mount mode (read-only not required).


  1. Far Sync - Far Sync allows you to create a shell instance that is used to capture real-time redo from a database and send it on to a standby database.  You can have multiple far sync instances for redundancy, and they are typically local to the primary to provide a synchronous destination with very little network lag.
  2. BCT (Block Change Tracking). You can create a BCT file on your standby database, and it will be used for incremental backups. 
  3. Real-time redo - This allows you to cascade redo from the standby to a destination in real time, in the same manner that the primary DB does with standby redo logs.

Features that are available when the database is in read-only mode
  1. Automatic Block repair -  Corrupted blocks on the primary are repaired by automatically applying the "clean" block from the Dataguard copy.
  2. DML redirection.  Occasional updates to the dataguard copy are redirected to the primary database.
  3. Preserve Buffer cache during Role change - When a Dataguard database becomes the primary, the mode change is done without bouncing the database, thus preserving what is in memory.

Additional Active Dataguard Features affecting ZDLRA.



I wanted to call out 2 features specific to Active Dataguard that have an effect on ZDLRA recoverability.

  1. Block Change Tracking File - With Active Dataguard, the BCT file gets updated with the changes and is used for incremental backups. This can be extremely important when using a ZDLRA to back up your Dataguard database.  With the ZDLRA, only incremental backups are performed.  Without an active BCT file, incremental backups will scan all database blocks with every incremental backup.  If you have the license for Active Dataguard, be sure to create a BCT file anyway, and ensure it is used.
  2. Far Sync Support -  You are probably wondering what this has to do with ZDLRA.  The Far Sync support (starting with 12c) is more than just support for Far Sync.  This feature changes when the applied updates are written to the destinations. Prior to 12c, changes were written to the destinations AFTER the log switch.  This meant that the downstream Dataguard databases and the ZDLRA only got full archive logs.  With 12c, an Active Dataguard database, just like the primary, sends changes to the destinations from memory as they are applied.  This can make a big difference in the recovery point objective (RPO) of a dataguard database backed up by the ZDLRA.  This feature, using real-time redo from a standby database to a ZDLRA, is allowed as an exception under licensing.

Key takeaways.


Active Dataguard has a couple of great features that the ZDLRA takes advantage of. If you have the license for it, you should turn on BCT, even if the application isn't using any of the other features.
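For reference, enabling BCT is a single statement (the file path here is just an example):

alter database enable block change tracking using file '/u01/app/oracle/oradata/BSG18D/bct.dbf';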


Tuesday, September 3, 2019

Oracle : Dataguard vs Backups

Anyone who has been around Oracle databases for a long time should be familiar with MAA (Maximum Availability Architecture).
Here is a link to the latest version of it.

As you probably know, MAA is described as "best practices" for implementing the Oracle Database.

What I am going to explain in this blog post, is why you need the different pieces.
Questions often come up around why each piece of availability (for lack of a better word) is important.
These pieces include
  • Data Guard
  • Active Data Guard
  • Flashback database
  • Point-in-time backups
  • Archive Backups
  • DR site

Data Guard


Well, let's start with the first piece: Data Guard.  I'm not going to address Data Guard in the same datacenter, just Data Guard between datacenters.
There is a difference between "just Data Guard" and "Active Data Guard".

Data Guard  -  Mounted copy of the primary database that is constantly applying redo changes. It can be thought of as always in recovery. This copy can be gracefully "switched" to without data loss (all transactions are applied before opening). In the event of an emergency, it can be "failed" to which allows for opening it without applying all transactions.

Active Data Guard -  A Data Guard copy of the data (like above), BUT this is a read-only copy of the primary database.  Note this is an extra cost option requiring a license. Having the database open read-only has many advantages.
  • Reporting, and read-only workload can be sent to the Data Guard copy of the database taking advantage of the resources available that would typically sit idle.
  • Automatic block repair from the primary database.
  • A Block Change Tracking file can be used to speed up incremental backups.

Data Guard, regardless of which of the 2 types you use, is often implemented between datacenters with limited bandwidth.


This becomes important for a few reasons
  •  Cloning across datacenters, or even rebuilding a copy of the database in the opposite datacenter can take a long time.  Even days for a very large database.
  • The application tier that accesses the database needs to move with the database. In the event of a switchover/failover of the database, the application must also switchover.

NOTE :  Making your Data Guard database your primary database involves moving the application tier to your DR data center.  The testing and changes to move the application may be risky and involve more time than simply restoring the database.

Flashback Database


Flashback database allows you to query the database as of a previous point-in-time, and even flashback the entire database to a previous point-in-time.
You can also use flashback database to "recover" a table that was accidentally dropped.
With flashback database, the changes are kept in the FRA for a period of time, typically only a few hours or less depending on workload.

Flashback can be helpful in a few situations.
  1. Logical corruption of a subset of data.  If a logical corruption occurs, like data is accidentally deleted, or incorrectly updated, you can view the data as it appeared before the corruption occurred. This of course, is assuming that the corruption is found within the flashback window.  To correct corruption, the pre-corrupt data can be selected from the database and saved to a file.  The application team familiar with the data, can then create scripts to surgically correct the data.
  2. Logical corruption of much of data.  If much of the data is corrupted you can flashback the database to a point-in-time prior to the corruption within the flashback window.  This a FULL flashback of ALL data.  After the flashback occurs, a resetlogs is performed creating a new incarnation of the database.  This can be useful if an application release causes havoc with the data, and it is better to suffer data loss, than deal with the corruption.
  3. Iterative testing.  Flashback database can be a great tool when testing, to repeat the same test over and over.  Typically a restore point is created before beginning a test, and the database is flashed back to that state.  This allows an application team to be assured that the data is in the same state at the beginning of each iterative test.
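A minimal sketch of the iterative-testing flow in item 3 (the restore point name is just an example):

create restore point before_test guarantee flashback database;
-- run the test, then flash back:
shutdown immediate
startup mount
flashback database to restore point before_test;
alter database open resetlogs;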

Backups 


Backups allow you to restore the database to a point-in-time prior to the current point-in-time. ZDLRA is the best product to use for backups.

ZDLRA offers

  • Near Zero RPO
  • Very low RTO by using the proprietary incremental forever strategy.  Incremental forever reduces recovery time, by creating a full restore point for each incremental backup taken.

The most recent backups are kept local to the database to ensure quick restore time. There are typically 2 types of backups. 

Archive backups - These are often "keep" backups.  They are special because they are a self-contained backupset that allows recovery only to a specific prior point-in-time. An example would be the creation of an archival backup on the last day of the month that allows you to recover only to the end of that backup.  A single recovery point for each month, rather than any point during the month.
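In RMAN terms, an archival backup is typically created with the KEEP clause; a sketch (the retention period and restore point name are just examples):

backup database keep until time 'sysdate+365' restore point end_of_month;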

Point-in-time backups - These are backups that include the archive logs needed to recover to any point-in-time.  Typically these backups allow you to recover to any point within the recovery window; for example, to any point in the last 30 days.

The advantages of backups are
  • You can recover to any point-in-time within the recovery window.  This window is much longer than flashback database can provide.  Any logical corruptions can be corrected by restoring portions of "good" data, or the whole database.
  • You can restore the database within the same data center returning the database to availability without affecting the application tier.
  • Backups are very important if using active Data Guard. The issue may be with the read-only copy of the database.  Since both the primary and the Data Guard copy are used by the application, restoring from a backup is the quickest method to rebuild either environment.

Disaster Recovery


 Disaster Recovery is typically a documented, structured approach to dealing with unplanned incidents.  These incidents include planning for a complete site failure involving the loss of all data centers within a geographical area.

The 2 major risk areas addressed by any approach are

RTO - Recovery Time Objective.  How long does it take to return the application to full availability?

RPO - Recovery Point Objective. How much data loss can be tolerated in the event of a disaster?


Comparison



The bottom line when comparing the different technologies


Flashback database - Correct logical corruption that is immediately found.

Dataguard - Full disaster requiring moving the full environment. Keeps application continuity in the event of a disaster.

Backups - Restore current environment to availability (production site or Data Guard site).


ZDLRA, by offering a very low RTO and RPO, provides a solution that can be used to return availability in all but a datacenter disaster.




Tuesday, August 27, 2019

Oracle incremental merge and corruption



In the last couple of years I have seen a lot of backup vendors (Commvault, Netbackup, Rubrik etc.) using the incremental merge backup process that Oracle introduced in 10G.

These vendors have combined the incremental merge with snapshots of each new merged backup to provide a daily restore point.  The process is to perform an incremental backup, use the DB software to merge in the changes, and then "snap" the storage.  Rinse and repeat.
The archive logs are also backed up to provide a recovery point in time.

This process has a few flaws.

  1. You are leveraging DB resources to perform the merge.  Only the DB software knows the proprietary format of files to merge in the changes.
  2. The merge process is a sequential process that can be slow. You are essentially "recovering" the database one day forward using the merge process. We all know how slow a recovery can be.
  3. The storage used for these backups is typically not tier 1 storage, and this also affects the speed of the merge process.
  4. The RMAN catalog only knows about the last incremental merge.  In order to recover to more than the last merge, you need to catalog the older backups.
  5. These are FULL SIZE datafile copies.  RMAN backup sets have many features to optimize the size of backupsets (exclude unused blocks, compress, etc.) that can't be leveraged with this type of backup.
  6. Lastly, there is no inherent checking of backups for corruption.  If there is any corruption in the backup, it may not be found.

I am going to walk through an example showing you the last point.

But first, I want to point out how the ZDLRA addresses all of these points.


  1. The ZDLRA uses the same proprietary software that RMAN uses to merge in the changes.  Rather than using DB resources (and licensed CPUs) to merge in the changes, the ZDLRA offloads this workload and performs the merge internally
  2. The ZDLRA simply keeps track of block changes and indexes them.  This process is much more efficient than a recovery process, and scales much better.
  3. The ZDLRA uses tier 1 storage and flash allowing the performance to match or exceed that of the databases it is protecting
  4. The RMAN catalog is within the ZDLRA allowing it to automatically catalog new full backups as they are created.
  5. These are backupsets, and don't include unused space optimizing space utilization.
  6. There are many layers of error checking.  This includes not only during the backup, but also within the ZDLRA (and when replicated).  The ZDLRA offloads the "restore validate" process to ensure that DB blocks that haven't been touched for a long time are periodically checked for corruption.  An "incremental forever" strategy is risky if you don't periodically check blocks for corruption.

Now to show why periodically checking for corruption is so important.

I am going to schedule an incremental merge of a single datafile (to keep it simple), inject corruption into the datafile copy, and then continue to merge into the backup.

First, I'm going to create a new tablespace for my testing.

SYS:DB10 > create tablespace bsg datafile '/tmp/bsg.dbf' size 10m;

Tablespace created.


Now that we have a tablespace, let's create a table in the tablespace that we can use to demonstrate corruption.


create table corruption_test tablespace bsg as select * from dba_users;

Table created.

SQL> select  file_id, block_id from dba_extents where segment_name = 'CORRUPTION_TEST';

   FILE_ID   BLOCK_ID
---------- ----------
       16    128


Now I have a copy of dba_users in my new tablespace, and I know where the data is.

I am going to run my script (below) and perform a couple of incremental merges.


run
{
allocate channel disk1 device type disk format '/tmp/bkup%U';
recover copy of datafile 16  with tag 'incr_merge';
backup incremental level 1 for recover of copy with tag 'incr_merge' datafile 16;
}


Now here is the output; everything looks good.  I backed it up, and the changes are getting merged into the image copy of the datafile.


Starting recover at 08/27/19 17:05:00
no copy of datafile 16 found to recover
Finished recover at 08/27/19 17:05:00

Starting backup at 08/27/19 17:05:00
channel disk1: starting incremental level 1 datafile backup set
channel disk1: specifying datafile(s) in backup set
input datafile file number=00016 name=/home/oracle/app/oracle/oradata/BSG18/bsg.dbf
channel disk1: starting piece 1 at 08/27/19 17:05:01
channel disk1: finished piece 1 at 08/27/19 17:05:02
piece handle=/tmp/bkup52ua97lt_1_1 tag=INCR_MERGE comment=NONE
channel disk1: backup set complete, elapsed time: 00:00:01
Finished backup at 08/27/19 17:05:02

Starting Control File and SPFILE Autobackup at 08/27/19 17:02:49
piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg18/BSG18/autobackup/2019_08_27/o1_mf_s_1017421369_gpc6mt45_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 08/27/19 17:02:52
released channel: disk1

RMAN> run
{
allocate channel disk1 device type disk format '/tmp/bkup%U';
recover copy of datafile  16 with tag 'incr_merge';
backup incremental level 1 for recover of copy with tag 'incr_merge' datafile 16;
}


Starting recover at 08/27/19 17:07:35
channel disk1: starting incremental datafile backup set restore
channel disk1: specifying datafile copies to recover
recovering datafile copy file number=00016 name=/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
channel disk1: reading from backup piece /tmp/bkup52ua97lt_1_1
channel disk1: piece handle=/tmp/bkup52ua97lt_1_1 tag=INCR_MERGE_BSG
channel disk1: restored backup piece 1
channel disk1: restore complete, elapsed time: 00:00:01
Finished recover at 08/27/19 17:07:37

Starting backup at 08/27/19 17:07:37
channel disk1: starting incremental level 1 datafile backup set
channel disk1: specifying datafile(s) in backup set
input datafile file number=00016 name=/home/oracle/app/oracle/oradata/BSG18/bsg.dbf
channel disk1: starting piece 1 at 08/27/19 17:07:37
channel disk1: finished piece 1 at 08/27/19 17:07:38
piece handle=/tmp/bkup54ua97qp_1_1 tag=INCR_MERGE_BSG comment=NONE
channel disk1: backup set complete, elapsed time: 00:00:01
Finished backup at 08/27/19 17:07:38


Now let's corrupt the datafile.
I am going to use "sed" and change the user "SYSTEM" to "      " in the image copy of the datafile.


oracle@/tmp [18c] $ sed -i 's/SYSTEM/      /g' /tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho



Now I'm going to run the same incremental merge again (and again).
Here's the output. Everything looks fine since I didn't touch the block that is corrupted.
The merge process simply replaces blocks that have changed.

Starting recover at 08/27/19 17:13:05
channel disk1: starting incremental datafile backup set restore
channel disk1: specifying datafile copies to recover
recovering datafile copy file number=00016 name=/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
channel disk1: reading from backup piece /tmp/bkup54ua97qp_1_1
channel disk1: piece handle=/tmp/bkup54ua97qp_1_1 tag=INCR_MERGE
channel disk1: restored backup piece 1
channel disk1: restore complete, elapsed time: 00:00:01
Finished recover at 08/27/19 17:13:07

Starting backup at 08/27/19 17:13:07
channel disk1: starting incremental level 1 datafile backup set
channel disk1: specifying datafile(s) in backup set
input datafile file number=00016 name=/home/oracle/app/oracle/oradata/BSG18/bsg.dbf
channel disk1: starting piece 1 at 08/27/19 17:13:07
channel disk1: finished piece 1 at 08/27/19 17:13:08
piece handle=/tmp/bkup56ua9853_1_1 tag=INCR_MERGE comment=NONE
channel disk1: backup set complete, elapsed time: 00:00:01
Finished backup at 08/27/19 17:13:08



Now that I have performed some incremental merges, let's see what happens when I go to restore that datafile.
I took the datafile offline, removed it, and now will try to restore it from the image copy.

RMAN> alter database datafile 16 offline;

Statement processed
RMAN> exit
[oracle@oracle-server] rm /home/oracle/app/oracle/oradata/BSG18/bsg.dbf

[oracle@oracle-server]  rman target /

RMAN> restore datafile 16;



Starting restore at 08/27/19 17:15:01
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=30 device type=DISK

channel ORA_DISK_1: restoring datafile 00016
input datafile copy RECID=53 STAMP=1017421986 file name=/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
destination for restore of datafile 00016: /home/oracle/app/oracle/oradata/BSG18/bsg.dbf
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/27/2019 17:15:02
ORA-19566: exceeded limit of 0 corrupt blocks for file /tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
ORA-19600: input file is datafile-copy 53 (/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho)
ORA-19601: output file is datafile 16 (/home/oracle/app/oracle/oradata/BSG18/bsg.dbf)



Now, even though the incremental merges all look good, the corruption isn't caught until I go to restore the datafile.

This is why it is so important to execute a restore validate on a regular basis, and keep a secondary backup.

With the incremental merge process, there is only 1 copy of each unique block.  For historical data that doesn't change, those blocks will never get checked for corruption.
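A periodic validation like the following (run against the copy used above) reads the backup blocks and would have flagged the corruption without performing a real restore:

RMAN> restore datafile 16 validate;

or, for everything:

RMAN> restore database validate;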





Tuesday, July 30, 2019

Block change tracking and the ZDLRA (part 2)

Oracle Block Change Tracking(BCT) and the ZDLRA



One of the most commonly asked questions I get when explaining how the ZDLRA works has to do with Block Change Tracking.

The question is usually something like...

"If the the Block Change Tracking file only keeps 7 days of history by default, how does it work with the ZDLRA where you only perform a full backup once"?

And the second question is

"If I only perform incremental cumulative backups, how does it use the BCT if it get's recreated/invalidated ?"

So how is the BCT file used ?


First, I am assuming you understand what's in it. This is explained in my previous post

For full backups -

The BCT file is not used, but it is updated prior to the backup if any changes took place.
Since the datafiles are "fuzzy" (being updated), the next BCT record starts with the SCN just prior to the checkpoint SCN of the backup.


 For Incremental Cumulative/Differential backups -

The BCT file is updated prior to the incremental cumulative if any changes took place.


RMAN then determines the starting and ending SCN numbers to back up, and whether the BCT can possibly be used.





Once RMAN determines the starting/stopping SCN for the backup, and verifies that the BCT is in use, it can then determine if the BCT was capturing changes for the backup period, and if the changes fall within the BCT capture period.



Now at this point we have determined

  • BCT was in use
  • The backup period falls within the window that the BCT has complete, valid records for.

Now we need to determine if any changes took place, and if so, build the list of blocks to back up.  Keep in mind that if there are no version records, this means that no changes were made.








NOTE: if using multiple backup strategies, and TAGS to identify backups, this will complicate the process.


ZDLRA PROCESS


For the ZDLRA, the process is very simple.  As you go through the steps, you see that new version records are created for each datafile that has any changes since the last backup (regardless of the backup type).

When an incremental cumulative backup is executed and sent to the ZDLRA, the RMAN catalog on the ZDLRA gets updated with the virtual full.  The RMAN client, which uses this catalog, compares against the checkpoint SCN of the last full backup in the catalog (the virtual full).

Because the RMAN client is using the RMAN catalog from the ZDLRA (containing the virtual fulls), the RMAN client always compares to the last virtual full.

Keeping 7 days of BCT history isn't an issue because the current RMAN incremental cumulative backup always compares to the previous incremental cumulative backup which is virtualized. 

Monday, July 29, 2019

Block change tracking and the ZDLRA (part 1)

Oracle Block Change Tracking(BCT) and the ZDLRA



One of the most commonly asked questions I get when explaining how the ZDLRA works has to do with Block Change Tracking.

The question is usually something like...

"If the the Block Change Tracking file only keeps 7 days of history by default, how does it work with the ZDLRA where you only perform a full backup once"?

And the second question is

"If I only perform incremental cumulative backups, how does it use the BCT if it get's recreated/invalidated ?"

How does the BCT work ?

Well, let's walk through what block change tracking does at a high level.

If you want the detail of the internals, Alex Gorbachev wrote a great paper and MOS note explaining it all.  ORACLE 10G BLOCK CHANGE TRACKING INSIDE OUT (Doc ID 1528510.1)

I'm going to start with a simple example.
First I'm going to turn on block change tracking, and then check the File# for my database files.


SQL> alter database enable block change tracking using file '/home/oracle/app/oracle/oradata/BSG18/bct.dbf';

SQL>
TABLESPACE_NAME                   FILE_ID
------------------------------ ----------
USERS                                   7
UNDOTBS1                                4
SYSTEM                                  1
SYSAUX                                  3
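(The query itself isn't shown in the capture; a simple dictionary lookup like this returns the same columns.)

select tablespace_name, file_id from dba_data_files;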




I am going to use 2 internal structures to walk through the BCT information.

X$KRCFH - This structure contains the lowscn of the BCT file.  It is used by RMAN to determine the beginning SCN of the BCT file.  If no changes have been captured for a datafile, then RMAN knows the point from which it can make that assumption.

X$KRCFBH - This structure contains the versions of block changes available to use for backups.  It contains a row for each datafile and each version, and each row points to a bitmap containing the block changes associated with that version.  The key columns I am interested in to explain how it works are:

        FNO    - File number
        VERCNT - Version number identifying the change records.  This starts at 1 and increases over the life of the BCT file.
        LOW    - Low SCN of the block changes.  It is 0 for the first record, indicating that it is not complete.
        HIGH   - High SCN of the block changes.  The last SCN number identifying the block changes.

I am going to look the internal structure X$KRCFBH to view what's in the block change tracking.

select (select tablespace_name from dba_data_files where file_id=fno) tablespace_name,vercnt,to_char(vertime,'mm/dd/yy hh24:mi:ss') vertime,low, high from x$krcfbh
   where fno in (select file_id from dba_data_files);
TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSTEM              1 07/29/19 09:21:17          0          0
SYSAUX              1 07/29/19 09:21:18          0          0
UNDOTBS1            1 07/29/19 09:21:18          0          0
USERS               1 07/29/19 09:21:18          0          0



I can see that there are entries for all my tablespaces, with a "version count" of 1 and a low/high SCN of 0.  This is telling me that no backups have been executed yet using the BCT.
I am also going to look at X$KRCFH to see what the starting SCN is for the BCT file.


select lowscn from X$KRCFH;

LOWSCN
----------
8361930


Now I am going to execute a full backup and see what happens.



TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSTEM              1 07/29/19 09:21:17          0    8362001
SYSAUX              1 07/29/19 09:21:18          0    8362001
UNDOTBS1            1 07/29/19 09:21:18          0    8362001
USERS               1 07/29/19 09:21:18          0    8362001




You can see that the high SCN is set to 8362001.
Now I am going to look at the RMAN backup (USERS tablespace) to see what the Checkpoint SCN number was for the backup.

  File LV Type Ckp SCN    Ckp Time          Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- ----------------- ----------- ------ ----
  7    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf


Ahhh.. Now I can see how it fits together. The high SCN for the BCT file is the SCN number right before the checkpoint SCN taken with the backup.

Now let's execute an incremental backup and see what happens.

TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSAUX              1 07/29/19 09:21:18          0    8362001
SYSTEM              1 07/29/19 09:21:17          0    8362001
UNDOTBS1            1 07/29/19 09:21:18          0    8362001
USERS               1 07/29/19 09:21:18          0    8362001
SYSAUX              2 07/29/19 09:46:59    8362001    8363961
SYSTEM              2 07/29/19 09:46:59    8362001    8363961
UNDOTBS1            2 07/29/19 09:46:59    8362001    8363961


1    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
1    1  Incr 8363962    07/29/19 10:04:00              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  


By looking at the SYSTEM tablespace (file 1) I can see exactly what is happening with the BCT file.
The first version marks the starting SCN prior to the first backup after creating the file.
The second version marks the checkpoint SCN of the first backup (low), and the SCN prior to the second backup(high).
These marks, LOW/HIGH SCN, can be used to identify the blocks that changed between the backups.
Now I am going to perform a few more incremental backups, with a few changes to the USERS tablespace, and see what happens to the SYSTEM and USERS tablespace versions.

First here is my query.

col tablespace_name format a10

select (select tablespace_name from dba_data_files where file_id=fno) tablespace_name,vercnt,to_char(vertime,'mm/dd/yy hh24:mi:ss') vertime,low, high from x$krcfbh
   where fno in (select file_id from dba_data_files)
    order by 1,2;

Now let's see what my backups look like for File 1 (SYSTEM tablespace).

List of Backup Sets
===================
  File LV Type Ckp SCN    Ckp Time          Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- ----------------- ----------- ------ ----
  1    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8363962    07/29/19 10:04:00              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8364371    07/29/19 10:13:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8364482    07/29/19 10:15:19              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8364637    07/29/19 10:16:38              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365075    07/29/19 10:26:23              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365373    07/29/19 10:29:13              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365578    07/29/19 10:30:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf
  1    1  Incr 8365763    07/29/19 10:30:58              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_system_fz01xnqo_.dbf


Let's see what my backups look like for File 7 (USERS tablespace)

List of Backup Sets
===================
  File LV Type Ckp SCN    Ckp Time          Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- ----------------- ----------- ------ ----
    7    0  Incr 8362002    07/29/19 09:46:59              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8363962    07/29/19 10:04:00              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8364371    07/29/19 10:13:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8364482    07/29/19 10:15:19              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8364637    07/29/19 10:16:38              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365075    07/29/19 10:26:23              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365373    07/29/19 10:29:13              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365578    07/29/19 10:30:21              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
    7    1  Incr 8365763    07/29/19 10:30:58              NO    /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf




My backups for both of these match. They have the same Ckp SCN.
Notice that I performed an Incremental level 0 backup, and then 8 Incremental level 1 backups.
I actually performed both differential backups and cumulative backups but it didn't matter.

Now let's look at the block change tracking file for these 2 tablespaces (system and users)

TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
SYSTEM              3 07/29/19 10:04:00    8363961    8364370
SYSTEM              4 07/29/19 10:13:21    8364370    8364481
SYSTEM              5 07/29/19 10:15:19    8364481    8364636
SYSTEM              6 07/29/19 10:16:38    8364636    8365074
SYSTEM              7 07/29/19 10:26:23    8365074    8365372
SYSTEM              8 07/29/19 10:29:13    8365372    8365577
SYSTEM              9 07/29/19 10:30:20    8365577    8365762
USERS               1 07/29/19 09:21:18          0    8362001
USERS               2 07/29/19 09:46:59    8362001    8364481
USERS               3 07/29/19 10:15:19    8364481    8364636


Very interesting.. Since I only made a few changes to the users tablespace, it has 3 versions, the oldest of which is from the full backup.
The system tablespace has gone over 7 versions and it no longer has the original version from the level 0 backup.

Now let's see if the backups used the BCT file.
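This output maps onto V$BACKUP_DATAFILE; a query roughly like the one below (the column headings are my own formatting) would produce it:

select file#,
       to_char(completion_time,'mm/dd/yy hh24:mi:ss') "Creation Time",
       incremental_level,
       incremental_change#,
       checkpoint_change#,
       used_change_tracking "USED BCT"
  from v$backup_datafile
 where file# in (1,7)
 order by file#, completion_time;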

     FILE# Creation Time     INCREMENTAL_LEVEL INCREMENTAL_CHANGE# CHECKPOINT_CHANGE# USED BCT
---------- ----------------- ----------------- ------------------- ------------------ ---
         1 07/29/19 09:47:24                 0                   0            8362002 YES
         1 07/29/19 10:04:02                 1             8362002            8363962 YES
         1 07/29/19 10:13:21                 1             8363962            8364371 YES
         1 07/29/19 10:15:19                 1             8364371            8364482 YES
         1 07/29/19 10:16:38                 1             8364482            8364637 YES
         1 07/29/19 10:26:24                 1             8364637            8365075 YES
         1 07/29/19 10:29:14                 1             8362002            8365373 YES
         1 07/29/19 10:30:22                 1             8362002            8365578 YES
         1 07/29/19 10:31:10                 1             8362002            8365763 NO

         7 07/29/19 09:47:00                 0                   0            8362002 YES
         7 07/29/19 10:04:02                 1             8362002            8363962 YES
         7 07/29/19 10:13:21                 1             8364300            8364371 YES
         7 07/29/19 10:15:19                 1             8364371            8364482 YES
         7 07/29/19 10:16:38                 1             8364482            8364637 YES
         7 07/29/19 10:26:24                 1             8364637            8365075 YES
         7 07/29/19 10:29:14                 1             8362002            8365373 YES
         7 07/29/19 10:30:22                 1             8362002            8365578 YES
         7 07/29/19 10:30:59                 1             8362002            8365763 YES



WOW...  Notice that the system tablespace (FILE #1) could not use the BCT file for the last backup, but the backup of the USERS tablespace (FILE #7) could, because there were no changes between a few of the backups.

I also noticed the creation time of the RMAN backup, and the creation time of the BCT file record.  The new BCT file record (if changes occurred) is created BEFORE the backup begins.

Finally, I can also see that the high SCN for users, 8364636, is older than the high SCN for system.


I am going to change users, and perform an incremental to see what happens to the LOW/HIGH scn in the next version.

TABLESPACE     VERCNT VERTIME                  LOW       HIGH
---------- ---------- ----------------- ---------- ----------
USERS               1 07/29/19 09:21:18          0    8362001
USERS               2 07/29/19 09:46:59    8362001    8364481
USERS               3 07/29/19 10:15:19    8364481    8364636
USERS               4 07/29/19 10:16:38    8364636    8366975

After the backup, there is no gap in the SCN numbers.  It created a new version that contained the changes between the previous version HIGH and the SCN of the incremental backup.


So what have we learned about the BCT ?



  1. The BCT has a structure that contains the starting SCN of changes captured.
  2. The BCT file creates version records associated with block changes before the backup begins.
  3. If no BCT changes are captured for a datafile, it is assumed that no changes occurred since the starting SCN of the BCT file.
  4. The BCT file keeps a bit map of changes between 2 SCN numbers
  5. The BCT file keeps changes on a datafile level.
  6. If a datafile didn't change between backups, it doesn't create a new record in the BCT file.  It doesn't matter if the backup is an incremental or full backup.
  7. By default 7 versions are kept for EACH DATAFILE. After 7 days of backups, some datafiles may still use the BCT if they haven't changed.

Below is a flow chart of what happens.



I explain how it is used in part 2 found here.

Tuesday, May 14, 2019

Oracle Security on your database -- Combining SEPS and EUS Security

Oracle offers many methods to authenticate users, and often it becomes difficult to determine how to combine multiple methods.

In this blog I will talk about how to combine 2 commonly used methods.


Security Methods

SEPS 

What is SEPS ?

The first method is SEPS (Secure External Password Store).
SEPS uses a wallet to store default credentials for connecting to a database.  The wallet can be used as the central location for username/password information when connecting from a system/application.  Using the wallet credentials in scripts etc, keeps the credentials secure, and makes it easier to periodically update the password to follow security rules.

Creating the wallet

The location of the wallet file (WALLET_LOCATION) must be added to the sqlnet.ora file that is used by the OS session.
There is also a second parameter WALLET_OVERRIDE that must also be set to true. This parameter determines if SEPS security is used for this database.

SQLNET.ORA entries

WALLET_LOCATION= < Location of wallet on OS >
WALLET_OVERRIDE=true

******  Note that many databases can share the same wallet if they each point to the same sqlnet.ora file. *****
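(If the wallet doesn't already exist, it can be created first with the same tool, for example:)

mkstore -wrl < wallet_location > -create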

Below is an example of how to add an entry into the wallet.

mkstore -wrl < wallet_location > -createCredential < db_connect_string > < username > < password >
The creation of the wallet entry takes 3 parameters

1) db_connect_string - The connect string for the database that you wish to add. This can be either an entry in the tnsnames.ora file OR EZCONNECT
2) username - This is the user that you wish to connect to the database with
3) password - This is the current password for the user

The wallet is encrypted to protect the passwords, and OS file permissions should limit which OS users can read the wallet.
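
A hypothetical end-to-end example (the wallet directory, the connect string "bsgprd", the user, and the password are all made up for illustration):

# Create the wallet itself (you will be prompted for a wallet password)
mkstore -wrl /home/oracle/app/oracle/admin/wallets/seps -create

# Add a credential for the connect string "bsgprd" and the user "backup_admin"
mkstore -wrl /home/oracle/app/oracle/admin/wallets/seps -createCredential bsgprd backup_admin "MySecretPwd1"

# List the entries to verify (passwords are not displayed)
mkstore -wrl /home/oracle/app/oracle/admin/wallets/seps -listCredential

# Lock the wallet files down so only the oracle OS user can read them
chmod 600 /home/oracle/app/oracle/admin/wallets/seps/*wallet*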

Using the wallet


To connect to the database using the entry in the wallet, use "/@<db_connect_string>" in place of a username and password.

Examples for sqlplus and RMAN

SQLPLUS> connect /@<db_connect_string>

RMAN> connect catalog /@<db_connect_string>
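
For example, a scheduled RMAN backup script no longer needs a password embedded in it. This is just a sketch; the paths, the SID, and the catalog alias "rmancat" are assumptions:

#!/bin/bash
# nightly_backup.sh - run an incremental backup, connecting to the RMAN catalog via the SEPS wallet
export ORACLE_SID=bsg18
export ORACLE_HOME=/home/oracle/app/oracle/product/12.2.0/dbhome_1
export PATH=${ORACLE_HOME}/bin:${PATH}

rman target / catalog /@rmancat log=/home/oracle/logs/nightly_backup.log <<EOF
backup incremental level 1 database plus archivelog;
EOF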

OID/EUS


OID (Oracle Internet Directory)/EUS (Enterprise User Security) uses LDAP or other methods to validate user credentials.

OID/EUS also uses a wallet, which typically contains a unique certificate used to authenticate the database.
Each database that utilizes OID/EUS has its own wallet file.

OID/EUS wallets

The default location for OID/EUS wallets is within the ${ORACLE_BASE} directory structure. By default the wallet for a database (like other database files) is contained within ${ORACLE_BASE}/admin/${DB_NAME}/wallet.

Also, if utilizing the multitenant option, each PDB has a wallet, and the wallet's default location is ${ORACLE_BASE}/admin/${DB_NAME}/pdbguid/wallet.


Combining SEPS and OID/EUS.


Question: So how do we combine SEPS and OID/EUS security?  SEPS requires WALLET_LOCATION to be set in the sqlnet.ora, while OID/EUS needs an individual wallet for each database.  Setting WALLET_LOCATION to a central wallet location will "break" OID/EUS authentication.

Answer: The way to combine these two authentication methods is to use an environment variable in the sqlnet.ora file.  A variable needs to be created that identifies the current database.

Step 1 - Ensure that there is a variable set that identifies the database name.
  - This can be set at the OS level (e.g. setenv DB_NAME <db_name>)
       or
  - Through srvctl, creating a variable that is set for all nodes in a cluster (see the sketch after step 3).

Step 2 - Use this variable to set the wallet location in the sqlnet.ora file
   - WALLET_LOCATION = ${ORACLE_BASE}/admin/${DB_NAME}/wallet

*** Note: If multitenancy is used, the WALLET_LOCATION will be the root directory for PDB wallets.

Step 3 - Ensure that the SEPS connection/username/password entry is added to each database wallet that will use it.
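
Here is a sketch of what steps 1 and 2 could look like for a clustered database named bsg18 (the srvctl options vary slightly by version, so treat this purely as an illustration):

# Step 1 - set the variable for every instance in the cluster
srvctl setenv database -d bsg18 -t "DB_NAME=bsg18"
srvctl getenv database -d bsg18

# Step 2 - reference the variable in the shared sqlnet.ora
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = ${ORACLE_BASE}/admin/${DB_NAME}/wallet)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE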

Conclusion


That's all there is to it.  By using an environment variable within the sqlnet.ora file you can point to the individual wallet for each database (satisfying OID/EUS requirements) and still use SEPS (by adding the SEPS entry to each wallet).




Monday, April 8, 2019

RMAN restore as encrypted

RMAN in 12.2 added a new feature that allows you to restore tablespaces, or a whole database, "AS ENCRYPTED" or "AS DECRYPTED".





This feature is documented here for 18c, but for some reason it isn't mentioned as a new feature for 12.2, or documented in 12.2.

Now I am going to demonstrate how to use this.

I have a 12.2 database (bsg). I have created an encryption wallet, but none of the datafiles are encrypted.
You can see this from the dba_tablespaces view.

[oracle@oracle-server BSG]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Mon Apr 8 15:37:52 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>  select tablespace_name,encrypted from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO

SQL>
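
For context, the encryption wallet was created ahead of time. A minimal sketch of the kind of commands involved (the keystore location and password below are placeholders, and the location also has to be configured via ENCRYPTION_WALLET_LOCATION in sqlnet.ora):

administer key management create keystore '/home/oracle/app/oracle/admin/bsg/wallet'
  identified by "MyWalletPwd1";

administer key management set keystore open identified by "MyWalletPwd1";

administer key management set key identified by "MyWalletPwd1" with backup;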


Now I ran a Full backup of my database

[oracle@oracle-server BSG]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 14:43:50 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> backup incremental level 0 database;




I have a full backup, and now I am going to restore the USERS tablespace as encrypted.
NOTE: I am using "RESTORE TABLESPACE xxx AS ENCRYPTED".

[oracle@oracle-server BSG]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 14:43:50 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> alter tablespace users offline;

Statement processed

RMAN> restore tablespace users as encrypted;

Starting restore at 04/08/19 14:53:22
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T144422_gbq5n71p_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T144422_gbq5n71p_.bkp tag=TAG20190408T144422
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 04/08/19 14:53:23
RMAN> recover tablespace users;

Starting recover at 04/08/19 14:53:44
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 04/08/19 14:53:44

RMAN> alter tablespace users online;

Statement processed

RMAN> exit


Now that I have restored and recovered the tablespace, let's check dba_tablespaces to see if it is now marked as encrypted.

[oracle@oracle-server BSG]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Mon Apr 8 14:54:13 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select tablespace_name,encrypted from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          YES



Now to back up the tablespace.  Since it is newly encrypted, I need to make sure I perform a new level 0 incremental backup to capture the encrypted datafiles.


[oracle@oracle-server datafile]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 16:12:08 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> backup incremental level 0 tablespace users;

Starting backup at 04/08/19 16:12:25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=265 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
channel ORA_DISK_1: starting piece 1 at 04/08/19 16:12:26
channel ORA_DISK_1: finished piece 1 at 04/08/19 16:12:27
piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T161226_gbqbsbmv_.bkp tag=TAG20190408T161226 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 04/08/19 16:12:27

Starting Control File and SPFILE Autobackup at 04/08/19 16:12:27
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of Control File and SPFILE Autobackup command on ORA_DISK_1 channel at 04/08/2019 16:12:28
ORA-00237: snapshot operation disallowed: control file newly created



The backup itself was successful, but the autobackup of the controlfile failed.

There is a MOS note on this (Doc ID 2320446.1)

You cannot back up the controlfile until the database is bounced, because there is a data dictionary check pending.  Once I bounce the database, all is good.
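
A minimal sketch of the bounce and a manual retry of the controlfile backup, all from within RMAN:

RMAN> shutdown immediate;
RMAN> startup;
RMAN> backup current controlfile;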

Now let's try it the other way...

RESTORE TABLESPACE xxx AS DECRYPTED;

[oracle@oracle-server trace]$ rman

Recovery Manager: Release 12.2.0.1.0 - Production on Mon Apr 8 16:29:30 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: BSG (DBID=589892482)

RMAN> alter tablespace users offline;

using target database control file instead of recovery catalog
Statement processed

RMAN> restore tablespace users as decrypted;

Starting restore at 04/08/19 16:29:43
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=265 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/BSG/datafile/o1_mf_users_fz01zl49_.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T162748_gbqcp4vg_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg/BSG/backupset/2019_04_08/o1_mf_nnnd0_TAG20190408T162748_gbqcp4vg_.bkp tag=TAG20190408T162748
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 04/08/19 16:29:45

RMAN> recover tablespace users;

Starting recover at 04/08/19 16:29:54
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 04/08/19 16:29:54

RMAN> alter tablespace users online;

Statement processed

RMAN>



Finally, let's double-check.

[oracle@oracle-server trace]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Mon Apr 8 16:31:15 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select tablespace_name,encrypted from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO