
Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER


In a DataGuard environment, the password of the SYS user is used by default to authenticate redo transport sessions when a password file is used. But for security reasons you might not want to use such a highly privileged user just for redo transmission. To address this, Oracle has implemented the REDO_TRANSPORT_USER initialization parameter.

The REDO_TRANSPORT_USER specifies the name of the user whose password verifier is used when a remote login password file is used for redo transport authentication.

But take care: the password must be the same on both databases to create a redo transport session, and the value of this parameter is case sensitive; it must exactly match the value of the USERNAME column in the V$PWFILE_USERS view.

Besides that, this user must have the SYSDBA or SYSOPER privilege. Since we don’t want to grant SYSDBA, SYSOPER it is. For administrative ease, Oracle recommends that the REDO_TRANSPORT_USER parameter be set to the same value on the redo source database and at each redo transport destination.
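A quick way to verify the current setting on each site is a simple show parameter; an empty value means the default (the SYS password verifier) is in use:

SQL> show parameter redo_transport_user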

Ok. Let’s give it a try. I am creating a user called ‘DBIDG’ which will be used for redo transmission between my primary and standby.

SQL> create user DBIDG identified by manager;

User created.

SQL> grant connect to DBIDG;

Grant succeeded.

SQL> grant sysoper to DBIDG;

Grant succeeded.

Once done, I check V$PWFILE_USERS to see if my new user ‘DBIDG’ exists.

-- On Primary

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE


-- On Standby
SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

no rows selected

Ok. As in previous versions of Oracle, I have to copy the password file myself to the destination host to make it work.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] scp -p orapwDBIT122 oracle@dbidg02:$PWD

SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE

 

By connecting as the ‘DBIDG’ user, you can hardly do anything, not even select from the dba_tablespaces view, for example. From a security perspective, this user is much less of a concern.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] sqlplus dbidg/Manager1@DBIT122_SITE1 as sysoper

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 13 11:08:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> desc dba_tablespaces
ERROR:
ORA-04043: object "SYS"."DBA_TABLESPACES" does not exist

Nevertheless, the ‘DBIDG’ user is completely sufficient for my use case. Now that I have my ‘DBIDG’ redo transport user in both password files (primary and standby), I can activate the redo_transport_user feature on both sites and check that everything works by doing a switchover and a switchback.

-- On Primary and Standby

SQL> alter system set redo_transport_user='DBIDG';

System altered.


DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 33 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE1"
Switchover succeeded, new primary is "DBIT122_SITE2"

DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE2 - Primary database
    DBIT122_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 71 seconds ago)


DGMGRL> SWITCHOVER TO 'DBIT122_SITE1' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE1"
Connecting ...
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE2"
Switchover succeeded, new primary is "DBIT122_SITE1"

Looks very good so far. But what happens if I have to change the password of the ‘DBIDG’ user?

-- On Primary

SQL> alter user dbidg identified by Manager1;

User altered.

-- On Primary
oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122


-- On Standby
oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122

That’s cool. The password files on both sites have been updated successfully. They have the same timestamps and even the MD5 checksums are exactly the same. This is because of the new “Automatic Password Propagation to Standby” feature of 12cR2.

Conclusion

REDO_TRANSPORT_USER and “Automatic Password Propagation to Standby” are nice little features from Oracle. REDO_TRANSPORT_USER has existed for quite a while now, at least since 11gR2; “Automatic Password Propagation to Standby”, however, is new with 12cR2.

 



Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations?


For security reasons, you may want your DataGuard operations to be done with a different UNIX user and a different Oracle user that is not as highly privileged as SYSDBA. This is exactly where the SYSDG administrative privilege for Oracle Data Guard operations comes into play.

The SYSDG privilege is quite powerful. It allows you to work with the Broker (DGMGRL) command line interface and, besides that, it enables the following operations:

  • STARTUP
  • SHUTDOWN
  • ALTER DATABASE
  • ALTER SESSION
  • ALTER SYSTEM
  • CREATE RESTORE POINT (including GUARANTEED restore points)
  • CREATE SESSION
  • DROP RESTORE POINT (including GUARANTEED restore points)
  • FLASHBACK DATABASE
  • SELECT ANY DICTIONARY
  • SELECT
    • X$ tables (that is, the fixed tables)
    • V$ and GV$ views (that is, the dynamic performance views)
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • DELETE
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • EXECUTE
    • SYS.DBMS_DRS

In addition, the SYSDG privilege enables you to connect to the database even if it is not open.
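For example, once a user holds the SYSDG privilege (like the SCOTT user created below), connecting to a mounted, not yet opened standby is as simple as this hypothetical session:

sqlplus scott/tiger@DBIT122_SITE1 as sysdg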

Ok. Let’s give it a try. I want to give the user scott all the privileges he needs for the DataGuard operational tasks. So I create a UNIX user scott and a database user scott with the SYSDG privilege.

[root@dbidg02 ~]# useradd scott
[root@dbidg02 ~]# usermod -a -G sysdg scott
[root@dbidg02 ~]# cat /etc/group |grep sysdg
sysdg:x:54324:oracle,scott

SQL> create user scott identified by tiger;

User created.

SQL> grant sysdg to scott;

Grant succeeded.

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS where USERNAME = 'SCOTT';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
SCOTT                  FALSE FALSE FALSE TRUE  FALSE

So far so good. Everything works. Scott can do switchovers, convert the physical standby to a snapshot standby, create restore points and much more. But what happens when an error pops up? You need to take a look into the most important log files in a DataGuard environment, which are the alert log and the broker log.

If you do a “show database verbose”, you will find the locations of the log files at the end of the output, which is quite useful from my point of view. This is new with Oracle 12cR2.

DGMGRL> show database verbose 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Active Apply Rate:  152.00 KByte/s
  Maximum Apply Rate: 152.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    DBIT122
  ...
  ...
Broker shows you the Log file location:

    Alert log               : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log

But unfortunately, the scott user can’t read those files, because there are no read permissions for others and scott is not part of the oinstall group.

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 trace]$ ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 37787 Dec 13 10:36 drcDBIT122.log
[scott@dbidg01 trace]$ ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 221096 Dec 13 12:04 alert_DBIT122.log

So what possibilities do we have to overcome this issue?

1. We can add user scott to the oinstall group, but then we haven’t gained much in terms of security
2. We can set the parameter “_trace_files_public”=true, but when this one is enabled, all Oracle trace files become world readable, not just the alert and broker logs
3. We can configure XFS access control lists, so that user scott gets only the permissions he needs

For security reasons, I decided to go for the last one.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r drcDBIT122.log


oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r-----+ 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl alert_DBIT122.log
# file: alert_DBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl drcDBIT122.log
# file: drcDBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

Cool. Now the scott user is really able to do a lot of DataGuard operational tasks, including some debugging.

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log | grep 'MAXIMUM AVAILABILITY mode' | tail -1
Primary database is in MAXIMUM AVAILABILITY mode

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log |grep "Protection Mode" | tail -1
      Protection Mode:            Maximum Availability

Conclusion

Using XFS ACLs is quite cool if you want to give a specific user permissions on a file, but don’t want to add him to a group or make all files world readable. But be careful to configure the same ACLs on all other standby nodes as well (a sketch of one way to do that follows the listing below), and make sure that you use a backup solution which supports ACLs.

For example, using ‘cp’ or ‘cp -p’ makes a huge difference. In one case you lose the ACL on the copy, in the other case you preserve it. The (+) sign at the end of the file permissions shows the difference.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp alert_DBIT122.log alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp -p alert_DBIT122.log alert_DBIT122.log.b
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.a
-rw-r----- 1 oracle oinstall 312894 Dec 13 14:25 alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.b
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log.b
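To replicate the ACLs to the other standby nodes, getfacl and setfacl can dump and restore them. A minimal sketch, assuming the corresponding trace directory exists on dbidg02:

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl alert_DBIT122.log drcDBIT122.log > /tmp/trace_acls.txt
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] scp /tmp/trace_acls.txt oracle@dbidg02:/tmp/

And on dbidg02, from within its own trace directory:

oracle@dbidg02:/u01/app/oracle/diag/rdbms/dbit122_site2/DBIT122/trace/ [DBIT122] setfacl --restore=/tmp/trace_acls.txt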
 


Oracle Database 12c Release 2 Multitenant (Oracle Press)


Here it is. The multitenant book is out for sale…

One year ago, at DOAG2015, Vit Spinka came to me with this idea: together with Anton Els, he planned to write a book on multitenant and proposed that I join as a co-author. I was already quite busy at that time and my short-term plan was to prepare and pass the OCM 12c exam. But this book idea was something great and it had to be started quickly. At that time, we expected 12cR2 to be out in June 2016, and the book would then be ready for Oracle Open World. So no time to waste: propose the idea to Oracle Press, find a reviewer and start as soon as possible.

For reviewers, I was very happy that Deiby Gomez accepted to do the technical review. And Mike Donovan volunteered to do the English review. I think he didn’t imagine how hard it can be to take non-native English speakers’ writing, with very limited vocabulary, and turn it into something that makes sense to read. It’s an amazing chance to have the language review done by someone with deep technical knowledge: this ensures that the improved style does not change the meaning. Having that language review is also a good way to unify the style of what is written by three different authors. I bet you cannot guess who has written what. In addition to that, Oracle Press asked Arup Nanda to do an additional review, which was great because Arup has experience with book writing.

So we worked on the 12.2 beta, tested everything (there are a lot of code listings in the book), filed bugs, and clarified everything. We had good interaction with support engineers and product managers. The result is a book on multitenant which covers all the administration tasks you can do on a 12c database.

It was an amazing adventure from the get-go. You know people for their skills, blogs, presentations and discussions at events. And then you start to work with them on a common thing, the book, and remotely, as we are all in different timezones. How can you be sure that you can work together? Actually, it was easy and went smoothly. We listed the chapters and each of us marked which chapters he preferred. And that was it: in one or two e-mail exchanges the distribution of tasks was done, with everybody happy. We had a very short schedule: one chapter to deliver every 2 or 3 weeks. I was happy with what I wrote and equally happy with what I read from Vit and Anton. The reviews from Deiby, Mike and Arup all added precision and clarity. Incredible teamwork, without the need for long discussions. Besides the hard work and the delightful result, working with this team was an amazing human adventure.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Master the Powerful Multitenant Features of Oracle Database 12c
• Build high-performance multitenant Oracle databases
• Create single-tenant, multitenant, and application containers
• Establish network connections and manage services
• Handle security using authentication, authorization, and encryption
• Back up and restore your mission-critical data
• Work with point-in-time recovery and Oracle Flashback
• Move data and replicate and clone databases
• Work with Oracle’s Resource Manager and Data Guard

 


RMAN> TRANSPORT TABLESPACE


In a previous post I explained how to use transportable tablespaces from a standby database. Here I’m showing an alternative where you transport from a backup instead of a standby database. RMAN has been able to do that since 10gR2.

Transportable tablespaces are a beautiful feature: the performance of a physical copy and the flexibility of logical export/import. But there is one drawback: the source tablespace must be opened read only while you copy it and export the metadata. This means that you cannot use it from production, for example to move data to a datawarehouse ODS. There’s an alternative to that: restore the tablespace with TSPITR (tablespace point-in-time recovery) into a temporary instance and transport from there.
This is what is automated by RMAN with a simple command: RMAN> TRANSPORT TABLESPACE.

Multitenant

This blog post shows how to do that when you are in 12c multitenant architecture. Even if 12.2 comes with online PDB clone, you may want to transport a single tablespace.

You cannot run TRANSPORT TABLESPACE when connected to a PDB. Let’s test it:

RMAN> connect target sys/oracle@//localhost/PDB1
connected to target database: CDB1:PDB1 (DBID=1975603085)

Here are the datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
9 250 SYSTEM NO /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 UNDOTBS1 NO /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3 20 TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

Let’s run the TRANSPORT TABLESPACE command:

RMAN> transport tablespace USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
 
Creating automatic instance, with SID='jlDa'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=jlDa_pitr_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created
 
Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/17/2016 21:33:14
RMAN-07538: Pluggable Database qualifier not allowed when connected to a Pluggable Database

You get the idea: an auxiliary instance is automatically created, but then it fails because an internal command cannot be run from a PDB.

Run from CDB

So let’s run it when connected to CDB$ROOT:

echo set on
 
RMAN> connect target sys/oracle
connected to target database: CDB1 (DBID=894360530)

We can see all pluggable databases and all datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB1A/system01.dbf
3 480 SYSAUX NO /u02/oradata/CDB1A/sysaux01.dbf
4 65 UNDOTBS1 YES /u02/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB1A/pdbseed/system01.dbf
6 350 PDB$SEED:SYSAUX NO /u02/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/oradata/CDB1A/users01.dbf
8 520 PDB$SEED:UNDOTBS1 NO /u02/oradata/CDB1A/pdbseed/undotbs01.dbf
9 250 PDB1:SYSTEM YES /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 PDB1:SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 PDB1:UNDOTBS1 YES /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 240 TEMP 32767 /u02/oradata/CDB1A/temp01.dbf
2 32 PDB$SEED:TEMP 32767 /u02/oradata/CDB1A/pdbseed/temp012016-08-23_14-12-45-799-PM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

We can run the TRANSPORT TABLESPACE command from here, naming the tablespace prefixed with the PDB name: PDB1:USERS.

transport tablespace … auxiliary destination … tablespace destination

The TRANSPORT TABLESPACE command needs a destination where to put the datafiles and the dump file to transport (TABLESPACE DESTINATION) and also an auxiliary destination (AUXILIARY DESTINATION). The latter seems to be mandatory, which is different from PDB PITR, where the FRA is used by default.


RMAN> transport tablespace PDB1:USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';

And then you will see everything RMAN does for you. I’ll show most of the output.

UNDO

Restoring a tablespace requires applying redo and then rolling back the transactions that were open at that PIT. This is why RMAN has to restore all tablespaces that may contain UNDO:

RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace PDB1:SYSTEM
Tablespace UNDOTBS1
Tablespace PDB1:UNDOTBS1

I suppose that when the UNDO_TABLESPACE has changed in the meantime, RMAN cannot guess which tablespace covered the transactions at the requested PIT, but I have seen nothing in the TRANSPORT TABLESPACE syntax to specify it. That’s probably for a future post and/or SR.

Auxiliary instance

So RMAN creates an auxiliary instance with some specific parameters to be sure there’s no side effect on the source database (the RMAN target one).

Creating automatic instance, with SID='qnDA'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=qnDA_pitr_PDB1_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created

Restore

The goal is to transport the tablespace, so RMAN checks that they are self-contained:

Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully
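The same check can be run manually before launching the whole operation, using DBMS_TTS (a quick sketch; run as SYS in the PDB):

SQL> exec dbms_tts.transport_set_check('USERS', incl_constraints => TRUE);
SQL> select * from transport_set_violations;

No rows returned from TRANSPORT_SET_VIOLATIONS means the set is self-contained.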

RMAN then starts the restore of the controlfile and the datafiles (the CDB SYSTEM, SYSAUX and UNDO, the PDB SYSTEM, SYSAUX and UNDO, and the tablespaces to transport):

contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log
sql 'alter system archive log current';
}
executing Memory Script
 
executing command: SET until clause
 
Starting restore at 17-DEC-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=253 device type=DISK
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp tag=TAG20161217T213038
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/var/tmp/AUX/CDB1A/controlfile/o1_mf_d5c88zp3_.ctl
Finished restore at 17-DEC-16
 
sql statement: alter database mount clone database
 
sql statement: alter system archive log current
 
contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 9 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 11 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 10 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 3 to new;
set newname for datafile 12 to
"/var/tmp/TTS/users01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 9, 4, 11, 3, 10, 12;
 
switch clone datafile all;
}
executing Memory Script
 
executing command: SET until clause
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
renamed tempfile 1 to /var/tmp/AUX/CDB1A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_temp_%u_.tmp in control file
 
Starting restore at 17-DEC-16
using channel ORA_AUX_DISK_1
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /var/tmp/AUX/CDB1A/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /var/tmp/TTS/users01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:25
Finished restore at 17-DEC-16
 
datafile 1 switched to datafile copy
input datafile copy RECID=11 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_system_d5c8993k_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=12 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_d5c8d8ow_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=13 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_d5c8998b_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=14 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_d5c8d8g6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_d5c8996o_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=16 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_d5c8d8o7_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=17 STAMP=930865006 file name=/var/tmp/TTS/users01.dbf

You will have noticed that SYSTEM, SYSAUX and UNDO are restored to the auxiliary location, but the tablespace to transport (USERS here) goes directly to its destination. If you transport it on the same server, you avoid any further copying of it.

Recover

The recovery then continues automatically to the PIT, which you can also specify with an UNTIL clause or a restore point.
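For reference, the point in time can also be given explicitly on the command itself; a sketch (not what I ran here), reusing the SCN from the output above:

RMAN> transport tablespace PDB1:USERS
2>   auxiliary destination '/var/tmp/AUX'
3>   tablespace destination '/var/tmp/TTS'
4>   until scn 1836277;

In my run, RMAN determined the SCN itself: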


contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone 'PDB1' "alter database datafile
9 online";
sql clone "alter database datafile 4 online";
sql clone 'PDB1' "alter database datafile
11 online";
sql clone "alter database datafile 3 online";
sql clone 'PDB1' "alter database datafile
10 online";
sql clone 'PDB1' "alter database datafile
12 online";
# recover and open resetlogs
recover clone database tablespace "PDB1":"USERS", "SYSTEM", "PDB1":"SYSTEM", "UNDOTBS1", "PDB1":"UNDOTBS1", "SYSAUX", "PDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
 
executing command: SET until clause
 
sql statement: alter database datafile 1 online
 
sql statement: alter database datafile 9 online
 
sql statement: alter database datafile 4 online
 
sql statement: alter database datafile 11 online
 
sql statement: alter database datafile 3 online
 
sql statement: alter database datafile 10 online
 
sql statement: alter database datafile 12 online
 
Starting recover at 17-DEC-16
using channel ORA_AUX_DISK_1
 
starting media recovery
 
archived log for thread 1 with sequence 30 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc thread=1 sequence=30
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:02
Finished recover at 17-DEC-16
 
database opened
 
contents of Memory Script:
{
sql clone 'alter pluggable database PDB1 open';
}
executing Memory Script
 
sql statement: alter pluggable database PDB1 open

Export TTS

The restored tablespaces can be set read only, which was the goal.

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'PDB1' 'alter tablespace
USERS read only';
# create directory for datapump export
sql clone 'PDB1' "create or replace directory
STREAMS_DIROBJ_DPDIR as ''
/var/tmp/TTS''";
}
executing Memory Script
 
sql statement: alter tablespace USERS read only

Now the metadata export runs; this is the equivalent of an expdp call in transportable tablespace mode.
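Done by hand, that export would look something like this (a sketch with assumed credentials; RMAN creates the directory object for you, as shown below):

expdp system/oracle@//localhost/PDB1 directory=STREAMS_DIROBJ_DPDIR dumpfile=dmpfile.dmp transport_tablespaces=USERS

Here is what RMAN runs itself: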


sql statement: create or replace directory STREAMS_DIROBJ_DPDIR as ''/var/tmp/TTS''
 
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_qnDA_urDb":
 
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_qnDA_urDb" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_qnDA_urDb is:
EXPDP> /var/tmp/TTS/dmpfile.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace USERS:
EXPDP> /var/tmp/TTS/users01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_qnDA_urDb" successfully completed at Sat Dec 17 21:41:06 2016 elapsed 0 00:00:47
Export completed
 
Not performing table import after point-in-time recovery

The last message makes me think that this RMAN code is shared with the code that manages RECOVER TABLE.

Then RMAN lists the commands to run the import (also available in a generated script) and removes the auxiliary instance.

Cleanup

Not everything has been removed:
[oracle@VM117 blogs]$ du -ha /var/tmp/AUX
0 /var/tmp/AUX/CDB1A/controlfile
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_51_d5c8k0oo_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_52_d5c8kcjp_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_53_d5c8kskz_.log
601M /var/tmp/AUX/CDB1A/onlinelog
0 /var/tmp/AUX/CDB1A/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undo_1_d5c8m1nx_.dbf
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9
1.1G /var/tmp/AUX/CDB1A
1.1G /var/tmp/AUX

Import TTS

In the destination directory you find the tablespace datafiles, the metadata dump, and a script that can be run to import them at the destination:

[oracle@VM117 blogs]$ du -ha /var/tmp/TTS
5.1M /var/tmp/TTS/users01.dbf
132K /var/tmp/TTS/dmpfile.dmp
4.0K /var/tmp/TTS/impscrpt.sql
5.2M /var/tmp/TTS

For this example, I import it on the same server, in a different pluggable database:


SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB2;
Session altered.

and simply run the script provided:

SQL> set echo on
 
SQL> @/var/tmp/TTS/impscrpt.sql
 
SQL> /*
SQL> The following command may be used to import the tablespaces.
SQL> Substitute values for and .
SQL>
SQL> impdp directory= dumpfile= 'dmpfile.dmp' transport_datafiles= /var/tmp/TTS/users01.dbf
SQL> */
SQL>
SQL> --
SQL> --
SQL> --
SQL> --
SQL> CREATE DIRECTORY STREAMS$DIROBJ$1 AS '/var/tmp/TTS/';
Directory created.
 
SQL> CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS '/var/tmp/TTS';
Directory created.
 
SQL> /* PL/SQL Script to import the exported tablespaces */
SQL> DECLARE
2 --
3 tbs_files dbms_streams_tablespace_adm.file_set;
4 cvt_files dbms_streams_tablespace_adm.file_set;
5
6 --
7 dump_file dbms_streams_tablespace_adm.file;
8 dp_job_name VARCHAR2(30) := NULL;
9
10 --
11 ts_names dbms_streams_tablespace_adm.tablespace_set;
12 BEGIN
13 --
14 dump_file.file_name := 'dmpfile.dmp';
15 dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
16
17 --
18 tbs_files( 1).file_name := 'users01.dbf';
19 tbs_files( 1).directory_object := 'STREAMS$DIROBJ$1';
20
21 --
22 dbms_streams_tablespace_adm.attach_tablespaces(
23 datapump_job_name => dp_job_name,
24 dump_file => dump_file,
25 tablespace_files => tbs_files,
26 converted_files => cvt_files,
27 tablespace_names => ts_names);
28
29 --
30 IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
31 FOR i IN ts_names.first .. ts_names.last LOOP
32 dbms_output.put_line('imported tablespace '|| ts_names(i));
33 END LOOP;
34 END IF;
35 END;
36 /
PL/SQL procedure successfully completed.
 
SQL>
SQL> --
SQL> DROP DIRECTORY STREAMS$DIROBJ$1;
Directory dropped.
 
SQL> DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
Directory dropped.
 
SQL> --------------------------------------------------------------
SQL> -- End of sample PL/SQL script
SQL> --------------------------------------------------------------

Of course, you don’t need to use that script; you can run the import directly with IMPDP:

SQL> alter session set container=pdb2;
Session altered.
SQL> create directory tts as '/var/tmp/TTS';
Directory created.
SQL> host impdp pdbadmin/oracle@//localhost/PDB2 directory=TTS dumpfile='dmpfile.dmp' transport_datafiles=/var/tmp/TTS/users01.dbf

You may also use RMAN CONVERT when transporting to a platform with a different endianness.
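With RMAN CONVERT, that could look like this (a sketch; the target platform name is only an example, pick yours from V$TRANSPORTABLE_PLATFORM):

RMAN> convert datafile '/var/tmp/TTS/users01.dbf'
2>   to platform 'Solaris[tm] OE (64-bit)'
3>   format '/var/tmp/TTS/users01_sol.dbf';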

Local Undo

Note that if you run this on the current 12.2.0.1.0 Cloud First DBaaS, you will get an error when RMAN opens the PDB in the auxiliary instance, because there’s a bug with local undo. Here is the relevant part of the alert.log:

PDB1(3):Opening pdb with no Resource Manager plan active
PDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE
PDB1(3):Force tablespace UNDO_1 to be encrypted with AES128
2016-12-17T18:05:14.759732+00:00
PDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/fqkn_pitr_pdb1_cdb1/fqkn/trace/fqkn_ora_26146.trc
PDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE...
PDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database PDB1 open...

I did this demo with LOCAL UNDO OFF.
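Whether local undo is enabled can be checked with a simple query:

SQL> select property_name, property_value
  2  from database_properties
  3  where property_name = 'LOCAL_UNDO_ENABLED';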

So what?

You can use transportable tablespaces from a database where you cannot put the tablespace read only. The additional cost is to recover it from a backup, along with SYSTEM, SYSAUX and UNDO. But it is fully automated with only one RMAN command.

 


Oracle 12cR2: AWR views in multitenant


In a previous post I explained how the AWR views changed between 12.1.0.1 and 12.1.0.2; now, in 12.2.0.1, they have changed again. This is a good illustration of multitenant object link usage.

What’s new in 12cR2 is the ability to take AWR snapshots at CDB or PDB level. I really think that it makes more sense to read an AWR report at CDB level, because it’s about analysing the system (=instance) activity. But with PDBaaS I can understand the need to give a report analysing only PDB sessions, resources and statements.
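Taking a manual PDB-level snapshot is the same call as always, just executed from within the PDB (automatic PDB snapshots additionally require the AWR_PDB_AUTOFLUSH_ENABLED parameter to be set to TRUE):

SQL> alter session set container=PDB1;
Session altered.

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.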

I’ll start with the conclusion: a map of the AWR views showing which ones read from CDB-level snapshots, from PDB snapshots, or from both:

(Diagram: map of the AWR views and the snapshot sets they read from.)
I’ll explain AWR reports in a future post. Basically, when you run awrrpt.sql from CDB$ROOT you get CDB snapshots, and when you run it from a PDB you have the choice.

In the diagram above, just follow the arrows to know which view reads from the PDB, the CDB, or both. You see two switches between the root and the PDB: a data link for one way and a common view for the other. Note that all of them are metadata links, so the switches also occur at parse time.

WRM$_

Let’s start from the table where AWR snapshots are stored:


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name='WRM$_SNAPSHOT';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ------------------------------ ----------------------- ------------------
SYS WRM$_SNAPSHOT TABLE METADATA LINK

This is a table. METADATA LINK means that the structure is the same in all containers, but data is different.

I have the following containers:

SQL> select con_id,dbid,name from v$containers;
 
CON_ID DBID NAME
---------- ---------- ------------------------------
1 904475458 CDB$ROOT
2 2066620152 PDB$SEED
3 2623271973 PDB1

From CDB$ROOT I see data for the CDB:

SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

and from the PDB I see the snapshots taken from the PDB:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So remember: CDB$ROOT has 91 snapshots with DBID=904475458 and PDB1 has 79 snapshots with DBID=2623271973.

AWR_ROOT_ and AWR_PDB_

Views on WRM$_SNAPSHOT are referenced in DBA_DEPENDENCIES:


SQL> select owner,name,type from dba_dependencies where referenced_name='WRM$_SNAPSHOT' and type like 'VIEW';
 
OWNER NAME TYPE
----- ------------------------------ -------------------
SYS AWR_ROOT_SNAPSHOT VIEW
SYS AWR_ROOT_SYSSTAT VIEW
SYS AWR_ROOT_ACTIVE_SESS_HISTORY VIEW
SYS AWR_ROOT_ASH_SNAPSHOT VIEW
SYS AWR_PDB_SNAPSHOT VIEW
SYS AWR_PDB_ACTIVE_SESS_HISTORY VIEW
SYS AWR_PDB_ASH_SNAPSHOT VIEW

I’m interested in the views that show snapshot information: AWR_ROOT_SNAPSHOT and AWR_PDB_SNAPSHOT.


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') order by 3;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM METADATA LINK
PUBLIC AWR_PDB_SNAPSHOT SYNONYM METADATA LINK
SYS AWR_ROOT_SNAPSHOT VIEW DATA LINK
SYS AWR_PDB_SNAPSHOT VIEW METADATA LINK

Besides the synonyms, we have a metadata link view, AWR_PDB_SNAPSHOT, and a data link view, AWR_ROOT_SNAPSHOT. The data link sharing means that the view switches to CDB$ROOT when queried from a PDB. Here is the definition:


SQL> select owner,view_name,container_data,text from dba_views where view_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS AWR_ROOT_SNAPSHOT Y select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0
 
SYS AWR_PDB_SNAPSHOT N select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0

Same definition. The difference is that AWR_PDB_SNAPSHOT reads from the current container, while AWR_ROOT_SNAPSHOT, being a DATA LINK, always reads from CDB$ROOT.

This is what we can see:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

This query when run in PDB1 displays the 91 snapshots from the CDB.

SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

This one shows what is in the current container.

Those are the views used by the AWR report, depending on the AWR location choice. But what about the DBA_HIST_ views that we know and have used in previous versions?

DBA_HIST_ and CDB_HIST_

I continue to follow the dependencies:

SQL> select owner,name,type from dba_dependencies where referenced_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') and name like '%SNAPSHOT' order by 3;
 
OWNER NAME TYPE
------ ------------------------------ -------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM
PUBLIC AWR_PDB_SNAPSHOT SYNONYM
SYS DBA_HIST_SNAPSHOT VIEW
SYS CDB_HIST_SNAPSHOT VIEW
 
SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
SYS DBA_HIST_SNAPSHOT VIEW METADATA LINK
SYS CDB_HIST_SNAPSHOT VIEW METADATA LINK
PUBLIC DBA_HIST_SNAPSHOT SYNONYM METADATA LINK
PUBLIC CDB_HIST_SNAPSHOT SYNONYM METADATA LINK

Here are the views I’m looking for. They are metadata links only, not data links, which means that they do not switch to CDB$ROOT by themselves.

But there’s more in the view definition:

SQL> select owner,view_name,container_data,text from dba_views where view_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS DBA_HIST_SNAPSHOT N select "SNAP_ID","DBID","INSTANCE_NUMBER","STARTUP_TIME","BEGIN_INTERVAL_TIME","
END_INTERVAL_TIME","FLUSH_ELAPSED","SNAP_LEVEL","ERROR_COUNT","SNAP_FLAG","SNAP_
TIMEZONE","CON_ID" from AWR_ROOT_SNAPSHOT
 
SYS CDB_HIST_SNAPSHOT Y SELECT k."SNAP_ID",k."DBID",k."INSTANCE_NUMBER",k."STARTUP_TIME",k."BEGIN_INTERV
AL_TIME",k."END_INTERVAL_TIME",k."FLUSH_ELAPSED",k."SNAP_LEVEL",k."ERROR_COUNT",
k."SNAP_FLAG",k."SNAP_TIMEZONE",k."CON_ID", k.CON$NAME, k.CDB$NAME FROM CONTAINE
RS("SYS"."AWR_PDB_SNAPSHOT") k

DBA_HIST_SNAPSHOT is a simple view on AWR_ROOT_SNAPSHOT which, as we have seen above, always shows the snapshots from the CDB:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

Then CDB_HIST_SNAPSHOT reads AWR_PDB_SNAPSHOT, which shows the current container’s snapshots. But this view is a COMMON DATA one, using the CONTAINERS() function. This means that when it is executed from CDB$ROOT by a common user, data from all open containers is retrieved:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79
904475458 91

However, from a PDB you cannot see anything else:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So what?

Multitenant adds a new dimension to the dictionary views and we must be aware of that. However, compatibility is still there: the scripts that we used to run against the DBA_HIST views should still work. Don’t forget to always join on DBID and INSTANCE_NUMBER in addition to SNAP_ID, so that your scripts keep working in RAC and across containers.
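For example, here is a sketch of a query that follows this rule:

SQL> select s.snap_id, s.instance_number, st.value
  2  from dba_hist_snapshot s
  3  join dba_hist_sysstat st
  4  on st.snap_id = s.snap_id and st.dbid = s.dbid and st.instance_number = s.instance_number
  5  where st.stat_name = 'user commits'
  6  order by s.snap_id, s.instance_number;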
In 12.2 you can do the same for your application: use metadata links, data links, and common views for your own tables. But remember to keep it simple…

 


Oracle 12cR2 – How to Setup DataGuard observer with Oracle Wallets


I am not a big fan of having passwords in clear text lying around. This applies not only to application servers, but also to my DataGuard observer.

I have a script for starting the observer that reads a config file, dgobserver.cfg, and this file contains the usernames, passwords and connect strings to my primary and standby databases.

#*************************************************************
# Connection string to the primary
ConnectStringPrim="sys/Manager1@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="sys/Manager1@DBIT122_SITE2"

However, I don’t want to have these passwords in clear text anymore, so I set up wallets for that purpose on the observer host.

To set up the wallet connection, we need to:

  • Create a wallet directory
  • Adjust the sqlnet.ora on the observer
  • Create the wallet and the credentials
  • Test the connections via wallets
  • Adjust the dgobserver.cfg file
  • Test a Fast Start Failover

Create a directory /u01/app/oracle/admin/wallets and add the following to your sqlnet.ora file:

WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/wallets))
)

SQLNET.WALLET_OVERRIDE = TRUE

Now, create the wallet and the credentials

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -create
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential           
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
2: DBIT122_SITE2 SYS
1: DBIT122_SITE1 SYS

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] ls -l
total 8
-rw------- 1 oracle oinstall 957 Jan  3 13:57 cwallet.sso
-rw------- 1 oracle oinstall   0 Jan  3 13:56 cwallet.sso.lck
-rw------- 1 oracle oinstall 912 Jan  3 13:57 ewallet.p12
-rw------- 1 oracle oinstall   0 Jan  3 13:56 ewallet.p12.lck

 

After everything was successfully set up, it is time to test the connections via the wallets, with sqlplus and with dgmgrl.

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE1 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:07 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE2 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:12 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122]

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgh
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Tue Jan 3 14:00:05 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL> connect /@DBIT122_SITE2
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
DGMGRL> exit

 

Looks good so far. Now let’s adjust the dgobserver.cfg file and start the observer.

-- adjust the dgobserver.cfg file

#*************************************************************
# Connection string to the primary
ConnectStringPrim="/@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="/@DBIT122_SITE2"

-- start the observer

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgobserver.ksh start DBIT122
2017-01-03_14:01:02::dgobserver.ksh::SetOraEnv          ::INFO ==> Environment: DBIT122 (/u01/app/oracle/product/12.2.0/dbhome_1)
2017-01-03_14:01:03::dgobserver.ksh::StatusObserver     ::INFO ==> Observer Stopped
2017-01-03_14:01:04::dgobserver.ksh::StartObserver      ::INFO ==> Connection to the primary database
2017-01-03_14:01:04::dgobserver.ksh::DoCommand          ::INFO ==> Start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat
2017-01-03_14:01:06::dgobserver.ksh::StatusObserver     ::INFO ==> Observer running
2017-01-03_14:01:07::dgobserver.ksh::CleanExit          ::INFO ==> Program exited with ExitCode : 0

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] ps -ef | grep dgmgrl | grep -v grep
oracle 9186 1 0 14:01 pts/0 00:00:00 dgmgrl -logfile /u01/app/oracle/admin/DBIT122/log/dgobserver.log -silent start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat'

After everything is set up and done, it is time for the fun part. Let’s initiate a fast-start failover by shutting down the primary with abort.

SQL> shutdown abort
ORACLE instance shut down.


-- observer log 

...
14:04:49.10  Tuesday, January 03, 2017
Initiating Fast-Start Failover to database "DBIT122_SITE2"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "DBIT122_SITE2"
14:04:58.85  Tuesday, January 03, 2017
...

14:07:39.04  Tuesday, January 03, 2017
Initiating reinstatement for database "DBIT122_SITE1"...
Reinstating database "DBIT122_SITE1", please wait...
Reinstatement of database "DBIT122_SITE1" succeeded
14:08:33.19  Tuesday, January 03, 2017

...

Cool, the fast-start failover and the reinstate worked as expected.

Conclusion

With Oracle wallets, I can make my DataGuard observer a little bit more secure by eliminating the passwords in clear text.

 

 


Oracle 12cR2 – DataGuard Switchover with Oracle Wallets


I would like to make my DataGuard environment more secure by eliminating the typing of “connect sys/Manager1″ in my DGMGRL commands, especially the ones that I have in my scripts. For example:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect sys/Manager1
> show configuration verbose;
> EOF

or something like that:

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] cat show_config.dg
connect sys/Manager1;
show configuration;

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] dgmgrl @show_config.dg
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 12:54:11 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "DBIT122_SITE1"
Connected as SYSDG.

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 39 seconds ago)

HINT: Be aware that “dgmgrl [<options>] @script_file_name” is a new feature with the Broker in Oracle 12.2. It was not possible to use “dgmgrl @script” beforehand.

Ok. So how can I make my scripts more secure? Of course, by using wallets, like we already did with the observer configuration. See http://blog.dbi-services.com/oracle-12cr2-how-to-setup-dataguard-observer-with-oracle-wallets/
However, I want to do the switchover and other operations with wallets as well.
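As a prerequisite, the clients need to know where to find the wallet, which is configured in the sqlnet.ora. A minimal sketch, assuming the wallet directory /u01/app/oracle/admin/wallets used below (the auto-login wallet itself would have been created there first with mkstore -wrl /u01/app/oracle/admin/wallets -create):

# sqlnet.ora on both sites (assumed wallet location)
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/wallets)
    )
  )
SQLNET.WALLET_OVERRIDE = TRUE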

So, let’s create the necessary wallets for the SYS user on the primary and the standby.

-- Primary

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS             
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

-- Standby

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

And of course, we have to test the connections to see if everything is working correctly.

-- Primary
sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2


-- Standby

sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2

So far, so good. My connections with the wallets work from the primary to the standby and the other way around. Now, let’s try to do a DataGuard switchover with wallets.

DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL>
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 58 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2';
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

Oops… that doesn’t look good. It says “invalid username/password”, but everything worked beforehand. Ok, that output does not give me much information. Let’s try the whole thing again in debug mode: dgmgrl -debug

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:04:21 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE2
[W000 01/04 11:04:34.32] Connecting to database using DBIT122_SITE2.
[W000 01/04 11:04:34.33] Attempt logon as SYSDG
[W000 01/04 11:04:35.42] Attempt logon as SYSDBA
[W000 01/04 11:04:35.47] Executing query [select db_unique_name from v$database].
[W000 01/04 11:04:35.47] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:04:35.47] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:04:35.47] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 49 seconds ago)

DGMGRL> switchover to 'DBIT122_SITE2';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
[W000 01/04 11:05:04.99] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:05:04.99] Attempt logon as SYSDG
[W000 01/04 11:05:06.04] Attempt logon as SYSDBA
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up and mount instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

What is happening here? The Broker is not using the connect string “DBIT122_SITE1”, it is using the description list “(DESCRIPTION=(ADDRESS …..)”, and when I look up my credentials in the wallet, I see only credentials for “DBIT122_SITE1 SYS” and “DBIT122_SITE2 SYS”.

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

The solution here is to add credentials for the description list taken from the StaticConnectIdentifier property.

DGMGRL> show database 'DBIT122_SITE1' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'

DGMGRL> show database 'DBIT122_SITE2' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'
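As a side note, an alternative approach (not the one used here) would be to point the StaticConnectIdentifier property at the existing TNS aliases, so that the Broker reuses the credentials already stored in the wallet. A hedged sketch, which assumes the aliases resolve to the static DGMGRL services on both sites:

DGMGRL> edit database 'DBIT122_SITE1' set property StaticConnectIdentifier='DBIT122_SITE1';
DGMGRL> edit database 'DBIT122_SITE2' set property StaticConnectIdentifier='DBIT122_SITE2';

In this post I stick with adding the full description lists to the wallet instead.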

Ok. Let’s add the new credentials to our wallet. Be careful to specify them exactly as they show up in the StaticConnectIdentifier.

-- Primary

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat


-- Standby

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

After everything is set up and done, let’s try the switchover again in debug mode.

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:22:38 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
[W000 01/04 11:22:47.94] Connecting to database using DBIT122_SITE1.
[W000 01/04 11:22:47.94] Attempt logon as SYSDG
[W000 01/04 11:22:49.02] Attempt logon as SYSDBA
[W000 01/04 11:22:49.06] Executing query [select db_unique_name from v$database].
[W000 01/04 11:22:49.06] Query result is 'DBIT122_SITE1'
Connected to "DBIT122_SITE1"
[W000 01/04 11:22:49.06] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:22:49.06] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> switchover to 'DBIT122_SITE1';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
[W000 01/04 11:23:18.07] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:18.07] Attempt logon as SYSDG
[W000 01/04 11:23:19.15] Attempt logon as SYSDBA
[W000 01/04 11:23:20.23] Executing query [select db_unique_name from v$database].
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

ORACLE instance started.
[W000 01/04 11:23:36.03] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:36.03] Attempt logon as SYSDG
[W000 01/04 11:23:37.13] Attempt logon as SYSDBA
[W000 01/04 11:23:37.17] Executing query [select db_unique_name from v$database].
ORA-01507: database not mounted

[W000 01/04 11:23:37.20] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:37.20] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:37.20] Executing statement [alter database mount].
[W000 01/04 11:23:42.66] Statement [alter database mount] executed successfully.
Database mounted.
[W000 01/04 11:23:42.66] Checking for bootstrap done...
[W000 01/04 11:23:42.67] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:42.67] Attempt logon as SYSDG
[W000 01/04 11:23:43.77] Attempt logon as SYSDBA
[W000 01/04 11:23:43.82] Executing query [select db_unique_name from v$database].
[W000 01/04 11:23:43.83] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:23:43.83] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:43.83] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:55.85] Done waiting for bootstrap after 0 retries
Switchover succeeded, new primary is "DBIT122_SITE1"
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 60 seconds ago)

DGMGRL>

Now it worked perfectly. I can run my switchover operations with wallets, and I can run my scripts without a password in clear text, like the following:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect /@DBIT122_SITE1
> show configuration verbose;
> EOF
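The same pattern works for the role change itself. A minimal sketch, reusing the wallet connections from above:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect /@DBIT122_SITE1
> switchover to 'DBIT122_SITE2';
> EOF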

Conclusion

DataGuard switchovers with wallets work perfectly if the setup was done correctly, and besides that, you can eliminate a lot of clear-text passwords that you might have lying around.

Cheers,

William

This article Oracle 12cR2 – DataGuard Switchover with Oracle Wallets appeared first on Blog dbi services.

vagrant up – get your Oracle infrastructure up and running


By using Vagrant to manage your virtual machines and Ansible for configuration management and provisioning,
you can easily automate the setup of your whole test environment in a standardized way.

If you have never heard about Ansible and Vagrant, I will try to give you an idea with the following very short summary. There is a lot of good information available about Ansible and Vagrant;
please check the provided links at the end of this blog for further information.

What is Vagrant?

Vagrant is an open source tool on top of a virtualization solution like Oracle VirtualBox. It can automate the creation of VMs. Additionally, Vagrant supports provisioning with scripts or with tools like Ansible, Puppet or Chef.
You can download a lot of useful boxes from here: https://atlas.hashicorp.com/boxes/search

What is Ansible?

Ansible is an open source automation platform.
It is a radically simple IT automation engine designed for multi-tier deployments. [https://www.ansible.com/how-ansible-works]

Ansible just uses ssh and does not require agents or other software installed on the target nodes. You simply put your steps into an Ansible playbook, which is an easy-to-read text file written in YAML syntax. Your playbook will simply look like documented steps.
Ansible will run the listed tasks described in the playbook on the target servers by invoking Ansible Modules.

Here is a simple example task from a playbook which will add a directory on a server. It uses the Ansible module “file”:
- name: add home directory
  file:
    path: /home/myuser
    state: directory

Ansible is quite well known for building up whole test environments including databases like MySQL, which are easy to install from simple tarballs or RPM files.

Unfortunately, in the community of Oracle DBAs Ansible is usually not on the radar, despite the fact that there are already good Ansible playbooks available which prove that you can use Ansible to provision your whole Oracle test environment, even with Oracle Real Application Clusters:
https://github.com/racattack/racattack-ansible-oracle

https://github.com/cvezalis/oracledb-ansible

Starting from these examples and adapting them for your needs, you will experience how quickly you will be able to automate your Oracle installations. This is what I did and want to show you here. Please keep in mind that this example is optimized for a fast installation and should not be used as-is for a productive system.

What you’ll get
In this blog I give you an example of how to build an Oracle infrastructure from scratch containing
two virtual servers, installed and configured with CentOS 7.2,
each hosting an Oracle database (12.1.0.2).

  • Step ONE – What you need to prepare once to run this example
      1) the Ansible Playbook and Vagrant configuration for this example
      you can download everything from the git repository. All files are simple text files.
      https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
      2) the Oracle 12.1.0.2 binaries
      the Oracle binaries are not included in the download. You have to provide them.
      Please copy the Oracle software zip files into the directory oracle-db-12c-vagrant-ansible/
      ./linuxamd64_12102_database_1of2.zip
      ./linuxamd64_12102_database_2of2.zip

      3) your Linux host or laptop
      with a network connection and Oracle VirtualBox, Vagrant and Ansible installed.
      This can be done with your Linux package manager.
      You will need Ansible version 2.1.1.0 or higher for this example!
      Please check http://docs.ansible.com/ansible/intro_installation.html for installation details.
      sudo yum install ansible
      You can find the Oracle VirtualBox Download and Installation Guide here:
      https://www.virtualbox.org/wiki/Linux_Downloads
      Download Vagrant with version 1.8.5 or higher from
      https://www.vagrantup.com/downloads.html
      Also install the vagrant hostmanager plugin:
      $ vagrant plugin install vagrant-hostmanager
  • Step TWO – Run it
      Now you are ready to start the whole setup which will create two virtual servers and oracle databases.
      On my laptop with SSD disks and 16 GB RAM this takes about 20 minutes.
      To run this example you will need a minimum of 8 GB RAM and 10 GB of free disk space.
      Go to the directory where you have downloaded this example. Everything will be started from here.
      cd oracle-db-12c-vagrant-ansible
      vagrant up
  • Of course, you do not want to start this without knowing what is going on.
    I will go a little bit into the details next week ….

    Further information about Ansible:
    There will be some Introduction Webinars for Ansible coming soon
    https://www.ansible.com/webinars-training

    You can find more examples at:
    http://galaxy.ansible.com
    https://github.com/ansible/ansible-examples
    https://groups.google.com/forum/#!forum/ansible-project
    If you want to read a book I can recommend this:
    Ansible: Up and Running
    Print ISBN: 978-1-4919-1532-5
    Ebook ISBN: 978-1-4919-1529-5

    https://www.ansible.com/ebooks

     

    This article vagrant up – get your Oracle infrastructure up and running appeared first on Blog dbi services.


    Part 2 – vagrant up – get your Oracle infrastructure up and running


    Last week, in the first part of this blog, we saw a short introduction on how to set up an Oracle infrastructure with Vagrant and Ansible. Remember, all the files for this example are available here: https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
    Get the example code:

    git clone https://github.com/nkadbi/oracle-db-12c-vagrant-ansible

    If you have prepared your environment with Ansible, Vagrant and Oracle VirtualBox installed – and provided the Oracle software zip files –
    then you can just start to build your test infrastructure with the simple call:
    vagrant up
    Cleanup is also easy; this stops the vagrant machines and deletes all traces:
    vagrant destroy
    How does this work?
    vagrant up starts Vagrant, which will set up two virtual servers using a sample box with CentOS 7.2.
    When this has finished, Vagrant calls Ansible for provisioning, which configures the Linux servers, installs the Oracle software and creates your databases on the target servers in parallel.

    Vagrant configuration
    All the configuration for Vagrant is in one file called Vagrantfile.
    I used a box with CentOS 7.2, which you can find among other vagrant boxes here: https://atlas.hashicorp.com/search
    config.vm.box = "boxcutter/centos72"
    If you start vagrant up for the first time, it will download the vagrant box:
    $ vagrant up

    Bringing machine 'dbserver1' up with 'virtualbox' provider...
    Bringing machine 'dbserver2' up with 'virtualbox' provider...
    ==> dbserver1: Box 'boxcutter/centos72' could not be found. Attempting to find and install...
    dbserver1: Box Provider: virtualbox
    dbserver1: Box Version: >= 0
    ==> dbserver1: Loading metadata for box 'boxcutter/centos72'
    dbserver1: URL: https://atlas.hashicorp.com/boxcutter/centos72
    ==> dbserver1: Adding box 'boxcutter/centos72' (v2.0.21) for provider: virtualbox
    dbserver1: Downloading: https://atlas.hashicorp.com/boxcutter/boxes/centos72/versions/2.0.21/providers/virtualbox.box
    ==> dbserver1: Successfully added box 'boxcutter/centos72' (v2.0.21) for 'virtualbox'!
    ==> dbserver1: Importing base box 'boxcutter/centos72'...

    I have chosen a private network for the virtual servers and use the vagrant hostmanager plugin to take care of the /etc/hosts files on all guest machines (and optionally your localhost).
    You can add this plugin to vagrant with:
    vagrant plugin install vagrant-hostmanager
    The corresponding part in the Vagrantfile will look like this:
    config.hostmanager.enabled = true
    config.hostmanager.ignore_private_ip = false # include private IPs of your VM's
    config.vm.hostname = "dbserver1"
    config.vm.network "private_network", ip: "192.168.56.31"

    ssh Configuration
    The Vagrant box already comes with an ssh key configuration and, if security does not matter in your demo environment, the easiest way to configure the ssh connection to your guest nodes is to use the same ssh key for all created virtual hosts.
    config.ssh.insert_key = false # Use the same insecure key provided by the box for each machine
    After bringing up the virtual servers, you can display the ssh settings:
    vagrant ssh-config
    The important lines from the output are:
    Host dbserver1
    HostName 127.0.0.1
    User vagrant
    Port 2222
    IdentityFile /home/user/.vagrant.d/insecure_private_key
    You should be able to reach your guest server without a password with the user vagrant:
    vagrant ssh dbserver1
    Then you can switch to user oracle (password = welcome1) or root (default password for vagrant boxes: vagrant):
    su - oracle
    or connect directly with ssh:
    ssh vagrant@127.0.0.1 -p 2222 -i /home/user/.vagrant.d/insecure_private_key
    Virtual Disks
    I added additional virtual disks because I wanted to separate the data file destination from the fast recovery area destination.

    # attach disks only locally
    if ! File.exist?("dbserver#{i}_disk_a.vdi") # create disks only once
      v.customize ['createhd', '--filename', "dbserver#{i}_disk_a.vdi", '--size', 8192 ]
      v.customize ['createhd', '--filename', "dbserver#{i}_disk_b.vdi", '--size', 8192 ]
      v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_a.vdi"]
      v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_b.vdi"]
    end # create disks only once

    Provisioning with Ansible
    At the end of the Vagrantfile provisioning with Ansible is called.
    N = 2
    (1..N).each do |i| # do for each server i
      ...
      if i == N
        config.vm.provision "ansible" do |ansible| # vm.provisioning
          #ansible.verbose = "v"
          ansible.playbook = "oracle-db.yml"
          ansible.groups = { "dbserver" => ["dbserver1","dbserver2"] }
          ansible.limit = 'all'
        end # end vm.provisioning
      end
    end
    To prevent the Ansible provisioning from starting before all servers have been set up by Vagrant, I included the condition if i == N, where N is the number of desired servers.

    Ansible Inventory
    The Ansible Inventory is a collection of guest hosts against which Ansible will work.
    You can either put the information in an inventory file or let Vagrant create an Inventory file for you. Vagrant does this if you did not specify any inventory file.
    To enable Ansible to connect to the target hosts without a password, Ansible has to know the ssh key provided by the vagrant box.
    Example Ansible Inventory:
    # Generated by Vagrant
    dbserver2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
    dbserver1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
    [dbserver]
    dbserver1
    dbserver2
    You can see that the inventory created by Vagrant presents the necessary information for Ansible to connect to the targets and also defines the group dbserver, which includes the servers dbserver1 and dbserver2.

    Ansible configuration
    Tell Ansible where to find the inventory in the ansible.cfg:
    nocows=1
    hostfile = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
    host_key_checking = False

    Ansible Variables
    In this example I have put the general variables for all servers containing an Oracle database into this file:
    group_vars/dbserver
    The more specific variables, including variables used to create the database like the database name or character set,
    can be adapted individually for each server:
    host_vars/dbserver1, host_vars/dbserver2
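    To give you an idea, a host_vars/dbserver1 file could look like the following minimal sketch; the variable names and values here are illustrative assumptions, the real names are defined by the roles in the repository:

    # host_vars/dbserver1 - illustrative sketch (variable names assumed)
    db_name: DB1
    db_unique_name: DB1
    characterset: AL32UTF8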

    Ansible Playbook
    The Ansible playbook is a simple text file written in YAML syntax, which is easily readable.
    Our playbook oracle-db.yml has only one play, called “Configure Oracle Linux 7 with Oracle Database 12c”, which will be applied to all servers belonging to the group dbserver. In my example Vagrant creates the inventory and initiates the playbook run, but you can also start it stand-alone or repeat it if you want:
    ansible-playbook oracle-db.yml
    This is the whole playbook used to configure the servers and install the Oracle databases:
    $ cat oracle-db.yml
    ---
    - name: Configure Oracle Linux 7 with Oracle Database 12c
      hosts: dbserver
      become: True
      vars_files:
        # User Passwords hashed are stored here:
        - secrets.yml
      roles:
        - role: disk_layout
        - role: linux_oracle
        - role: oracle_sw_install
          become_user: '{{ oracle_user }}'
        - role: oracle_db_create
          become_user: '{{ oracle_user }}'

    Ansible roles
    To make the playbook oracle-db.yml lean and more flexible, I have split all the tasks into different roles. This makes it easy to reuse parts of the playbook or to skip parts. For example, if you only want to install the Oracle software on the server but do not want to create databases, you can just delete the role oracle_db_create from the playbook.
    You (and Ansible) will find the file containing the tasks for a role in roles/my_role_name/tasks/main.yml.
    There can be further directories. The default directory structure looks like below. If you want to create a new role, you can even create the directory structure by using ansible-galaxy. Ansible Galaxy is Ansible’s official community hub for sharing Ansible roles. https://galaxy.ansible.com/intro

    # example to create the directory structure for the role "my_role_name"
    ansible-galaxy init my_role_name


    # default Ansible role directory structure
    roles/
      my_role_name/
        defaults/
        files/
        handlers/
        meta/
        tasks/
        templates/
        vars/

    Ansible Modules
    Ansible will run the tasks described in the playbook on the target servers by invoking Ansible Modules.
    This Ansible Web Page http://docs.ansible.com/ansible/list_of_all_modules.html shows information about Modules ordered by categories.
    You can also get information about all the Ansible modules from the command line:

    # list all modules
    ansible-doc --list
    # example to show documentation about the Ansible module "copy"
    ansible-doc copy

    One example:
    To install the Oracle software with a response file, I use the Ansible module called “template”. Ansible uses Jinja2, a templating engine for Python.
    This makes it very easy to design reusable templates. For example, Ansible will replace {{ oracle_home }} with the variable which I have defined in group_vars/dbserver, and then copy the response file to the target servers:

    Snipped from the Jinja2 template db_install.rsp.j2

    #-------------------------------------------------------------------------------
    # Specify the complete path of the Oracle Home.
    #-------------------------------------------------------------------------------
    ORACLE_HOME={{ oracle_home }}

    Snipped from roles/oracle_sw_install/tasks/main.yml

    - name: Generate the response file for software only installation
      template: src=db_install.rsp.j2 dest={{ installation_folder }}/db_install.rsp
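    For completeness, a minimal sketch of what group_vars/dbserver could contain for this template; oracle_user, oracle_home and installation_folder are the variables referenced above, but the values shown here are assumptions:

    # group_vars/dbserver - minimal sketch (values assumed)
    oracle_user: oracle
    oracle_home: /u01/app/oracle/product/12.1.0.2/dbhome_1
    installation_folder: /tmp/install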

    Ansible Adhoc Commands – Some Use Cases
    Immediately after installing Ansible, you can already use it to gather facts from your localhost, which will give you a lot of information:
    ansible localhost -m setup
    Use an Ansible ad-hoc command with the module ping to check if you can reach all target servers listed in your inventory file:

    $ ansible all -m ping
    dbserver2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    dbserver1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }

    File transfer – spread a file to all servers in the group dbserver
    ansible dbserver -m copy -b -a "src=/etc/hosts dest=/etc/hosts"
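    Running an arbitrary command on all servers of a group works the same way, for example with the command module:

    ansible dbserver -m command -a "uptime"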

    Conclusion
    With the open source tools Vagrant and Ansible you can easily automate the setup of your infrastructure.
    Even if you do not want to automate everything, Ansible can still help you with your daily work whenever you want to check or apply something on several servers.
    Just group your servers in an inventory and run an Ansible Adhoc Command or write a small playbook.

    Please keep in mind that this is a simplified example for an automated Oracle Database Installation.
    Do not use this example for productive environments.

     

    This article Part 2 – vagrant up – get your Oracle infrastructure up and running appeared first on Blog dbi services.

    Oracle 12c – How to Prevent License Violation with Active Data Guard


    There are some articles floating around about how to prevent a license violation with Active Data Guard: some relate to the underscore parameter “_query_on_physical”, others to a startup trigger. Both have advantages and disadvantages. Regarding “_query_on_physical”, for example, I can’t find any MOS note about it, and I don’t know its side effects.

    Oracle gives us a hard time disabling features that we don’t want to be enabled by accident. It gets much better with 12.2, where you can use lockdown profiles. My colleague Franck explained very well at DOAG 2016 how this new feature works.

    http://www.doag.org/formes/pubfiles/8586609/docs/Konferenz/2016/vortraege/Datenbank/2016-DB-Franck_Pachot-Multitenant_New_Security_Features_Clarify_DevOps_DBA_roles-Praesentation.pdf

    But for now, I am on 12cR1 and I need a solution for that version. With Active Data Guard it is very easy to activate the feature by accident: just type “startup” on the standby, and you have it already. Nothing more is needed.

    Nevertheless, I have 12cR1 here, and my favorite way to prevent a license violation with Active Data Guard is based on cluster resources, in combination with the Data Guard Broker and an observer. If all of them are in place and you are on the right patch level, it works. The patch level is especially important, as we will see later. What is also important is that you should work only with the Broker commands or with the srvctl utility.

    In my case I have a primary single instance called DBIT121_SITE1 and a standby single instance called DBIT121_SITE2. After Data Guard has been set up, it is time to configure the cluster resources.

    In this particular case, the most important parameters when you add the database cluster resources are “role” and “startoption”:

    $ srvctl add database -h | egrep '(<role>|<start_options>)' | tail -2
        -role <role>                   Role of the database (PRIMARY, PHYSICAL_STANDBY, LOGICAL_STANDBY, SNAPSHOT_STANDBY, FAR_SYNC)
        -startoption <start_options>   Startup options for the database. Examples of startup options are OPEN, MOUNT, or "READ ONLY".

    With the parameter “role” you specify the role that your database has at the moment (not the future role). The role adjustments are done later by the Broker whenever you do a switchover or failover.

    The role option is not only available with the “srvctl add database” command, it is also available with the “srvctl add service” command. Now it becomes really interesting: you tell Oracle to start the service only if the role is PRIMARY.

    $ srvctl add service -h | grep '<role>'
        -role <role>                   Role of the service (primary, physical_standby, logical_standby, snapshot_standby)

    Ok. Let’s create the cluster resources now.

    -- Primary
    $ srvctl add database -db DBIT121_SITE1 -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
    -dbtype SINGLE -instance DBIT121 -node dbidg01 \
    -spfile /u01/app/oracle/admin/DBIT121/pfile/spfileDBIT121.ora \
    -pwfile /u01/app/oracle/admin/DBIT121/pfile/orapwDBIT121 \
    -role PRIMARY -startoption OPEN \
    -dbname DBIT121
    
    $ srvctl add service -db DBIT121_SITE1 -service DBIT121_SERVICE -role primary \
    -failovertype SELECT -notification TRUE -tafpolicy BASIC
    
    -- Standby
    $ srvctl add database -db DBIT121_SITE2 -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
    -dbtype SINGLE -instance DBIT121 -node dbidg02 \ 
    -spfile /u01/app/oracle/admin/DBIT121/pfile/spfileDBIT121.ora \
    -pwfile /u01/app/oracle/admin/DBIT121/pfile/orapwDBIT121 \
    -role PHYSICAL_STANDBY -startoption MOUNT \
    -dbname DBIT121
    
    $ srvctl add service -db DBIT121_SITE2 -service DBIT121_SERVICE -role primary \
    -failovertype SELECT -notification TRUE -tafpolicy BASIC

    To test if everything works, simply do a “SWITCHOVER” with the Data Guard Broker and check the cluster resources afterwards (see the DGMGRL sketch after the resource listings below). After a role change, you should see the following cluster resource entries on the primary

    $ crsctl stat res ora.dbit121_site1.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
    ROLE=PRIMARY
    USR_ORA_OPEN_MODE=open

    and these ones on the Standby

    $ crsctl stat res ora.dbit121_site2.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
    ROLE=PHYSICAL_STANDBY
    USR_ORA_OPEN_MODE=mount
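    For completeness, the switchover test itself could look like this in DGMGRL; the password is a placeholder:

    DGMGRL> connect sys/***@DBIT121_SITE1
    DGMGRL> switchover to 'DBIT121_SITE2';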

    Oracle preserves the open modes and also some other settings like Block Change Tracking. If Active Data Guard was not enabled beforehand, it will also not be enabled afterwards (this is at least how it should be), and besides that, Oracle also disables the Block Change Tracking feature on the new standby, because this would require the Active Data Guard license as well.

    alert.log
    ...
    Completed: ALTER DATABASE SWITCHOVER TO 'DBIT121_SITE2'
    Target standby DBIT121_SITE2 did not have Active Data Guard enabled at the time of switchover.
    To maintain Active Data Guard license compliance Block Change Tracking will be disabled.
    Fri Jan 27 08:49:23 2017
    ..

    But the final and most important test is killing the PMON on the standby. With GI versions below 12.1.0.2 with the 2016 Oct PSU, you might end up with Active Data Guard enabled. Oops…
    Everything was set up correctly, but it still did not work as expected. All I did was simulate a dying background process. This could happen in reality, for example due to a bug with “_use_single_log_writer=false”, which is the default with 12c, or simply because someone accidentally kills the wrong process.

    $ ps -ef | grep ora_pmon_DBIT121 | grep -v grep 
    oracle 639 1 0 13:31 ? 00:00:00 ora_pmon_DBIT121
    
    $ kill -9 639 
    
    alert.log 
    ... 
    ... 
    Physical Standby Database mounted. 
    Lost write protection mode set to "typical" 
    Completed: ALTER DATABASE MOUNT /* db agent *//* {0:33:25} */ 
    ALTER DATABASE OPEN /* db agent *//* {0:33:25} */ 
    Data Guard Broker initializing... 
    ... 
    
    Physical standby database opened for read only access. 
    Completed: ALTER DATABASE OPEN /* db agent *//* {0:33:25} */ 
    
    ... 
    
    SQL> select open_mode from v$database; 
    
    OPEN_MODE 
    -------------------- 
    READ ONLY WITH APPLY

    After killing the PMON, the instance dies and the cluster takes over, which is very good. However, the cluster ignored the startup options which I had configured beforehand. After upgrading GI and the database to 12.1.0.2 with the 2016 Oct PSU, I could not reproduce this issue anymore, and I now have a good solution for preventing Active Data Guard from being activated.

    But what happens if my primary host dies and a failover is initiated by the observer? Then I have two cluster resources with role PRIMARY and startup option OPEN. Let’s simulate this scenario by doing a shutdown abort with srvctl.

    DGMGRL> show configuration;
    
    Configuration - DBIT121
    
      Protection Mode: MaxAvailability
      Members:
      DBIT121_SITE1 - Primary database
        DBIT121_SITE2 - (*) Physical standby database
    
    Fast-Start Failover: ENABLED
    
    Configuration Status:
    SUCCESS   (status updated 5 seconds ago)
    
    
    $ srvctl stop database -db DBIT121_SITE1 -stopoption ABORT

     

    After 30 seconds, the observer initiated a fast-start failover, and the new primary is now on SITE2.

    Initiating Fast-Start Failover to database "DBIT121_SITE2"...
    Performing failover NOW, please wait...
    Failover succeeded, new primary is "DBIT121_SITE2"

    On SITE1 I still have the old primary with startup option OPEN. That is not an issue at the moment, because it is a primary, and on a primary there is no Active Data Guard. After I start up SITE1, the reinstate takes place a few moments later. For that, the database has to be brought into MOUNT state again to do a “FLASHBACK DATABASE”.

    $ srvctl start database -db DBIT121_SITE1
    
    observer.log
    ...
    Initiating reinstatement for database "DBIT121_SITE1"...
    Reinstating database "DBIT121_SITE1", please wait...
    Reinstatement of database "DBIT121_SITE1" succeeded
    
    broker.log on old Primary
    ...
    Data Guard notifying Oracle Clusterware to prepare database for role change
    Database Reinstate needs instance count to be reduced to 1
    Flashback SCN is 22408550; DB checkpoint SCN is 22405622. Flashback to SCN 22408550.
    01/28/2017 10:59:25
    Physical Standby Reinstatement: Converting old primary to a physical standby
    01/28/2017 10:59:34
    Conversion to physical standby database succeeded
    Instance restart not required
    Purging diverged redos on resetlogs branch 933516912, starting SCN 22408551
    Purged 0 archived logs
    Target standby DBIT121_SITE2 did not have Active Data Guard enabled at the time of failover.
    To maintain Active Data Guard license compliance Block Change Tracking will be disabled.
    01/28/2017 10:59:42
    Notifying Oracle Clusterware to buildup after database reinstatement

    The Broker knows that Active Data Guard was not enabled beforehand, and so it does not enable it now.

    $ crsctl stat res ora.DBIT121_SITE1.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
    ROLE=PHYSICAL_STANDBY
    USR_ORA_OPEN_MODE=mount
    
    
    SQL> select open_mode from v$database;
    
    OPEN_MODE
    --------------------
    MOUNTED

    That’s it. This is my way to prevent Active Data Guard from being activated. :-)

    Conclusion

    Using cluster resources to prevent Active Data Guard from being activated is a fully supported approach. You only need to take care that your GI/DB and observer are on version 12.1.0.2 with the 2016 Oct PSU or higher. Before that patch level, it never worked correctly for me with cluster resources. Besides that, use only the Broker and the srvctl cluster commands to manage your Data Guard environment.

     

    This article Oracle 12c – How to Prevent License Violation with Active Data Guard appeared first on Blog dbi services.

    Oracle 12c – RMAN list failure does not show any failure even if there is one


    Relying too much on the RMAN Data Recovery Advisor is not always the best idea. In a lot of situations it tells you the right thing; sometimes, however, its advice is not optimal, and sometimes RMAN list failure does not show any failure at all, even if there is one.

    So … let’s quickly simulate the loss of a datafile during normal database runtime. The result is a clear error message which says that datafile 5 is missing.

    SQL> select count(*) from hr.employees;
    select count(*) from hr.employees
                            *
    ERROR at line 1:
    ORA-01116: error in opening database file 5
    ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
    ORA-27041: unable to open file
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3

    Of course, the error message is immediately reflected in the alert.log as well, where it clearly says that Oracle is unable to open file number 5.

    Errors in file /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/trace/DBTEST1_smon_17115.trc:
    ORA-01116: error in opening database file 5
    ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
    ORA-27041: unable to open file
    Linux-x86_64 Error: 2: No such file or directory

    Only the RMAN Data Recovery Advisor does not know what is going on.

    RMAN> list failure;
    
    using target database control file instead of recovery catalog
    Database Role: PRIMARY
    
    no failures found that match specification

    Of course, I could shut down the DB and then start it up again, which would trigger a health check, but shutting down an instance is not always easy on production systems, especially when only one datafile is missing, all others are available and only a part of the application is affected.

    The solution to that issue is to run a manual health check. Quite a lot of health checks can be run manually, as shown in the following documentation.

    https://docs.oracle.com/database/121/ADMIN/diag.htm#ADMIN11269
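    To see which health checks exist and which of them can be run manually, you can query the v$hm_check view, e.g.:

    SQL> select name from v$hm_check where internal_check = 'N';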

    I start with the DB Structure Integrity Check. This check verifies the integrity of database files and reports failures if these files are inaccessible, corrupt or inconsistent.

    SQL> begin
      2  dbms_hm.run_check ('DB Structure Integrity Check','Williams Check 00000001');
      3  end;
      4  /
    
    PL/SQL procedure successfully completed.

    After running the health check, Oracle finds the failure, and in the alert.log you will see an entry like the following:

    Checker run found 1 new persistent data failures

    If you want to take a look at what exactly the health check found, you can invoke ADRCI and execute the “show hm_run” command.

    oracle@vmoratest1:/oracle/workshop/bombs/ [DBTEST1] adrci
    
    ADRCI: Release 12.1.0.2.0 - Production on Tue Feb 7 16:02:21 2017
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    ADR base = "/u00/app/oracle"
    adrci> show homes
    ADR Homes:
    diag/clients/user_oracle/host_1833655127_82
    diag/tnslsnr/vmoratest1/listener
    diag/rdbms/cdb1p/CDB1P
    diag/rdbms/dbtest1/DBTEST1
    diag/rdbms/rcat/RCAT
    
    adrci> set home diag/rdbms/dbtest1/DBTEST1
    
    adrci> show hm_run
    
    ADR Home = /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1:
    *************************************************************************
    
    ...
    ...
    
    **********************************************************
    HM RUN RECORD 9
    **********************************************************
       RUN_ID                        206
       RUN_NAME                      Williams Check 00000001
       CHECK_NAME                    DB Structure Integrity Check
       NAME_ID                       2
       MODE                          0
       START_TIME                    2017-02-07 16:03:44.431601 +01:00
       RESUME_TIME                   <NULL>
       END_TIME                      2017-02-07 16:03:44.478127 +01:00
       MODIFIED_TIME                 2017-02-07 16:03:44.478127 +01:00
       TIMEOUT                       0
       FLAGS                         0
       STATUS                        5
       SRC_INCIDENT_ID               0
       NUM_INCIDENTS                 0
       ERR_NUMBER                    0
       REPORT_FILE                   <NULL>
    9 rows fetched
    
    adrci>

    However, if you take a look at the HM run report, it gives you an error.

    adrci> show report hm_run 'Williams Check 00000001'
    DIA-48415: Syntax error found in string [show report hm_run 'Williams Check 00000001'] at column [44]

    This is not a bug. The HM run name must contain only alphanumeric characters and underscores. So … better not to use spaces in your run name. The following would have been better.

    SQL> begin
      2  dbms_hm.run_check ('DB Structure Integrity Check','WilliamsCheck');
      3  end;
      4  /
    
    PL/SQL procedure successfully completed.

    In case the “adrci show report hm_run” command does not work for you, it is not the end of the story: we can still look at the v$hm_finding view.

    SQL> select RUN_ID, TIME_DETECTED, STATUS, DESCRIPTION, DAMAGE_DESCRIPTION from v$hm_finding where run_id = '206';
    
    RUN_ID TIME_DETECTED                STATUS       DESCRIPTION                                  DAMAGE_DESCRIPTION
    ------ ---------------------------- ------------ -------------------------------------------- --------------------------------------------
       206 07-FEB-17 04.03.44.475000 PM OPEN         Datafile 5: '/u01/oradata/DBTEST1/hrDBTEST01 Some objects in tablespace HR might be unava
                                                     .dbf' is missing                             ilable

    Now let’s check the RMAN “list failure” again.

    RMAN> list failure;
    
    Database Role: PRIMARY
    
    List of Database Failures
    =========================
    
    Failure ID Priority Status    Time Detected        Summary
    ---------- -------- --------- -------------------- -------
    2          HIGH     OPEN      07-FEB-2017 15:39:38 One or more non-system datafiles are missing
    
    
    RMAN> advise failure;
    ...
    Automated Repair Options
    ========================
    Option Repair Description
    ------ ------------------
    1      Restore and recover datafile 5
      Strategy: The repair includes complete media recovery with no data loss
      Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm
    
      
    RMAN> repair failure preview;
    
    Strategy: The repair includes complete media recovery with no data loss
    Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm
    
    contents of repair script:
       # restore and recover datafile
       sql 'alter database datafile 5 offline';
       restore ( datafile 5 );
       recover datafile 5;
       sql 'alter database datafile 5 online';
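    If the preview looks good, the repair itself can then be executed by the same advisor (it asks for confirmation unless noprompt is specified):

    RMAN> repair failure;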

    Conclusion

    The Oracle Data Recovery Advisor is quite good, but sometimes you need to push it in the right direction. Besides that, take care of the naming convention that you use for your health check runs. ;-)

     

    This article Oracle 12c – RMAN list failure does not show any failure even if there is one appeared first on Blog dbi services.

    12cR2 DBCA can create a standby database


    Do you like DBCA to create a database from the command line, with -silent -createDatabase? With a simple command line you can provision a database, with the oratab entry, tnsnames.ora entry, directory creation and any setting you want. And you can even call a custom script to customize further. But if you want to put it in Data Guard, you have to do the duplicate manually with RMAN. This evolves in 12.2 with a new option in DBCA to do that: dbca -silent -createDuplicateDB -createAsStandby

    Limitations

    I’ve tried it in the Oracle Public Cloud, where I had just created a RAC database. But unfortunately, this new feature is only for Single Instance:

    [FATAL] [DBT-16056] Specified primary database is not a Single Instance (SI) database.
    CAUSE: Duplicate database operation is supported only for SI databases.

    Ok. RAC is complex enough anyway, so you don’t need that quick command line to create the standby. So I tried with a single instance database:

    [FATAL] [DBT-16057] Specified primary database is a container database (CDB).
    CAUSE: Duplicate database operation is supported only for non container databases.

    Ok. It is a bit surprising to have a new feature in 12.2 that works only on the architecture that was deprecated in 12.1, but if we think about it, DBCA is for fast provisioning. In multitenant you create a CDB once, put it in Data Guard, and fast provisioning comes with ‘create pluggable database’. And deprecated doesn’t mean that we do not use it, so it is good to have a simple command line tool for easy provisioning of non-CDBs.

    Then, I tried on a non-CDB that I’ve created in 12.2

    I’m a big fan of EZCONNECT, but I had a few problems with it. What’s worth knowing is that there is no ‘impossible to connect’ message. When it cannot connect, the following message is raised:

    [FATAL] [DBT-16051] Archive log mode is not enabled in the primary database.
    ACTION: Primary database should be configured with archive log mode for creating a duplicate or standby database.

    just because this is the first thing that DBCA checks, and this is where it fails when the connection is not ok.

    But you can also use a tnsnames.ora network service name. This is what I’ll use for -primaryDBConnectionString

    $ tnsping ORCLA
    TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 11-FEB-2017 22:28:35
    Copyright (c) 1997, 2016, Oracle. All rights reserved.
    Used parameter files:
    /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = MAA.compute-usslash.oraclecloud.internal)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcla.compute-usslash.oraclecloud.internal)))
    OK (0 msec)

    -createDuplicateDB -createAsStandby

    Here is an example:

    dbca -silent -createDuplicateDB -gdbName ORCLB.compute-usslash.oraclecloud.internal -sid ORCLB -sysPassword "Ach1z0#d" -primaryDBConnectionString ORCLA -createAsStandby -dbUniquename ORCLB

    This will connect RMAN to the target (here called ‘primary’), with the connect string ORCLA and run a duplicate to create ORCLB as specified.

    It starts by creating a temporary listener (which is still there in listener.ora even after completion), creating the auxiliary instance and running RMAN:
    Listener config step
    33% complete
    Auxiliary instance creation
    66% complete
    RMAN duplicate
    100% complete
    Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCLB/orcla.log" for further details.

    Through the RMAN API, the file names are set:


    run {
    set newname for datafile 1 to '/u01/app/oracle/oradata/orclb/system01.dbf' ;
    set newname for datafile 3 to '/u01/app/oracle/oradata/orclb/sysaux01.dbf' ;
    set newname for datafile 4 to '/u01/app/oracle/oradata/orclb/undotbs01.dbf' ;
    set newname for datafile 7 to '/u01/app/oracle/oradata/orclb/users01.dbf' ;
    set newname for tempfile 1 to '/u01/app/oracle/oradata/orclb/temp01.dbf' ;

    and the DUPLICATE FOR STANDBY FROM ACTIVE is run:

    duplicate target database
    for standby
    from active database
    dorecover
    spfile
    set 'db_recovery_file_dest_size'='8405385216'
    set 'compatible'='12.2.0'
    set 'undo_tablespace'='UNDOTBS1'
    set 'dispatchers'='(PROTOCOL=TCP) (SERVICE=ORCLAXDB)'
    set 'db_name'='orcla'
    set 'db_unique_name'='ORCLB'
    set 'sga_target'='2281701376'
    set 'diagnostic_dest'='/u01/app/oracle'
    set 'audit_file_dest'='/u01/app/oracle/audit'
    set 'open_cursors'='300'
    set 'processes'='300'
    set 'nls_language'='AMERICAN'
    set 'pga_aggregate_target'='757071872'
    set 'db_recovery_file_dest'='/u01/app/oracle/fast_recovery_area/orcla'
    set 'db_block_size'='8192'
    set 'log_archive_format'='%t_%s_%r.dbf'
    set 'nls_territory'='AMERICA'
    set 'control_files'="/u01/app/oracle/oradata/orclb/control01.ctl", "/u01/app/oracle/fast_recovery_area/orcla/ORCLB/control02.ctl"
    set 'audit_trail'='DB'
    set 'db_domain'='compute-usslash.oraclecloud.internal'
    set 'remote_login_passwordfile'='EXCLUSIVE'
    reset 'local_listener'
    reset 'db_file_name_convert'
    set 'log_archive_dest_1'='location=/u01/app/oracle/fast_recovery_area/orcla'
    reset 'event'
    reset 'remote_listener'
    nofilenamecheck;
    }

    The parameters come from the ‘primary’ and are adapted for the new database. Be careful: this is where I prefer to review the parameters beforehand. For example, when you duplicate to clone the primary (without the -createAsStandby, as sketched below), you probably don’t want to keep the same log_archive_dest that was set in a Data Guard configuration. I’ll have to post a blog about that.
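    For illustration, a plain clone would just drop the -createAsStandby flag; the name ORCLC and the password here are placeholders:

    dbca -silent -createDuplicateDB -gdbName ORCLC.compute-usslash.oraclecloud.internal -sid ORCLC -sysPassword "***" -primaryDBConnectionString ORCLA -dbUniquename ORCLC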

    At the end, the standby database is opened read-only, so be careful to close it before starting the apply of redo if you don’t have the Active Data Guard option.
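    A minimal sketch of that last point, assuming you manage the redo apply manually (no broker yet):

    SQL> shutdown immediate
    SQL> startup mount
    SQL> alter database recover managed standby database disconnect from session;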

    Data Guard

    DBCA doesn’t go beyond the DUPLICATE. And you can use it also in Standard Edition to set up a manual standby.

    I hope that one day we will have an option to create the Data Guard configuration in the same process, but here you have to do it yourself:

    • No tnsnames.ora entry is added for the standby
    • The static listener entries are not added in listener.ora
    • No Data Guard configuration is there
    • The Data Guard Broker is not started except if it was set in advance to true on primary
    • No standby redo logs are created (except when they were present on primary)

    You can set dg_broker_start=true and create the standby redo logs in a post-script that you call with the -customScripts argument. However, the best way is to do it in advance on the primary, and then the duplicate will do the same on the standby.
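    A hedged sketch of the remaining broker setup, using the names from this example; the static listener entries and tnsnames.ora entries are assumed to be in place, and the password is a placeholder:

    DGMGRL> connect sys/***@ORCLA
    DGMGRL> create configuration 'ORCL' as primary database is 'ORCLA' connect identifier is ORCLA;
    DGMGRL> add database 'ORCLB' as connect identifier is ORCLB;
    DGMGRL> enable configuration;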

    So what?

    You don’t need this new feature, because it is easy to automate it yourself: it’s just a copy of spfile parameters, with a few changes, and an RMAN duplicate command. But your scripts will be specialized for your environment, and generic scripts are more complex to maintain. The big advantage of having this integrated in DBCA is that it is designed for all configurations and is maintained across versions.

     

    This article 12cR2 DBCA can create a standby database appeared first on Blog dbi services.

    Oracle 12c – Issues with the HEATMAP Segment even if the heat map feature is not used


    When I don’t need a feature, I don’t turn it on or use it, because that reduces the chance of running into issues. Most of the time this is true. However, during the preparation of an RMAN workshop, the RMAN list failure command showed me the following dictionary issue.

    RMAN> list failure;
    
    using target database control file instead of recovery catalog
    Database Role: PRIMARY
    
    List of Database Failures
    =========================
    
    Failure ID Priority Status    Time Detected        Summary
    ---------- -------- --------- -------------------- -------
    2          CRITICAL OPEN      13-FEB-2017 10:12:26 SQL dictionary health check: seg$.type# 31 on object SEG$ failed

    At first I thought it might be related to false errors reported by the health check (DBMS_HM), because there used to be some issues with that tool. But even after applying the following patch, nothing changed and the error still appeared.

    19543595: INCORRECT HEALTHCHECK ERRORS FROM DBMS_HM – FALSE ERRORS ON TS$ , FILE$ OR USER
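    Applying such a one-off patch is the usual OPatch procedure. A minimal sketch, assuming the patch was unzipped to a staging directory, with the instance stopped for opatch apply and restarted before datapatch:

    oracle@dbidg01:/u01/app/oracle/software/19543595/ [DBIT122] $ORACLE_HOME/OPatch/opatch apply
    oracle@dbidg01:/u01/app/oracle/software/19543595/ [DBIT122] $ORACLE_HOME/OPatch/datapatch -verbose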

    So I started a manual health check again to get some more details.

    SQL> BEGIN
      2  DBMS_HM.RUN_CHECK (check_name => 'Dictionary Integrity Check',
      3  run_name => 'WilliamsDICTrun002',
      4  input_params => 'CHECK_MASK=ALL');
      5  END;
      6  /
    
    PL/SQL procedure successfully completed.
    
    SQL> SELECT DBMS_HM.GET_RUN_REPORT('WilliamsDICTrun002') from dual;
    
    DBMS_HM.GET_RUN_REPORT('WILLIAMSDICTRUN002')
    ---------------------------------------------------------------------
    Basic Run Information
     Run Name                     : WilliamsDICTrun002
     Run Id                       : 61
     Check Name                   : Dictionary Integrity Check
     Mode                         : MANUAL
     Status                       : COMPLETED
     Start Time                   : 2017-02-13 10:56:58.250100 +01:00
     End Time                     : 2017-02-13 10:56:58.689301 +01:00
     Error Encountered            : 0
     Source Incident Id           : 0
     Number of Incidents Created  : 0
    
    Input Paramters for the Run
     TABLE_NAME=ALL_CORE_TABLES
     CHECK_MASK=ALL
    
    Run Findings And Recommendations
     Finding
     Finding Name  : Dictionary Inconsistency
     Finding ID    : 62
     Type          : FAILURE
     Status        : OPEN
     Priority      : CRITICAL
     Message       : SQL dictionary health check: seg$.type# 31 on object SEG$
                   failed
     Message       : Damaged rowid is AAAAAIAABAAAK+RAAc - description: Ts# 1
                   File# 2 Block# 28032 is referenced

    Now I have the ROWID, the file number and the block number of the affected object. Let’s see what it is.

    SQL> select FILE#, BLOCK#, TYPE#, TS#, BLOCKS from seg$ where rowid='AAAAAIAABAAAK+RAAc';
    
         FILE#     BLOCK#      TYPE#        TS#     BLOCKS
    ---------- ---------- ---------- ---------- ----------
             2      28032         11          1       1024
    		 
    
    SQL> SELECT segment_name, segment_type, block_id, blocks
      2  FROM   dba_extents
      3  WHERE
      4  file_id = 2
      5  AND
      6  ( 28032 BETWEEN block_id AND ( block_id + blocks ) );
    
    SEGMENT_NAME               SEGMENT_TYPE               BLOCK_ID     BLOCKS
    -------------------------- ------------------------ ---------- ----------
    HEATMAP                    SYSTEM STATISTICS             28032       1024

    Really strange. It is related to the HEATMAP segment, but I am not using the heat map feature, nor have I ever used it in the past.

    SQL> show parameter heat
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    heat_map                             string      OFF
    
    SQL> select name, DETECTED_USAGES from DBA_FEATURE_USAGE_STATISTICS where name like 'Heat%';
    
    NAME                     DETECTED_USAGES
    ------------------------ ---------------
    Heat Map                               0

    But how can I get this fixed now? You could either ignore the issue, open an SR with Oracle, or drop the statistics segment, in case you are not using the heat map feature.

    In my case, I decided to drop the statistics segment by issuing the following command. Dropping the statistics segment works by setting the underscore parameter “_drop_stat_segment” to 1.

    SQL> select SEGMENT_NAME, SEGMENT_TYPE from dba_extents where SEGMENT_TYPE = 'SYSTEM STATISTICS';
    
    SEGMENT_NAME               SEGMENT_TYPE
    -------------------------- ------------------------
    HEATMAP                    SYSTEM STATISTICS
    
    SQL> ALTER SYSTEM SET "_drop_stat_segment"=1 scope=memory;
    
    System altered.
    
    SQL> select SEGMENT_NAME, SEGMENT_TYPE from dba_extents where SEGMENT_TYPE = 'SYSTEM STATISTICS';
    
    no rows selected

    The heat map table is gone now. Let’s run the dictionary check again.

    SQL> BEGIN
      2  DBMS_HM.RUN_CHECK (check_name => 'Dictionary Integrity Check',
      3  run_name => 'WilliamsDICTrun003',
      4  input_params => 'CHECK_MASK=ALL');
      5  END;
      6  /
    
    PL/SQL procedure successfully completed.
    
    SQL> SELECT DBMS_HM.GET_RUN_REPORT('WilliamsDICTrun003') from dual;
    
    DBMS_HM.GET_RUN_REPORT('WILLIAMSDICTRUN003')
    ---------------------------------------------------------------------
    Basic Run Information
     Run Name                     : WilliamsDICTrun003
     Run Id                       : 81
     Check Name                   : Dictionary Integrity Check
     Mode                         : MANUAL
     Status                       : COMPLETED
     Start Time                   : 2017-02-13 11:17:15.190873 +01:00
     End Time                     : 2017-02-13 11:17:15.642501 +01:00
     Error Encountered            : 0
     Source Incident Id           : 0
     Number of Incidents Created  : 0
    
    Input Paramters for the Run
     TABLE_NAME=ALL_CORE_TABLES
     CHECK_MASK=ALL
    
    Run Findings And Recommendations
    
    
    RMAN> list failure;
    
    using target database control file instead of recovery catalog
    Database Role: PRIMARY
    
    no failures found that match specification

     

    Looks much better now.

    Conclusion

    Even if you are not using some features, you can still have trouble with them. :-)

     

    The article Oracle 12c – Issues with the HEATMAP Segment even if the heat map feature is not used appeared first on Blog dbi services.

    Oracle 12c – Combining Flashback Drop and Flashback Query


    If you think that the Flashback Drop feature just brings back your table, then that is only half of the story. It does much more than that. Besides undropping the table, it also brings back your constraints, indexes, triggers, grants and statistics.

    The ugly part is that Flashback Drop brings back some strange object names, e.g. indexes and constraints with names like “BIN$…”. Maybe something you don’t want. So why not combine Flashback Drop with a Flashback Query on the dictionary to get the old constraint and index names back?

    Let’s set up a few objects in the SCOTT schema. But before we do that, we need to grant the user SCOTT some extra privileges.

    SQL> grant execute on dbms_flashback to scott;
    
    Grant succeeded.
    
    SQL> grant flashback on user_indexes to scott;
    
    Grant succeeded.
    
    SQL> grant flashback on user_constraints to scott;
    
    Grant succeeded.
    
    SQL> grant flashback on user_triggers to scott;
    
    Grant succeeded.

    Now we can set up our objects for this test. I will create two tables, a few grants, a trigger and statistics. The goal is to have, after the flashback to before drop, exactly the same object names as before for the table, the index, the constraints and the trigger.

    SQL> connect scott/tiger
    Connected.
    
    SQL> create table dbi_t
      2  ( x int, constraint t_pk primary key(x),
      3   y int, constraint check_x check(x>0)
      4  );
    
    Table created.
    
    SQL> insert into dbi_t values (1,1);
    
    1 row created.
    
    SQL> insert into dbi_t values (2,2);
    
    1 row created.
    
    SQL> insert into dbi_t values (3,3);
    
    1 row created.
    
    SQL> COMMIT;
    
    Commit complete.
    
    SQL> create table dbi_audit
      2  (x int, x_before int, y int, y_before int, z varchar2(10));
    
    Table created.
    
    
    SQL> CREATE OR REPLACE TRIGGER dbi_after_update
      2  AFTER INSERT OR UPDATE
      3     ON DBI_T
      4     FOR EACH ROW
      5  DECLARE
      6     v_z varchar2(10);
      7  BEGIN
      8     SELECT user INTO v_z FROM dual;
      9     -- Insert record into audit table
     10     INSERT INTO dbi_audit
     11     ( x,
     12       x_before,
     13       y,
     14       y_before,
     15       z)
     16     VALUES
     17     ( :new.x,
     18       :old.x,
     19       :new.y,
     20       :old.y,
     21       v_z );
     22* END;
     /
    
    Trigger created.
    
    
    SQL> insert into dbi_t values (4,4);
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> insert into dbi_t values (5,5);
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> update dbi_t set x=6 where y=5;
    
    1 row updated.
    
    SQL> commit;
    
    Commit complete.
    
    
    SQL> select * from dbi_t;
    
             X          Y
    ---------- ----------
             1          1
             2          2
             3          3
             4          4
             6          5
    
    SQL> select * from dbi_audit;
    
             X   X_BEFORE          Y   Y_BEFORE Z
    ---------- ---------- ---------- ---------- ----------
             4                     4            SCOTT
             5                     5            SCOTT
             6          5          5          5 SCOTT
    
    
    
    
    SQL> begin
      2  DBMS_STATS.GATHER_TABLE_STATS (
      3  ownname => '"SCOTT"',
      4  tabname => '"DBI_T"',
      5  estimate_percent => 100
      6  );
      7  end;
      8  /
    
    PL/SQL procedure successfully completed.
    
    SQL> begin
      2  DBMS_STATS.GATHER_TABLE_STATS (
      3  ownname => '"SCOTT"',
      4  tabname => '"DBI_AUDIT"',
      5  estimate_percent => 100
      6  );
      7  end;
      8  /
    
    PL/SQL procedure successfully completed.
    
    
    SQL> grant select on dbi_t to hr;
    
    Grant succeeded.
    
    SQL> grant select on dbi_audit to hr;
    
    Grant succeeded.

    Ok. So let’s take a look at the current situation.

    SQL> select TABLE_NAME, LAST_ANALYZED
      2  from user_tables
      3  where TABLE_NAME in ('DBI_T','DBI_AUDIT');
    
    TABLE_NAME   LAST_ANALYZED
    ------------ --------------------
    DBI_AUDIT    17-FEB-17
    DBI_T        17-FEB-17
    
    SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE from user_constraints where table_name = 'DBI_T';
    
    CONSTRAINT_NAME                      C
    ------------------------------------ -
    CHECK_X                              C
    T_PK                                 P
    
    SQL> select index_name from user_indexes where table_name = 'DBI_T';
    
    INDEX_NAME
    ------------------------------------
    T_PK
    
    SQL> select GRANTEE, OWNER, TABLE_NAME, GRANTOR, PRIVILEGE from user_tab_privs
      2  where table_name in ('DBI_T','DBI_AUDIT');
    
    GRANTEE        OWNER          TABLE_NAME           GRANTOR        PRIVILEGE
    -------------- -------------- -------------------- -------------- --------------------
    HR             SCOTT          DBI_AUDIT            SCOTT          SELECT
    HR             SCOTT          DBI_T                SCOTT          SELECT
    
    SQL> select TRIGGER_NAME, TABLE_NAME, STATUS from user_triggers;
    
    TRIGGER_NAME             TABLE_NA STATUS
    ------------------------ -------- --------
    DBI_AFTER_UPDATE         DBI_T    ENABLED

    Everything looks good: up-to-date statistics, the trigger is enabled, and no objects with “BIN$…” names. The next step is quite an important one for this demo: I save the SCN from just before the “drop table” into a substitution variable. In the real world, you need to find the SCN yourself, e.g. with the TIMESTAMP_TO_SCN function.

    SQL> column SCN new_val S
    SQL> select dbms_flashback.get_system_change_number SCN from dual;
    
           SCN
    ----------
       1056212
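    As a side note, if you only know the approximate time of the drop, a sketch like this (the timestamp is just an example) returns an SCN close to it:

    SQL> select timestamp_to_scn(to_timestamp('17-FEB-17 10.00.00','DD-MON-RR HH24.MI.SS')) SCN from dual;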

    After we got the SCN, we can drop the table and undrop it afterwards.

    SQL> drop table dbi_t;
    
    Table dropped.
    
    SQL> flashback table dbi_t to before drop;
    
    Flashback complete.

    Let’s take a look at how our constraint and index names look right now. Exactly as expected: they have this ugly “BIN$…” form, but we want the old names back.

    SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE from user_constraints where table_name = 'DBI_T';
    
    CONSTRAINT_NAME                      C
    ------------------------------------ -
    BIN$SLt7vMNFZNbgU8k4qMDm6g==$0       C
    BIN$SLt7vMNGZNbgU8k4qMDm6g==$0       P
    
    SQL> select index_name from user_indexes where table_name = 'DBI_T';
    
    INDEX_NAME
    ------------------------------------
    BIN$SLt7vMNHZNbgU8k4qMDm6g==$0

    The trick is now to run a Flashback Query on the dictionary. Flashback Query on the dictionary is not 100% supported, but it works. I just save the current index name into the variable “I” and the old name into the variable “OI”.

    SQL> column index_name new_val I
    SQL> select index_name from user_indexes where table_name = 'DBI_T';
    
    INDEX_NAME
    ------------------------------------
    BIN$SLt7vMNHZNbgU8k4qMDm6g==$0
    
    SQL> column index_name new_val OI
    SQL> select index_name from user_indexes as of scn &S
      2  where table_name = 'DBI_T';
    old   1: select index_name from user_indexes as of scn &S
    new   1: select index_name from user_indexes as of scn    1056212
    
    INDEX_NAME
    ------------------------------------
    T_PK

    After I have the current and the old name in place, I can do an alter index and get my old name back.

    SQL> alter index "&I" rename to "&OI";
    old   1: alter index "&I" rename to "&OI"
    new   1: alter index "BIN$SLt7vMNHZNbgU8k4qMDm6g==$0" rename to "T_PK"
    
    Index altered.
    
    SQL> select index_name from user_indexes where table_name = 'DBI_T';
    
    INDEX_NAME
    ------------------------------------
    T_PK

     

    I will now do exactly the same for the constraints and the trigger.

    SQL> column constraint_name new_val CC
    SQL> select constraint_name from user_constraints where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C';
    
    CONSTRAINT_NAME
    ------------------------------------
    BIN$SLt7vMNFZNbgU8k4qMDm6g==$0
    
    SQL> column constraint_name new_val OCC
    SQL> select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C';
    old   1: select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C'
    new   1: select constraint_name from user_constraints as of scn    1056212 where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C'
    
    CONSTRAINT_NAME
    ------------------------------------
    CHECK_X
    
    
    SQL> alter table DBI_T RENAME CONSTRAINT "&CC" TO "&OCC";
    old   1: alter table DBI_T RENAME CONSTRAINT "&CC" TO "&OCC"
    new   1: alter table DBI_T RENAME CONSTRAINT "BIN$SLt7vMNFZNbgU8k4qMDm6g==$0" TO "CHECK_X"
    
    Table altered.
    
    SQL> column constraint_name new_val PC
    SQL> select constraint_name from user_constraints where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P';
    
    CONSTRAINT_NAME
    ------------------------------------
    BIN$SLt7vMNGZNbgU8k4qMDm6g==$0
    
    SQL> column constraint_name new_val OPC
    SQL> select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P';
    old   1: select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P'
    new   1: select constraint_name from user_constraints as of scn    1056212 where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P'
    
    CONSTRAINT_NAME
    ------------------------------------
    T_PK
    
    
    SQL> alter table DBI_T RENAME CONSTRAINT "&PC" TO "&OPC";
    old   1: alter table DBI_T RENAME CONSTRAINT "&PC" TO "&OPC"
    new   1: alter table DBI_T RENAME CONSTRAINT "BIN$SLt7vMNGZNbgU8k4qMDm6g==$0" TO "T_PK"
    
    Table altered.
    
    SQL> col CONSTRAINT_NAME format a36
    SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE from user_constraints where table_name = 'DBI_T';
    
    CONSTRAINT_NAME                      C
    ------------------------------------ -
    CHECK_X                              C
    T_PK                                 P
    
    SQL> col INDEX_NAME format a36
    SQL> select index_name from user_indexes where table_name = 'DBI_T';
    
    INDEX_NAME
    ------------------------------------
    T_PK
    
    
    SQL> select TRIGGER_NAME, TABLE_NAME,STATUS from user_triggers;
    
    TRIGGER_NAME                     TABLE_NAME                       STATUS
    -------------------------------- -------------------------------- --------
    BIN$SLt7vMNIZNbgU8k4qMDm6g==$0   DBI_T                            ENABLED
    
    SQL> column trigger_name new_val T
    SQL> select trigger_name from user_triggers where table_name = 'DBI_T';
    
    TRIGGER_NAME
    --------------------------------
    BIN$SLt7vMNIZNbgU8k4qMDm6g==$0
    
    SQL> column trigger_name new_val OT
    SQL> select trigger_name from user_triggers as of scn &S where table_name = 'DBI_T';
    old   1: select trigger_name from user_triggers as of scn &S where table_name = 'DBI_T'
    new   1: select trigger_name from user_triggers as of scn    1056212 where table_name = 'DBI_T'
    
    TRIGGER_NAME
    --------------------------------
    DBI_AFTER_UPDATE
    
    SQL> alter trigger "&T" RENAME TO "&OT";
    old   1: alter trigger "&T" RENAME TO "&OT"
    new   1: alter trigger "BIN$SLt7vMNIZNbgU8k4qMDm6g==$0" RENAME TO "DBI_AFTER_UPDATE"
    
    Trigger altered.
    
    
    SQL> select TRIGGER_NAME, TABLE_NAME, STATUS from user_triggers;
    
    TRIGGER_NAME             TABLE_NAME             STATUS
    ------------------------ ---------------------- --------
    DBI_AFTER_UPDATE         DBI_T                  ENABLED

    The stats and the grants do come back automatically.

    SQL> select TABLE_NAME, LAST_ANALYZED
      2  from user_tables
      3  where TABLE_NAME in ('DBI_T','DBI_AUDIT');
    
    TABLE_NAME   LAST_ANALYZED
    ------------ --------------------
    DBI_AUDIT    17-FEB-17
    DBI_T        17-FEB-17
    
    
    SQL> select GRANTEE, OWNER, TABLE_NAME, GRANTOR, PRIVILEGE from user_tab_privs
      2  where table_name in ('DBI_T','DBI_AUDIT');
    
    GRANTEE        OWNER          TABLE_NAME           GRANTOR        PRIVILEGE
    -------------- -------------- -------------------- -------------- --------------------
    HR             SCOTT          DBI_AUDIT            SCOTT          SELECT
    HR             SCOTT          DBI_T                SCOTT          SELECT

     

    Conclusion

    The Flashback Drop feature does not just bring back your table. It does much more: it brings back your grants, triggers, statistics, indexes and constraints as well. If you are lucky, you can even combine it with a Flashback Query to retrieve the old names for the indexes, constraints and triggers.

     

    The article Oracle 12c – Combining Flashback Drop and Flashback Query appeared first on Blog dbi services.

    12cR2 real-time materialized view (on query computation)


    Materialized views are a very old feature (you may remember that they were called snapshots a long time ago). A materialized view has all the advantages of a view: you can define any select statement that joins, filters, aggregates, and see it as one table. It has all the advantages of a table: it is stored in one segment, can be indexed, partitioned, have constraints, be compressed, etc. It looks like an index in that it stores data redundantly, in a different physical way, more focused on the way it will be queried than on the way data is entered. Like indexes, materialized views can be used transparently (with query rewrite), but unlike indexes, they are not maintained synchronously and have to be refreshed. They also have some advantages of replication, because they can capture the changes done on the source tables into materialized view logs, so that the refresh can be incremental (fast refresh).
    Oracle Database 12.2 goes a step further, being able to deliver fresh results even when the materialized view is stale. This is an amazing feature called real-time materialized view, which does on-query computation of the fresh result from the stale one, joined with the materialized view log.

    I create my DEMO table on Oracle Exadata Express Cloud Service

    SQL> create table DEMO (id primary key,a,b) as select rownum,round(log(10,rownum)) a, rownum b from xmltable('1 to 100000');
    Table created.

    I plan to create a materialized view that aggregates the count and sum of B grouped by A. And DBMS_MVIEW can tell me what I need in order to be able to fast refresh it.

    Explain Materialized View

    The goal is to have a real-time materialized view with frequent refreshes, which means that we need fast refresh to be possible after any kind of modification.


    SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
    PL/SQL procedure successfully completed.
     
    SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
     
    CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    REFRESH_FAST_AFTER_ONETAB_DML COUNT(*) is not present in the select list
    REFRESH_FAST
    REFRESH_FAST_AFTER_INSERT the detail table does not have a materialized view log PDB_ADMIN.DEMO
    REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
    REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
    REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

    Here is what I have to do in order to have a materialized view that can be fast refreshed: COUNT(*) in the select, and create a materialized view log.

    Materialized view log


    SQL> create materialized view log on DEMO;
    Materialized view log created.

    Let’s check if it is ok now, with the additional count(*):

    SQL> delete from mv_capabilities_table;
    15 rows deleted.
     
    SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
    PL/SQL procedure successfully completed.
     
    SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
     
    CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    REFRESH_FAST
    REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
    REFRESH_FAST_AFTER_INSERT mv log must have ROWID PDB_ADMIN.DEMO
    REFRESH_FAST_AFTER_INSERT mv log must have new values PDB_ADMIN.DEMO
    REFRESH_FAST_AFTER_INSERT mv log does not have all necessary columns PDB_ADMIN.DEMO
    REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
    REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

    I must add ROWID, the used columns and NEW VALUES:


    SQL> drop materialized view log on DEMO;
    Materialized view log dropped.
     
    SQL> create materialized view log on DEMO with sequence, rowid (a,b) including new values;
    Materialized view log created.

    You can see that I’ve added the sequence, which was not mentioned by explain_mview. I’ll come back to that later and probably in another post.


    SQL> delete from mv_capabilities_table;
    16 rows deleted.
    SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
    PL/SQL procedure successfully completed.
    SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
     
    CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view

    Ok, now I’m ready to create the materialized view. The only remaining message is about PCT (Partition Change Tracking), which is relevant only for partitioned tables.


    SQL> create materialized view DEMO_MV refresh fast on demand as select a,count(b),sum(b),count(*) from DEMO group by a;
    Materialized view created.

    Aggregate query on the source table

    I’m running a simple query that can get its result from the source table or from the materialized view


    SQL> select sum(b) from DEMO where a=3;
     
    SUM(B)
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID brdc1qcbc2npk, child number 0
    -------------------------------------
    select sum(b) from DEMO where a=3
     
    Plan hash value: 2180342005
     
    ---------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    ---------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 262 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 262 |
    |* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 262 |
    ---------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("A"=3)
    filter("A"=3)

    The query has read the source table. I need to enable query rewrite to get the CBO to transparently transform it into a query on the materialized view.

    Query Rewrite


    SQL> alter materialized view DEMO_MV enable query rewrite;
    Materialized view altered.

    I also need query_rewrite_integrity to be set, which it is by default:

    SQL> show parameter query_rewrite
     
    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    query_rewrite_enabled string TRUE
    query_rewrite_integrity string enforced

    Now, the rewrite can occur:

    SQL> select sum(b) from DEMO where a=3;
     
    SUM(B)
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID brdc1qcbc2npk, child number 0
    -------------------------------------
    select sum(b) from DEMO where a=3
     
    Plan hash value: 2792196921
     
    -----------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    -----------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
    |* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
    -----------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("DEMO_MV"."A"=3)
    filter("DEMO_MV"."A"=3)

    This query is optimized: 9 blocks read from the materialized view instead of 262 from the source table.

    You can note that it’s not a new child cursor: the previous cursor was invalidated when I altered the materialized view.

    This rewrite can occur only because the materialized view has been refreshed and the source table has had no modifications since.
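    As a side note, the staleness state is also visible in the dictionary; a quick sketch:

    SQL> select mview_name, staleness, last_refresh_type, last_refresh_date
      2  from user_mviews where mview_name = 'DEMO_MV';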

    Stale MVIEW

    Let’s do some DML on the source table.


    SQL> insert into DEMO values(0,0,0);
    1 row created.

    and query again


    SQL> select sum(b) from DEMO where a=3;
     
    SUM(B)
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID brdc1qcbc2npk, child number 1
    -------------------------------------
    select sum(b) from DEMO where a=3
     
    Plan hash value: 2180342005
     
    ---------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    ---------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
    |* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
    ---------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("A"=3)
    filter("A"=3)

    Now, the materialized view is stale. We cannot get the same result from it, so the rewrite didn’t happen.

    You can see that I have a new child cursor. The previous one cannot be shared because it was valid only while the materialized view was not stale.

    Stale tolerated

    If I want to keep using the materialized view, I have the option to accept stale results:


    SQL> alter session set query_rewrite_integrity=stale_tolerated;
    Session altered.

    Now, the rewrite can occur even when the source table has changed since the last refresh.


    SQL> select sum(b) from DEMO where a=3;
     
    SUM(B)
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID brdc1qcbc2npk, child number 2
    -------------------------------------
    select sum(b) from DEMO where a=3
     
    Plan hash value: 2792196921
     
    -----------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    -----------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
    |* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
    -----------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("DEMO_MV"."A"=3)
    filter("DEMO_MV"."A"=3)

    Of course, here you can’t see that the result is stale, because I inserted a row with value 0, which does not change the sum. Let’s count the rows instead, which is something that is also aggregated in my materialized view. I have the option to disable the rewrite and query the source table:


    SQL> select /*+ no_rewrite */ count(b) from DEMO;
     
    COUNT(B)
    ----------
    100001

    This is the accurate result, but with a full access to the source table.

    The rewrite can also be forced by a hint (because rewrite is a cost-based decision):


    SQL> select /*+ rewrite */ count(b) from DEMO;
     
    COUNT(B)
    ----------
    100000

    Stale result here: I don’t see the latest modifications.

    Frequent refresh

    In order to limit the gap between fresh data and stale results, you can refresh the materialized view frequently. It’s not too expensive thanks to the materialized view log: fast refresh is incremental.

    Here I don’t want stale results:

    SQL> alter session set query_rewrite_integrity=enforced;
    Session altered.

    and I refresh the materialized view


    SQL> exec dbms_mview.refresh('DEMO_MV','f');
    PL/SQL procedure successfully completed.

    Then I can expect, until the next updates, to get results from the materialized view.


    SQL> select sum(b) from DEMO where a=3;
     
    SUM(B)
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID brdc1qcbc2npk, child number 1
    -------------------------------------
    select sum(b) from DEMO where a=3
     
    Plan hash value: 2180342005
     
    ---------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    ---------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
    |* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
    ---------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("A"=3)
    filter("A"=3)

    Unfortunately I re-used the same cursor here. When you refresh, the cursors are not invalidated.

    I’m running another statement now to get it parsed again:

    SQL> select sum(b) this_is_another_cursor from DEMO where a=3;
     
    THIS_IS_ANOTHER_CURSOR
    ----------------------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 27xfg0qjcf7ff, child number 0
    -------------------------------------
    select sum(b) this_is_another_cursor from DEMO where a=3
     
    Plan hash value: 2792196921
     
    -----------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    -----------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
    |* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
    -----------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("DEMO_MV"."A"=3)
    filter("DEMO_MV"."A"=3)

    So, we now read the materialized view, but this will last only while there are no updates on the table. So the idea is to trigger a refresh as soon as there are modifications. Ideally it should be like indexes, which are maintained automatically. But indexes are much simpler: there is just a value-to-rowid mapping entry to maintain, and rowids do not change. Materialized views have joins and aggregates, and contain all columns.

    Refresh on commit

    So the idea is to defer the maintenance of the materialized view to commit time. This is the latest point where we are required to do it, as we want other sessions to never see stale results. And the materialized view logs are there to store the incremental changes, even if the transaction is very long. Of course, we need to be aware of this overhead, because in general a commit is an immediate and simple operation.

    Let’s define the materialized view to refresh on commit instead of on-demand


    SQL> alter materialized view DEMO_MV refresh on commit;
    Materialized view altered.

    I do some modifications


    SQL> delete from DEMO where id=0;
    1 row deleted.

    And I run my query


    SQL> select sum(b) this_is_a_third_cursor from DEMO where a=3;
     
    THIS_IS_A_THIRD_CURSOR
    ----------------------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 5dfs068dgbwvd, child number 0
    -------------------------------------
    select sum(b) this_is_a_third_cursor from DEMO where a=3
     
    Plan hash value: 2180342005
     
    ---------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    ---------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
    |* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
    ---------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("A"=3)
    filter("A"=3)

    Rewrite cannot happen here because the materialized view is stale: I didn’t commit yet. Of course, other sessions can still query from the materialized view because they must not see my uncommitted modification.


    SQL> commit;
    Commit complete.

    The commit has triggered the fast refresh of the materialized view


    SQL> select sum(b) this_is_a_fourth_cursor from DEMO where a=3;
     
    THIS_IS_A_FOURTH_CURSOR
    -----------------------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 0075r0yzqt90a, child number 0
    -------------------------------------
    select sum(b) this_is_a_fourth_cursor from DEMO where a=3
     
    Plan hash value: 2792196921
     
    -----------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    -----------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
    |* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
    -----------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("DEMO_MV"."A"=3)
    filter("DEMO_MV"."A"=3)

    With on commit refresh, the materialized view is never stale. The problem is that it can slow down the transactions: in addition to filling the materialized view logs, the commit has the overhead of applying them. In 12.1 this is the only way to have queries on the materialized view always return fresh results. But there’s something new in 12.2.

    Real-time materialized views

    Even when the materialized view is stale, we can get fresh results without querying the source tables. We have the stale values in the materialized view and we have all the changes logged in the materialized view log. Easy or not, merging the two can be computed to get a fresh result. We still need fast refresh, but we don’t need refresh on commit anymore:


    SQL> alter materialized view DEMO_MV refresh on demand;
    Materialized view altered.

    And in order to use this new feature we have to enable it at materialized view level:


    SQL> alter materialized view DEMO_MV enable on query computation;
    Materialized view altered.

    Then let the magic happen:


    SQL> select sum(b) from DEMO where a=3;
     
    SUM(B)
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID brdc1qcbc2npk, child number 0
    -------------------------------------
    select sum(b) from DEMO where a=3
     
    Plan hash value: 2792196921
     
    -----------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    -----------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
    |* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
    -----------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("DEMO_MV"."A"=3)
    filter("DEMO_MV"."A"=3)

    Here my materialized view is not stale, so nothing special happened. Now let’s do some modification:

    SQL> insert into DEMO values(0,0,0);
    1 row created.

    and…

    SQL> select sum(b) try_again from DEMO where a=3;
     
    TRY_AGAIN
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID dtmhccwr0v7r5, child number 0
    -------------------------------------
    select sum(b) try_again from DEMO where a=3
     
    Plan hash value: 2180342005
     
    ---------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    ---------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
    |* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
    ---------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("A"=3)
    filter("A"=3)

    Still no magic here. For the session that did the modifications, it seems that query rewrite cannot happen: all changes are in the materialized view log, but applying the uncommitted ones for my own session seems to be impossible here. Well, let’s commit my changes.


    SQL> commit;
    Commit complete.

    and see the magic:


    SQL> select sum(b) try_again from DEMO where a=3;
     
    TRY_AGAIN
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID dtmhccwr0v7r5, child number 0
    -------------------------------------
    select sum(b) try_again from DEMO where a=3
     
    Plan hash value: 2180342005
     
    ---------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
    ---------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
    |* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
    ---------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    2 - storage("A"=3)

    Oh… that’s my previous cursor. No invalidation occurs. I have to parse a different statement.


    SQL> select sum(b) here_I_am from DEMO where a=3;
     
    HERE_I_AM
    ----------
    4950617
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 34fqrktpthuk7, child number 1
    -------------------------------------
    select sum(b) here_I_am from DEMO where a=3
     
    Plan hash value: 1240257898
     
    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 25 | | | |
    | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 25 | | | |
    | 2 | VIEW | | 1 | 705 | 1 |00:00:00.01 | 25 | | | |
    | 3 | UNION-ALL | | 1 | | 1 |00:00:00.01 | 25 | | | |
    |* 4 | FILTER | | 1 | | 1 |00:00:00.01 | 16 | | | |
    |* 5 | HASH JOIN OUTER | | 1 | 100 | 1 |00:00:00.01 | 16 | 3843K| 3843K| 1699K (0)|
    |* 6 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 | 1025K| 1025K| |
    | 7 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 7 | | | |
    | 8 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 7 | 1956K| 1956K| 2324K (0)|
    | 9 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 7 | | | |
    | 10 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 7 | | | |
    |* 11 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 7 | | | |
    | 12 | WINDOW SORT | | 1 | 1 | 1 |00:00:00.01 | 7 | 2048 | 2048 | 2048 (0)|
    |* 13 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 1 | 1 | 1 |00:00:00.01 | 7 | 1025K| 1025K| |
    | 14 | VIEW | | 1 | 605 | 0 |00:00:00.01 | 9 | | | |
    | 15 | UNION-ALL | | 1 | | 0 |00:00:00.01 | 9 | | | |
    |* 16 | FILTER | | 1 | | 0 |00:00:00.01 | 0 | | | |
    | 17 | NESTED LOOPS OUTER | | 1 | 600 | 0 |00:00:00.01 | 0 | | | |
    | 18 | VIEW | | 1 | 100 | 0 |00:00:00.01 | 0 | | | |
    |* 19 | FILTER | | 1 | | 0 |00:00:00.01 | 0 | | | |
    | 20 | HASH GROUP BY | | 1 | | 0 |00:00:00.01 | 0 | 2982K| 2982K| |
    |* 21 | VIEW | | 1 | 1 | 0 |00:00:00.01 | 0 | | | |
    | 22 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
    |* 23 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
    | 24 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
    |* 25 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
    |* 26 | INDEX UNIQUE SCAN | I_SNAP$_DEMO_MV | 0 | 6 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
    |* 27 | HASH JOIN | | 1 | 5 | 0 |00:00:00.01 | 9 | 3043K| 3043K| 1702K (0)|
    |* 28 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 | 1025K| 1025K| |
    | 29 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
    | 30 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2319K (0)|
    | 31 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
    | 32 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
    |* 33 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
    | 34 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
    |* 35 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    We got it, all the magic: the materialized view is read, the materialized view log is read, but we don’t need the source tables. All this is merged by outer joins and union all. The plan is harder to read, but it requires only 25 logical reads to get fresh results, instead of 270 from the source table. The bigger the tables are and the more complex the query is, the more benefit you get, as long as you don’t have too many changes since the last refresh. And this without any overhead on other transactions’ commits. That’s the beauty of 12cR2 Enterprise Edition. Can you imagine having to code this yourself? For any query? For any modifications on the source tables?

    FRESH_MV

    This was query rewrite: you query the source table and the CBO transforms the query to read the materialized view (given that the CBO estimates it to be cheaper). But you can also query the materialized view directly and ask for fresh results, by joining the materialized view log to the stale result. And this can also be used in Standard Edition (only query rewrite is limited to Enterprise Edition). On-query computation when querying the materialized view is enabled by the FRESH_MV hint:


    SQL> select /*+ fresh_mv */ * from DEMO_MV;
     
    A COUNT(B) SUM(B) COUNT(*)
    ---------- ---------- ---------- ----------
    5 68378 4500058747 68378
    2 285 49590 285
    3 2846 4950617 2846
    1 28 490 28
    4 28460 494990550 28460
    0 4 6 4
     
    6 rows selected.
     
    SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last +alias'));
     
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID gyar0v20qcksu, child number 0
    -------------------------------------
    select /*+ fresh_mv */ * from DEMO_MV
     
    Plan hash value: 2169890143
     
    ----------------------------------------------------------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
    ----------------------------------------------------------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | | 6 |00:00:00.01 | 12 | | | |
    | 1 | VIEW | | 1 | 730 | 6 |00:00:00.01 | 12 | | | |
    | 2 | UNION-ALL | | 1 | | 6 |00:00:00.01 | 12 | | | |
    |* 3 | VIEW | VW_FOJ_0 | 1 | 100 | 5 |00:00:00.01 | 9 | | | |
    |* 4 | HASH JOIN FULL OUTER | | 1 | 100 | 6 |00:00:00.01 | 9 | 2897K| 2897K| 3217K (0)|
    | 5 | VIEW | | 1 | 6 | 6 |00:00:00.01 | 9 | | | |
    | 6 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 9 | 1025K| 1025K| |
    | 7 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
    | 8 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2268K (0)|
    | 9 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
    | 10 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
    |* 11 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
    | 12 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
    |* 13 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
    | 14 | VIEW | | 1 | 630 | 1 |00:00:00.01 | 3 | | | |
    | 15 | UNION-ALL | | 1 | | 1 |00:00:00.01 | 3 | | | |
    |* 16 | FILTER | | 1 | | 0 |00:00:00.01 | 1 | | | |
    | 17 | NESTED LOOPS OUTER | | 1 | 600 | 1 |00:00:00.01 | 1 | | | |
    | 18 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
    |* 19 | FILTER | | 1 | | 1 |00:00:00.01 | 0 | | | |
    | 20 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2304K (0)|
    | 21 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
    | 22 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
    |* 23 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
    | 24 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
    |* 25 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
    |* 26 | INDEX UNIQUE SCAN | I_SNAP$_DEMO_MV | 1 | 6 | 1 |00:00:00.01 | 1 | 1025K| 1025K| |
    | 27 | MERGE JOIN | | 1 | 30 | 1 |00:00:00.01 | 2 | | | |
    | 28 | MAT_VIEW ACCESS BY INDEX ROWID | DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 2 | | | |
    | 29 | INDEX FULL SCAN | I_SNAP$_DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 1 | 1025K| 1025K| |
    |* 30 | FILTER | | 6 | | 1 |00:00:00.01 | 0 | | | |
    |* 31 | SORT JOIN | | 6 | 100 | 1 |00:00:00.01 | 0 | 2048 | 2048 | 2048 (0)|
    | 32 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
    | 33 | SORT GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 2048 | 2048 | 2048 (0)|
    | 34 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
    | 35 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
    |* 36 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
    | 37 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
    |* 38 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
    ----------------------------------------------------------------------------------------------------------------------------------------------------------

    Have you seen that we need even fewer logical reads (12) than before (25)? There is an optimization here with RESULT CACHE. You get this when you have the sequence in the materialized view log, and you can see that the sequence is used in the predicates:


    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
    3 - filter("AV$0"."OJ_MARK" IS NULL)
    4 - access(SYS_OP_MAP_NONNULL("SNA$0"."A")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
    11 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
    "MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
    13 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
    filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
    16 - filter(CASE WHEN ROWID IS NOT NULL THEN 1 ELSE NULL END IS NULL)
    19 - filter(SUM(1)>0)
    23 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
    "MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
    25 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
    filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
    26 - access("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
    30 - filter("DEMO_MV"."COUNT(*)"+"AV$0"."D0">0)
    31 - access("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
    filter("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
    36 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
    "MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
    38 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
    filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))

    Of course, you also see a predicate with the staleness timestamp (here 2017-02-16 20:31:08) of the materialized view.
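
By the way, you can also check the staleness of the materialized view itself in the dictionary. A minimal sketch (USER_MVIEWS exposes the STALENESS and LAST_REFRESH_DATE columns):

select mview_name, staleness, last_refresh_date
from user_mviews
where mview_name = 'DEMO_MV';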

    This result cache is interesting because the materialized view log is read several times in the execution plan and this is a way to actually read it only once.

    SQL> select type,column_count,row_count,cache_id,name from v$result_cache_objects;
     
TYPE       COLUMN_COUNT  ROW_COUNT CACHE_ID                       NAME
---------- ------------ ---------- ------------------------------ ------------------------------
Dependency            0          0 PDB_ADMIN.MLOG$_DEMO           PDB_ADMIN.MLOG$_DEMO
Result                7          1 6jf9k1y2wt8xc5b00gv9px6ww0     DMLTYPES:MLOG$_DEMO

The result cache has a dependency on the materialized view log, to be aware of additional changes, and when tracing the transformed query, we can see a session lifetime for this result cache: /*+ RESULT_CACHE(LIFETIME=SESSION, NAME="DMLTYPES:MLOG$_DEMO") */. Note that I included the sequence in the materialized view log, but this is not required. I'll show in a future post that the execution plan is different then, and does not use the result cache.
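
For reference, here is a minimal sketch of a materialized view log created with the sequence. The column list is an assumption based on the predicates above (column A of DEMO), since the table definition is not repeated here:

-- WITH SEQUENCE records an ordering sequence in MLOG$_DEMO,
-- which enables the optimization shown above (hypothetical column list)
create materialized view log on demo
with sequence, rowid (a) including new values;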

    So what?

This is an amazing feature. You can optimize your queries transparently by creating materialized views, get fresh results, and minimize the refresh overhead. Depending on the size of the tables and the rate of modifications, you can choose the right refresh frequency with the goal of limiting the materialized view logs to apply on each query. You get real-time results and bulk refresh at the same time. Oracle Database has always been a database for mixed workloads, where readers don't block writers. And once again we have a feature to optimize queries by pre-calculating them, with minimal impact on the source.

It is transparent, but after this first test, I have a few questions that arise and that I'll try to answer in future posts: Is it always better to have the sequence in the materialized view log? Is the default result cache size still sufficient? How can it use a timestamp with only 1-second precision and not an SCN? What happens at the winter Daylight Saving Time clock change? Can we get query rewrite when our own transaction has made the modifications? Do we need to invalidate cursors that read the source table? How accurate are the cardinality estimations on the very volatile materialized view log? When the full materialized view log is read, can it trigger a complete refresh?

     

This article 12cR2 real-time materialized view (on query computation) first appeared on the dbi services Blog.


    Oracle 12c – How to correct the error: “RMAN-20005: target database name is ambiguous”


I have a Data Guard environment where I have configured the RMAN DB_UNIQUE_NAME persistent setting for my primary and the standby. With the RMAN DB_UNIQUE_NAME settings I am able to run reports against my Oracle Data Guard environment from any database. I could, e.g., list all archivelogs for SITE1 from SITE2, or the other way around.
Or I could show all persistent settings for SITE1 from SITE2 and of course the other way around. The only prerequisite for this feature is the RMAN catalog. In case you are not connected to the RMAN catalog, you end up with the following error:

    RMAN> SHOW ARCHIVELOG DELETION POLICY FOR DB_UNIQUE_NAME 'DBIT121_SITE2';
    
    using target database control file instead of recovery catalog
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of show command at 02/21/2017 13:58:53
    RMAN-05037: FOR DB_UNIQUE_NAME option cannot be used in nocatalog mode

    After connecting to the catalog, you can use this feature, e.g. to show the archive deletion policy.

    $ rman target sys/welcome1 catalog /@rcat
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Tue Feb 21 14:25:10 2017
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT121 (DBID=644484523)
    connected to recovery catalog database
    
    RMAN> SHOW ARCHIVELOG DELETION POLICY FOR DB_UNIQUE_NAME 'DBIT121_SITE1';
    RMAN configuration parameters for database with db_unique_name DBIT121_SITE1 are:
    CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK;
    
    RMAN> SHOW ARCHIVELOG DELETION POLICY FOR DB_UNIQUE_NAME 'DBIT121_SITE2';
    RMAN configuration parameters for database with db_unique_name DBIT121_SITE2 are:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

There are quite a lot of options that can be combined with the DB_UNIQUE_NAME feature, like the following.

    LIST ARCHIVELOG ALL FOR DB_UNIQUE_NAME 'DBIT121_SITE2';
    REPORT SCHEMA FOR DB_UNIQUE_NAME 'DBIT121_SITE2';
    SHOW ALL FOR DB_UNIQUE_NAME 'DBIT121_SITE2';

    But getting back to my issue. I was running a resync catalog from my Standby database and ended up with the following error:

    RMAN> RESYNC CATALOG FROM DB_UNIQUE_NAME 'DBIT121_SITE1';
    
    resyncing from database with DB_UNIQUE_NAME DBIT121_SITE1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of resync from db_unique_name command on default channel at 02/21/2017 13:08:42
    RMAN-20005: target database name is ambiguous

RMAN says that the target database name is ambiguous. But what does this mean? Let's take a look at the RMAN error with the oerr utility. The oerr utility can not only be used with "ORA" error codes like "oerr ora 01555", but also with "RMAN" error codes.

    $ oerr rman 20005
    20005, 1, "target database name is ambiguous"
    // *Cause: two or more databases in the recovery catalog match this name
    // *Action:
    //

Ok. This error is much more precise. It looks like RMAN found more than one database called DBIT121 in the catalog, and so RMAN does not know which DBID to perform the requested command on. So let's connect to the RMAN catalog and check if this is really the case.

    SQL> SELECT DB.DB_KEY,DB.DB_ID, DB.CURR_DBINC_KEY, DBINC.DB_NAME
            FROM DB, DBINC
           WHERE DB.CURR_DBINC_KEY = DBINC.DBINC_KEY
             AND DBINC.DB_NAME   = 'DBIT121' ;  2    3    4
    
        DB_KEY      DB_ID CURR_DBINC_KEY DB_NAME
    ---------- ---------- -------------- --------
             1  642589239              2 DBIT121
        546780  644484523         546781 DBIT121

Indeed. I do have two different DBIDs pointing to the same DB_NAME. Kinda confusing for RMAN. But which one is the one that has been backed up? We can query the RC_BACKUP_SET and RC_BACKUP_PIECE views to find that out.

    SQL> SELECT RBS.DB_KEY
             , RD.NAME
             , RBS.DB_ID
      2    3    4           , RBS.BS_KEY
             , RBS.RECID
             , RBS.STAMP
             , RBS.BACKUP_TYPE
             , RBS.START_TIME, STATUS
      5    6    7    8    9        FROM RC_BACKUP_SET RBS, RC_DATABASE RD
         WHERE RBS.DB_KEY=RD.DB_KEY
           AND RBS.DB_ID=RD.DBID
           AND RD.NAME='DBIT121' ;  10   11   12
    ...
    ...
    
        DB_KEY NAME          DB_ID     BS_KEY      RECID      STAMP B START_TIM S
    ---------- -------- ---------- ---------- ---------- ---------- - --------- -
        546780 DBIT121   644484523     555608       3070  936496831 I 21-FEB-17 A
        546780 DBIT121   644484523     555609       3071  936496832 I 21-FEB-17 A
        546780 DBIT121   644484523     555610       3072  936496836 D 21-FEB-17 A
        546780 DBIT121   644484523     555611       3073  936496860 D 21-FEB-17 A
        546780 DBIT121   644484523     555612       3074  936496875 D 21-FEB-17 A
        546780 DBIT121   644484523     555613       3075  936496884 D 21-FEB-17 A
        546780 DBIT121   644484523     555614       3076  936496890 D 21-FEB-17 A
        546780 DBIT121   644484523     555615       3077  936496895 L 21-FEB-17 A
        546780 DBIT121   644484523     555616       3078  936496897 L 21-FEB-17 A
        546780 DBIT121   644484523     555617       3079  936496897 L 21-FEB-17 A
        546780 DBIT121   644484523     555618       3080  936496898 D 21-FEB-17 A
    
        DB_KEY NAME          DB_ID     BS_KEY      RECID      STAMP B START_TIM S
    ---------- -------- ---------- ---------- ---------- ---------- - --------- -
        546780 DBIT121   644484523     555619       3081  936496900 D 21-FEB-17 A
        546780 DBIT121   644484523     555620       3082  936498788 D 21-FEB-17 A
        546780 DBIT121   644484523     555621       3083  936502389 D 21-FEB-17 A
        546780 DBIT121   644484523     555622       3084  936505991 D 21-FEB-17 A
        546780 DBIT121   644484523     555623       3085  936509589 D 21-FEB-17 A
        546780 DBIT121   644484523     555624       3086  936513189 D 21-FEB-17 A
        546780 DBIT121   644484523     555625       3087  936516788 D 21-FEB-17 A
        546780 DBIT121   644484523     555626       3088  936520387 D 21-FEB-17 A
        546780 DBIT121   644484523     555627       3089  936523988 D 21-FEB-17 A
        546780 DBIT121   644484523     555628       3090  936527608 D 21-FEB-17 A
        546780 DBIT121   644484523     555629       3091  936531188 D 21-FEB-17 A
    ...
    ...

    After checking the output, I see that DBID 644484523 is the correct one, and DBID 642589239 is the one I want to get rid of.

To do so, we can shut down the Standby database and start it up in nomount. The reason for that is that you can't issue the SET DBID command against a database which is mounted or open.

    RMAN> SET DBID=642589239;
    
    executing command: SET DBID
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of set command at 02/21/2017 13:15:26
    RMAN-06188: cannot use command when connected to a mounted target database

Ok. Let's go to nomount and execute the "unregister database;" command after the correct DBID is set.
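
On the standby, that preparation step is simply (run as SYSDBA; a minimal sketch):

SQL> shutdown immediate
SQL> startup nomount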

    $ rman target sys/welcome1 catalog /@rcat
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Tue Feb 21 14:25:10 2017
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT121 (not mounted)
    connected to recovery catalog database
    
    RMAN> SET DBID=642589239;
    
    executing command: SET DBID
    database name is "DBIT121" and DBID is 642589239
    
    RMAN> unregister database;
    
    database name is "DBIT121" and DBID is 642589239
    
    Do you really want to unregister the database (enter YES or NO)? YES
    database unregistered from the recovery catalog
    
    RMAN>

    Let’s check the RMAN catalog again.

    SQL> SELECT DB.DB_KEY, DB.DB_ID, DB.CURR_DBINC_KEY, DBINC.DB_NAME
            FROM DB, DBINC
           WHERE DB.CURR_DBINC_KEY = DBINC.DBINC_KEY
             AND DBINC.DB_NAME   = 'DBIT121' ;  2    3    4
    
        DB_KEY      DB_ID CURR_DBINC_KEY DB_NAME
    ---------- ---------- -------------- --------
        556718  644484523         556719 DBIT121

    Cool. Looks much better. :-) Now my resync catalog from SITE1 issued from SITE2 works again.

    RMAN> LIST DB_UNIQUE_NAME OF DATABASE;
    
    List of Databases
    DB Key  DB Name  DB ID            Database Role    Db_unique_name
    ------- ------- ----------------- ---------------  ------------------
    556718  DBIT121  644484523        PRIMARY          DBIT121_SITE1
    556718  DBIT121  644484523        STANDBY          DBIT121_SITE2
    
    RMAN> RESYNC CATALOG FROM DB_UNIQUE_NAME 'DBIT121_SITE1';
    
    resyncing from database with DB_UNIQUE_NAME DBIT121_SITE1
    starting full resync of recovery catalog
    full resync complete

    Conclusion

The RMAN DB_UNIQUE_NAME persistent setting is a quite cool feature. It is something I would really recommend when working with RMAN and Data Guard. It allows you to run actions on the primary from the standby, or on the standby from the primary. It doesn't matter. But take care that you don't have multiple DBIDs pointing to the same DB in your RMAN catalog.

     

This article Oracle 12c – How to correct the error: “RMAN-20005: target database name is ambiguous” first appeared on the dbi services Blog.

    12cR2: lockdown profiles and ORA-01219


When you cannot open a database, you will get some unhappy users. When you cannot open a multitenant database, then the number of unhappy users is multiplied by the number of PDBs. I like to encounter problems in my lab before seeing them in production. Here is a case where I've lost a file. I don't care about the tablespace, but I would like to put it offline and at least be able to open the database.

    ORA-01113

    So, it’s my lab, I dropped a file while the database was down. The file belongs to a PDB but I cannot open the CDB:

    SQL> startup
    ORACLE instance started.
     
    Total System Global Area 1577058304 bytes
    Fixed Size 8793208 bytes
    Variable Size 1124074376 bytes
    Database Buffers 436207616 bytes
    Redo Buffers 7983104 bytes
    Database mounted.
    ORA-01113: file 23 needs media recovery
    ORA-01110: data file 23: '/tmp/STATSPACK.dbf'

Yes, this is a lab; I like to put datafiles in /tmp (lab only) and I was testing my Statspack scripts for an article to be published soon. I've removed the file and have no backup. I recommend doing nasty things on labs, because those things sometimes happen on production systems and it is better to be prepared. This recommendation supposes you cannot mistake your lab prompt for a production one, of course.

    ORA-01157

    The database is in mount. I cannot open it:

    SQL> alter database open;
    alter database open
    *
    ERROR at line 1:
    ORA-01157: cannot identify/lock data file 23 - see DBWR trace file
    ORA-01110: data file 23: '/tmp/STATSPACK.dbf'

    This is annoying. I would like to deal with this datafile later and open the CDB. I accept that the PDB it belongs to (PDB1 here) cannot be opened but I wish I can open the other ones quickly.

    ORA-01219

    Let’s go to the PDB and take the datafile offline:

    SQL> alter session set container=pdb1;
    Session altered.
     
    SQL> alter database datafile 23 offline for drop;
    alter database datafile 23 offline for drop
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01219: database or pluggable database not open: queries allowed on fixed tables or views only

    This is quite annoying. I know that the database is not open. I know that the pluggable database is not open. I want to put a datafile offline, and this is an operation that concerns only the controlfile. No need to have the database opened. Actually, I need to put this datafile offline in order to open the CDB.

    SQL_TRACE

    This is annoying, but you know why Oracle is the best database system: troubleshooting. I have an error produced by recursive SQL (ORA-00604) and I want to know the SQL statement that raised this error:


    SQL> alter session set sql_trace=true;
    alter session set sql_trace=true;
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01219: database or pluggable database not open: queries allowed on fixed tables or views only

    Oh yes, I forgot that I cannot issue any SQL statement. But you know why Oracle is the best database system: troubleshooting.


    SQL> oradebug setmypid
    Statement processed.
    SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12;
    Statement processed.
     
    SQL> alter database datafile 23 offline for drop;
    alter database datafile 23 offline for drop
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01219: database or pluggable database not open: queries allowed on fixed tables or views only
     
    SQL> oradebug EVENT 10046 TRACE NAME CONTEXT OFF;
    Statement processed.
    SQL> oradebug TRACEFILE_NAME
    /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_ora_20258.trc

    Here is the trace:

    *** 2017-02-21T13:36:51.239026+01:00 (PDB1(3))
    =====================
    PARSING IN CURSOR #140359700679600 len=34 dep=0 uid=0 oct=35 lid=0 tim=198187306591 hv=3069536809 ad='7b8db148' sqlid='dn9z45avgauj9'
    alter database datafile 12 offline
    END OF STMT
    PARSE #140359700679600:c=3000,e=71171,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=198187306590
    WAIT #140359700679600: nam='PGA memory operation' ela= 30 p1=327680 p2=1 p3=0 obj#=-1 tim=198187307242
    WAIT #140359700679600: nam='control file sequential read' ela= 14 file#=0 block#=1 blocks=1 obj#=-1 tim=198187307612
    WAIT #140359700679600: nam='control file sequential read' ela= 13 file#=0 block#=16 blocks=1 obj#=-1 tim=198187307743
    WAIT #140359700679600: nam='control file sequential read' ela= 6 file#=0 block#=18 blocks=1 obj#=-1 tim=198187307796
    WAIT #140359700679600: nam='control file sequential read' ela= 9 file#=0 block#=1119 blocks=1 obj#=-1 tim=198187307832

    This is expected. I’m in PDB1 (container id 3) and run my statement to put the datafile offline.
And then it switches to CDB$ROOT (container 1):

    *** 2017-02-21T13:36:51.241022+01:00 (CDB$ROOT(1))
    =====================
    PARSING IN CURSOR #140359700655928 len=248 dep=1 uid=0 oct=3 lid=0 tim=198187308584 hv=1954812753 ad='7b67d9c8' sqlid='6qpmyqju884uj'
    select ruletyp#, ruleval, status, ltime from lockdown_prof$ where prof#=:1 and level#=:2 order by ltime
    END OF STMT
    PARSE #140359700655928:c=2000,e=625,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=198187308583
    =====================
    PARSE ERROR #140359700655928:len=249 dep=1 uid=0 oct=3 lid=0 tim=198187308839 err=1219
    select ruletyp#, ruleval, status, ltime from lockdown_prof$ where prof#=:1 and level#=:2 order by ltime
     
    *** 2017-02-21T13:36:51.241872+01:00 (PDB1(3))
    EXEC #140359700679600:c=4000,e=2684,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=198187309428
    ERROR #140359700679600:err=604 tim=198187309511

I have a parse error when reading LOCKDOWN_PROF$ in the root container. It is a dictionary table stored in the SYSTEM tablespace. The CDB is not open, so the table is not accessible, which is the reason for the error message.

Then, I remember that I've set a lockdown profile at CDB level. It doesn't make sense for CDB$ROOT, but I've set it there to have it as the default for all newly created PDBs. Any statement that may be disabled by a lockdown profile has to read the lockdown profile rules stored in the root. And here I learn that this occurs when parsing the DDL statement, not at execution time.
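
For context, setting a lockdown profile at CDB level looks like this, a minimal sketch with a hypothetical profile name, run in CDB$ROOT:

-- hypothetical profile name, for illustration only
create lockdown profile dbi_profile;
alter system set pdb_lockdown='DBI_PROFILE';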

In my opinion this is a bug. Either I should not be able to set pdb_lockdown at CDB level, or it shouldn't be checked when the CDB is closed, because then any DDL will fail. I'm not blocked by the lockdown profile itself here, just because the lockdown profile cannot be read.

    pdb_lockdown

    Now I know how to workaround the problem: unset the lockdown profile, offline my datafile, open the CDB, open the PDB, drop the tablespace.

    SQL> alter system set pdb_lockdown='';
    System altered.
    SQL> alter session set container=pdb1;
    Session altered.
    SQL> alter database datafile 23 offline for drop;
    Database altered.
    SQL> alter session set container=cdb$root;
    Session altered.
    SQL> alter database open;
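
The remaining steps of the workaround are then straightforward. A sketch, assuming the tablespace is called STATSPACK (only the datafile name appears above, so the tablespace name is a guess):

SQL> alter pluggable database pdb1 open;
SQL> alter session set container=pdb1;
SQL> drop tablespace statspack including contents and datafiles;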

Lockdown profiles are a very nice feature allowing fine-grained control of what can be done by users on a PDB, even admin ones. But it is a new mechanism, leading to situations we have never seen before. Don't forget the power (and fun) of troubleshooting.

     

This article 12cR2: lockdown profiles and ORA-01219 first appeared on the dbi services Blog.

    12c Unified Auditing and AUDIT_TRAIL=DB in mixed mode


Oracle enables some auditing by default, and if you don't do anything, the tables where it is stored will grow in SYSAUX. Don't wait to get an alert when it is too late. Everything that fills up automatically must be managed so that it is archived or purged automatically. If not, one day you will have a problem.

Imagine that you have 5 features doing something similar but in a different way, because they were implemented one at a time. You want to stop this and have only 1 unified feature. That's great. But you are also required to maintain compatibility with previous versions, which means that you actually implemented a 5+1=6th feature :(

    Unified Auditing

This is exactly what happens with Unified Auditing. Because of this compatibility requirement, it comes in two modes:

• The ‘mixed mode’ that keeps all compatibility, as in the 5+1 case in my example
• The ‘pure mode’ that does not take care of the past and is actually the one that unifies all. The real ‘Unified’ one.

    You are in ‘mixed mode’ by default and you see it as if there is nothing new enabled:

    SQL> select parameter,value from v$option where parameter='Unified Auditing';
     
    PARAMETER VALUE
    --------- -----
    Unified Auditing FALSE

    But there may be something enabled if the old auditing is enabled, because it is actually a mixed mode.

    AUDIT_TRAIL=DB

    Let me explain. I use the old auditing:

    SQL> show parameter audit
    NAME TYPE VALUE
    ---------------------------- ------- --------------------------------
    audit_trail string DB

This means that I have the default audits (such as logon, logoff, ALTER/CREATE/DROP/GRANT ANY, and so on).
In addition to that, I enabled the audit of create table:

    SQL> audit create table;
    Audit succeeded.

I do some of this stuff and I can see info in the old audit trail:
    SQL> select action_name,sql_text from dba_audit_trail;
     
    ACTION_NAME SQL_TEXT
    ----------- --------
    CREATE TABLE
    LOGON
    SELECT
    LOGON
    LOGOFF

If you are in that case, you probably manage this trail. Our recommendation is either to disable audit or to manage it.

    But once upgraded to 12c, did you think about managing the new unified audit trail?

    SQL> select audit_type,unified_audit_policies,action_name,return_code,count(*) from unified_audit_trail group by audit_type,unified_audit_policies,action_name,return_code order by 1,2,3;
AUDIT_TYPE UNIFIED_AUDIT_POLICIES ACTION_NAME RETURN_CODE   COUNT(*)
---------- ---------------------- ----------- ----------- ----------
Standard   ORA_LOGON_FAILURES     LOGON                 0          2
Standard   ORA_LOGON_FAILURES     LOGON              1017          1
Standard   ORA_SECURECONFIG       CREATE ROLE           0          1
Standard   ORA_SECURECONFIG       DROP ROLE             0          1
Standard                          EXECUTE               0          1

    Even with Unified Auditing set to off, some operations are audited when AUDIT_TRAIL=DB. If you don’t want them you have to disable them:

    noaudit policy ORA_SECURECONFIG;
    noaudit policy ORA_LOGON_FAILURES;

As you see, in mixed mode the new unified auditing is enabled, and AUDIT_TRAIL is not ignored. This is the mode to use until you have migrated all your policies and audit trail queries to the new one. However, you can see that in mixed mode there is no double auditing, only new default policies. The old policies are only logged to the old audit trail.

    But if you don’t use auditing, then you don’t want the mixed mode.

    uniaud_on

This is done with an instance shutdown, relinking on Linux or renaming a DLL on Windows.


    SQL> shutdown immediate;
    ORACLE instance shut down.
SQL> host ( cd $ORACLE_HOME/rdbms/lib ; make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME )
    /usr/bin/ar d /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/lib/libknlopt.a kzanang.o
    /usr/bin/ar cr /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/lib/kzaiang.o
    chmod 755 /u01/app/oracle/product/12.2.0/dbhome_1/bin
     
    - Linking Oracle
    ...

And then you are in ‘pure mode’:


    SQL> select parameter,value from v$option where parameter='Unified Auditing';
     
    PARAMETER VALUE
    --------- -----
    Unified Auditing TRUE

    In that mode, AUDIT_TRAIL is ignored and you will never see new rows in the old AUD$:

    SQL> select action_name,sql_text from dba_audit_trail;
     
    no rows selected

However, as in the mixed mode, you will have to manage the new audit trail. My best recommendation is to keep it and add a purge job. One day you may want to have a look at unsuccessful logins of the past few days. But you still have the choice to disable the default policies, and then the only things you will see are the operations done on the trail:

AUDIT_TYPE UNIFIED_AUDIT_POLICIES ACTION_NAME SQL_TEXT
---------- ---------------------- ----------- --------
Standard                          EXECUTE     BEGIN dbms_audit_mgmt.flush_unified_audit_trail; END;^@
Standard                          EXECUTE     BEGIN dbms_audit_mgmt.clean_audit_trail(audit_trail_type => dbms_audit_mgmt.audi
Standard                          EXECUTE     BEGIN dbms_audit_mgmt.flush_unified_audit_trail; END;^@

The reason is that if a hacker who got super administrator rights has tried to wipe his traces, then at least this suspect operation remains.

    Test it

    To validate this blog post, I’ve tested all scenarios on 12.2.0.1 with the combination of:

    • audit_trail=db or audit_trail=none
    • uniaud_on or uniaud_off
    • audit or noaudit policy for ORA_SECURECONFIG and ORA_LOGON_FAILURES

    For each combination, I’ve purged both audit trails (AUD$ and AUD$UNIFIED) and run a few statements that are logged by default or by explicit audit.

    So what?

Basically, the recommendation is still the same as before: either disable the audit or schedule a purge. There is no purge by default because auditing is different from logging. When your security policy is to audit some operations, they must not be purged before being archived or processed.

    When you upgrade to 12c:

1. If you want to manage only the old audit, then you should disable ORA_LOGON_FAILURES and ORA_SECURECONFIG.
2. If you want to manage both, then add a job to purge the unified audit trail (audit_trail_type=>dbms_audit_mgmt.audit_trail_unified), as in the sketch after this list.
3. If you don't use the old auditing, then enable the ‘pure mode’. But then AUDIT_TRAIL=NONE is ignored, so:
4. If you don't use the new unified auditing either, then disable ORA_LOGON_FAILURES and ORA_SECURECONFIG.
5. Or use the new unified auditing and set up a job to purge it regularly.
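
A minimal sketch of such a purge with DBMS_AUDIT_MGMT, assuming a 31-day archiving window (the window and the job name are hypothetical, adjust them to your own policy):

begin
  -- mark everything older than 31 days as archived; in real life this
  -- timestamp should be advanced regularly by your archiving procedure
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_unified,
    last_archive_time => systimestamp - interval '31' day);
  -- schedule a daily purge of the records below that timestamp
  dbms_audit_mgmt.create_purge_job(
    audit_trail_type           => dbms_audit_mgmt.audit_trail_unified,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'UNIFIED_AUDIT_PURGE_JOB',
    use_last_arch_timestamp    => true);
end;
/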

    And control the growth of SYSAUX:

    SQL> select occupant_name,schema_name,occupant_desc,space_usage_kbytes from v$sysaux_occupants where occupant_name like 'AUD%';
     
OCCUPANT_NAME SCHEMA_NAME OCCUPANT_DESC          SPACE_USAGE_KBYTES
------------- ----------- ---------------------- ------------------
AUDSYS        AUDSYS      AUDSYS schema objects                1280
AUDIT_TABLES  SYS         DB audit tables                         0

    SYS ‘DB audit tables’ is the old one, filled in ‘mixed mode’ only. AUDSYS ‘AUDSYS schema objects’ is the new unified one, filled in both modes.

But I have something to add. The default policies do not audit operations that you are supposed to do frequently, so the trail should not fill hundreds of MB before several decades.
    If you get this during the last hour:
    SQL> select audit_type,unified_audit_policies,action_name,return_code,count(*)
    2 from unified_audit_trail where event_timestamp>sysdate-1
    3 group by audit_type,unified_audit_policies,action_name,return_code
    4 order by count(*);
AUDIT_TYPE UNIFIED_AUDIT_POLICIES ACTION_NAME RETURN_CODE   COUNT(*)
---------- ---------------------- ----------- ----------- ----------
Standard                          AUDIT                 0          2
Standard                          EXECUTE               0          4
Standard   ORA_SECURECONFIG       CREATE ROLE           0       9268
Standard   ORA_LOGON_FAILURES     LOGON              1017        348

then the problem is not auditing but an attack, either from a hacker or because of your application design, connecting for each execution or running DDL all the time.

     

This article 12c Unified Auditing and AUDIT_TRAIL=DB in mixed mode first appeared on the dbi services Blog.

    Oracle 12c – Recreating a Controlfile in a Data Guard environment with noresetlogs


Sometimes you might run into situations where the controlfile does not represent the backups and archivelogs correctly, because of a mismatch between the control_file_record_keep_time and the RMAN retention. The controlfile has non-circular and circular records. Non-circular records are e.g. database information, redo threads, datafiles and so on. These non-circular records don't age out; however, they can be reused, e.g. when a tablespace is dropped. The circular records are e.g. the log history, archived logs, backup sets, datafile copies and so on. These records can age out. So, when you have a control_file_record_keep_time of 7 days and a RMAN recovery window of 14 days, then you obviously have a mismatch here. In 11gR2, Oracle stores 37 different record types in the control file, which can be checked with:

    SELECT type FROM v$controlfile_record_section ORDER BY 1;

12cR1 stores 41 different record types, where AUXILIARY DATAFILE COPY, MULTI INSTANCE REDO APPLY, PDB RECORD and PDBINC RECORD were added. In 12cR2 there are even more: the TABLESPACE KEY HISTORY record type was added, so you end up with 42 different record types in 12cR2.
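
You can quickly verify the number on your own version with a count; a minimal sketch:

select count(distinct type) from v$controlfile_record_section;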

If RMAN needs to add a new backup set or archive log record to the control file, any records that have expired as per the control_file_record_keep_time parameter are overwritten. But coming back to my issue: my controlfile is out of sync with the recovery catalog, and in some situations you can't correct it anymore, even with DELETE FORCE commands or the like, and you end up with errors like the following:

    ORA-19633: control file record 8857 is out of sync with recovery catalog

There might be other solutions to fix it; however, I want to have a clean controlfile, so I am recreating it manually. But I don't want to open the DB with RESETLOGS.

The high-level steps to get this done are:

    • Disable everything that might interfere with your action e.g. Fast Start Failover, Broker and so on
    • Adjust your control_file_record_keep_time to a higher value
    • Create the controlfile to trace
    • Unregister from RMAN catalog
    • Shutdown immediate and re-create the controlfile
    • Re-catalog your backups and archivelogs
    • Re-register into the RMAN catalog

Ok, let's get started and disable Fast-Start Failover first. We don't want the observer to kick in and do any nasty stuff during my action.

    DGMGRL> show configuration;
    
    Configuration - DBIT121
    
      Protection Mode: MaxAvailability
      Members:
      DBIT121_SITE1 - Primary database
        DBIT121_SITE2 - (*) Physical standby database
    
    Fast-Start Failover: ENABLED
    
    Configuration Status:
    SUCCESS   (status updated 2 seconds ago)
    
    DGMGRL> disable fast_start failover;
    Disabled.

As a next step, I increase the control_file_record_keep_time to a much higher value. The formula is usually CONTROL_FILE_RECORD_KEEP_TIME = retention period + level 0 backup interval + 1, meaning that with a retention period of 24 days and a weekly level 0 backup, it would be 24+7+1, so at least 32. But I don't care whether my controlfile is 20MB or 30MB in size, so I set it directly to 72 days.
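
Before adjusting it, a quick way to compare both settings; a minimal sketch:

SQL> show parameter control_file_record_keep_time
SQL> select name, value from v$rman_configuration where name = 'RETENTION POLICY';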

    -- Primary
    
    SQL> alter system set control_file_record_keep_time=72;
    
    System altered.
    
    -- Standby
    
    SQL> alter system set control_file_record_keep_time=72;
    
    System altered.

The next important step is to create a trace of the controlfile, which can be adjusted manually later on, depending on your needs. Beforehand, I specify a tracefile identifier, so that I can easily spot my trace file in the DIAG destination.

    SQL> alter session set tracefile_identifier='control';
    
    Session altered.
    
    SQL> alter database backup controlfile to trace noresetlogs;
    
    Database altered.
    
    oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit121_site1/DBIT121/trace/ [DBIT121] ls -rlt | grep control
    -rw-r----- 1 oracle oinstall     101 Feb 24 09:10 DBIT121_ora_25050_control.trm
    -rw-r----- 1 oracle oinstall    9398 Feb 24 09:10 DBIT121_ora_25050_control.trc
    
    oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit121_site1/DBIT121/trace/ [DBIT121] mv DBIT121_ora_25050_control.trc /u01/app/oracle/admin/DBIT121/create/recreate_controlfile.sql

Let's take a look at the controlfile trace which was created. It contains nearly everything that we need. Some parts might have to be adjusted, and some parts do not work at all or have to be done in a different way, as we will see later. But in general it is a very good starting point to get the job done.

    oracle@dbidg01:/u01/app/oracle/admin/DBIT121/create/ [DBIT121] cat recreate_controlfile.sql
    -- The following are current System-scope REDO Log Archival related
    -- parameters and can be included in the database initialization file.
    --
    -- LOG_ARCHIVE_DEST=''
    -- LOG_ARCHIVE_DUPLEX_DEST=''
    --
    -- LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
    --
    -- DB_UNIQUE_NAME="DBIT121_SITE1"
    --
    -- LOG_ARCHIVE_CONFIG='SEND, RECEIVE'
    -- LOG_ARCHIVE_CONFIG='DG_CONFIG=("DBIT121_SITE2")'
    -- LOG_ARCHIVE_MAX_PROCESSES=4
    -- STANDBY_FILE_MANAGEMENT=AUTO
    -- STANDBY_ARCHIVE_DEST=?/dbs/arch
    -- FAL_CLIENT=''
    -- FAL_SERVER=DBIT121_SITE2
    --
    -- LOG_ARCHIVE_DEST_2='SERVICE=DBIT121_SITE2'
    -- LOG_ARCHIVE_DEST_2='OPTIONAL REOPEN=300 NODELAY'
    -- LOG_ARCHIVE_DEST_2='LGWR AFFIRM NOVERIFY ASYNC=0'
    -- LOG_ARCHIVE_DEST_2='REGISTER NOALTERNATE NODEPENDENCY'
    -- LOG_ARCHIVE_DEST_2='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED'
    -- LOG_ARCHIVE_DEST_2='DB_UNIQUE_NAME=DBIT121_SITE2'
    -- LOG_ARCHIVE_DEST_2='VALID_FOR=(STANDBY_LOGFILE,ONLINE_LOGFILES)'
    -- LOG_ARCHIVE_DEST_STATE_2=ENABLE
    --
    -- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    -- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
    -- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOVERIFY SYNC'
    -- LOG_ARCHIVE_DEST_1='REGISTER NOALTERNATE NODEPENDENCY'
    -- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
    -- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
    -- LOG_ARCHIVE_DEST_STATE_1=ENABLE
    --
    -- The following commands will create a new control file and use it
    -- to open the database.
    -- Data used by Recovery Manager will be lost.
    -- Additional logs may be required for media recovery of offline
    -- Use this only if the current versions of all online logs are
    -- available.
    -- After mounting the created controlfile, the following SQL
    -- statement will place the database in the appropriate
    -- protection mode:
    --  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY
    STARTUP NOMOUNT
    CREATE CONTROLFILE REUSE DATABASE "DBIT121" NORESETLOGS FORCE LOGGING ARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 (
        '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnop9_.log',
        '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnq4o_.log'
      ) SIZE 50M BLOCKSIZE 512,
      GROUP 2 (
        '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo42k_.log',
        '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo43q_.log'
      ) SIZE 50M BLOCKSIZE 512,
      GROUP 3 (
        '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppn86_.log',
        '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppngb_.log'
      ) SIZE 50M BLOCKSIZE 512
    -- STANDBY LOGFILE
    --   GROUP 4 (
    --     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log',
    --     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t89m_.log'
    --   ) SIZE 50M BLOCKSIZE 512,
    --   GROUP 5 (
    --     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log',
    --     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj8m_.log'
    --   ) SIZE 50M BLOCKSIZE 512,
    --   GROUP 6 (
    --     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log',
    --     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tpb4_.log'
    --   ) SIZE 50M BLOCKSIZE 512,
    --   GROUP 7 (
    --     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log',
    --     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twkt_.log'
    --   ) SIZE 50M BLOCKSIZE 512
    DATAFILE
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf'
    CHARACTER SET AL32UTF8
    ;
    -- Configure RMAN configuration record 1
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 14 DAYS');
    -- Configure RMAN configuration record 2
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
    -- Configure RMAN configuration record 3
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK');
    -- Configure RMAN configuration record 4
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
    -- Configure RMAN configuration record 5
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 32 DAYS');
    -- Configure RMAN configuration record 6
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE1'' CONNECT IDENTIFIER  ''DBIT121_SITE1''');
    -- Configure RMAN configuration record 7
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE2'' CONNECT IDENTIFIER  ''DBIT121_SITE2''');
    -- Configure RMAN configuration record 8
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT121)''');
    -- Configure RMAN configuration record 9
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
    -- Configure RMAN configuration record 10
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');
    -- Commands to re-create incarnation table
    -- Below log names MUST be changed to existing filenames on
    -- disk. Any one log file from each branch can be used to
    -- re-create incarnation records.
    -- ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_1_%u_.arc';
    -- Recovery is required if any of the datafiles are restored backups,
    -- or if the last shutdown was not normal or immediate.
    RECOVER DATABASE
    -- Block change tracking was enabled, so re-enable it now.
    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
    USING FILE '/u02/oradata/DBIT121_SITE1/changetracking/o1_mf_dbx3wgqg_.chg' REUSE;
    -- All logs need archiving and a log switch is needed.
    ALTER SYSTEM ARCHIVE LOG ALL;
    -- Database can now be opened normally.
    ALTER DATABASE OPEN;
    -- Commands to add tempfiles to temporary tablespaces.
    -- Online tempfiles have complete space information.
    -- Other tempfiles may require adjustment.
    ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_temp_d4fjxn8l_.tmp'
         SIZE 206569472  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
    -- End of tempfile additions.
    --
    --
    --
    ----------------------------------------------------------
    -- The following script can be used on the standby database
    -- to re-populate entries for a standby controlfile created
    -- on the primary and copied to the standby site.
    ----------------------------------------------------------
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log'
     SIZE 50M BLOCKSIZE 512 REUSE;
    ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t89m_.log'
                                           TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log';
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log'
     SIZE 50M BLOCKSIZE 512 REUSE;
    ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj8m_.log'
                                           TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log';
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log'
     SIZE 50M BLOCKSIZE 512 REUSE;
    ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tpb4_.log'
                                           TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log';
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log'
     SIZE 50M BLOCKSIZE 512 REUSE;
    ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twkt_.log'
                                           TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log';

I am also stopping the broker to avoid any side effects, and afterwards I unregister the database from the RMAN catalog. I will re-register it later on with clean entries.

    -- primary
    
    SQL> alter system set dg_broker_start=false;
    
    System altered.
    
    oracle@dbidg01:/home/oracle/ [DBIT121] rman target sys/manager catalog rman/rman@rman
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:16:17 2017
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT121 (DBID=172831209)
    connected to recovery catalog database
    recovery catalog schema release 12.02.00.01. is newer than RMAN release
    
    RMAN> unregister database;
    
    database name is "DBIT121" and DBID is 172831209
    
    Do you really want to unregister the database (enter YES or NO)? YES
    database unregistered from the recovery catalog
    
    RMAN>

The next step is very important. We need to shut down the DB cleanly, either with normal or immediate. Afterwards, I put the current controlfiles aside by renaming them. You never know, it is always good to have another fallback.

    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL>
    
    oracle@dbidg01:/home/oracle/ [DBIT121] cd /u02/oradata/DBIT121_SITE1/controlfile/
    oracle@dbidg01:/u02/oradata/DBIT121_SITE1/controlfile/ [DBIT121] mv o1_mf_d4fjws55_.ctl o1_mf_d4fjws55_.ctl.old
    oracle@dbidg01:/u02/oradata/DBIT121_SITE1/controlfile/ [DBIT121] cd /u03/fast_recovery_area/DBIT121_SITE1/controlfile/
    oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/controlfile/ [DBIT121] mv o1_mf_d4fjwsgr_.ctl o1_mf_d4fjwsgr_.ctl.old

Now we can start up in nomount and recreate our controlfile from scratch. It is very important that you specify REUSE and NORESETLOGS here.

    SQL> startup nomount
    ORACLE instance started.
    
    Total System Global Area 1325400064 bytes
    Fixed Size                  2924112 bytes
    Variable Size             436208048 bytes
    Database Buffers          872415232 bytes
    Redo Buffers               13852672 bytes
    
    SQL> CREATE CONTROLFILE REUSE DATABASE "DBIT121" NORESETLOGS FORCE LOGGING ARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 (
        '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnop9_.log',
        '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnq4o_.log'
      ) SIZE 50M BLOCKSIZE 512,
      GROUP 2 (
        '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo42k_.log',
        '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo43q_.log'
      ) SIZE 50M BLOCKSIZE 512,
      GROUP 3 (
        '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppn86_.log',
        '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppngb_.log'
     19    ) SIZE 50M BLOCKSIZE 512
    DATAFILE
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf',
      '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf'
    CHARACTER SET AL32UTF8
     27  ;
    
    Control file created.
    
    SQL>

    Now we can configure the RMAN persistent settings like retention and so on.

    -- Configure RMAN configuration record 1
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 14 DAYS');
    -- Configure RMAN configuration record 2
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
    -- Configure RMAN configuration record 3
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK');
    -- Configure RMAN configuration record 4
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
    -- Configure RMAN configuration record 5
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 32 DAYS');
    -- Configure RMAN configuration record 6
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE1'' CONNECT IDENTIFIER  ''DBIT121_SITE1''');
    -- Configure RMAN configuration record 7
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE2'' CONNECT IDENTIFIER  ''DBIT121_SITE2''');
    -- Configure RMAN configuration record 8
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT121)''');
    -- Configure RMAN configuration record 9
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
    -- Configure RMAN configuration record 10
    VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');

The next step is to re-create the incarnation table. This might fail with a recursive SQL error if you use the SQL provided in the trace file. Just use REGISTER PHYSICAL LOGFILE instead of REGISTER LOGFILE and then it works.

    -- Commands to re-create incarnation table
    -- Below log names MUST be changed to existing filenames on
    -- disk. Any one log file from each branch can be used to re-create incarnation records.
    -- ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_1_%u_.arc';
    
    SQL> ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc';
    ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc'
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level
    
    
    SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc';
    
    Database altered.

Because I have shut down the database cleanly, there is no need to do any recovery, and I can continue to enable the block change tracking file, open the database, and add my tempfile back to the database.

    SQL> recover database;
    ORA-00283: recovery session canceled due to errors
    ORA-00264: no recovery required
    
    SQL> ALTER SYSTEM ARCHIVE LOG ALL;
    
    System altered.
    
    SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/oradata/DBIT121_SITE1/changetracking/o1_mf_dbx3wgqg_.chg' REUSE;
    
    Database altered.
    
    SQL> ALTER DATABASE OPEN;
    
    Database altered.
    
    SQL>
    
    SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_temp_d4fjxn8l_.tmp'
      2  SIZE 206569472  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
    
    Tablespace altered.

Regarding the standby redo logs, the easiest approach is to remove the old ones and simply recreate them afterwards, because you can't add them back as long as they have Oracle Managed Files names.

    SQL> select * from v$standby_log;
    
    no rows selected
    
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log' SIZE 50M BLOCKSIZE 512 REUSE;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log' SIZE 50M BLOCKSIZE 512 REUSE
    *
    ERROR at line 1:
    ORA-01276: Cannot add file
    /u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log.  File has an Oracle
    Managed Files file name.
    
    -- delete standby redo logs
    
    oracle@dbidg01:/u01/app/oracle/admin/DBIT121/create/ [DBIT121] cd /u02/oradata/DBIT121_SITE1/onlinelog/
    oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] ls -l
    total 358428
    -rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_1_d4fpnop9_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_2_d4fpo42k_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 24 09:47 o1_mf_3_d4fppn86_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_4_dbx3t840_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_5_dbx3tj3b_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_6_dbx3tp52_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_7_dbx3twdq_.log
    oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] rm o1_mf_4_dbx3t840_.log o1_mf_5_dbx3tj3b_.log o1_mf_6_dbx3tp52_.log o1_mf_7_dbx3twdq_.log
    oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] cd /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/
    oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/ [DBIT121] ls -l
    total 358428
    -rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_1_d4fpnq4o_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_2_d4fpo43q_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 24 09:47 o1_mf_3_d4fppngb_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_4_dbx3t89m_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_5_dbx3tj8m_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_6_dbx3tpb4_.log
    -rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_7_dbx3twkt_.log
    oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/ [DBIT121] rm o1_mf_4_dbx3t89m_.log o1_mf_5_dbx3tj8m_.log o1_mf_6_dbx3tpb4_.log o1_mf_7_dbx3twkt_.log
    
    -- recreate standby redo logs
    
    SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 4 SIZE 50M BLOCKSIZE 512;
    
    Database altered.
    
    SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 5 SIZE 50M BLOCKSIZE 512;
    
    Database altered.
    
    SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 6 SIZE 50M BLOCKSIZE 512;
    
    Database altered.
    
    SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 7 SIZE 50M BLOCKSIZE 512;
    
    Database altered.

Don't forget to enable Flashback as well, if your Data Guard is running in Max Availability mode.

    SQL> alter database flashback on;
    
    Database altered.

Now we need to catalog all our backups and archivelogs again.

    oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/ [DBIT121] rman target /
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:50:16 2017
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT121 (DBID=172831209)
    
    RMAN> catalog recovery area;
    
    using target database control file instead of recovery catalog
    searching for all files in the recovery area
    
    List of Files Unknown to the Database
    =====================================
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_140_dbzswh06_.arc
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_141_dbzsxpv5_.arc
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx641px_.flb
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx642pf_.flb
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dby398lz_.flb
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbymcg20_.flb
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbyyg1r0_.flb
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/controlfile/o1_mf_d4fjwsgr_.ctl.old
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64pz6_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64q0b_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s0z_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s3n_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnsnf_TAG20170223T090856_dbx65kmx_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_ncnnf_TAG20170223T090856_dbx65lnt_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090923_dbx65mto_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypdc_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypfp_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpysqh_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy2f_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy56_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnsnf_TAG20170224T080812_dbzpzqnz_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_ncnnf_TAG20170224T080812_dbzpzqop_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080841_dbzpzskt_.bkp
    
    Do you really want to catalog the above files (enter YES or NO)? YES
    cataloging files...
    cataloging done
    
    List of Cataloged Files
    =======================
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_140_dbzswh06_.arc
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_141_dbzsxpv5_.arc
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64pz6_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64q0b_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s0z_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s3n_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnsnf_TAG20170223T090856_dbx65kmx_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_ncnnf_TAG20170223T090856_dbx65lnt_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090923_dbx65mto_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypdc_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypfp_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpysqh_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy2f_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy56_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnsnf_TAG20170224T080812_dbzpzqnz_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_ncnnf_TAG20170224T080812_dbzpzqop_.bkp
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080841_dbzpzskt_.bkp
    
    List of Files Which Were Not Cataloged
    =======================================
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx641px_.flb
      RMAN-07529: Reason: catalog is not supported for this file type
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx642pf_.flb
      RMAN-07529: Reason: catalog is not supported for this file type
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dby398lz_.flb
      RMAN-07529: Reason: catalog is not supported for this file type
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbymcg20_.flb
      RMAN-07529: Reason: catalog is not supported for this file type
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbyyg1r0_.flb
      RMAN-07529: Reason: catalog is not supported for this file type
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/controlfile/o1_mf_d4fjwsgr_.ctl.old
      RMAN-07519: Reason: Error while cataloging. See alert.log.
    
    List of files in Recovery Area not managed by the database
    ==========================================================
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbzwt72f_.log
      RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbzwtgl3_.log
      RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbzwtn04_.log
      RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbzwtvc7_.log
      RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
    
    number of files not managed by recovery area is 4, totaling 200.00MB
    
    RMAN>

    We are nearly done. We just need to start the Data Guard broker again and re-enable Fast-Start Failover.

    SQL> alter system set dg_broker_start=true;
    
    System altered.
    
    SQL> alter system archive log current;
    
    System altered.
    
    DGMGRL> enable fast_start failover;
    Enabled.
    DGMGRL> show configuration;
    
    Configuration - DBIT121
    
     Protection Mode: MaxAvailability
     Members:
     DBIT121_SITE1 - Primary database
     DBIT121_SITE2 - (*) Physical standby database
    
    Fast-Start Failover: ENABLED
    
    Configuration Status:
    SUCCESS (status updated 21 seconds ago)
    
    DGMGRL> validate database 'DBIT121_SITE2';
    
     Database Role: Physical standby database
     Primary Database: DBIT121_SITE1
    
     Ready for Switchover: Yes
     Ready for Failover: Yes (Primary Running)
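    
    If you want to be really sure, a switchover and switchback remains the ultimate test; a minimal sketch, using the database names of this setup:
    
    DGMGRL> switchover to 'DBIT121_SITE2';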
    

    Re-register the database into the RMAN catalog.

    oracle@dbidg01:/home/oracle/ [DBIT121] rman target sys/manager catalog rman/rman@rman
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:57:34 2017
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT121 (DBID=172831209)
    connected to recovery catalog database
    recovery catalog schema release 12.02.00.01. is newer than RMAN release
    
    RMAN> register database;
    
    database registered in recovery catalog
    starting full resync of recovery catalog
    full resync complete
    
    RMAN>

    Ready. That’s it. In case your standby controlfile is not correct either (it was not in my case), you can now simply create a new standby controlfile on the primary and move it to the standby, as documented in the MOS note “Steps to recreate a Physical Standby Controlfile” (Doc ID 459411.1).
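    
    In essence, that procedure boils down to one statement on the primary plus copying the resulting file to the standby; a minimal sketch (the file name is just a placeholder):
    
    SQL> alter database create standby controlfile as '/tmp/DBIT121_standby.ctl';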

    Conclusion

    As a last resort, it is sometimes necessary to recreate the controlfile manually, but as long as you have all the online redo logs and your datafiles in place, you can do it with NORESETLOGS. And take care that your RMAN retention is always higher than your CONTROL_FILE_RECORD_KEEP_TIME.
    

    The article Oracle 12c – Recreating a Controlfile in a Data Guard environment with noresetlogs appeared first on Blog dbi services.

    12cR2: Recover nonlogged blocks after NOLOGGING in Data Guard


    You can accept NOLOGGING operations for bulk loads or index builds, provided that you take a backup right afterwards and that your recovery plan describes how to load the data again in case of media recovery. With a standby database, we usually force logging, because we want redo to be generated for all operations so that it can be shipped to and applied on the standby database. 12.2 brings a new solution: do nologging operations without generating redo, and then ship the affected blocks to the standby. This is done on the standby by RMAN.
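    
    For reference, forcing redo generation for the whole database is a single statement (shown only for context; this demo deliberately leaves it off):
    
    SQL> alter database force logging;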

    On primary ORCLA

    I create the demo table
    SQL> create table DEMO tablespace users pctfree 99 as select rownum n from xmltable('1 to 1000');
    Table created.

    and put it in NOLOGGING mode
    SQL> alter table DEMO nologging;
    Table altered.
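    
    To verify the attribute, a quick check (the LOGGING column should now show NO):
    
    SQL> select table_name, logging from user_tables where table_name = 'DEMO';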

    The database is not in force logging mode:
    SQL> select force_logging from v$database;
    FORCE_LOGGING
    ---------------------------------------
    NO

    Here is a direct-path insert
    SQL> insert /*+ append */ into DEMO select rownum n from xmltable('1 to 100000');
    100000 rows created.
     
    SQL> commit;
    Commit complete.

    My rows are here:
    SQL> select count(*) from DEMO;
    COUNT(*)
    ----------
    200000

    This is a nologging operation. Media recovery is not possible. The datafile needs backup:
    RMAN> report unrecoverable;
    using target database control file instead of recovery catalog
    Report of files that need backup due to unrecoverable operations
    File Type of Backup Required Name
    ---- ----------------------- -----------------------------------
    7 full or incremental /u01/oradata/ORCLA/users01.dbf
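    
    Without a standby, the usual follow-up would be an immediate backup of the affected datafile; a sketch, using the file number from the report above:
    
    RMAN> backup datafile 7;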

    On ADG standby ORCLB

    In Active Data Guard, I can query the table, but:
    SQL> select count(*) from DEMO
    *
    ERROR at line 1:
    ORA-01578: ORACLE data block corrupted (file # 7, block # 16966)
    ORA-01110: data file 7: '/u01/oradata/ORCLB/datafile/o1_mf_users_dbvmwdqc_.dbf'
    ORA-26040: Data block was loaded using the NOLOGGING option

    The blocks were not replicated because no redo was generated on the primary, and therefore nothing was shipped to and applied on the standby.

    Note that this is not identified by RMAN on the standby:
    RMAN> report unrecoverable;
    using target database control file instead of recovery catalog
    Report of files that need backup due to unrecoverable operations
    File Type of Backup Required Name
    ---- ----------------------- -----------------------------------
     
    RMAN>

    Recover nonlogged blocks

    If I try some recovery here, I can’t, because the standby is still in apply mode. Here is the message I get if I try:
    ORA-01153: an incompatible media recovery is active

    Let’s stop the apply:
    DGMGRL> edit database orclb set state=apply-off;
    Succeeded.

    In 12.1 I could recover the datafile from the primary with ‘recover from service’, but in 12.2 there is no need to ship the whole datafile anymore. The nonlogged block list has been shipped to the standby and recorded in the standby controlfile, and we can list those blocks in v$nonlogged_block.
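    
    For example, with a query like this on the standby (column list trimmed for readability):
    
    SQL> select file#, block#, blocks, reason from v$nonlogged_block;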

    And we can recover them with a simple command: RECOVER DATABASE NONLOGGED BLOCK

    RMAN> recover database nonlogged block;
    Starting recover at 25-FEB-17
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=58 device type=DISK
     
    starting recovery of nonlogged blocks
    List of Datafiles
    =================
    File Status Nonlogged Blocks Blocks Examined Blocks Skipped
    ---- ------ ---------------- --------------- --------------
    1 OK 0 0 104959
    2 OK 0 0 18783
    3 OK 0 0 62719
    4 OK 0 0 8959
    7 OK 0 16731 18948
     
    Details of nonlogged blocks can be queried from v$nonlogged_block view
     
    recovery of nonlogged blocks complete, elapsed time: 00:00:03

    Here it is. I can query the table now:
    SQL> select count(*) from DEMO;
    COUNT(*)
    ----------
    200000
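    
    And v$nonlogged_block should be empty again on the standby after the recovery; a quick check:
    
    SQL> select count(*) from v$nonlogged_block;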

    I re-enable real-time apply
    DGMGRL> edit database orclb set state=apply-on;
    Succeeded.

    Switchover

    Now, what would happen if I did a switchover or failover between the nologging operation and the nonlogged block recovery?
    I did the same nologging operation on the primary and then:
    DGMGRL> switchover to orclb;
    Performing switchover NOW, please wait...
    Operation requires a connection to database "orclb"
    Connecting ...
    Connected to "ORCLB"
    Connected as SYSDBA.
    New primary database "orclb" is opening...
    Operation requires start up of instance "ORCLA" on database "orcla"
    Starting instance "ORCLA"...
    ORACLE instance started.
    Database mounted.
    Database opened.
    Connected to "ORCLA"
    Switchover succeeded, new primary is "orclb"

    I can query the list of nonlogged blocks that was shipped to the standby:

    SQL> select * from v$nonlogged_block;
     
    FILE# BLOCK# BLOCKS NONLOGGED_START_CHANGE# NONLOGGED
    ---------- ---------- ---------- ----------------------- ---------
    NONLOGGED_END_CHANGE# NONLOGGED RESETLOGS_CHANGE# RESETLOGS
    --------------------- --------- ----------------- ---------
    OBJECT# REASON CON_ID
    ---------------------------------------- ------- ----------
    7 307 16826 2197748
    2197825 1396169 22-FEB-17
    74006 UNKNOWN 0

    But I cannot recover them, because the database (the old standby that became the primary) is open:
    ORA-01126: database must be mounted in this instance and not open in any instance

    So what?

    This new feature is acceptable if you recover the nonlogged blocks on the standby right after the nologging operation on the primary. This can be automated for data warehouse loads, but it is also useful manually when a reorganization or an application release touches the data. Just don’t forget to run the recovery on the standby, to avoid surprises later. It will not reduce the amount of data that is shipped to the standby, because shipping the blocks is roughly the same as shipping the redo for the direct-path writes. But on the primary you get the performance benefit of nologging operations.
    

    The article 12cR2: Recover nonlogged blocks after NOLOGGING in Data Guard appeared first on Blog dbi services.
