
12c nologging and Data Guard


The title sounds weird because Data Guard synchronisation is based on the redo stream, so it makes no sense to do nologging operations on the primary. And this is the reason why we set FORCE LOGGING on a Data Guard configuration. However, to lower the downtime of a migration done with Data Pump, you may want to import with minimal logging and then re-synchronize the standby. This post is about the re-synchronisation in 12.1

Nologging Data Pump

When you want to lower the downtime for a migration, you can disable force logging (alter database no force logging), and run impdp with the following: transform=disable_archive_logging:y
Don’t forget to re-enable force_logging at the end and to re-synchronize the standby.
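A minimal sketch of the whole sequence (the dump file, directory object and PDB service name here are hypothetical):

SQL> alter database no force logging;
 
$ impdp system@PDB1 directory=DATA_PUMP_DIR dumpfile=scott.dmp transform=disable_archive_logging:y
 
SQL> alter database force logging;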

nonlogged (aka unrecoverable)

So, you have nonlogged blocks. We also call them unrecoverable because they cannot be recovered with the redo stream. If you are in 12.2 then everything is easy with recover database nonlogged block; and I explained that in a previous post: https://blog.dbi-services.com/12cr2-recover-nonlogged-blocks-after-nologging-in-data-guard/

If you are in 12.1 then it is only half as easy. You can still see where you have nonlogged blocks:
RMAN> select file#,reason,count(*) from v$nonlogged_block group by file#,reason;
FILE# REASON COUNT(*)
---------- ------- ----------
5 UNKNOWN 158
6 UNKNOWN 159
7 UNKNOWN 336
8 UNKNOWN 94
9 UNKNOWN 16
10 UNKNOWN 14

and this is the right way to query them. If you use RMAN ‘report unrecoverable’ it will not display the datafiles that had nologging operations on the primary.

In 12.1 you can RESTORE FROM SERVICE to recover from the primary rather than from a backup. It is straightforward. I’m just writing this blog post in case you see the following when you try to do this because the message can be misinterpreted:


RMAN> restore database from service 'MYDB_SITE1_dgmgrl';
 
Starting restore at 03-MAY-2017 13:22:12
using channel ORA_DISK_1
 
skipping datafile 1; already restored to SCN 3849354
skipping datafile 2; already restored to SCN 3849356
skipping datafile 3; already restored to SCN 3849358
skipping datafile 4; already restored to SCN 3849360
skipping datafile 5; already restored to SCN 3849365
skipping datafile 6; already restored to SCN 3849372
skipping datafile 7; already restored to SCN 3849382
skipping datafile 8; already restored to SCN 3849389
skipping datafile 9; already restored to SCN 3849395
skipping datafile 10; already restored to SCN 3849398
restore not done; all files read only, offline, or already restored
Finished restore at 03-MAY-2017 13:22:12

RMAN is clever enough: according to their headers, the datafiles are fine, so it skipped the restore. But you know that they are not fine, because some blocks are marked as corrupt by the nologging operations. Then what to do? There is a FORCE option in the restore command, but you probably don’t need it. If you get the previous message, it means that the datafiles are synchronized, which means that the APPLY is running. And anyway, in order to restore, you need to stop the APPLY.


DGMGRL> edit database orclb set state=apply-off;

Of course, once you have stopped the apply, you can run your RESTORE DATABASE FORCE. But you probably don’t need the FORCE keyword anymore: now the datafiles are stale and RMAN will not skip them even without it.
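For reference, the FORCE variant would look like the following (a sketch; as said above, it is usually not needed once the apply is stopped):

RMAN> restore database force from service 'MYDB_SITE1_dgmgrl';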


RMAN> restore database from service 'MYDB_SITE1_dgmgrl';
 
Starting restore at 03-MAY-2017 13:22:37
using channel ORA_DISK_1
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service MYDB_SITE1_dgmgrl
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /media/raid-db/MYDB/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service MYDB_SITE1_dgmgrl
...
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 03-MAY-2017 13:25:30
RMAN> exit

Don’t forget to re-enable the Data Guard Apply at the end.
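With the Broker, this mirrors the earlier command:

DGMGRL> edit database orclb set state=apply-on;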

So what?

When you see all datafiles skipped, it probably means that you didn’t stop the APPLY. With the APPLY stopped (and you probably stopped it before the import, since you planned to restore the standby later), you probably don’t need the FORCE keyword. However, I would always recommend FORCE in this case, because without it RMAN skips files based on their headers, without looking at the nonlogged blocks. Imagine that you put a tablespace in read-only mode after the non-logged import but before stopping the apply: that one would be skipped.

 



Can you open PDB$SEED read write?


If you are in multitenant, you probably already felt the desire to open the PDB$SEED in READ WRITE mode.

  • Can you open PDB$SEED read write yourself? Yes and No.
  • Should you open PDB$SEED read write yourself? Yes and No.
  • How to run upgrade scripts that need to write to PDB$SEED? catcon.pl


In 12.1 you have no reason to open the seed read write yourself. In 12.2 there is one reason: when you are in LOCAL UNDO mode, you may want to customize the UNDO tablespace.

12c in shared undo

I am in 12.1 or in 12.2 in shared undo mode:
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
 
no rows selected

When the CDB is opened, the PDB$SEED is opened in read only mode.
SYS@CDB$ROOT SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB01 READ WRITE NO

I try to open the PDB$SEED in read write mode (FORCE is a shortcut that avoids having to close it first):
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open force;
Error starting at line : 1 in command -
alter pluggable database pdb$seed open force
Error report -
ORA-65017: seed pluggable database may not be dropped or altered
65017. 00000 - "seed pluggable database may not be dropped or altered"
*Cause: User attempted to drop or alter the Seed pluggable database which is not allowed.
*Action: Specify a legal pluggable database name.
SYS@CDB$ROOT SQL>

Obviously, this is impossible and clearly documented. PDB$SEED is not a legal pluggable database for this operation.

Oracle Script

There is an exception to that: internal Oracle scripts need to run statements in the PDB$SEED. They run with “_oracle_script”=true, which makes this operation possible:

SYS@CDB$ROOT SQL> alter session set "_oracle_script"=true;
Session altered.
 
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open read write force;
Pluggable database PDB$SEED altered.

catcon.pl

Of course, when upgrading, there are phases where you need the seed opened read-write. But you don’t do that yourself. The scripts to run in each container are called through catcon.pl which, by default, opens the seed read-write and ensures that the initial open mode is restored at the end, even in case of error.

-m mode in which PDB$SEED should be opened; one of the following values
may be specified:
- UNCHANGED - leave PDB$SEED in whatever mode it is already open
- READ WRITE (default)
- READ ONLY
- UPGRADE
- DOWNGRADE

I have the following “/tmp/show_open_mode.sql” script

column name format a10
select name,open_mode,current_timestamp-open_time from v$containers;

I call it with catcon to run in PDB$SEED:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -c 'PDB$SEED' -n 1 -d /tmp -l /tmp -b tmp show_open_mode.sql

Here is the output in /tmp/tmp0.log

CATCON_STATEMENT
--------------------------------------
catconExec(): @/tmp/show_open_mode.sql
SQL> SQL> column name format a10
SQL> select name,open_mode,current_timestamp-open_time from v$containers;
NAME OPEN_MODE CURRENT_TIMESTAMP-OPEN_TIME
---------- ---------- ---------------------------------------------------------------------------
PDB$SEED READ WRITE +000000000 00:00:00.471398
SQL>
END_RUNNING
------------------------------------------------------------------------------------------------------------------------
==== @/tmp/show_open_mode.sql Container:PDB$SEED Id:2 17-05-07 05:02:06 Proc:0 ====
SQL>

The PDB$SEED was opened READ WRITE to run the statements.

We can see that in alert.log:

alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ WRITE
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read write
Completed: alter pluggable database pdb$seed OPEN READ WRITE
alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ ONLY instances=all
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read only
Completed: alter pluggable database pdb$seed OPEN READ ONLY instances=all

When the pre-upgrade and post-upgrade scripts are run from DBUA you can see the following in the logs:
exec_DB_script: opened Reader and Writer
exec_DB_script: executed connect / AS SYSDBA
exec_DB_script: executed alter session set "_oracle_script"=TRUE
/
exec_DB_script: executed alter pluggable database pdb$seed close immediate instances=all
/
exec_DB_script: executed alter pluggable database pdb$seed OPEN READ WRITE
/

This is displayed because DBUA runs catcon.pl in debug mode and you can do the same by adding ‘-g’ to the catcon.pl arguments.
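For example, re-running the earlier script with debug output:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -g -c 'PDB$SEED' -n 1 -d /tmp -l /tmp -b tmp show_open_mode.sql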

12cR2 in local undo

In 12.2 there is a case where you can make a change to the PDB$SEED to customize the UNDO tablespace template. Here I am changing to LOCAL UNDO:


SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS@CDB$ROOT SQL> startup upgrade;
ORACLE instance started.
Total System Global Area 1107296256 bytes
Fixed Size 8791864 bytes
Variable Size 939526344 bytes
Database Buffers 150994944 bytes
Redo Buffers 7983104 bytes
Database mounted.
Database opened.
SYS@CDB$ROOT SQL> alter database local undo on;
Database altered.
&nsbp;
SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
 
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
 
PROPERTY_NAME PROPERTY_VALUE DESCRIPTION
------------- -------------- -----------
LOCAL_UNDO_ENABLED TRUE true if local undo is enabled

PDB$SEED is read only:

SYS@CDB$ROOT SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB01 READ WRITE NO

and _oracle_script is not set:

SYS@CDB$ROOT SQL> show parameter script
 
NAME TYPE VALUE
---- ---- -----
 

I get no error now and can open the seed in read-write mode:

SYS@CDB$ROOT SQL> alter pluggable database PDB$SEED open force;
Pluggable database PDB$SEED altered.
 
SYS@CDB$ROOT SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ WRITE NO
3 PDB01 READ WRITE NO

Customize UNDO seed

Once you open it read-write, an undo tablespace is created. If you want to customize it, you can create another one and drop the previous one. This requires changing the undo_tablespace parameter:


SYS@CDB$ROOT SQL> show parameter undo
NAME TYPE VALUE
----------------- ------- ------
undo_tablespace string UNDO_1
 
SYS@CDB$ROOT SQL> create undo tablespace UNDO;
Tablespace UNDO created.
 
SYS@CDB$ROOT SQL> alter system set undo_tablespace=UNDO;
System SET altered.
 
SYS@CDB$ROOT SQL> drop tablespace UNDO_1 including contents and datafiles;
Tablespace UNDO_1 dropped.
 
SYS@CDB$ROOT SQL> shutdown immediate
Pluggable Database closed

You can leave it like this: just close it and re-open it read-only. If you want to keep the same undo tablespace name as before, you need to play with create and drop, and change undo_tablespace again, as sketched below.
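A sketch of that rename dance, reusing the names from above (not tested here):

SQL> create undo tablespace UNDO_1;
SQL> alter system set undo_tablespace=UNDO_1;
SQL> drop tablespace UNDO including contents and datafiles;
SQL> alter pluggable database PDB$SEED close;
SQL> alter pluggable database PDB$SEED open read only;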

So what?

Don’t forget that you should not modify or drop PDB$SEED. If you want a customized template for your PDB creations, then you should create your own PDB template to clone. You can clone remotely, so this is possible in single-tenant as well. Opening PDB$SEED read-write is allowed only for one exception: customizing the UNDO tablespace in PDB$SEED when you move to local undo mode. Even that is not required, because an UNDO tablespace is created automatically when you open a PDB that has no undo_tablespace set.
When running pre-upgrade and post-upgrade scripts, don’t worry: catcon.pl is there to run scripts in containers and handles all of that for you.

 


Grid Infrastructure Installation on SLES 12 SP1


This week I needed to install Oracle Grid Infrastructure 12c release 1 in a SLES 12 SP1 environment. Everything worked fine until I ran the root.sh at the end of the installation. Here’s a quick description of the problem and the workaround.

The root.sh script ran into errors and the installation was unsuccessful:
oracle:/u00/app/grid/12.1.0.2 # /u00/app/grid/12.1.0.2/root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/12.1.0.2
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 39: [: -eq: unary operator expected
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying dbhome to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying oraenv to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying coraenv to /usr/local/bin ...
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/12.1.0.2/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/03/31 09:56:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
PRCR-1006 : Failed to add resource ora.ons for ons
PRCR-1115 : Failed to find entities of type resource type that match filters (TYPE_NAME ends .type) and contain attributes
CRS-0184 : Cannot communicate with the CRS daemon.
2017/03/31 09:57:04 CLSRSC-180: An error occurred while executing the command 'srvctl add ons' (error code 0)
 
2017/03/31 09:57:55 CLSRSC-115: Start of resource 'ora.evmd' failed
 
2017/03/31 09:57:55 CLSRSC-202: Failed to start EVM daemon
 
The command '/u00/app/grid/12.1.0.2/perl/bin/perl -I/u00/app/grid/12.1.0.2/perl/lib -I/u00/app/grid/12.1.0.2/crs/install /u00/app/grid/12.1.0.2/crs/install/roothas.pl ' execution failed

When we run crsctl stat res -t:

grid@oracle_grid:/u00/app/grid/product/12.1.0.2/grid/bin> ./crsctl stat res -t
 --------------------------------------------------------------------------------
 Name Target State Server State details
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 ora.cssd
 1 OFFLINE OFFLINE STABLE
 ora.diskmon
 1 OFFLINE OFFLINE STABLE
 ora.evmd
 1 OFFLINE OFFLINE STABLE
 --------------------------------------------------------------------------------

After trying multiple times with other Oracle Grid Infrastructure versions from 11.2.0.4 to 12.2.0.1, I had to open a service request with Oracle, and they provided the following workaround:

Once root.sh has failed, we do not close the GUI installer window, because we will use it to complete the installation once root.sh succeeds. First, we have to deconfigure the failed installation:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # ./roothas.pl -verbose -deconfig -force
oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # ./rootcrs.sh -deconfig -force

Then we modify the /etc/ld.so.conf file by adding /lib64/noelision as the first entry. The file should look like:

oracle@oracle_grid:/u00/app/oracle/product/12.1.0.2/dbhome_1/dbs/ [DORSTREA] cat /etc/ld.so.conf
/lib64/noelision
/usr/local/lib64
/usr/local/lib
include /etc/ld.so.conf.d/*.conf

Finally, we create a symbolic link $GI_HOME/lib/libpthread.so.0 pointing to /lib64/noelision/libpthread-2.19.so.
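A sketch of the command, using the paths from this environment:

oracle_grid:~ # ln -s /lib64/noelision/libpthread-2.19.so /u00/app/grid/product/12.1.0.2/grid/lib/libpthread.so.0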

lrwxrwxrwx 1 root root            35 Apr 11 15:56 libpthread.so.0 -> /lib64/noelision/libpthread-2.19.so

We then only have to run root.sh again, and this time it works fine:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid # . root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/product/12.1.0.2/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/product/12.1.0.2/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/04/11 15:56:37 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
oracle_grid    /u00/app/grid/product/12.1.0.2/grid/cdata/oracle_grid/backup_20170411_155653.olr     0    
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracle_grid'
CRS-2673: Attempting to stop 'ora.evmd' on 'oracle_grid'
CRS-2677: Stop of 'ora.evmd' on 'oracle_grid' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracle_grid' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/04/11 15:57:09 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

After the root.sh is successfully completed, we continue with the Oracle Installer, and everything is correctly configured for the Oracle Grid Infrastructure.

I will keep you informed of the bug’s evolution, and I will test the Oracle Grid Infrastructure installation under SLES 12 SP2 as soon as possible.

12cR2 Cross-container DML – insert into container()


Multitenant was introduced in 12.1.0.1 with the goal to share resources but isolate data. However, having all PDBs in the same root may be convenient for manipulating data in multiple PDBs. In the first patchset, 12.1.0.2, a way to query cross-container was introduced for the CDB administrator to see data in other containers. In the second release, 12.2.0.1, this goes further with the introduction of Application Containers and cross-PDB DML. Currently, not all possibilities are documented and not all documented features actually work. This will probably improve in the next patchset. I’ll start here with something simple: an insert from the root into a table which is in a PDB.
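As a reminder, the query flavor looks like this when run from CDB$ROOT (a simple sketch, not part of the original demo):

SQL> select con_id,count(*) from containers(DBA_OBJECTS) group by con_id;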

Here is my CDB with two PDBs

22:48:13 SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.
 
22:48:13 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO
 

I create a DEMO table in CDB$ROOT and do the same in PDB1 and PDB2

22:48:13 SQL> create table DEMO (n number primary key, text varchar2(90) );
Table DEMO created.
 
22:48:14 SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
 
22:48:14 SQL> create table DEMO (n number primary key, text varchar2(90) );
Table DEMO created.
 
22:48:14 SQL> connect sys/oracle@//localhost/PDB2 as sysdba
Connected.
 
22:48:14 SQL> create table DEMO (n number primary key, text varchar2(90) );
Table DEMO created.

I connect to CDB$ROOT and set a transaction name, then check all transactions

22:48:14 SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.
 
22:48:14 SQL> set transaction name 'Franck';
Transaction NAME succeeded.
 
22:48:14 SQL> select con_id,addr,xidusn,ubafil,ses_addr,ptx_xid,name,used_urec from containers(v$transaction);
no rows selected
 

I’m alone here with no transactions.

CONTAINERS()

Here is the cross-container syntax: use CONTAINERS() and specify the CON_ID column and value (CON_ID=3 for PDB1):

22:48:14 SQL> insert into containers(DEMO) (con_id,n,text) values (3,1,'Cross-container insert');
1 row inserted.
 
22:48:14 SQL> select con_id,addr,xidusn,ubafil,ses_addr,ptx_xid,name,used_urec from containers(v$transaction);
 
CON_ID ADDR XIDUSN UBAFIL SES_ADDR PTX_XID NAME USED_UREC
------ ---- ------ ------ -------- ------- ---- ---------
1 0000000067BB19E8 7 0 000000006ADD2EA8 0000000000000000 Franck 1
3 000000006642AEB8 1 62 000000006AC99610 0000000000000000 2

The interesting thing is that I have two transactions: one on my current container, and one on the container CON_ID=3 specified in my insert.

I’m doing the same for PDB2 which is CON_ID=4

22:48:14 SQL> insert into containers(DEMO) (con_id,n,text) values (4,1,'Cross-container insert');
1 row inserted.
 
22:48:15 SQL> select addr,con_id,addr,xidusn,ubafil,ses_addr,ptx_xid,name,used_urec from containers(v$transaction);
 
ADDR CON_ID ADDR XIDUSN UBAFIL SES_ADDR PTX_XID NAME USED_UREC
---- ------ ---- ------ ------ -------- ------- ---- ---------
0000000067BB19E8 1 0000000067BB19E8 7 0 000000006ADD2EA8 0000000000000000 Franck 1
000000006642AEB8 3 000000006642AEB8 1 62 000000006AC99610 0000000000000000 2
000000006644EA90 4 000000006644EA90 6 66 000000006B20F828 0000000000000000 2

Looking at the transactions’ sessions, the ones on the PDBs look like database link connections:

22:48:15 SQL> select taddr,con_id,program,action,module from v$session where saddr in (select ses_addr from v$transaction);
 
TADDR CON_ID PROGRAM ACTION MODULE
----- ------ ------- ------ ------
000000006644EA90 4 oracle@VM104 (TNS V1-V3) oracle@VM104 (TNS V1-V3)
000000006642AEB8 3 oracle@VM104 (TNS V1-V3) oracle@VM104 (TNS V1-V3)
0000000067BB19E8 1 java@VM104 (TNS V1-V3) java@VM104 (TNS V1-V3)

They look like database links, and we can actually see those open links in V$DBLINK:

23:06:53 SQL> select * from v$dblink;
 
DB_LINK OWNER_ID LOGGED_ON HETEROGENEOUS PROTOCOL OPEN_CURSORS IN_TRANSACTION UPDATE_SENT COMMIT_POINT_STRENGTH CON_ID
------- -------- --------- ------------- -------- ------------ -------------- ----------- --------------------- ------
PDB1 0 YES YES UNKN 0 YES YES 1 1
PDB2 0 YES YES UNKN 0 YES YES 1 1

Commit

However, when querying with CONTAINERS(), the session does not use the database links but something like a parallel query process switching to the containers. This means that it is not in the same transaction, and we don’t see the modifications yet:

22:48:15 SQL> select * from containers(DEMO);
no rows selected

Now, I commit:

22:48:15 SQL> commit;
Commit complete.

and all transactions are ended:

22:48:15 SQL> select taddr,con_id,program,action,module from v$session where saddr in (select ses_addr from v$transaction);
no rows selected

The links are still open, but no longer in a transaction:

23:10:21 SQL> select * from v$dblink;
 
DB_LINK OWNER_ID LOGGED_ON HETEROGENEOUS PROTOCOL OPEN_CURSORS IN_TRANSACTION UPDATE_SENT COMMIT_POINT_STRENGTH CON_ID
------- -------- --------- ------------- -------- ------------ -------------- ----------- --------------------- ------
PDB1 0 YES YES UNKN 0 NO NO 1 1
PDB2 0 YES YES UNKN 0 NO NO 1 1

My inserts are now visible, either from the root with CONTAINERS():

22:48:15 SQL> select * from containers(DEMO);
 
N TEXT CON_ID
- ---- ------
1 Cross-container insert 4
1 Cross-container insert 3

or from each PDB:

22:48:15 SQL> alter session set container=PDB1;
Session altered.
 
22:48:15 SQL> select * from DEMO;
 
N TEXT
- ----
1 Cross-container insert

So what?

This is a convenient way for the CDB administrator, or for the Application Root administrator, to run some DML on different containers without having to create a database link. Of course, a common user can also switch to a PDB with ‘alter session set container’, but that does not allow a transaction to span multiple containers. You can think of CONTAINERS() as a shortcut to avoid creating database links from the root to its containers.

 


random “ORA-01017: invalid username/password” in 12cR2


Since 12cR2 is out, we give our 12c new features workshop with hands-on exercises on the 12.1 and 12.2 releases. When I gave it last month, I had a small problem during the demos: sometimes the connections as sysdba failed with “ORA-01017: invalid username/password”. It happened at random, about one out of every 5 login attempts, and I was sure that the password had not changed. As I give another of these trainings next week, I tried to reproduce and fix the issue, and finally found out that the problem was truly random: it depended on the available entropy when reading /dev/random.

The context is special here, as the workshop infrastructure is different from a production server. The participants have laptops and connect to VirtualBox machines running on a boosted laptop. This infrastructure is used by all dbi-services trainings, so we are a bit conservative here and still run on VirtualBox 4.3. I’m not sure that it makes a difference, but I didn’t encounter the problem when preparing the VM on my laptop running VirtualBox 5.1. The VM runs only the minimum needed to connect through ssh or SQL Developer: just a network interface.

This issue occurred only when connecting to a 12.2.0.1 database. There was no problem connecting to 12.1.0.2, even through the 12cR2 listener and with the 12cR2 sqlplus or SQLcl.

So, when I was giving the workshop, with a few demos and exercises on Table Recovery and PDB Point In Time Recovery, I connected several times as sysdba through the listener, and it sometimes failed with “invalid username/password”. I was sure about the password (they are all the same in this lab) and I didn’t play with the password file. I didn’t have time to troubleshoot it at that moment.

Today, preparing the next workshop, I observed the same problem.

To reproduce it, I put hundreds of connect lines in a script and ran it. Here is an excerpt of the output:
14:57:34 SQL> connect sys/manager@CDB2 as sysdba
Connected.
14:57:34 SQL> connect sys/manager@CDB2 as sysdba
Connected.
14:57:34 SQL> connect sys/manager@CDB2 as sysdba
Connected.
14:57:34 SQL> connect sys/manager@CDB2 as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied
 
 
Warning: You are no longer connected to ORACLE.
14:57:37 SQL> connect sys/manager@CDB2 as sysdba
Connected.
14:57:37 SQL> connect sys/manager@CDB2 as sysdba
Connected.
14:57:37 SQL> connect sys/manager@CDB2 as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied
 
 
Warning: You are no longer connected to ORACLE.
14:57:39 SQL> connect sys/manager@CDB2 as sysdba
Connected.
14:57:39 SQL> connect sys/manager@CDB2 as sysdba
Connected.
14:57:39 SQL> connect sys/manager@CDB2 as sysdba
Connected.

The failures appear totally random.

ORA-1017 during Key Generation

Those errors left only the following message in the trace:

Error information for ORA-1017 during Key Generation:
Logon user : SYS
ZT Code Error : The requested operation failed.

I activated errorstack but the information was not very meaningful for me.

strace

Of course, the error message is misleading: something went wrong in the password authentication process. The key generation may be related to AUTH_SESSKEY, which is used to encrypt the password sent by the client. In order to get more insight into this authentication process, I traced the listener and its child processes for their system calls.
[oracle@srvora05 trace]$ ps -edf | grep tnslsnr
oracle 10063 1 0 14:44 ? 00:00:00 /u01/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr LISTENER -inherit
oracle 11091 10264 0 15:05 pts/2 00:00:00 grep tnslsnr
 
[oracle@srvora05 trace]$ strace -o strace.log -e trace=file -f -p 2115&
[1] 9518
[oracle@srvora05 trace]$ Process 2115 attached with 2 threads - interrupt to quit

Here is what I saw when a connection failure occurred again:

4722 open("/dev/random", O_RDONLY|O_NONBLOCK) = 7
4722 read(7, 0x160dc7b0, 1) = -1 EAGAIN (Resource temporarily unavailable)
4722 close(7) = 0
4722 open("/dev/random", O_RDONLY|O_NONBLOCK) = 7
4722 read(7, 0x160dc7b8, 1) = -1 EAGAIN (Resource temporarily unavailable)
4722 close(7) = 0
4722 open("/dev/random", O_RDONLY|O_NONBLOCK) = 7
4722 read(7, 0x160dc7c0, 1) = -1 EAGAIN (Resource temporarily unavailable)
4722 close(7) = 0

Sure, the key is generated from random numbers. The random numbers are read from /dev/random in non-blocking mode. In blocking mode, reading from /dev/random waits until there is enough entropy. Entropy is generated by hardware, and this is limited on a VM running on a laptop with very few devices. Actually, I don’t even know how VirtualBox shares the hardware entropy. Here, Oracle opens /dev/random in non-blocking mode, which is good because it does not wait. But what happens when an error is returned, like the EAGAIN above? Several retries, and then it gives up. It then seems to generate an incorrect key, and the authentication process fails with ORA-1017.

12cR2 vs. 12cR1

I was surprised to see that Oracle uses /dev/random rather than /dev/urandom, and I was surprised that the problem didn’t occur in the previous version. We use a lot of 12.1 databases on the same infrastructure and never encountered the problem. So I traced the same in 12.1, and here it is:

6887 open("/dev/urandom", O_RDONLY) = 7
6887 fcntl(7, F_GETFD) = 0
6887 fcntl(7, F_SETFD, FD_CLOEXEC) = 0
6887 read(7, "\226\203>\351\317\370*\365", 8) = 8

No problem in 12.1, because /dev/urandom is used. urandom is non-blocking and returns a result that may not have enough entropy. Maybe this change was made in 12.2 to prevent a theoretically possible attack, as mentioned in the random(4) man page.

In 12.2, when there is enough entropy, we see the following:
9951 read(7, "\341", 1) = 1
9951 close(7) = 0
9951 open("/dev/random", O_RDONLY|O_NONBLOCK) = 7
9951 read(7, "\363", 1) = 1
9951 close(7) = 0
9951 getrusage(0x1 /* RUSAGE_??? */, {ru_utime={0, 6998}, ru_stime={0, 10998}, ...}) = 0
9951 getrusage(0x1 /* RUSAGE_??? */, {ru_utime={0, 6998}, ru_stime={0, 10998}, ...}) = 0
9951 write(15, "\2\t\6\10\6\f\fAUTH_SESSKEY@"..., 521) = 521
9951 read(15, "\4\374\6 \3s\3\376\377\377\377\377\377\377\377\t!\1\376\377\377"..., 32783) = 1276
9951 getrusage(0x1 /* RUSAGE_??? */, {ru_utime={0, 6998}, ru_stime={0, 10998}, ...}) = 0

Actually, it seems that the authentication protocol already used /dev/random in the past. More info in Uwe Küchler’s blog post (in German).

rngd

Coincidentally, a colleague of mine has posted a blog about Documentum where he needed to increase the entropy.

rngd is installed. I set it up to read /dev/urandom:
[root@srvora05 oracle]# cat /etc/sysconfig/rngd
# Add extra options here
EXTRAOPTIONS="-r /dev/urandom"

and start it
[root@srvora05 oracle]# service rngd start
Starting rngd: [ OK ]

For future reboots, I enable it at init:
chkconfig rngd on

As soon as rngd is started, I encountered no connection problems at all.

Note that for security reasons there may be better sources of randomness than /dev/urandom; this is just the solution for my workshop lab. Haveged is another idea.
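On Linux, you can check how much entropy is currently available (a quick diagnostic, not part of the original post):

$ cat /proc/sys/kernel/random/entropy_avail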

So what?

The method used to get an AUTH_SESSKEY to encrypt the password sent by the client has changed between 12.1 and 12.2. It is probably safer now (more entropy) but it can fail the authentication when there is not enough entropy. I’ve encountered the issue in a very special case: a lab in an old version of VirtualBox running on a laptop. I don’t know if we will see the same on real production VMs, where entropy is probably higher. But it depends on the hypervisor as well. Note that the authentication protocol also generates a key on the client side, and a 12.2 client reads /dev/random too, and who knows where the client runs.

As the error message is quite misleading, and really random, I hope that this post has enough information to help troubleshooting when people encounter the same issue and search for it.

 


12cR2 needs to connect with password for Cross-PDB DML


In a previous post, I explained that Cross-PDB DML, executing an update/delete/insert with the CONTAINERS() clause, seems to be implemented with implicit database links. Connecting through a database link requires a password and this blog post is about an error you may encounter: ORA-01017: invalid username/password; logon denied

This blog post also shows a consequence of this implementation: a big inconsistency of the CONTAINERS() function, because the implementation is completely different for queries (select) and for insert/delete/update, so you may end up writing to and reading from different schemas.

We do not need Application Containers for Cross-PDB DML, and we don’t even need metadata-linked tables: just tables with the same columns. Here I have a DEMO table, which is just a copy of DUAL, created in CDB$ROOT and in PDB1 (CON_ID=3), owned by SYS.

Implicit database link

I’m connecting to CDB$ROOT with user, password and service name:

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.

I insert a row into the DEMO table in the PDB1, which is CON_ID=3:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
1 row created.

This works in 12.2, is documented, and is an alternative to switching to the container.

But now, let’s try to do the same when connecting with ‘/ as sysdba':

SQL> connect / as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
 
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PDB1

The first message mentions invalid user/password, and the second one mentions a database link having the same name as the container.
As I described in the previous post, CONTAINERS() opens an implicit database link when modifying another container. But a database link requires a connection, and no user/password has been provided here. It seems that it tries to connect with the same user and password as the ones provided to connect to the root.

Then, I provide the user/password but with local connection (no service name):


SQL> connect sys/oracle as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied

There is no mention of a database link here, but it is still impossible to connect. It seems that the session needs our connection string to find out how to connect to the PDB.

Explicit database link

There is an alternative: you can create the database link explicitly, and then it will be used by CONTAINERS(), with all the required information, password and service. But the risk is that you define this database link to connect to another user.

Here I have also a DEMO table created in SCOTT:

SQL> create database link PDB1 connect to scott identified by tiger using '//localhost/PDB1';
Database link created.
 
SQL> select * from DEMO@PDB1;
 
D
-
X

From the root I insert with CONTAINERS() without mentioning the schema:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'S');
1 row created.

I have no errors here (I’m still connected / as sysdba) because I have a database link with the same name as the one it tries to use implicitly. So it works without any error or warning. But my database link does not connect to the same schema (SYS) but to SCOTT. And because a DEMO table was there with the same columns, the row was actually inserted into the SCOTT schema:

SQL> select * from DEMO@PDB1;
 
D
-
X
S

The big problem here is that a SELECT through the same CONTAINERS() function uses a different mechanism: not the database link, but a session switch to the other container, in the same schema. So the row inserted through INSERT INTO CONTAINERS() is not displayed by SELECT FROM CONTAINERS():
SQL> select * from containers(DEMO);
 
D CON_ID
- ----------
X 1
X 3
Y 3

So what?

I don’t know if the first problem (invalid user/password) will be qualified as a bug, but I hope the second one will. Cross-PDB DML will be an important component of Application Containers, and having a completely different implementation for SELECT and for INSERT/UPDATE/DELETE may be a source of problems. In my opinion, both should use a container switch within the same session, but that would mean a transaction can write to multiple containers, which is currently not possible.

 


Which privilege for CREATE PLUGGABLE DATABASE from DB LINK?


When cloning a PDB from a remote CDB you need to define a database link to be used in the CREATE PLUGGABLE DATABASE … FROM …@… command. The documentation is not completely clear about the privileges required on the source for the user defined in the database link, so here are the different possibilities.

Remote clone

Here is what the documentation says (screenshot omitted):

So you can connect to the CDB or to the PDB.

In order to connect to the CDB you need a common user with the CREATE SESSION system privilege:

SQL> create user C##DBA identified by oracle;
User C##DBA created.
SQL> grant create session to C##DBA container=current;
Grant succeeded.

No need for CONTAINER=ALL here because you connect only to the CDB$ROOT.

Then you need the CREATE PLUGGABLE DATABASE system privilege on the PDB. You can grant it from the CDB$ROOT with the CONTAINER=ALL but it is sufficient to grant it locally on the source PDB:

SQL> alter session set container=PDB1;
Session altered.
SQL> grant create pluggable database to C##DBA container=current;
Grant succeeded.

Note that, although not documented, the SYSOPER administrative privilege can replace CREATE PLUGGABLE DATABASE, so we can run the following instead of the previous one:

SQL> alter session set container=PDB1;
Session altered.
grant sysoper to C##DBA container=current;
Grant succeeded.

Both ways work for cloning: on the destination, you create a database link to this common user and run the CREATE PLUGGABLE DATABASE:

SQL> create database link CDB1A connect to C##DBA identified by oracle using '//localhost/CDB1A';
Database link CDB1A created.
SQL> create pluggable database PDB1CLONE from PDB1@CDB1A file_name_convert=('CDB1A/PDB1','CDB2A/PDB1CLONE');
Pluggable database PDB1CLONE created.
SQL> alter pluggable database PDB1CLONE open;
Pluggable database PDB1CLONE altered.

This was using a common user but you can also define the user locally on the source PDB:

SQL> alter session set container=PDB1;
Session altered.
SQL> create user PDBDBA identified by oracle;
User PDBDBA created.
SQL> grant create session to PDBDBA container=current;
Grant succeeded.
SQL> grant create pluggable database to PDBDBA container=current;
Grant succeeded.

There again you have the alternative to use SYSOPER instead of CREATE PLUGGABLE DATABASE:

SQL> alter session set container=PDB1;
Session altered.
SQL> create user PDBDBA identified by oracle;
User PDBDBA created.
SQL> grant create session to PDBDBA container=current;
Grant succeeded.
SQL> grant sysoper to PDBDBA container=current;
Grant succeeded.

With one of those, you can clone from the target with a database link connecting to the local user only:

SQL> create database link CDB1A connect to PDBDBA identified by oracle using '//localhost/PDB1';
Database link CDB1A created.
SQL> create pluggable database PDB1CLONE from PDB1@CDB1A file_name_convert=('CDB1A/PDB1','CDB2A/PDB1CLONE');
Pluggable database PDB1CLONE created.
SQL> alter pluggable database PDB1CLONE open;
Pluggable database PDB1CLONE altered.

Then which alternative to use? The choice of the common or local user is up to you. I would probably use a common user for system administration tasks, and cloning is one of them. But if you are in a PDBaaS environment where you are the PDB administrator, then you can clone your PDB to another CDB that you manage. This can mean cloning a PDB from the Cloud to a CDB on your laptop.

PDB Relocate

Things are different with the RELOCATE option, where the source PDB is dropped and connections are redirected to the new one. This is definitely a system administration task to do at CDB level, and it requires a common user. Trying it from a database link connecting to a local user raises the following error:

ORA-17628: Oracle error 65338 returned by remote Oracle server
 
65338, 00000, "unable to create pluggable database"
// *Cause: An attempt was made to relocate a pluggable database using a
// database link to the source pluggable database.
// *Action: Use a database link that points to the source multitenant container
// database root and retry the operation.

Here is what the documentation says (screenshot omitted):

So, we need to have a common user on the source CDB, with CREATE SESSION privilege, and it makes sense to use an administrative privilege:

SQL> create user C##DBA identified by oracle;
User C##DBA created.
SQL> grant create session to C##DBA container=current;
Grant succeeded.
SQL> alter session set container=PDB1;
Session altered.
grant sysoper to C##DBA container=current;
Grant succeeded.

The documentation mentions that you can use either SYSDBA or SYSOPER, but from my tests (and Deiby Gómez’s), only SYSOPER works without raising an ‘insufficient privileges’. The documentation also mentions that CREATE PLUGGABLE DATABASE is necessary. Actually, it is not, and with a relocate it cannot be an alternative to SYSOPER. The user must be a common user and the CREATE SESSION must be granted commonly, but the SYSOPER can be granted locally for the PDB we relocate.

In summary

To clone a remote PDB you can use a common or local user, with SYSOPER or CREATE PLUGGABLE DATABASE privilege. To relocate a PDB you need a common user with SYSOPER.

 


New release model for JUL17 (or Oracle 17.3): RU and RUR


Updated June 5th

When reading about the new version numbering in SQL Developer, I took the risk of changing the title and guessing the future version number for Oracle Database: Oracle 17.3 for July (Q3) 2017. Of course, this is just a guess.

Updated June 10th

Confirming the previous guess, we start to see some bugs planned to be fixed in version 18.1, which is probably the January 2018 Release Update.

News from DOAGDB17 – May 30th

Oracle Database software comes in versions: 10g in 2004, 11g in 2007, 12c in 2013
In between, there are releases: 10gR2 in 2005, 11gR2 in 2009, 12cR2 in 2017
In between, there are Patch Sets: the latest 11gR2 is 11.2.0.4 (2 years after 11.2.0.3) and 12cR1 is 12.1.0.2 (one year after 12.1.0.1)
Those are full installs: a full Oracle Home. It can be done in-place or into a new Oracle Home, but it is a full install. There are a lot of changes in the system dictionary, and you run catupgrd.sql to update it, which takes from 30 minutes to 1 hour on average, depending on the components and the system.

Their primary goal is to release features, and you should test them carefully. For example, the In-Memory option came in the first Patch Set of 12cR1.

This will change in 2017 with annual feature releases. Well, this looks like a rename of Patch Set.

All releases are supported for several years, with fixes (patches) provided for encountered issues: security, wrong results, hanging, crashes, etc. Rather than installing one-off patches and requesting merges for them, Oracle supplies bundle patches: merged together, tested as a whole, cumulative, with a quarterly release frequency. Depending on the content and the platform, they are called Bundle Patches (on Windows), CPU (security fixes only), SPU (the new name for CPU), PSU (Patch Set Update), Proactive Bundle Patch (a bit more than the PSU)…
The names have changed, and the version numbers as well, as they now include the release date.
You apply patches to the Oracle Home with the OPatch utility and to the database dictionary with the new datapatch utility.
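In practice, a quarterly patch application looks like this (an illustrative sketch; the patch directory is a placeholder):

$ cd <patch_directory>
$ $ORACLE_HOME/OPatch/opatch apply
$ $ORACLE_HOME/OPatch/datapatch -verbose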

Their primary goal is stability: fixing issues with a low risk of regression. The recommended minimum is the PSU; the Proactive Bundle Patch adds more fixes for known bugs.

This will change in 2017 with quarterly Release Updates. Well, this looks like a rename of PSU to RUR (Release Update Revision) and a rename of ProactiveBP as RU (Release Update).

The goal is to reduce the need to apply one-off patches on top of PSUs.

Here is all I know about it, as presented by Andy Mendelsohn in the DOAG Datenbank 2017 keynote (photo omitted).


It is not common to have new announcements in the last fiscal year quarter. Thanks DOAG for this keynote.

 



12cR2 auditing all users with a role granted


12.1 introduced Unified Auditing where you define policies and then enable them. As with the traditional audit, you enable them for all users or for specific users. The unified auditing adds a syntax to audit all users except some listed ones. 12.2 adds a syntax to audit a group of users, defined by the role granted. This is the best way to enable a policy for a group of users, including those created later.

I create a simple policy, to audit logon and DBA role usage:

SQL> create audit policy DEMO_POLICY actions logon, roles DBA;
Audit POLICY created.

I create a new DBA user, USER1

SQL> create user USER1 identified by covfefe quota unlimited on USERS;
User USER1 created.
SQL> grant DBA to USER1;
Grant succeeded.

I want to enable the policy for this user because I want to audit all DBAs

SQL> audit policy DEMO_POLICY by USER1;
Audit succeeded.

I remove Audit records for this demo

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified,use_last_arch_timestamp=>false);
PL/SQL procedure successfully completed.

Let’s connect with this user and see what is audited:

SQL> connect USER1/covfefe@//localhost/PDB1
Connected.
 
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM

The logon and the select on a dictionary table (possible here thanks to the DBA role) have been audited, because the policy is enabled for this user.

We have a new DBA and we create a new user for him:

SQL> create user USER2 identified by covfefe quota unlimited on USERS;
User USER2 created.
SQL> grant DBA to USER2;
Grant succeeded.

He connects and checks what is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM

Nothing is audited for this user. The DBA role usage is audited, but only for USER1.

Of course, we can add an audit statement for each user created for a DBA:

SQL> audit policy DEMO_POLICY by USER2;
Audit succeeded.

Then his new activity is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM
Standard oracle VM104 pts/0 USER2 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.52.338928000 PM

But for security reasons, we would like to be sure that any new user having the DBA role granted is audited.
Let’s try something else

SQL> noaudit policy DEMO_POLICY by USER1,USER2;
Noaudit succeeded.

We can simply audit all users:

SQL> audit policy DEMO_POLICY;
Audit succeeded.

But this is too much. Some applications constantly log on and off, and we don’t want that in the audit trail.

SQL> noaudit policy DEMO_POLICY;
Noaudit succeeded.

We can still enable the policy for all users, and exempt those users we don’t want:

SQL> audit policy DEMO_POLICY except DEMO;
Audit succeeded.

Here is what is enabled, and this will audit all new users:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO DEMO_POLICY EXCEPT EXCEPT USER DEMO USER YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

But once again, this is not what we want.

SQL> noaudit policy DEMO_POLICY by DEMO;
Noaudit succeeded.
 
SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

Audit all users to whom roles are granted directly

In 12cR2 we have the possibility to do exactly what we want: audit all users having the DBA role granted:

SQL> audit policy DEMO_POLICY by users with granted roles DBA;
Audit succeeded.

This enables the audit for all users for whom the DBA role has been directly granted:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO_POLICY INVALID BY GRANTED ROLE DBA ROLE YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

The important thing is that a newly created user will be audited as long as he has the DBA role directly granted:

SQL> create user USER3 identified by covfefe quota unlimited on USERS;
User USER3 created.
SQL> grant DBA to USER3;
Grant succeeded.
 
SQL> connect USER3/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.17.915217000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.17.988151000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.117258000 PM
Standard oracle VM104 pts/0 USER2 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.18.322716000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.345351000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.415117000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.439656000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.455274000 PM
Standard oracle VM104 pts/0 USER3 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.18.507496000 PM

This policy applies to all users having the DBA role, and gives the possibility to audit more than their DBA role usage: here I audit all logons from users having the DBA role.

So what?

We don’t use roles only to group privileges to grant. A role is usually granted to define groups of users: DBAs, application users, read-only application users, etc. Unified Auditing can define complex policies, combining the audit of actions, privileges, and roles. The 12.2 syntax allows enabling a policy for a specific group of users.
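And the symmetric NOAUDIT should disable it for the whole group (a sketch, following the same 12.2 syntax):

SQL> noaudit policy DEMO_POLICY by users with granted roles DBA;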

 


12cR2 PDB refresh as a poor-man standby?


Disclaimer

My goal here is only to show that the Refreshable PDB feature works by shipping and applying redo, and can therefore synchronize a copy of the datafiles. I do not recommend using it for disaster recovery in any production environment yet. Even though I’m using only supported features, they were not designed for this usage, and they are quite new and not stable yet. Disaster recovery must use safe and proven technologies, and this is why I’ll stick with Dbvisit standby for disaster recovery in Standard Edition.

This post explains what I had in mind with the following tweet (screenshot omitted).

Primary PRDPDB

On my primary server, I have a CDB1 container database in Standard Edition with one pluggable database, PRDPDB:

21:36:45 SQL> connect sys/oracle@//192.168.78.105/CDB1 as sysdba
Connected.
 
21:36:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO

I need a user there to be able to remote clone from it:

21:36:46 SQL> grant create session, sysoper, dba to C##DBA identified by oracle container=all;
Grant succeeded.

Standby server

On my standby server, I have a CDB1 container database in Standard Edition, where I create a database link to the production CDB using the user created above to connect to it:

21:36:46 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:36:46 SQL> create database link CDB1A connect to C##DBA identified by oracle using '//192.168.78.105/CDB1A';
Database link created.

My standby server runs Grid Infrastructure and has the database created on /acfs, which is an ACFS filesystem. We will see the reason later, when we need to create a PDB snapshot copy. Any filesystem where you can use PDB snapshot copy would be fine.

Standby SBYPDB

The creation of the ‘standby’ pluggable database is done with a simple remote clone command, and can be run in 12cR2 with the source PRDPDB still open read write:


21:36:46 SQL> create pluggable database SBYPDB from PRDPDB@CDB1A
21:36:46 2 file_name_convert=('/u01/oradata/CDB1A/PRDPDB','/acfs/oradata/CDB1/SBYPDB')
21:36:46 3 refresh mode every 1 minutes;
 
Pluggable database created.

The REFRESH MODE is a 12cR2 feature whose primary goal is to maintain and refresh a master clone for further provisioning of thin clones. This clone is refreshed every 1 minute, which means that I expect to have at most a one minute gap between PRDPDB and SBYPDB data, plus the additional time to apply that one minute of redo, of course.
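Note that the refresh can also be triggered manually while the clone is closed (a quick sketch, not part of this demo):

SQL> alter pluggable database SBYPDB refresh;
Pluggable database altered.
SQL> alter pluggable database SBYPDB open read only;
Pluggable database altered.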

Activity on the source

I will simulate a crash of the primary server and a failover to the standby, when transactions are running. I’ll run this activity on the SCOTT.EMP table:

21:39:03 SQL> connect scott/tiger@//192.168.78.105/PRDPDB;
Connected.
 
21:39:04 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 17-nov-1981 00:00:00 5000

I’m now updating the date and incrementing the number each second.

21:39:09 SQL> exec for i in 1..150 loop update emp set hiredate=sysdate, sal=sal+1; dbms_lock.sleep(1); commit; end loop
 
PL/SQL procedure successfully completed.

Here is the latest data on the primary server:

21:41:39 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

Crash the primary

The primary server is not supposed to be accessible in case of Disaster Recovery, so I’m crashing it:

21:41:39 SQL> disconnect
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
21:41:39 SQL> connect / as sysdba
Connected.
21:41:39 SQL> shutdown abort
ORACLE instance shut down.

Activate the standby

The datafiles are up to date, with a maximum 1 minute gap, and all I want is to open it and have the application re-connect to it. However, a refreshable clone can be opened only read-only. This makes sense: you cannot apply more redo from the source once it is opened read-write. So my first idea was to stop the refresh mode:

21:41:45 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:41:45 SQL> alter session set container=SBYPDB;
Session altered.
 
21:41:45 SQL> alter pluggable database SBYPDB refresh mode none;
alter pluggable database SBYPDB refresh mode none
*
ERROR at line 1:
ORA-17627: ORA-12514: TNS:listener does not currently know of service requested
in connect descriptor
ORA-17629: Cannot connect to the remote database server

It seems that Oracle tries to do one last refresh when you stop the refresh mode, but this fails here because the source is not accessible. I think that it should be possible to open read-write without applying more redo. However, these refreshable clones were not designed for failover.

I hope that one day we will just be able to end the refresh mode without connecting to the source, accepting the loss of the latest transactions.

Open Read Only

Without access to the source, I stay in refresh mode and I can only open read only:
21:41:45 SQL> alter pluggable database SBYPDB open read only;
Pluggable database altered.
 
21:41:47 SQL> alter session set container=SBYPDB;
Session altered.
21:41:47 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:41:47 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

My data is there, with my less than one minute gap, but that’s not sufficient for me. I want to run my application on it.

Snapshot Clone

My first idea to get the PDB read write on the standby server is to clone it. Of course, the failover time should not depend on the size of the database, so my idea is to do a snapshot copy, and this is why I’ve setup my standby CDB on ACFS. Here I’m cloning the SBYPDB to the same name as the primary: PRDPDB

21:41:47 SQL> alter session set container=CDB$ROOT;
Session altered.
 
21:41:47 SQL> create pluggable database PRDPDB from SBYPDB file_name_convert=('SBYPDB','PRDPDB') snapshot copy;
Pluggable database created.
 
21:42:03 SQL> alter pluggable database PRDPDB open;
Pluggable database altered.

I have now my new PRDPDB opened read write with the latest data that was refreshed:

21:42:26 SQL> alter session set container=PRDPDB;
Session altered.
 
21:42:26 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:42:26 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

I’m running on a snapshot here. I can stay like that, or plan to move it out of the snapshot in the future. There is no online datafile move in Standard Edition, but there is the online pluggable database relocate. Anyway, running the database in a snapshot is sufficient to run a production after a Disaster Recovery, and I can remove the SBYPDB so that there is no need to copy the ACFS extents on future writes.
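As a sketch of what this later move could look like, the 12.2 relocate runs from the target CDB, assuming a database link OLDCDB (hypothetical) pointing to the CDB currently hosting PRDPDB:

SQL> create pluggable database PRDPDB from PRDPDB@OLDCDB relocate;
SQL> alter pluggable database PRDPDB open;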

Keep the snapshot

At that point, you should tell me that I cannot snapshot copy a PDB within the same CDB here, because I’m in Standard Edition. And that’s right: you can create only one PDB there, and you are supposed to get a ‘feature not enabled’ error. But I was able to do it here in my lab, with a small trick to invert the CON_ID sequence:

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO
4 SBYPDB MOUNTED

Remote snapshot clone should be possible as well. But there’s another licensing issue here. Using ACFS snapshots for the database is not allowed in Standard Edition. This means that this solution probably requires another snapshot solution than the one I’m using here in my lab.

If you don’t fear violating the single-tenant rules, you may prefer to keep the SBYPDB for a while. Imagine that you are able to restart the crashed server for a few minutes: then you can do a last refresh of SBYPDB to have a look at the transactions that were lost in the 1 minute window.

I re-start the crashed CDB:

21:42:26 SQL> connect / as sysdba
Connected to an idle instance.
21:42:27 SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 356519592 bytes
Database Buffers 486539264 bytes
Redo Buffers 7974912 bytes
Database mounted.
Database opened.

and now, on my standby server, I can finally stop the refresh mode:

21:42:51 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:42:52 SQL> alter pluggable database SBYPDB close;
Pluggable database altered.
 
21:42:52 SQL> alter session set container=SBYPDB;
Session altered.
 
21:42:52 SQL> alter pluggable database SBYPDB refresh mode none;
Pluggable database altered.

Be careful not to have jobs or services starting here because your production is now on the snapshot clone PRDPDB running on the same server. Let’s open it:

21:43:02 SQL> alter pluggable database SBYPDB open restricted;
Pluggable database altered.
 
21:43:24 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

And here we are with the data at the moment of the crash. Then, the application owner can manually check what was missed between the last refresh (which made its way to PRDPDB) and the crash (visible in SBYPDB).
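Here is a sketch of such a check, assuming a database link PRDPDB_LINK (hypothetical) created in SBYPDB and pointing to the new PRDPDB service:

SQL> create database link PRDPDB_LINK connect to scott identified by tiger using '//192.168.78.106:1522/PRDPDB';
SQL> select * from scott.emp minus select * from scott.emp@PRDPDB_LINK;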

Unplug/Plug

I was not very satisfied by the snapshot clone because of the limitations in Standard Edition, which is exactly where this solution may be interesting. I have the datafiles, but cannot open the SBYPDB read write. I tried to unplug it, but cannot, because of the refresh mode:

SQL> alter pluggable database SBYPDB unplug into '/tmp/tmp.xml';
 
Error starting at line : 1 in command -
alter pluggable database SBYPDB unplug into '/tmp/tmp.xml'
Error report -
ORA-01113: file 23 needs media recovery
ORA-01110: data file 23: '/acfs/oradata/CDB1/SBYPDB/undotbs01.dbf'
01113. 00000 - "file %s needs media recovery"
*Cause: An attempt was made to online or open a database with a file that
is in need of media recovery.
*Action: First apply media recovery to the file.

I know that I don’t need more recovery. So let’s unplug it in another way:

SQL> alter pluggable database SBYPDB open read only;
Pluggable database SBYPDB altered.
 
SQL> exec dbms_pdb.describe('/tmp/tmp.xml','SBYPDB');
PL/SQL procedure successfully completed.

Then drop it but keep the datafiles:

SQL> alter pluggable database SBYPDB close;
Pluggable database SBYPDB altered.
 
SQL> drop pluggable database SBYPDB;
Pluggable database SBYPDB dropped.

And plug it back:

SQL> create pluggable database SBYPDB using '/tmp/tmp.xml';
Pluggable database SBYPDB created.
 
SQL> alter pluggable database SBYPDB open;
Pluggable database SBYPDB altered.

Here it is. This takes a bit longer than the snapshot solution, but it is still a way to activate the ‘standby’ PDB without copying datafiles.

So what?

All the new 12cR2 multitenant features are available in all Editions, which is very good. Here, with ALTER PLUGGABLE DATABASE … REFRESH, we have log shipping and apply at PDB level, for free in Standard Edition. And I’ve tested two ways to open this standby PDB in case of disaster recovery. I’m using only supported features here, but be careful: those features were not designed for this goal. The normal operations on a refreshable clone require that the remote CDB is accessible. But there are workarounds, because you can describe/drop/plug or snapshot clone from a PDB that you can open read only.

 

This article 12cR2 PDB refresh as a poor-man standby? first appeared on Blog dbi services.

12c NSSn process for Data Guard SYNC transport


In a previous post https://blog.dbi-services.com/dataguard-wait-events-have-changed-in-12c/ I mentioned the new processes NSA for ASYNC transport and NSS for SYNC transport. I’m answering a bit late to a comment about the number of processes: yes there is one NSSn process per LOG_ARCHIVE_DEST_n destination in SYNC and the numbers match.

Here is my configuration with two physical standby:
DGMGRL> show configuration
 
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)

Both are in SYNC:
DGMGRL> show database orclb logxptmode;
LogXptMode = 'sync'
DGMGRL> show database orclc logxptmode;
LogXptMode = 'sync'

I can see two NSS processes:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 4952 1 0 16:05 ? 00:00:00 ora_nss3_ORCLA
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

Here are the two log archive dest:
SQL> select name,value from v$parameter where regexp_like(name,'^log_archive_dest_[23]$');
NAME VALUE
---- -----
log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)

I set the 3rd one in ASYNC:
DGMGRL> edit database orclc set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS3 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

I set the 2nd destination to ASYNC:
DGMGRL> edit database orclb set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS2 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"

Now starting the 3rd destination first:
DGMGRL> edit database orclc set property logxptmode=SYNC;
Property "logxptmode" updated

I can see that nss3 has started as it is the log_archive_dest_3 which is in SYNC now:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA

Then starting the second one:
DGMGRL> edit database orclb set property logxptmode=SYNC;
Property "logxptmode" updated

Here are the two processes:

DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA
oracle 5393 1 0 16:20 ? 00:00:00 ora_nss2_ORCLA

So if you see some SYNC Remote Write events in ASH, look at the program name to know which destination it is.
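For example, here is a sketch of an ASH query grouping those waits by process (ASH requires the Diagnostics Pack license):

SQL> select program, event, count(*) from v$active_session_history where event like 'SYNC%' group by program, event;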

 

This article 12c NSSn process for Data Guard SYNC transport first appeared on Blog dbi services.

12cR2 Application Containers and Foreign Keys


Application Container brings a new way to share data among databases, and adds a new dimension to referential integrity. A foreign key in an application PDB can reference a row belonging to a root data link table. But then, should a delete on the root validate that there are no orphans in the application PDBs? And what if those PDBs are closed at the time of this delete? Here is a small example.

If you run this in 12.2.0.1 you will get an error, because the search for the parent key is done only in the current container. This is considered a bug: 21955394: CDB:ORA-02291 WHEN FOREIGN KEY REFERS TO THE PRIMARY KEY IN DATA LINK

The example that follows is run with the patch applied to fix this bug.

I’m connecting to root where I have no user PDB yet.

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.
 
SQL> select con_id, name, application_root application_root, application_pdb application_pdb,application_root_con_id application_root_con_id from v$containers;
 
CON_ID NAME APPLICATION_ROOT APPLICATION_PDB APPLICATION_ROOT_CON_ID
------ ---- ---------------- --------------- -----------------------
1 CDB$ROOT NO NO
2 PDB$SEED NO NO

I create the application container root

SQL> create pluggable database SCOTT_ROOT as application container admin user SCOTT_ADMIN identified by oracle roles=(DBA);
Pluggable database SCOTT_ROOT created.
 
SQL> alter pluggable database SCOTT_ROOT open;
Pluggable database SCOTT_ROOT altered.
 
SQL> select con_id, name, application_root application_root, application_pdb application_pdb,application_root_con_id application_root_con_id from v$containers;
 
CON_ID NAME APPLICATION_ROOT APPLICATION_PDB APPLICATION_ROOT_CON_ID
------ ---- ---------------- --------------- -----------------------
1 CDB$ROOT NO NO
2 PDB$SEED NO NO
6 SCOTT_ROOT YES NO

I connect to this application root and start the installation of the application

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
 
SQL> alter pluggable database application SCOTT begin install '1.0';
Pluggable database APPLICATION altered.

I’m installing the SCOTT tables DEPT and EMP, but I changed their definition from utlsampl.sql:

  • DEPT is an EXTENDED DATA LINK where a set of rows is common, inserted in the application root and visible from all application PDBs
  • EMP is a METADATA LINK where each application PDB has its own data isolated from others, but having the same structure


SQL> GRANT CONNECT,RESOURCE,UNLIMITED TABLESPACE TO SCOTT IDENTIFIED BY tiger container=all;
Grant succeeded.
 
SQL> alter session set current_schema=SCOTT;
Session altered.
 
SQL> CREATE TABLE DEPT sharing=extended data
2 (DEPTNO NUMBER(2) CONSTRAINT PK_DEPT PRIMARY KEY,
3 DNAME VARCHAR2(14) ,
4 LOC VARCHAR2(13) ) ;
Table DEPT created.
 
SQL> CREATE TABLE EMP sharing=metadata
2 (EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4),
6 HIREDATE DATE,
7 SAL NUMBER(7,2),
8 COMM NUMBER(7,2),
9 DEPTNO NUMBER(2) CONSTRAINT FK_DEPTNO REFERENCES DEPT);
Table EMP created.
 
SQL> INSERT INTO DEPT VALUES
2 (10,'ACCOUNTING','NEW YORK');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES (20,'RESEARCH','DALLAS');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES
2 (30,'SALES','CHICAGO');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES
2 (40,'OPERATIONS','BOSTON');
1 row inserted.
 
SQL> COMMIT;
Commit complete.
 
SQL> alter pluggable database application SCOTT end install '1.0';
Pluggable database APPLICATION altered.

The application root has departments 10, 20, 30 and 40 in DEPT shared with all PDBs and has defined that EMP has a foreign key to it.

I create an application PDB from this application root

SQL> create pluggable database SCOTT_ONE admin user SCOTT_ONE_ADMIN identified by covfefe;
Pluggable database SCOTT_ONE created.
 
SQL> alter pluggable database SCOTT_ONE open;
Pluggable database SCOTT_ONE altered.

I sync it to get common DDL and DML applied

SQL> connect sys/oracle@//localhost/SCOTT_ONE as sysdba
Connected.
 
SQL> alter pluggable database application SCOTT sync;
Pluggable database APPLICATION altered.
 
SQL> select name,con_id,application_pdb,application_root_con_id from v$containers;
 
NAME CON_ID APPLICATION_PDB APPLICATION_ROOT_CON_ID
---- ------ --------------- -----------------------
SCOTT_ONE 8 YES 6

Now let’s connect to the application PDB. I can see the DEPT rows inserted from root because it is a DATA LINK.

SQL> connect scott/tiger@//localhost/SCOTT_ONE
Connected.
 
SQL> select * from dept;
 
DEPTNO DNAME LOC
------ ----- ---
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

EMP is empty here

SQL> select * from emp;
 
no rows selected

I insert an EMP row in the application PDB which references a DEPT row in the application root:

SQL> INSERT INTO EMP VALUES
2 (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
 
1 row inserted.

As DEPT is an EXTENDED DATA LINK, I can add new rows in my PDB:

SQL> INSERT INTO DEPT VALUES
2 (50,'MY LOCAL DEPT','LAUSANNE');
 
1 row inserted.

And I can have an EMP row referencing this local parent:

SQL> INSERT INTO EMP VALUES
2 (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,50);
1 row inserted.
 
SQL> commit;
Commit complete.

This looks good. Now what happens if we delete all rows from DEPT in the application root?

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
SQL> delete from SCOTT.DEPT;
4 rows deleted.
 
SQL> commit;
Commit complete.

No error here. But then, I have orphans in my application PDB:

SQL> connect scott/tiger@//localhost/SCOTT_ONE
Connected.
SQL> select * from dept;
 
DEPTNO DNAME LOC
---------- -------------- -------------
50 MY LOCAL DEPT LAUSANNE
 
SQL> select * from emp;
 
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

So what?

Referential integrity works across containers: an application PDB can reference a parent key in the application root (provided that the bug above is fixed). However, no ORA-02292 (child record found) is raised when the child records are not in the current container. This one makes sense: enforcing the verification of child records in all PDBs would require that they are opened, and may require locking the table in all containers. We must be aware that doing DML on the application root can lead to inconsistency if not done correctly.

Operations on the application root are application releases (upgrades and patches) and must be validated and tested carefully. For the example above, deleting all rows from DEPT can be done as an application patch which deletes from the EMP table as well:

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
SQL> alter pluggable database application SCOTT begin patch 1 ;
Pluggable database APPLICATION altered.
SQL> delete from scott.emp;
0 rows deleted.
SQL> delete from scott.dept where deptno in (10,20,30,40);
4 rows deleted.
SQL> alter pluggable database application SCOTT end patch 1 ;
Pluggable database APPLICATION altered.

The delete from EMP does nothing in the application root here, but it will be done on the PDB when applying the patch:

SQL> select * from dept;
 
DEPTNO DNAME LOC
---------- -------------- -------------
50 MY LOCAL DEPT LAUSANNE
 
SQL> select * from emp;
 
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

Note that I’ve defined exactly which rows from DEPT I wanted to delete in the where clause of delete from scott.dept where deptno in (10,20,30,40);
You may be tempted to do something like: delete from scott.dept where deptno in (select deptno from scott.dept);
But keep in mind that the statements you run in the root are re-played as-is in the PDBs. And when you sync the PDB, this subquery sees no rows from DEPT, because they were already purged from the root. Actually, what you want is to delete from EMP the rows which refer to the rows you have deleted from the root. It is not possible to get them with a subquery, except if you have stored them into another data link table before deleting them. Changes in the application root must be managed like application patches.
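A rough, untested sketch of this idea (patch number and table name are hypothetical): store the parent keys in a DATA link helper table inside the patch, so that the replayed delete statements can still find them from the PDBs after the root rows are gone:

SQL> alter pluggable database application SCOTT begin patch 2;
SQL> create table SCOTT.DEPT_TO_DELETE sharing=data (deptno number(2));
SQL> insert into SCOTT.DEPT_TO_DELETE select deptno from SCOTT.DEPT where deptno in (10,20,30,40);
SQL> delete from SCOTT.EMP where deptno in (select deptno from SCOTT.DEPT_TO_DELETE);
SQL> delete from SCOTT.DEPT where deptno in (select deptno from SCOTT.DEPT_TO_DELETE);
SQL> alter pluggable database application SCOTT end patch 2;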

 

This article 12cR2 Application Containers and Foreign Keys first appeared on Blog dbi services.

12c Multitenant Internals: VPD for V$ views


I described in an earlier post on AWR views how the dictionary views were using metadata and object links to show information from other containers. But this mechanism cannot work for fixed views (aka V$) because they don’t have their definition in the dictionary.

The big difference is that most of V$ views are available long before the dictionary is opened or even created. Just start an instance in NOMOUNT and you can query the V$ views. Even in multitenant, you can switch to different containers in MOUNT, and query V$ views, when no dictionary is opened.
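For example (a sketch, not from the original transcript), this already works with the instance started in NOMOUNT:

SQL> startup nomount
ORACLE instance started.
SQL> select banner from v$version where rownum=1;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production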

SQL> alter database mount;
Database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED MOUNTED
3 PDB1 MOUNTED
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 PDB1 MOUNTED

V$ views query information from the instance and this information pertain to one container:

  • CON_ID=0 for the CDB itself
  • CON_ID=1 for CDB$ROOT
  • CON_ID=2 for PDB$SEED
  • CON_ID=3 for the first PDB you have created

When you are in root, the V$ views are queried as normal and show all information – from all containers – with their related CON_ID

When you are in a PDB, you must see the objects that belong to your PDB, but not those that belong to other PDBS. But this is not sufficient. For example, you may query the version, and the version is related to the CDB itself, with CON_ID=0:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select * from v$version;
 
BANNER CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production 0
PL/SQL Release 12.2.0.1.0 - Production 0
CORE 12.2.0.1.0 Production 0
TNS for Linux: Version 12.2.0.1.0 - Production 0
NLSRTL Version 12.2.0.1.0 - Production 0

Then, in a PDB, you should see your PDB objects and the CON_ID=0 ones. Oracle needs a new mechanism for that. One way would be to switch to the root, query the V$ and filter on CON_ID. We don’t need that: a context switch is there to access data from a different container tablespace, because tablespaces are not shared. But V$ views expose data from the instance, and the instance is shared: any container can see all rows, and we just want to filter some of them.

Here is the execution plan when querying V$VERSION from a PDB:


SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> explain plan for select * from v$version;
Explained.
 
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1078166315
 
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 68 | 0 (0)| 00:00:01 |
|* 1 | FIXED TABLE FULL| X$VERSION | 1 | 68 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("CON_ID"=0 OR "CON_ID"=3) AND
"INST_ID"=USERENV('INSTANCE'))

An additional predicate (“CON_ID”=0 OR “CON_ID”=3) is added to the query on the view. How is it done? Oracle has a security feature for that: Virtual Private Database – aka Row Level Security – which adds a where clause dynamically.
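To illustrate the mechanism with the public API, here is a sketch of a user-level equivalent on a hypothetical table MY_TABLE (the internal policy on fixed views does not go through DBMS_RLS like this):

SQL> create or replace function con_id_predicate(p_schema in varchar2, p_object in varchar2) return varchar2 as
  2  begin
  3  -- return the same kind of where clause as seen in the plan above
  4  return 'con_id in (0, sys_context(''USERENV'',''CON_ID''))';
  5  end;
  6  /
Function created.
 
SQL> exec dbms_rls.add_policy(object_schema=>'SCOTT',object_name=>'MY_TABLE',policy_name=>'MY_CON_ID_POLICY',function_schema=>'SCOTT',policy_function=>'CON_ID_PREDICATE',statement_types=>'SELECT');
PL/SQL procedure successfully completed.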

One way to get more information about Virtual Private Database is to make its execution fail, and I know that a user with only the SELECT privilege cannot EXPLAIN PLAN (see MOS Note 1029064.6).

I connect to a PDB with a low privileged user:
SQL> connect scott/tiger@//localhost/PDB1
Connected.

I explain plan the V$VERSION fixed view.
SQL> explain plan for select * from v$version;
 
Error starting at line : 10 File @ /media/sf_share/122/blogs/multitenant-vpd.sql
In command -
explain plan for select * from v$version
Error report -
ORA-28113: policy predicate has error
28113. 00000 - "policy predicate has error"
*Cause: Policy function generates invalid predicate.
*Action: Review the trace file for detailed error information.

Interesting error which confirms the guess: this is a VPD error and it generates a trace:
*** 2017-06-26T22:45:17.838507+02:00 (PDB1(3))
*** SESSION ID:(141.17865) 2017-06-26T22:45:17.838537+02:00
*** CLIENT ID:() 2017-06-26T22:45:17.838541+02:00
*** SERVICE NAME:(pdb1) 2017-06-26T22:45:17.838545+02:00
*** MODULE NAME:(java@VM104 (TNS V1-V3)) 2017-06-26T22:45:17.838548+02:00
*** ACTION NAME:() 2017-06-26T22:45:17.838552+02:00
*** CLIENT DRIVER:(jdbcoci : 12.2.0.1.0) 2017-06-26T22:45:17.838555+02:00
*** CONTAINER ID:(3) 2017-06-26T22:45:17.838558+02:00
 
-------------------------------------------------------------
Error information for ORA-28113:
Logon user : SCOTT
Table/View : SYS.V_$VERSION
VPD Policy name : CON_ID
Policy function: SYS.CON_ID
RLS view :
SELECT "BANNER","CON_ID" FROM "SYS"."V_$VERSION" "V_$VERSION" WHERE (con_id IN (0, 3) )
ORA-01039: insufficient privileges on underlying objects of the view
-------------------------------------------------------------

There’s no container switch here, all is running in PDB1 with CON_ID=3 and the internal VPD has added a where clause to filter rows with CON_ID=0 and CON_ID=3

Do not search for the VPD policy name ‘CON_ID’ and function ‘CON_ID’ in the dictionary views, because this works even when the dictionary is not accessible. This is an internal policy used when querying fixed views in multitenant, and it probably reuses only part of the VPD code.

 

This article 12c Multitenant Internals: VPD for V$ views first appeared on Blog dbi services.

Display Data Guard configuration in SQL Developer


The latest version of SQL Developer, the 17.2 one released after Q2 of 2017, has a new item in the DBA view showing the Data Guard configuration. This is the occasion to show how you can cascade the log shipping in Oracle 12c.

A quick note about this new versioning: this is the release for 2017 Q2 and the version number has more digits to mention the exact build time. Here this version is labeled 17.2.0.188.1159 and we can see when it has been built:

SQL> select to_date('17.x.0.188.1159','rr."x.0".ddd.hh24mi') build_time from dual;
 
BUILD_TIME
--------------------
07-JUL-2017 11:59:00

Non-Cascading Standby

Here is my configuration with two standby databases:

DGMGRL> show configuration
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 9 seconds ago)

I have only the LogXptMode defined here, without any RedoRoutes

DGMGRL> show database orcla LogXptMode
LogXptMode = 'SYNC'

with this configuration, the broker has set the following log destination on orcla, orclb and orclc:

INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLA log_archive_dest_1 location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
ORCLA log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
ORCLA log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLB log_archive_dest_1 location=/u01/fast_recovery_area
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLC log_archive_dest_1 location=/u01/fast_recovery_area

In the latest SQL Developer you have the graphical representation of it from the DBA view / Dataguard / console:

[Screenshot: SQL Developer Data Guard console showing the non-cascading configuration]

Cascading Standby

In 12c we can define cascading standby: instead of the primary shipping the redo to all standby databases, you can have the primary shipping to one standby only, and this one can forward the redo to another one. You define that with the RedoRoute property:


DGMGRL> edit database orcla set property redoroutes = '(local:orclb) (orclb:orclc async)';
Property "redoroutes" updated
DGMGRL> edit database orclb set property redoroutes = '(orcla:orclc async) (local:orcla)';
Property "redoroutes" updated

The first route defined in each property is applied when orcla is the primary database:

  • on orcla (local:orclb) means that orcla sends redo to orclb when primary
  • on orclb (orcla:orclc async) means that orclb sends redo to orclc when orcla is primary. LogXptMode is SYNC but overridden here with ASYNC

The second route defined in each property is applied when orclb is the primary database:

  • on orcla (orclb:orclc async) means that orcla sends redo to orclc when orclb is primary. LogXptMode is SYNC but overridden here with ASYNC
  • on orclb (local:orcla) means that orclb sends redo to orcla when primary

With this configuration, and orcla still being the primary, the broker has set the following log destination on orcla, orclb and orclc:


INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLA log_archive_dest_1 location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
ORCLA log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLB log_archive_dest_1 location=/u01/fast_recovery_area
ORCLB log_archive_dest_2 service="ORCLC", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=3
00 db_unique_name="orclc" net_timeout=30, valid_for=(standby_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLC log_archive_dest_1 location=/u01/fast_recovery_area

The show configuration from DGMGRL displays them indented to see the cascading redo shipping:

DGMGRL> show configuration
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database (receiving current redo)
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 27 seconds ago)

And SQL Developer Data Guard console shows:
[Screenshot: SQL Developer Data Guard console showing the cascading configuration]

Switchover

Now the goal of defining several routes is to have all log destinations automatically changed when the database roles change.
I’m doing a switchover:


Connected to "orclb"
Connected as SYSDG.
DGMGRL> switchover to orclb;
Performing switchover NOW, please wait...
New primary database "orclb" is opening...
Operation requires start up of instance "ORCLA" on database "orcla"
Starting instance "ORCLA"...
ORACLE instance started.
Database mounted.
Database opened.
Connected to "orcla"
Switchover succeeded, new primary is "orclb"

Now it is orcla which cascades the orclb redo to orclc:

DGMGRL> show configuration;
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orclb - Primary database
orcla - Physical standby database
orclc - Physical standby database (receiving current redo)
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 74 seconds ago)

Here is how it is displayed from SQL Developer:

[Screenshot: SQL Developer Data Guard console after the switchover]

We have seen how the configuration is displayed from DGMGRL and graphically from SQL Developer. Of course, you can also query the Data Guard configuration:

SQL> select * from V$DATAGUARD_CONFIG;
 
DB_UNIQUE_NAME PARENT_DBUN DEST_ROLE CURRENT_SCN CON_ID
-------------- ----------- --------- ----------- ------
orcla orclb PHYSICAL STANDBY 3407900 0
orclc orcla PHYSICAL STANDBY 3408303 0
orclb NONE PRIMARY DATABASE 0 0

and the broker configuration:

SQL> select * from V$DG_BROKER_CONFIG;
 
DATABASE CONNECT_IDENTIFIER DATAGUARD_ROLE REDO_SOURCE ENABLED STATUS VERSION CON_ID
-------- ------------------ -------------- ----------- ------- ------ ------- ------
orcla ORCLA PHYSICAL STANDBY -UNKNOWN- TRUE 0 11.0 0
orclb ORCLB PRIMARY -N/A- TRUE 0 11.0 0
orclc ORCLC PHYSICAL STANDBY orcla TRUE 0 11.0 0

This is another reason to use the broker. Once the configuration is set up and tested, you have nothing else to think about when you do a switchover: the log archive destinations are automatically updated depending on the database roles.

 

This article Display Data Guard configuration in SQL Developer first appeared on Blog dbi services.

Bequeath connect to PDB: set container in logon trigger?


There are few changes when you go to the multitenant architecture, and one of them is that you must connect with a service name: you cannot connect directly to a PDB with a bequeath (aka local) connection. This post is about a workaround you may have in mind: create a common user and set a logon trigger to ‘set container’. I do not recommend it, and you should really connect with a service. Here is an example.

Imagine that I have a user connecting with bequeath connection to a non-CDB, using user/password without a connection string, the database being determined by the ORACLE_SID. And I want to migrate to CDB without changing anything on the client connection configuration side. The best idea would be to use a service, explicitly or implicitly with TWO_TASK or LOCAL. But let’s imagine that you don’t want to change anything on the client side.

As we can connect only to the CDB$ROOT with a bequeath connection, we have to create a common user. Because the idea is not to change anything in the client configuration, and there’s very little chance that the existing user name starts with C##, I’ll start by removing the mandatory prefix for common users.


SQL> show parameter common_user_prefix
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
common_user_prefix string
 
SQL> alter system set common_user_prefix='' scope=spfile;
System altered.
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
...

Then I create my common user:

SQL> create user MYOLDUSER identified by covfefe container=all;
User created.

This user must be able to connect to the CDB:

SQL> grant create session to MYOLDUSER container=current;
Grant succeeded.

And then I want it to switch immediately to PDB1 using a logon trigger:

SQL> create or replace trigger SET_CONTAINER_AT_LOGON after logon on database
2 when (user in ('MYOLDUSER'))
3 begin
4 execute immediate 'alter session set container=PDB1';
5 end;
6 /
Trigger created.

Once on PDB1 this user will have some privileges, and for the example I will grant him a default role:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> create role MYROLE;
Role created.
 
SQL> grant MYROLE to MYOLDUSER container=current;
Grant succeeded.

The documentation says that When you grant a role to a user, the role is granted as a default role for that user and is therefore enabled immediately upon logon so I don’t need to:

SQL> alter user MYOLDUSER default role MYROLE;
User altered.

But the doc says ‘logon’ and technically I do not logon to PDB1: I just set container. However, if you test it, you will see that default roles are also set on ‘set container’. And anyway, we cannot set a role in a procedure, neither with ‘set role’ nor with dbms_session.set_role:

ORA-06565: cannot execute SET ROLE from within stored procedure

Then, I can now connect locally to the CDB$ROOT with this user:

SQL> connect MYOLDUSER/covfefe
Connected.

And I’m automatically switched to the PDB1:

SQL> show con_name
 
CON_NAME
------------------------------
PDB1

Issue #1: default roles

However the default roles are not set:

SQL> select * from session_roles;
 
no rows selected

I have to set the role once connected:

SQL> set role all;
Role set.
 
SQL> select * from session_roles;
 
ROLE
--------------------------------------------------------------------------------
MYROLE

This is probably not what we want when we cannot change anything on the application side. This is considered a bug (Bug 25081564 : ALTER SESSION SET CONTAINER IN “ON LOGON TRIGGER” IS NOT WORKING), fixed in 18.1 (expected in Q1 2018), and there’s a patch for 12.1 and 12.2: https://updates.oracle.com/download/25081564.html

Issue #2: core dump

There’s another issue. If you run the same with SQLcl, you get a core dump in the client library libclntsh.so, in kpuSetContainerNfy:

SQLcl: Release 17.2.0 Production on Tue Aug 22 22:00:52 2017
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
SQL> connect MYOLDUSER/covfefe
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fcaa172faf6, pid=31242, tid=140510230116096
#
# JRE version: Java(TM) SE Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libclntsh.so.12.1+0x11d8af6] kpuSetContainerNfy+0x66
#
# Core dump written. Default location: /media/sf_share/122/blogs/core or core.31242

There’s an SR opened for that. This is not a no-go here because, the context being no change to the client part, sqlplus will probably be used. However, that’s another point which shows that ‘set container’ in a logon trigger may have some implementation problems.

Issue #3: security

In my opinion, there is a bigger problem here. With sqlplus (or with sqlcl not using a local connection) I can connect to the CDB$ROOT and switch to PDB1. But look at all the commands above… where did I grant the ‘set container’ privilege to MYOLDUSER on the PDB1 container? Nowhere. MYOLDUSER has no create session and no set container privilege on PDB1, but is able to connect to PDB1 thanks to the logon trigger. Of course, the logon trigger is defined by a DBA who knows what he does. But in my opinion, it is not a good idea to bypass the privilege checking.

So what?

With no default roles set, and a connection possible without the right privileges, the security model is bypassed here. And disabling the common user prefix will raise other issues one day with plugging operations. So, in my opinion, this is not a solution to work around the need to connect with a service, especially in a context where you run a legacy application with no possibility to change the way it connects: you just postpone the problems to bigger ones later.

The real solution is to connect to a service (and that’s not difficult even when you can’t change the code, with TWO_TASK environment variable).
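For example, a minimal sketch with EZConnect (assuming EZCONNECT is in the client’s names.directory_path):

[oracle@VM104 ~]$ export TWO_TASK=//localhost/PDB1
[oracle@VM104 ~]$ sqlplus MYOLDUSER/covfefe

The user/password command line does not change, but the bequeath-style invocation is silently turned into a network connection to the PDB1 service.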

 

This article Bequeath connect to PDB: set container in logon trigger? first appeared on Blog dbi services.


When PDB name conflicts with CDB name


Going to the multitenant architecture is not a big change. The administration things (DBA, monitoring, backups) connect to the CDB, and the application things connect to the PDB. Without the multitenant option, it is still recommended to go to the CDB architecture: the non-CDB is deprecated and the multitenant architecture brings interesting features. People often ask how to name the CDB and the PDB, especially when they have naming rules or policies in the company. My recommendation is to name the PDB as you are used to naming the databases: the name often gives an idea of the data that is inside, the application, and the environment. The CDB is the container and, in my opinion, you should apply the same naming rules as for servers. Don’t forget that pluggable databases are made to be moved across CDBs, so the CDB name should not depend on the content.

But, with single tenant, you have a one-to-one relationship between the CDB and the PDB, and then may come the idea to set the same name for the CDB and the PDB… I’m not sure whether it is supported or not, and please, don’t do that.

Service Name

There’s one rule: the service name must be unique on a server, especially when registered to the same listener. The PDB name will be the default service name registered by the PDB. And the DB_UNIQUE_NAME of the CDB will be the default service name registered by the CDB. Then the PDB name must be different from the DB_UNIQUE_NAME.

With this rule, it should be possible to have the same name for the CDB (the DB_NAME) and the PDB, given that we have set a different DB_UNIQUE_NAME.

Here is an example. The name of my Container Database is CDB1. But as it is part of a Data Guard configuration I changed the unique name to CDB1A (and standby will be CDB1B).

Here are the services from by CDB:

SQL> select * from v$services;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH GOAL DTP AQ_HA_NOTIFICATION CLB_GOAL COMMIT_OUTCOME SESSION_STATE_CONSISTENCY GLOBAL PDB SQL_TRANSLATION_PROFILE MAX_LAG_TIME STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT CON_ID
---------- ---- --------- ------------ ------------- ------------------ ---- --- ------------------ -------- ---------------------------------- ------ --- ----------------------- ------------ ----------- ---------------- ------------- ------
7 CDB1A 3104886812 CDB1A 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1
1 SYS$BACKGROUND 165959219 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
2 SYS$USERS 3427055676 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
0 pdb1 1888881990 pdb1 0 NONE N NO SHORT NO NO PDB1 NONE NONE 0 4
6 CDB1XDB 1202503288 CDB1XDB 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1

All are default services: CDB1A is the DB_UNIQUE_NAME, SYS$BACKGROUND is used by background processes, SYS$USERS when connecting without a service name, and CDB1XDB is used to connect to the XDB dispatchers. PDB1 is the default service of my pluggable database PDB1.

I can also look at the services registered in the listener:


SQL> host lsnrctl status
 
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 28-AUG-2017 20:34:36
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 27-AUG-2017 20:41:33
Uptime 0 days 23 hr. 53 min. 3 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/VM104/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "57c2283990d42152e053684ea8c05ea0" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1A" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

There is just one additional service here: the GUID service of my PDB (see https://blog.dbi-services.com/service-696c6f76656d756c746974656e616e74-has-1-instances/)

ORA-65149

Do you see any service named ‘CDB1′ here? No. Then I should be able to create a PDB with this name.

SQL> create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1');
 
Error starting at line : 1 in command -
create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1')
Error report -
ORA-65149: PDB name conflicts with existing service name in the CDB or the PDB
65149. 00000 - "PDB name conflicts with existing service name in the CDB or the PDB"
*Cause: An attempt was made to create a pluggable database (PDB) whose
name conflicts with the existing service name in the container
database (CDB) or the PDB.
*Action: Choose a different name for the PDB.

Ok. This is impossible. However, the error message is not correct: my PDB name does not conflict with any existing service name. It may conflict with the instance name or the DB_NAME, but not with any service.

NID

As I’m not satisfied with this, I try to find another way to have the same name for CDB and PDB. I have a pluggable database named ‘PDB1′ and I’ll try to change the CDB name to this:


[oracle@VM104 ~]$ nid dbname=PDB1 target=sys/oracle
 
DBNEWID: Release 12.2.0.1.0 - Production on Mon Aug 28 20:40:08 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Connected to database CDB1 (DBID=926862412)
 
Connected to server version 12.2.0
 
Control Files in database:
/u01/oradata/CDB1A/control01.ctl
/u01/fast_recovery_area/CDB1A/control02.ctl
 
The following datafiles are read-only:
/u01/oradata/CDB1A/PDB1/USERS2.dbf (17)
These files must be writable by this utility.
 
Change database ID and database name CDB1 to PDB1? (Y/[N]) => Y
 
Proceeding with operation
Changing database ID from 926862412 to 3460932968
Changing database name from CDB1 to PDB1
Control File /u01/oradata/CDB1A/control01.ctl - modified
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - modified
Datafile /u01/oradata/CDB1A/system01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/sysaux01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/undotbs01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/system01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/sysaux01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/users01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/undotbs01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/system01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/sysaux01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/undotbs01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS2.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/temp01.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/temp012017-08-27_18-30-16-741-PM.dbf - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/temp012017-08-27_18-30-16-741-PM.dbf - dbid changed, wrote new name
Control File /u01/oradata/CDB1A/control01.ctl - dbid changed, wrote new name
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - dbid changed, wrote new name
Instance shut down
 
Database name changed to PDB1.
Modify parameter file and generate a new password file before restarting.
Database ID for database PDB1 changed to 3460932968.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.
 
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
ORA-01103: database name 'PDB1' in control file is not 'CDB1'
 
SQL> alter system set db_name=PDB1 scope=spfile;
 
System altered.
 
SQL> shutdown immediate
ORA-01507: database not mounted
 
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
 
SQL> alter database open resetlogs;
 
Database altered.

That’s done.
My CDB is named PDB1:
SQL> select * from v$database;
 
DBID NAME CREATED RESETLOGS_CHANGE# RESETLOGS_TIME PRIOR_RESETLOGS_CHANGE# PRIOR_RESETLOGS_TIME LOG_MODE CHECKPOINT_CHANGE# ARCHIVE_CHANGE# CONTROLFILE_TYPE CONTROLFILE_CREATED CONTROLFILE_SEQUENCE# CONTROLFILE_CHANGE# CONTROLFILE_TIME OPEN_RESETLOGS VERSION_TIME OPEN_MODE PROTECTION_MODE PROTECTION_LEVEL REMOTE_ARCHIVE ACTIVATION# SWITCHOVER# DATABASE_ROLE ARCHIVELOG_CHANGE# ARCHIVELOG_COMPRESSION SWITCHOVER_STATUS DATAGUARD_BROKER GUARD_STATUS SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_PK SUPPLEMENTAL_LOG_DATA_UI FORCE_LOGGING PLATFORM_ID PLATFORM_NAME RECOVERY_TARGET_INCARNATION# LAST_OPEN_INCARNATION# CURRENT_SCN FLASHBACK_ON SUPPLEMENTAL_LOG_DATA_FK SUPPLEMENTAL_LOG_DATA_ALL DB_UNIQUE_NAME STANDBY_BECAME_PRIMARY_SCN FS_FAILOVER_STATUS FS_FAILOVER_CURRENT_TARGET FS_FAILOVER_THRESHOLD FS_FAILOVER_OBSERVER_PRESENT FS_FAILOVER_OBSERVER_HOST CONTROLFILE_CONVERTED PRIMARY_DB_UNIQUE_NAME SUPPLEMENTAL_LOG_DATA_PL MIN_REQUIRED_CAPTURE_CHANGE# CDB CON_ID PENDING_ROLE_CHANGE_TASKS CON_DBID FORCE_FULL_DB_CACHING
---- ---- ------- ----------------- -------------- ----------------------- -------------------- -------- ------------------ --------------- ---------------- ------------------- --------------------- ------------------- ---------------- -------------- ------------ --------- --------------- ---------------- -------------- ----------- ----------- ------------- ------------------ ---------------------- ----------------- ---------------- ------------ ------------------------- ------------------------ ------------------------ ------------- ----------- ------------- ---------------------------- ---------------------- ----------- ------------ ------------------------ ------------------------- -------------- -------------------------- ------------------ -------------------------- --------------------- ---------------------------- ------------------------- --------------------- ---------------------- ------------------------ ---------------------------- --- ------ ------------------------- -------- ---------------------
3460932968 PDB1 27-AUG-17 1495032 28-AUG-17 1408558 27-AUG-17 ARCHIVELOG 1495035 0 CURRENT 27-AUG-17 2574 1496538 28-AUG-17 NOT ALLOWED 27-AUG-17 READ WRITE MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE ENABLED 3460947145 3460947145 PRIMARY 0 DISABLED NOT ALLOWED DISABLED NONE NO NO NO NO 13 Linux x86 64-bit 3 3 1497050 NO NO NO CDB1A 0 DISABLED 0 NO NO YES 0 NOT APPLICABLE 3460932968 NO

And I have a PDB with the same name:

SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED
 
SQL> alter pluggable database PDB1 open;
 
Pluggable database PDB1 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ WRITE NO

What was forbidden with a wrong error message was made possible with this other way.

So what?

Please, do not take this as a solution. There is clearly a problem here. Maybe the documentation and the error message are wrong. Maybe NID has a bug, allowing something that should be blocked. Or maybe CREATE PLUGGABLE DATABASE has a bug, blocking something that should be possible. Until this is fixed (SR opened), I would recommend that the PDB name is always different from the CDB name, independently of the service names. Well, I would recommend it anyway, as the same name brings a lot of confusion: when you mention a database name, people will not know whether you are referring to the CDB or the PDB.

 

This article When PDB name conflicts with CDB name first appeared on Blog dbi services.

12c dbms_stats.gather_table_stats on GTT do not commit


In my UKOUG OracleScene article on 12c online statistics and GTT I mentioned the following:

A final note about those 12c changes in statistics gathering on GTT. In 11g the dbms_stats did a commit at the start. So if you did gather stats after the load, you had to set the GTT as ON COMMIT PRESERVE ROWS. Or you just vacuum what you’ve loaded. That has changed in 12c. If you now choose to do a conventional insert followed by dbms_stats (having set private stats of course) then you don’t need to set on commit preserve rows anymore.

Today, I realized that I’ve never explained exactly when dbms_stats.gather_table_stats commits the transaction or not. Because, of course, it depends. In summary, it does not commit only in this case: 12c, non-SYS owner, GTT, with private statistics.

Here is an example. I connect as non-SYS user:

SQL> connect demo/demo@//localhost/pdb1
Connected.
SQL> show user
USER is "DEMO"

I create a permanent table and a global temporary table:

SQL> create table DEMO(text varchar2(20));
Table created.
 
SQL> create global temporary table DEMOGTT(text varchar2(20));
Table created.

In the permanent table, I insert my row. The goal is to be sure that this insert is not committed and can be rolled back at the end:

SQL> insert into DEMO values('Forget me, please!');
1 row created.

In the global temporary table I insert one row. The goal is to be sure that the row remains until the end of my transaction (on commit delete rows):

SQL> insert into DEMOGTT values('Preserve me, please!');
1 row created.

Here it is:

SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!
 
SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!

Then, I gather statistics on the GTT:

SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.

I check that my rows in the GTT are still there, which proves that no commit happened:

SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!

And I check that, as no commit happened, I can roll back my previous insert on the permanent table:

SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
no rows selected

This is the new behavior in 12c. The same in 11g would have committed my transaction before and after the call to dbms_stats.

GTT only

Here is the same example when gathering the stats on the permanent table:
SQL> show user
USER is "DEMO"
SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
no rows selected
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!

The transaction was committed by dbms_stats here: no rows remain in the GTT (ON COMMIT DELETE ROWS), and the insert into the permanent table was committed before my rollback.

Not for SYS

When connected as SYS:
SQL> show user
USER is "SYS"
SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
no rows selected
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!

The transaction was committed by dbms_stats here: when the table is owned by SYS, dbms_stats commits.

I mean, not for SYS owner

If I’m connected as SYS but gather stats on a non-SYS table, dbms_stats does not commit:

SQL> show user
USER is "SYS"
SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
no rows selected

The behaviour is not related to the user who runs dbms_stats, but to the owner of the GTT.

Private statistics only

The default in 12c for GTTs is private statistics, visible to the session only. Trying the same with shared statistics (as in 11g):
SQL> show user
USER is "DEMO"
 
SQL> select dbms_stats.get_prefs(ownname=>user,tabname=>'DEMOGTT',pname=>'GLOBAL_TEMP_TABLE_STATS') from dual;
 
DBMS_STATS.GET_PREFS(OWNNAME=>USER,TABNAME=>'DEMOGTT',PNAME=>'GLOBAL_TEMP_TABLE
--------------------------------------------------------------------------------
SESSION
 
SQL> exec dbms_stats.set_table_prefs(user,'DEMOGTT','GLOBAL_TEMP_TABLE_STATS','SHARED');
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
no rows selected
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!
 
SQL> exec dbms_stats.set_table_prefs(user,'DEMOGTT','GLOBAL_TEMP_TABLE_STATS',null);
PL/SQL procedure successfully completed.

Here, dbms_stats did commit my transaction: with shared statistics, we are back to the 11g behaviour.

So what?

Private session statistics for GTTs are a great feature. Use it: gather statistics right after filling the GTT. And don’t worry about ON COMMIT DELETE ROWS GTTs (the default), because this statistics gathering does not commit the transaction.
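
To make the pattern concrete, here is a minimal sketch (the GTT name and the loading query are hypothetical):

SQL> create global temporary table STAGE_GTT(text varchar2(20));  -- ON COMMIT DELETE ROWS is the default
SQL> insert into STAGE_GTT select text from SOURCE_TABLE;         -- load within the transaction
SQL> exec dbms_stats.gather_table_stats(user,'STAGE_GTT');        -- private session stats, no commit in 12c
SQL> -- queries in the same transaction use the session-private statistics and still see the rows
SQL> commit;                                                      -- only now are the rows deleted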

 

The article 12c dbms_stats.gather_table_stats on GTT does not commit first appeared on the dbi services Blog.

12c Access Control Lists


There is already enough information about the new simplified 12c way to define Access Control Lists, such as in oracle-base.
I’m just posting my example here to show how easy it is.

If, as a non-SYS user, you want to open a TCP connection to a host, you get an error:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
SQL>
SQL>
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
Error starting at line : 27 File @ /media/sf_share/122/blogs/12cacl.sql
In command -
declare
c utl_tcp.connection;
n number:=0;
begin
c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
end;
Error report -
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_TCP", line 19
ORA-06512: at "SYS.UTL_TCP", line 284
ORA-06512: at line 5
24247. 00000 - "network access denied by access control list (ACL)"
*Cause: No access control list (ACL) has been assigned to the target
host or the privilege necessary to access the target host has not
been granted to the user in the access control list.
*Action: Ensure that an access control list (ACL) has been assigned to
the target host and the privilege necessary to access the target
host has been granted to the user.
SQL>

Here are the ACLs defined by default:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE

So, I add an ACL to allow my user DEMO1 to access towel.blinkenlights.nl on the telnet port (23):

SQL> exec dbms_network_acl_admin.append_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list =>xs$name_list('connect'),principal_name=>'DEMO1',principal_type =>xs_acl.ptype_db));
 
PL/SQL procedure successfully completed.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
towel.blinkenlights.nl 23 23 NETWORK_ACL_5876ADC67B6635CEE053684EA8C0F378 000000008000281F SYS
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE
towel.blinkenlights.nl 23 23 1 GRANT NO DEMO1 DATABASE CONNECT

Now I can connect from my user:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
 
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
PL/SQL procedure successfully completed.

If you don’t know why I used towel.blinkenlights.nl, then just try to telnet to it and have fun…
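
For completeness, the ACE added above can be removed with the same simplified API. Here is a sketch (remove_empty_acl also drops the ACL once no ACE remains):

SQL> exec dbms_network_acl_admin.remove_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list=>xs$name_list('connect'),principal_name=>'DEMO1',principal_type=>xs_acl.ptype_db),remove_empty_acl=>true);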

 

The article 12c Access Control Lists first appeared on the dbi services Blog.

Active Data Guard services in Multitenant


A database (or the CDB in multitenant) registers its name as the default service. When a standby database is on the same server, or same cluster, you have no problem because this database name is the db_unique_name which is different between the primary and the standby(s).

In multitenant, in addition to that, each PDB registers its name as a service. But the PDB name is the same in the primary and the standby database. This means that we have the same service name registered for the PDB in primary and standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

We cannot change that, and this is why it is strongly recommended to create different services for the PDB on the primary and on the standby.

The PDB default service name

Here is what we want to avoid.
I’ve a container database (db_name=CDB2) with its primary (db_unique_name=CDB2A) and standby (db_unique_name=CDB2B) on the same server, registered to the same listener:

Service "59408d6bed2c1c8ee0536a4ea8c0cfa9" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2A" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2AXDB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGMGRL" has 1 instance(s).
Instance "CDB2A", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2B" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2BXDB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGMGRL" has 1 instance(s).
Instance "CDB2B", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2_CFG" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

The PDB1 service is registered by both instances, so when I use it in my connection string I am connected at random to the primary or the standby:

22:27:46 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:27:51 SQL> select * from v$instance;
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:00 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:06 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:07 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:10 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:11 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:13 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE

I don’t want to use a service that connects at random, so I need to create dedicated services.

Read-Only service for the Active Data Guard standby

I’m in Oracle Restart and I create the service with srvctl (but you can also create it with dbms_service when not running with Grid Infrastructure):


srvctl add service -db cdb2b -service pdb1_ro -pdb pdb1 -role physical_standby

This creates the service for the standby database (CDB2B) to be started when in physical standby role, and the service connects to the pluggable database PDB1.
But I cannot start it:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1
 
 
PRCD-1084 : Failed to start service pdb1_ro
PRCR-1079 : Failed to start resource ora.cdb2b.pdb1_ro.svc
CRS-5017: The resource action "ora.cdb2b.pdb1_ro.svc start" encountered the following error:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_SERVICE", line 5
ORA-06512: at "SYS.DBMS_SERVICE", line 288
ORA-06512: at line 1
. For details refer to "(:CLSN00107:)" in "/u01/app/12.2/diag/crs/vm106/crs/trace/ohasd_oraagent_oracle.trc".
 
CRS-2674: Start of 'ora.cdb2b.pdb1_ro.svc' on 'vm106' failed

The reason is that the service information must be stored in the dictionary, SYS.SERVICE$ table, and you cannot do that on a read-only database.
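
The same constraint applies without Grid Infrastructure: the dbms_service calls below (a sketch) update SYS.SERVICE$ and therefore fail on a read-only standby, so you run them on the primary and let the redo apply propagate the dictionary change:

SQL> alter session set container=PDB1;
SQL> exec dbms_service.create_service(service_name=>'pdb1_ro',network_name=>'pdb1_ro');
SQL> exec dbms_service.start_service('pdb1_ro');
SQL> exec dbms_service.stop_service('pdb1_ro');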

This has been explained a long time ago by Ivica Arsov on his blog: https://iarsov.com/oracle/data-guard/active-services-on-physical-standby-database/ and nothing has changed. You need to create the service on the primary so that the update of SYS.SERVICE$ is propagated to the standby database through log shipping:


srvctl add service -db cdb2a -service pdb1_ro -pdb pdb1 -role physical_standby

This is not sufficient, because the insert into SYS.SERVICE$ has not occurred yet:

SQL> alter session set container=PDB1;
 
Session altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- --------- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1

As explained by Ivica in his blog post, we need to start the service once to have the row inserted in SERVICE$:

srvctl start service -db cdb2a -service pdb1_ro -pdb pdb1
srvctl stop service -db cdb2a -service pdb1_ro

Now the service information is persistent in the dictionary:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1
1 pdb1_ro 1562179816 pdb1_ro 15-SEP-17 1301388390 0 0 0 8 PDB1 86400 300 DYNAMIC ANY 0 0 0 0

This is from the primary, but after the redo has been transported and applied, I have the same on the standby. Now I can start the service I’ve created for the standby:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1

Here is the new service registered on the listener, which I can use to connect to the read-only PDB1 on the Active Data Guard standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).

Read-Write service for the primary

You can see above that, in order to select from SERVICE$, I connected to CDB$ROOT and switched to PDB1 with ‘set container’. There is no other choice, because using the service name directs me at random to any instance. So I need a service to connect to the primary only, and I will call it PDB1_RW because the PDB is opened in Read Write there.

srvctl add service -db cdb2a -service pdb1_rw -pdb pdb1 -role primary
srvctl start service -db cdb2a -service pdb1_rw

Finally, here are the services registered from the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...

I’ll probably never use the ‘PDB1’ service because I want to know where I am connecting to.

In case of switchover, I also create the Read Write service for the standby:

srvctl add service -db cdb2b -service pdb1_rw -pdb pdb1 -role primary

Here are the resources when CDB2A is the primary:

$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2a.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------

I test a switchover to CDB2B:

$ dgmgrl sys/oracle
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Sep 15 23:41:26 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
Connected to "CDB2B"
Connected as SYSDG.
DGMGRL> switchover to cdb2b;
Performing switchover NOW, please wait...
New primary database "cdb2b" is opening...
Oracle Clusterware is restarting database "cdb2a" ...
Switchover succeeded, new primary is "cdb2b"

Here are the services:

[oracle@VM106 blogs]$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2a.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
--------------------------------------------------------------------------------

So what?

The recommendations are not new here:

  • Always do the same on the primary and the standby: create the services on both sites, and have them started depending on the role
  • Always use one or several application services rather than the default one, in order to have better control and flexibility over where you connect

In multitenant, because services are mandatory to connect to a container with a local user, all the recommendations about services are even more important than before. If you follow them, you will see that multitenant is not difficult at all.
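
For example, the application connect strings then reference only the role-based services (the aliases and host below are illustrative):

PDB1_RW=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vm106)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=pdb1_rw)))
PDB1_RO=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vm106)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=pdb1_ro)))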

This case may seem improbable, because you probably don’t put the standby on the same server or cluster as the primary. But you may have several standby databases on the same server. As for the service registered from the PDB name, just don’t use it. I am more concerned about the GUID service name (here 59408d6bed2c1c8ee0536a4ea8c0cfa9), which is also registered by both databases. If you plan to use online PDB relocate in a Data Guard configuration, be careful with that. I have not tested it, but it is probably better to keep the standby PDB closed, or at least not to register it on the same listener.

 

The article Active Data Guard services in Multitenant first appeared on the dbi services Blog.

Wrong result with multitenant, dba_constraints and current_schema


Multitenant architecture is not such a big change, and this is why I recommend it when you start a project in 12c or when you upgrade to 12.2 (after thoroughly testing your application, of course). However, there is one area where you may encounter problems, because internally it is really a big change: dictionary queries. The dictionary separation has several side effects. You should carefully test the queries you run on the dictionary views to get metadata. Here is an example of a bug I recently encountered.

This happened with a combination of things you should not do very often, and not in a critical use case: querying the dictionary for constraints owned by your current schema when it is different from the user you are connected with.

I create two users: USER1 and USER2
SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> grant dba to USER1 identified by USER1 container=current;
Grant succeeded.
SQL> grant dba to USER2 identified by USER2 container=current;
Grant succeeded.

USER1 owns a table which has a constraint:

SQL> connect USER1/USER1@//localhost/PDB1
Connected.
SQL> create table DEMO(dummy constraint pk primary key) as select * from dual;
Table DEMO created.

USER2 can access the table either by prefixing it with USER1 or by setting the current_schema to USER1:

SQL> connect USER2/USER2@//localhost/PDB1
Connected.
SQL> alter session set current_schema=USER1;
Session altered.

Bug

Ok, now imagine you want to read constraint metadata for the current schema you have set:

SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 /
 
no rows selected

No rows selected is a wrong result here because my current_schema is USER1 and USER1 has constraints:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = 'USER1'
4 /
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

So, where’s the problem? Let’s have a look at the execution plan:

SQL_ID 2fghqwz1cktyf, child number 0
-------------------------------------
select sys_context('USERENV','CURRENT_SCHEMA'), a.* from
sys.dba_constraints a where owner =
sys_context('USERENV','CURRENT_SCHEMA')
 
Plan hash value: 1258862619
 
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.32 | 2656 |
| 1 | PARTITION LIST ALL | | 1 | 2 | 0 |00:00:00.32 | 2656 |
|* 2 | EXTENDED DATA LINK FULL| INT$INT$DBA_CONSTRAINTS | 2 | 2 | 0 |00:00:00.32 | 2656 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter((("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR
("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER(SY
S_CONTEXT('USERENV','CON_ID')))) AND "OWNER"=SYS_CONTEXT('USERENV','CURRENT_SCHEMA')))

I am in 12.2, and DBA_CONSTRAINTS reads from INT$DBA_CONSTRAINTS, which reads from INT$INT$DBA_CONSTRAINTS. In multitenant, this last view is an extended data view, which reads from CDB$ROOT and from the current container. This is why we see EXTENDED DATA LINK FULL in the execution plan, and up to this point the predicate is correct: “OWNER”=SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’)
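
If you want to follow that chain yourself, the view definitions are visible in the dictionary (a quick sketch; output omitted):

SQL> select view_name,text from dba_views
  2  where view_name in ('DBA_CONSTRAINTS','INT$DBA_CONSTRAINTS','INT$INT$DBA_CONSTRAINTS');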

The execution through the data link is run on each container by parallel processes: they switch to the container and run the underlying query on the view. But when I look at the SQL trace of the parallel process running the query on my PDB, I can see that, in the predicate on OWNER, the SYS_CONTEXT(‘USERENV’,’CURRENT_SCHEMA’) has been replaced by a hardcoded value:

SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) OPT_PARAM('_ENABLE_VIEW_PDB', 'FALSE') */ OWNER,CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,OBJECT_TYPE#,SEARCH_CONDITION,SEARCH_CONDITION_VC,R_OWNER,R_CONSTRAINT_NAME,DELETE_RULE,STATUS,DEFERRABLE,DEFERRED,VALIDATED,GENERATED,BAD,RELY,LAST_CHANGE,INDEX_OWNER,INDEX_NAME,INVALID,VIEW_RELATED,ORIGIN_CON_ID FROM NO_COMMON_DATA(SYS."INT$INT$DBA_CONSTRAINTS") "INT$INT$DBA_CONSTRAINTS" WHERE ("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR "INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER('3')) AND "INT$INT$DBA_CONSTRAINTS"."OWNER"=q'"USER2"'

And unfortunately, this value is not the right one: USER2 is my connected user, but not the CURRENT_SCHEMA that I have set. In the same trace, I can see where this value comes from:

select 'q''"' || SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') || '"''' from sys.dual

but it seems that the current_schema was lost through the call to the parallel process and the switch to my PDB container.

Workaround

The problem is easy to work around. This works:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = ( select sys_context('USERENV','CURRENT_SCHEMA') from dual )
4 /
 
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

And anyway, it is better to get the current schema beforehand and pass it as a bind variable. Bind variables are passed correctly through data link queries:


SQL> variable v varchar2(30)
SQL> exec select sys_context('USERENV','CURRENT_SCHEMA') into :v from dual;
 
PL/SQL procedure successfully completed.
 
SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 --where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 where owner = :v
5 /

So what?

The multitenant architecture is a real challenge for dictionary views. The dictionary is separated: system metadata in CDB$ROOT and user metadata in the PDB. But, for compatibility with the non-CDB architecture, the dictionary views must show both, and this is where it becomes complex: what was separated on purpose now has to be merged. And complexity is subject to bugs. If you want to get an idea, have a look at dcore.sql in ORACLE_HOME/rdbms/admin and compare the 11g version with the 12c ones, with all the evolution in 12.1.0.1, 12.1.0.2 and 12.2.0.1.

 

The article Wrong result with multitenant, dba_constraints and current_schema first appeared on the dbi services Blog.
