
Oracle 12c – DB_UNKNOWN in ASM


Have you ever noticed a DB_UNKNOWN directory in your ASM structure? It usually happens in combination with spfile creation in ASM, or with RMAN spfile restores to ASM.

The correct location is +DATA/<SID>/PARAMETERFILE/SPFILE.<#>.<#>, with an ASM alias in +DATA/<SID>/ pointing to it.

But sometimes the spfile ends up in +DATA/DB_UNKNOWN/PARAMETERFILE/SPFILE.<#>.<#> instead.

Technically, this is not an issue. The spfile in the DB_UNKNOWN directory is perfectly fine and can be used. However, you might need to adjust your init<SID>.ora in case you have a configuration like the following:

oracle@oel001:/u00/app/oracle/product/12.1.0.2/dbs/ [OCM121] cat initOCM121.ora
SPFILE='+data/DB_UNKNOWN/PARAMETERFILE/SPFILE.293.927371209'

Maybe you have a 4-node RAC; then you need to adjust it on every node. Maybe you have a cluster resource with an spfile entry; then you need to adjust that one as well. And besides that, to which database does DB_UNKNOWN belong? Imagine you have 20 databases running and you need to find out which of them has something in the DB_UNKNOWN directory, in case there are multiple entries.

No … it is not a good situation. It has to be corrected. But how?

First of all, let’s create a situation that ends up with a DB_UNKNOWN directory.

It is quite easy to do. Typically it happens with spfile restores or with a “create spfile from pfile”:

  1. Shut down the DB
  2. Start up the RMAN dummy instance
  3. Restore the spfile to a pfile
  4. Shut down the instance
  5. Adjust the pfile
  6. Create the spfile from the pfile while the DB is shut down

Here is an example with 12cR1 (12.1.0.2). I am jumping directly to the RMAN restore, because the RMAN dummy instance was already explained in http://blog.dbi-services.com/oracle-12c-when-the-rman-dummy-instance-does-not-start-up/

Ok. Let’s check the current location of the spfile of the cluster resource.

oracle@oel001:/home/oracle/ [OCM121] srvctl config database -d OCM121 | grep -i spfile
Spfile: +DATA/OCM121/spfileOCM121.ora

Now we can run the RMAN restore of the spfile to a pfile. Restoring it to a pfile first has the advantage that we can take a look at all settings and maybe adjust them before we put it back into production.

run {
restore spfile to pfile '/tmp/initOCM121.ora' for db_unique_name='OCM121' from
'+fra/OCM121/AUTOBACKUP/2016_10_29/s_926511850.517.926511853';
}

Starting restore at 08-NOV-2016 11:01:04
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=364 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=12 device type=DISK
channel ORA_DISK_2: skipped, AUTOBACKUP already found
channel ORA_DISK_1: restoring spfile from AUTOBACKUP +fra/OCM121/AUTOBACKUP/2016_10_29/s_926511850.517.926511853
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 08-NOV-2016 11:01:14

The pfile was successfully created. Now we can correct some settings in the pfile if we want and then create a spfile again.

oracle@oel001:/home/oracle/ [OCM121] ls -l /tmp/initOCM121.ora
-rw-r--r-- 1 oracle asmadmin 1777 Nov  8 11:01 /tmp/initOCM121.ora

Ok. Let’s create the new spfile while the DB is shut down.

oracle@oel001:/home/oracle/ [OCM121] sqh

SQL*Plus: Release 12.1.0.2.0 Production on Tue Nov 8 11:03:56 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile='+DATA' from pfile='/tmp/initOCM121.ora';

File created.

Oops … and now it happened. The DB_UNKNOWN directory has been created. While the database is shut down, Oracle does not know the DB_NAME, and so it has to create a placeholder directory to store the spfile.

ASMCMD> pwd
+data
ASMCMD> ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    CDB121/
                                        Y    DB_UNKNOWN/
                                        Y    OCM121/

ASMCMD> pwd
+data/DB_UNKNOWN/PARAMETERFILE
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   NOV 08 11:00:00  Y    SPFILE.293.927371209

However, this is not the configuration that we want. To correct it, clean up the DB_UNKNOWN entries as shown below, start your DB in nomount state and then execute the create spfile from pfile command again.
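
Here is a minimal cleanup sketch with ASMCMD, reusing the file name from the listing above (make sure no instance still references this spfile before removing it):

ASMCMD> rm +data/DB_UNKNOWN/PARAMETERFILE/SPFILE.293.927371209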

SQL> startup nomount pfile=/tmp/initOCM121.ora
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             436211328 bytes
Database Buffers         1157627904 bytes
Redo Buffers               13848576 bytes

SQL> create spfile='+DATA' from pfile='/tmp/initOCM121.ora';

File created.

And here we go. The spfile is in the correct location.

ASMCMD> pwd
+data/OCM121/PARAMETERFILE
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   NOV 08 11:00:00  Y    spfile.291.927372029

The only thing missing is the ASM alias. That one has to be created manually afterwards.

ASMCMD> cd +data/OCM121
ASMCMD> mkalias +data/OCM121/PARAMETERFILE/spfile.291.927372029 spfileOCM121.ora
ASMCMD>
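
With the alias in place, the configuration can be pointed back to it. A sketch using the names from this example (the -p option is the classic srvctl syntax for the spfile; adjust the init<SID>.ora on every node if you run RAC):

# initOCM121.ora now only needs:
SPFILE='+DATA/OCM121/spfileOCM121.ora'

# and the cluster resource can be updated accordingly:
srvctl modify database -d OCM121 -p '+DATA/OCM121/spfileOCM121.ora'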

Conclusion

It makes a big difference whether you create your spfile in the nomount state or while the database is shut down. You might end up with a totally different directory structure in ASM. With 12.1.0.2 and 11.2.0.4 the nomount state is enough to end up in the correct location. In earlier versions you might need to start up in mount state to have the same effect.

Cheers,
William

 

 

 



Oracle 12cR2: Pluggable database relocation


Here is, in my opinion, the most beautiful feature of the multitenant architecture. You know how I love Transportable Tablespaces. But here:

  • No need to put the source in read/only
  • No need to export/import the metadata logically
  • No need for any option: available even in Standard Edition

Standard Edition

I am in Standard Edition here in both source and target, no option required for this:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 13:40:05 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

Source: PDB1 on CDB1

On server opc1 I have a container database CDB1 with one pluggable database PDB1 where I create a new table:

23:40:20 (opc1)CDB1 SQL>alter session set container=PDB1;
Session altered.
23:40:20 (opc1)CDB1 SQL>create table DEMO as select current_timestamp insert_timestamp,instance_name from v$instance;
Table created.
23:40:21 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:21 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1

Export encryption key

I’m in Oracle Public Cloud where tablespaces are encrypted. To ship a pluggable database I must export the keys. Here is the query to get them:

23:40:23 (opc1)CDB1 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AWlnBaUXG0/gv4evS9Ywu8EAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

And I can filter with this query to export it:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

I can’t do that with an auto-login wallet.

23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ AUTOLOGIN

Let’s open the wallet with password:

23:40:23 (opc1)CDB1 SQL>administer key management set keystore close;
keystore altered.
23:40:23 (opc1)CDB1 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ PASSWORD

and re-try my export:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

This file must be copied to the destination server. I did it with scp. You can also use dbms_file_transfer, as you will need a database link anyway for the remote clone (see the sketch below).
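
A minimal dbms_file_transfer sketch, assuming a directory object TMP_DIR created on /tmp on both servers and the CDB1 database link from the next section. Note that dbms_file_transfer requires the file size to be a multiple of 512 bytes, so scp is often simpler for a small keystore export:

SQL> create directory TMP_DIR as '/tmp';
SQL> -- run on the destination: pull the file from the source through the CDB1 database link
SQL> exec dbms_file_transfer.get_file('TMP_DIR','cdb2pdb1.p12','CDB1','TMP_DIR','cdb2pdb1.p12');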

Import encryption key

On the destination server, where I have no user PDB yet (I’m limited to one PDB here without the multitenant option):

23:40:31 (opc2)CDB2 SQL>show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO

I have to import the encryption key:

23:40:31 (opc2)CDB2 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
23:40:31 (opc2)CDB2 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

I’m now ready to relocate my PDB, as I’m sure I’ll be able to open it.

Database link

The remote clone is done through a DB link. I have a TNS entry named CDB1:

23:40:31 (opc2)CDB2 SQL>select dbms_tns.resolve_tnsname('CDB1') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('CDB1')
--------------------------------------------------------------------------------
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=opc1)(PORT=1521))(CONNECT_DAT
A=(SERVER=DEDICATED)(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM
=oracle)(HOST=SE222.compute-opcoct.oraclecloud.internal)(USER=oracle))))
 
23:40:31 (opc2)CDB2 SQL>create database link CDB1 connect to C##DBA identified by oracle using 'CDB1';
Database link created.

DML on source

In order to show that the source doesn’t have to be read-only as in previous releases, I’m running the following inserts every 5 seconds:

23:40:44 (opc1)CDB1 SQL>commit;
Commit complete.
23:40:44 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:44 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
6 rows selected.

PDB remote clone

Here is the syntax.
I need to provide the masterkey of the source wallet.
RELOCATE is the new feature: the source PDB will be relocated to the destination when the clone is opened.

23:40:48 (opc2)CDB2 SQL>create pluggable database PDB1 from PDB1@CDB1 keystore identified by "Ach1z0#d" relocate;
Pluggable database created.
23:41:08 (opc2)CDB2 SQL>

It took some time, shipping the datafiles through the DB link, but this is online.
I was still inserting during this time:

23:41:04 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

Note that you need to be in ARCHIVELOG mode and use LOCAL UNDO to be able to do this, because synchronisation will be done by media recovery when we open the clone.
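
A quick way to verify both prerequisites on the source:

SQL> -- the source must be in ARCHIVELOG mode
SQL> select log_mode from v$database;
SQL> -- LOCAL_UNDO_ENABLED should be TRUE
SQL> select property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';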

Open the clone

Now, the theory is that when we open the clone, DML is quiesced on the source during the recovery of the target, and sessions can continue on the target once it is opened.

23:41:09 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
alter pluggable database PDB1 open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
23:41:26 (opc2)CDB2 SQL>

Bad luck. Every time I tested this scenario, the first open after the relocate fails in deadlock and the session on the source crashes:

23:41:09 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
10-NOV-16 11.41.09.773300 PM +00:00 CDB1
 
11 rows selected.
 
23:41:14 (opc1)CDB1 SQL> commit;
ERROR:
ORA-03114: not connected to ORACLE

It’s a good occasion to look at the traces.
We can see some messages about the recovery:

*** 2016-11-10T23:41:12.660402+00:00 (PDB1(3))
Media Recovery Log /u03/app/oracle/fast_recovery_area/CDB1/foreign_archivelog/PDB1/2016_11_10/o1_mf_1_24_2025109931_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

These FOREIGN ARCHIVED LOGs are a new type of file that you will see in the FRA in 12.2.

So I lost my session on source and now if I try again it works:

23:42:20 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
Pluggable database altered.
23:42:24 (opc2)CDB2 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

All the inserts that were committed on the source are there.
Even with this deadlock bug (SR 3-13618219421), it’s the easiest and fastest way to migrate a database, with minimal downtime. Especially in Standard Edition, where transportable tablespaces import is not enabled.
Without the deadlock bug, the sessions on the source are supposed to keep running, only paused during the recovery, and then continue on the destination.

 


Oracle 12cR2: MAX_PDBS


Oracle Database 12.2 is there on the Database Cloud Service, in multitenant. In EE High Performance or Extreme Performance, you have the multitenant option: you can create 4096 pluggable databases (instead of 252 in 12.1). If you are in a lower service, you can create only one user PDB (not counting application roots and proxy PDBs). If you are in Standard Edition, it’s simple: it is a hard limit. If you are in plain Enterprise Edition without the option, then you have a way to be sure you stay under the limit: the MAX_PDBS parameter.

Containers and Pluggable Databases

A CDB is a container (CON_ID=0) that contains containers:

  • CDB$ROOT (CON_ID=1)
  • PDB$SEED (CON_ID=2)
  • User created PDBs (CON_ID between 3 and 4098)

Here is how I show it:
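
A simple query on V$CONTAINERS, run from CDB$ROOT, lists the containers and their CON_ID (the names will of course depend on your own CDB):

SQL> select con_id, name, open_mode from v$containers order by con_id;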

MAX_PDBS

In 12.1 you have no supported way to prevent creating more than one PDB. In 12.2 you have a parameter, MAX_PDBS, which is documented as the maximum number of user-created pluggable databases. You could expect its maximum to be 4096, but it is actually 4098, and this is the default value:

SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 4098

So to be sure, let’s create many pluggable databases.

I have one pluggable database, PDB1, opened in read-only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ ONLY NO

And use the following script to clone them:
for i in {1..5000} ; do echo "connect / as sysdba"; echo "create pluggable database pdb$i$RANDOM from pdb1 snapshot copy;" ; echo 'select max(con_id),count(*) from dba_pdbs;' ; echo "host df -h /u01 /u02" ; done | sqlplus / as sysdba
until it fails with:

SQL> create pluggable database pdb49613971 from pdb1 snapshot copy
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Note that I use clonedb=true snapshot copy because I don’t want to fill up my filesystem:

SQL> show parameter clonedb
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
clonedb boolean TRUE
clonedb_dir string /u02/oradata/CDB1
 
SQL> host du -h /u02/oradata/CDB1/CDB1_bitmap.dbf
31M /u02/oradata/CDB1/CDB1_bitmap.dbf

As you see I’ve put the bitmap file outside of $ORACLE_HOME/dbs because in 12.2 we have a parameter for that. So many new features…
In addition to that, I had to increase sga, processes and db_files (see the example below).
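
For reference, something along these lines, with arbitrary values chosen for this test (not recommendations):

SQL> alter system set sga_target=10G scope=spfile;
SQL> alter system set processes=3000 scope=spfile;
SQL> alter system set db_files=20000 scope=spfile;
SQL> -- restart the instance so that the static parameters take effect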

Here I have my 4097 PDBs

SQL> select max(con_id),count(*) from dba_pdbs;
 
MAX(CON_ID) COUNT(*)
----------- ----------
4098 4097

which includes PDB$SEED. This means 4098 containers inside of my CDB:

SQL> select max(con_id),count(*) from v$containers;
 
MAX(CON_ID) COUNT(*)
----------- ----------
4098 4098

SQL> set pagesize 1000 linesize 1000
SQL> select min(con_id),max(con_id),count(*),substr(listagg(name,',' on overflow truncate)within group(order by con_id),1,30) from v$containers;
 
MIN(CON_ID) MAX(CON_ID) COUNT(*) SUBSTR(LISTAGG(NAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYCON_ID),1,30)
----------- ----------- ---------- ------------------------------------------------------------------------------------------------------------------------
1 4098 4098 CDB$ROOT,PDB$SEED,PDB1,PDB2105

So basically you can’t reach the MAX_PDBS default with user created PDBs.

But…

What is really cool with ‘cloud first’ is that we can test it, all on the same platform, and probably hit bugs that will be fixed before the on-premises release. This is a great way to ensure that the version is stable by the time we put production on it.

I have one PDB:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 12:12:25 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
12:12:25 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED

I drop it:

12:12:26 SQL> drop pluggable database pdb1 including datafiles;
Pluggable database dropped.

I set MAX_PDBS to one:

 
12:12:44 SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 4098
 
12:13:24 SQL> alter system set max_pdbs=1 scope=memory;
System altered.
 
12:13:45 SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 1

And then try to re-create my PDB:

12:13:54 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

This is not what I expected. Let’s try to increase MAX_PDBS to two, even if I’m sure to have only one user PDB:


12:14:07 SQL> alter system set max_pdbs=2 scope=memory;
System altered.
 
12:14:18 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
Pluggable database created.

Ok. Let’s drop it and re-create it again:

12:15:20 SQL> drop pluggable database PDB1 including datafiles;
Pluggable database dropped.
 
12:16:02 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

That’s bad. It seems that the previously dropped PDBs are still counted:

12:16:07 SQL> alter system set max_pdbs=3 scope=memory;
System altered.
 
12:16:17 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
Pluggable database created.
 
12:17:10 SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED
12:18:14 SQL> drop pluggable database PDB1 including datafiles;
Pluggable database dropped.
 
12:18:28 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Probably a small bug there. Some counters not reset maybe.

I’ve dropped one PDB from the CDB where I reached the limit of 4096:


SQL> select count(*) from dba_pdbs where con_id>2;
 
COUNT(*)
----------
4095

I can set MAX_PDBS to 4095 if I want to prevent creating a new one:

SQL> alter system set max_pdbs=4095;
System altered.

What if I want to set it lower than the number of PDBs I have? An error message would be nice, but probably not this one:

SQL> alter system set max_pdbs=4094;
alter system set max_pdbs=4094
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65331: DDL on a data link table is outside an application action.

Anyway, now that MAX_PDBS is set to 4095 I can’t create another one:

SQL> create pluggable database PDB2 from PDB1;
create pluggable database PDB2 from PDB1
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

which is the goal of this parameter, and confirms that it counts the user-created PDBs and not all the containers.

Here it seems that I can re-create my last PDB when I increase the MAX_PDBS:

SQL> alter system set max_pdbs=4096;
System altered.
SQL> create pluggable database PDB2 from PDB1;
Pluggable database created.

By the way, here is how the multitenant feature usage is detected:

SQL> select name feature_name,version,detected_usages,aux_count
from dba_feature_usage_statistics
where name like '%Pluggable%' or name like '%Multitenant%';
 
FEATURE_NAME
--------------------------------------------------------------------------------
VERSION DETECTED_USAGES AUX_COUNT
----------------- --------------- ----------
Oracle Multitenant
12.2.0.1.0 3 4096

The detected usage just means that I’m in a CDB. The AUX_COUNT tells me if I require the multitenant option. But that’s for a future blog post.

 


12cR2 has new SQL*Plus features


12cR2 is there. What’s new in SQL*Plus? For sure, you can’t expect a lot of new things from it. The new command line is SQL Developer’s sqlcl, which aims to be 100% compatible with SQL*Plus with a lot more features. However, a few little things came anyway: a default editor, command line history, and easy row/LOB prefetch and statement caching.

_EDITOR

Yes, it seems that the default editor is finally ‘vi’ instead of ‘ed’. This is a great improvement. Of course, you can set the VISUAL environment variable in your system. But when you come to another environment (which consultants do), this default will save a lot of “define _editor=vi” keystrokes.

The environment variables EDITOR and VISUAL are not set:

SQL> host set | grep -E "(^EDITOR|^VISUAL)"
 
SQL>

but _EDITOR in SQL*Plus is set to ‘vi’:

SQL> define
DEFINE _DATE = "11-NOV-16" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)
DEFINE _USER = "SYS" (CHAR)
DEFINE _PRIVILEGE = "AS SYSDBA" (CHAR)
DEFINE _SQLPLUS_RELEASE = "1202000100" (CHAR)
DEFINE _EDITOR = "vi" (CHAR)
DEFINE _O_VERSION = "Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production" (CHAR)
DEFINE _O_RELEASE = "1202000100" (CHAR)
DEFINE 1 = "sqlplus" (CHAR)
DEFINE _RC = "1" (CHAR)

Here is the default. For sure, vi is better than ‘ed’. ‘ed’ was the line editor from the era of 2400-baud networks.
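
If you are not yet on 12.2, you can get the same default by adding the define to your site or user glogin.sql (standard location assumed):

-- in $ORACLE_HOME/sqlplus/admin/glogin.sql
define _editor=vi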

Command history

Yes. In 2016 SQL*Plus has a command line history. Do you need it? Probably not. If you are on Windows, you can navigate with arrow-up and arrow-down in any command line program. If you are on Linux, you have probably installed rlwrap. And finally, if you want to do something friendly on the command line, you probably use sqlcl.

However, in 12cR2 a very basic history has been introduced to SQL*Plus.
You have to enable it:

SQL> show HISTORY
history is OFF
SQL> set HISTORY on
SQL> show history
history is ON and set to "100"

so the default is 100 lines, but you can increase it:

SQL> set HISTORY 999999
SP2-0267: history option 999999 out of range (0 through 100000)
SQL> set HISTORY 100000

what can you do with it?

SQL> help HISTORY
 
HISTORY
-------
 
Stores, lists, executes, edits of the commands
entered during the current SQL*Plus session.
 
HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]  
N is the entry number listed in the history list.
Use this number to recall, edit or delete the command.
 
Example:
HIST 3 RUN - will run the 3rd entry from the list.
 
HIST[ORY] without any option will list all entries in the list.

Here are some examples:

SQL> show history
history is OFF
SQL> set history on
SQL> show history
history is ON and set to "100"
SQL> prompt 1
1
SQL> prompt 2
2
SQL> history
1 show history
2 prompt 1
3 prompt 2
 
SQL> history list
1 show history
2 prompt 1
3 prompt 2
 
SQL> history 2 run
1
SQL> history 2 edit
 
SQL> history 2 delete
SQL> history
1 show history
2 prompt 2
3 prompt 1
 
SQL> history clear
SQL> history
SP2-1651: History list is empty.

As you see, it’s not the most user-friendly. But for the basic DBA tasks that you do on a server, you may find it safer than the up-arrow. Imagine that a ‘shutdown immediate’ is in the history. Do you want to take the risk of running it because some network latency made you run the line above the one you wanted? Or do you prefer to be sure to have read the command before running it?

SET LOBPREF[ETCH], SET ROWPREF[ETCH], and SET STATEMENTC[ACHE].

Here are important performance improvements:

SQL> show lobprefetch
lobprefetch 0
SQL> show rowprefetch
rowprefetch 1
SQL> show statementcache
statementcache is 0

Those are things that you could already do with OCI or JDBC and that you can now easily do in SQL*Plus: prefetch rows and LOBs to avoid fetch roundtrips, and use statement caching to avoid parse calls (a quick example below).
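
For example, they can be set per session or in glogin.sql (the values here are just for illustration):

SQL> set rowprefetch 200
SQL> set lobprefetch 16384
SQL> set statementcache 20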

I’ll probably blog about prefetch in a future post, so for the moment here is a quick demo of statement caching.

By default, statement caching is off. I run 3 times the same query:

select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache hits 15
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.370333 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.370333 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (total) 6
11-NOV-16 05.00.41.370333 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache hits 22
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.373429 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.373429 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (total) 7
11-NOV-16 05.00.41.373429 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache hits 29
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.375993 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.375993 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (total) 8
11-NOV-16 05.00.41.375993 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (describe) 0
 
8 rows selected.

You can see that each one had its parse call. Of course, it’s not a hard parse because the cursor is shared. It’s not even a real soft parse thanks to the session cursor cache. But it’s still a parse call.

Let’s set statement caching to one and run the query again 3 times:

set statementcache 1
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache hits 36
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.378937 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.378937 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.378937 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache hits 42
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.381403 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.381403 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.381403 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache hits 48
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.383844 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.383844 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.383844 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (describe) 0
 
8 rows selected.

One more parse call only. The cursor was cached at client side.

How many statements can you cache?

SQL> set statementcache 999999
SP2-0267: statementcache option 999999 out of range (0 through 32767)

from 1 to 32767. The value 0 disables statement caching.

set statementcache 32767

Not yet in 12.2 ?

If you did not yet upgrade to 12.2, you still have a way to use statement caching: you can set it in oraaccess.xml, which can enable those optimizations for all OCI clients (a sketch below).
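
A rough sketch of such an oraaccess.xml, placed in the client TNS_ADMIN directory (the exact header and schema should be checked against the OCI client documentation; the values are arbitrary):

<?xml version="1.0" encoding="ASCII" ?>
<oraaccess xmlns="http://xmlns.oracle.com/oci/oraaccess">
  <default_parameters>
    <prefetch>
      <rows>100</rows>
    </prefetch>
    <statement_cache>
      <size>20</size>
    </statement_cache>
  </default_parameters>
</oraaccess>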

sqlplus -F

Those performance settings can be set to default values with the ‘-F’ argument.
Let’s see which settings are different:

[oracle@OPC122 ~]$ sqlplus -s / as sysdba <<< "store set a.txt replace"
Wrote file a.txt
[oracle@OPC122 ~]$ sqlplus -s -F / as sysdba <<< "store set b.txt replace"
Wrote file b.txt
[oracle@OPC122 ~]$ diff a.txt b.txt
3c3
< set arraysize 15
---
> set arraysize 100
31c31
< set lobprefetch 0
---
> set lobprefetch 16384
46c46
< set rowprefetch 1
---
> set rowprefetch 2
59c59
< set statementcache 0
---
> set statementcache 20

Those settings avoid roundtrips and unnecessary work. The documentation says that PAGESIZE is also set to a higher value, but I don’t see it here, and anyway it’s about formatting output and not about performance.

VARIABLE

You may use SQL*Plus to test queries with bind variables. Here is what you do before 12.2:

SQL> variable text char
SQL> exec :text:='X'
 
PL/SQL procedure successfully completed.
 
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

You can now simply:

SQL> variable text char='X'
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

SQLcl the SQLDeveloper command line

Since 11g, SQL Developer is shipped in ORACLE_HOME, and in 12.2 it includes SQLcl, the SQL Developer command line that is fully compatible with SQL*Plus scripts.
The version we have on the DBCS lacks the executable flag and the right JAVA_HOME:

[oracle@SE222 ~]$ /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
-bash: /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql: Permission denied
[oracle@SE222 ~]$
[oracle@SE222 ~]$ bash /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:16:48 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
USER =
URL = jdbc:oracle:oci8:@
Error Message = No suitable driver found for jdbc:oracle:oci8:@
USER =
URL = jdbc:oracle:thin:@127.0.0.1:1521:CDB2
Error Message = No suitable driver found for jdbc:oracle:thin:@127.0.0.1:1521:CDB2
USER =
URL = jdbc:oracle:thin:@localhost:1521/orcl
Error Message = No suitable driver found for jdbc:oracle:thin:@localhost:1521/orcl
Username? (RETRYING) ('/ as sysdba'?)

I’ve defined the following alias:

alias sql='JAVA_HOME=$ORACLE_HOME/jdk bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'

and I’m ready to run it:

[oracle@SE222 ~]$ sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:20:15 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL>

I like SQLcl except for one thing: it’s in Java and takes long to start:

[oracle@SE222 ~]$ time sql /nolog
real 0m2.184s
user 0m3.054s
sys 0m0.149s

2 seconds is long when you run it frequently. Compare with sqlplus:

[oracle@SE222 ~]$ time sqlplus /nolog
real 0m0.015s
user 0m0.008s
sys 0m0.006s

 


12cR2 new index usage tracking


A common question is: how do I know which indexes are not used, so that I can drop them? If you have tried index monitoring, you have probably seen its limits, which make it difficult to use. It has been improved in 12.2, so let’s check whether it helps to remove the fear of performance regression when we drop an index… or not.

I’ll check two views here. Here is what documentation says about them:

  • DBA_INDEX_USAGE displays cumulative statistics for each index.
  • V$INDEX_USAGE_INFO keeps track of index usage since the last flush. A flush occurs every 15 minutes.
    After each flush, ACTIVE_ELEM_COUNT is reset to 0 and LAST_FLUSH_TIME is updated to the current time.

The documentation about V$INDEX_USAGE_INFO shows a column INDEX_STATS_COLLECTION_TYPE whose description explains that by default the statistics are collected based on sampling (only a few of the executions are considered when collecting the statistics). The type of collection that collects the statistics for each execution may have a performance overhead.

SAMPLED

I’ve found an undocumented parameter to control this collection; it defaults to ‘SAMPLED’ and I’ll set it to ‘ALL’ to get a deterministic test case:
17:53:51 SQL> alter session set "_iut_stat_collection_type"=ALL;
Session altered.

So this is the first problem with how reliable index usage tracking is. If your boss is running a report once a month which needs an index, you may miss that execution, think that the index is unused, and decide to drop it. And you will have a regression. Do you want to take that risk based on sampled monitoring?

Execution using index

On the SCOTT schema I’m running a query that uses the index PK_DEPT

17:53:51 SQL> set autotrace on explain
Autotrace Enabled
Displays the execution plan only.
 
17:53:51 SQL> select * from emp join dept using(deptno) where ename like 'K%';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM DNAME LOC
10 7839 KING PRESIDENT 17-nov 00:00:00 5000 ACCOUNTING NEW YORK
 
Explain Plan
-----------------------------------------------------------
Plan hash value: 3625962092
 
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 117 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMP | 1 | 87 | 2 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 30 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
3 - filter("EMP"."ENAME" LIKE 'K%')
4 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- this is an adaptive plan
 
17:53:52 SQL> set autotrace off
Autotrace Disabled

When I look at the index usage tracking views, I don’t see this usage and the reason is that the last flush is from before the execution:

17:53:52 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 8 30790 13-NOV-16 05.48.12.218000000 PM 3
 
17:53:52 SQL> select * from dba_index_usage where owner='SCOTT';
 
no rows selected

The statistics are gathered in memory and are flushed to the dictionary every 15 minutes. For the moment, I’ve not found how to flush them manually, so I just wait 900 seconds:


17:53:52 SQL> host sleep 900
 
18:10:32 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 9 45898 13-NOV-16 06.03.13.344000000 PM 3
 
18:10:32 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 1 1 1 0 1 0 0 0 0 0 0 0 0 13-nov 18:03:13

Here is my index usage recorded: one execution, one row returned from the index.

DBMS_STATS

One drawback of the old index monitoring was that statistics gathering set the monitoring flag to ‘YES’. Let’s see if it’s better in 12.2:


18:10:32 SQL> exec dbms_stats.gather_index_stats('SCOTT','PK_DEPT');
PL/SQL procedure successfully completed.

Again, waiting 15 minutes to get it flushed (and check LAST_FLUSH_TIME):


18:10:32 SQL> host sleep 900
 
18:27:12 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 1 3 30000 10 48136 13-NOV-16 06.18.13.748000000 PM 3
 
18:27:12 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

It seems that the index usage tracking has been incremented here. Total rows returned is incremented by 4, which is the number of rows in DEPT read by dbms_stats.
This will make it very difficult to detect unused indexes, because we can expect that even unused indexes have statistics gathered on them.

Index on Foreign Key to avoid table locks

There’s another risk when we drop an index. It may not be used for access, but it may be there to avoid a TM Share lock on a child table when deleting rows from the referenced parent table. This is again something that is not monitored. When the parent table has few rows, like some lookup tables, the index on the foreign key will probably not be used to access the child rows, or to check that there are no child rows when you delete a parent one: a full scan will be faster. But an index on the foreign key is still required to avoid locking the whole child table when we delete rows from the parent.

Let’s create such an index.


18:27:12 SQL> create index FK_EMP on EMP(DEPTNO);
Index FK_EMP created.

I’ll delete DEPTNO=50, and I can verify that the check for child rows is done without needing the index:


SQL_ID 1v3zkdftt0vv7, child number 0
-------------------------------------
select * from emp where deptno=50
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.01 | 3 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 1 | 0 |00:00:00.01 | 3 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DEPTNO"=50)

Let’s delete the parent row and see if the index is used or not.


19:19:47 SQL> delete from DEPT where deptno='50';
0 rows deleted.
19:19:47 SQL> commit;
Commit complete.

This does not lock the EMP table thanks to the presence of the FK_EMP index. If the index were not there, a TM Share lock would have been acquired, which prevents concurrent DML on the EMP table (at least).


19:19:48 SQL> host sleep 900
 
19:34:48 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 0 3 30000 12 48152 13-NOV-16 07.24.11.086000000 PM 3
 
19:34:48 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

No additional index usage has been detected. Do you take the risk of dropping the index? Probably not. Even making the index invisible does not lower the risk. You may check DBA_TAB_MODIFICATIONS to know if the parent table is subject to deletes (see the example below), but what if some transactions update the referenced key? This is also a case of TM Share lock, and it happens more often than we think (for example when Hibernate updates all columns, even those that do not change).
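
For example, flushing the monitoring info first so that recent DML is visible (SCOTT.DEPT as in this demo):

SQL> exec dbms_stats.flush_database_monitoring_info;
SQL> select table_name, inserts, updates, deletes, timestamp
     from dba_tab_modifications
     where table_owner='SCOTT' and table_name='DEPT';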

So what?

The new index usage tracking in 12.2 is very nice to get statistics on index usage, better than the simple ‘YES/NO’ flag we had before. But detecting which index is not used and can be safely dropped is still complex, and it requires application knowledge and comprehensive non-regression testing.
There is nothing yet that can tell you that everything would have been the same if an index were not there.
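
If you want a starting list of candidates anyway, a sketch like the following compares DBA_INDEXES with the tracking view. It only gives candidates to investigate, not indexes that are safe to drop:

SQL> select i.owner, i.index_name
     from dba_indexes i
     where i.owner='SCOTT'
     and not exists (select 1 from dba_index_usage u
                     where u.owner=i.owner and u.name=i.index_name);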

 


12cR2 Single-Tenant: Multitenant Features for All Editions


Now that 12.2 is there, in the Oracle Public Cloud Service, I can share the slides of the presentation I made for Oracle Open World:

I’ll give the same session in French, in Geneva on November 23rd at Oracle Switzerland. Ask me if you want an invitation.

The basic idea is that non-CDB is deprecated, and not available in the Oracle Public Cloud. If you don’t purchase the Multitenant Option, then you will use ‘Single-Tenant’. And in 12.2 there are interesting features coming with it. Don’t fear it. Learn it and benefit from it.


In addition to that, I’ll detail

  • The 12.2 new security feature coming with multitenant: at DOAG 2016
  • The internals of multitenant architecture: at UKOUG TECH16

And don’t hesitate to come to the dbi services booth for questions and/or demos about Multitenant.
There’s also the book I co-authored: Oracle Database 12c Release 2 Multitenant (Oracle Press) which should be available within a few weeks.

 


12cR2: CREATE_FILE_DEST for PDB isolation


Two years ago I filed an OTN idea to ‘Constrain PDB datafiles into a specific directory’ and it became an enhancement request for 12c Release 2. When you provision a PDB, the PDB admin can create tablespaces and put datafiles anywhere in your system. Of course this is not acceptable in a cloud environment. 12.1 has a parameter for directories (PATH_PREFIX) and 12.2 brings CREATE_FILE_DEST for datafiles.

create_file_dest

Here is the new option when you create a pluggable database:


SQL> create pluggable database PDB1 admin user admin identified by password role=(DBA)
create_file_dest='/u02/app/oracle/oradata/CDB2/PDB1';
 
Pluggable database created.

Let’s see where are my datafiles:


SQL> alter pluggable database PDB1 open;
Pluggable database altered.
SQL> alter session set container=PDB1;
Session altered.
SQL> select name from v$datafile;
 
NAME
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_system_d2od2o7b_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_sysaux_d2od2o7j_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_undotbs1_d2od2o7l_.dbf

My files have been created in the CREATE_FILE_DEST directory specified at PDB creation, and with an OMF structure.
So maybe I don’t want to include the CDB name and the PDB name but only a mount point.

If, as a local user, I try to create a datafile elsewhere I get an error:

SQL> connect admin/password@//localhost/pdb1.opcoct.oraclecloud.internal
Connected.
SQL> create tablespace APPDATA datafile '/tmp/appdata.dbf' size 5M;
create tablespace APPDATA datafile '/tmp/appdata.dbf' size 5M
*
ERROR at line 1:
ORA-65250: invalid path specified for file - /tmp/appdata.dbf

This is exactly what I wanted.

Because I’m bound to this directory, I don’t need to give an absolute path:

SQL> create tablespace APPDATA datafile 'appdata.dbf' size 5M;
 
Tablespace created.
 
SQL> select name from v$datafile;
 
NAME
-------------------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_system_d2od2o7b_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_sysaux_d2od2o7j_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_undotbs1_d2od2o7l_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/appdata.dbf

So you don’t need to use OMF there. If the PDB administrator wants to name the datafiles, he can, as long as they stay under the create_file_dest directory. You can also create a datafile in a sub-directory of create_file_dest, but the directory needs to exist of course (see the example below).
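
A quick sketch, with an arbitrary sub-directory name:

SQL> host mkdir /u02/app/oracle/oradata/CDB2/PDB1/app
SQL> create tablespace APP2 datafile 'app/app2.dbf' size 5M;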

db_create_file_dest

Here it just looks like OMF, so I check the db_create_file_dest parameter:


SQL> show parameter file_dest
 
NAME TYPE VALUE
------------------------------------ ----------- ---------------------------------
db_create_file_dest string /u02/app/oracle/oradata/CDB2/PDB1

and I try to change it (as local user):


SQL> connect admin/password@//localhost/pdb1.opcoct.oraclecloud.internal;
Connected.
SQL> alter system set db_create_file_dest='/tmp';
alter system set db_create_file_dest='/tmp'
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-01031: insufficient privileges
 
SQL> alter session set db_create_file_dest='/tmp';
ERROR:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01031: insufficient privileges

No need to use a lockdown profile here; it is verified at runtime that a local user cannot change it.

If you are connected with a common user, here connected as sysdba, this is the way to change what has been specified at PDB creation time:


SQL> show con_id
 
CON_ID
------------------------------
3
 
SQL> alter system set db_create_file_dest='/tmp';
System altered.
 
SQL> create tablespace APP1;
Tablespace created.
 
SQL> select name from v$datafile;
 
NAME
--------------------------------------------------------------------------------
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_system_d2od2o7b_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_sysaux_d2od2o7j_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_undotbs1_d2od2o7l_.dbf
/u02/app/oracle/oradata/CDB2/PDB1/appdata.dbf
/tmp/CDB2/415260E5D27B5D4BE0534E186A0A4CB8/datafile/o1_mf_app1_d2ohx5sp_.dbf

But…

The behavior when you create the PDB with the CREATE_FILE_DEST clause is different from when you create it without and set db_create_file_dest later. In the second case, the restriction does not occur and a local DBA can create a datafile wherever he wants.

So I wanted to check whether this attribute is shipped when plugging PDBs. When looking at the pdb_descr_file xml file I don’t see anything different except the parameter:

<parameters>
<parameter>processes=300
<parameter>nls_language='AMERICAN'
<parameter>nls_territory='AMERICA'
<parameter>filesystemio_options='setall'
<parameter>db_block_size=8192
<parameter>encrypt_new_tablespaces='CLOUD_ONLY'
<parameter>compatible='12.2.0'
<parameter>db_files=250
<parameter>open_cursors=300
<parameter>sql92_security=TRUE
<parameter>pga_aggregate_target=1775294400
<parameter>sec_protocol_error_trace_action='LOG'
<parameter>enable_pluggable_database=TRUE
<spfile>*.db_create_file_dest='/u02/app/oracle/oradata/CDB2/PDB1'
</parameters>

So I tried to unplug/plug my PDB and the restriction is gone. So be careful.

I’ve not found a documented way to check whether the restriction is enabled (except trying to create a file outside of db_create_file_dest). Please comment if you know one.
However, it seems that a flag in CONTAINER$ is unset when the restriction is there:

SQL> create pluggable database PDB1 admin user admin identified by password role=(DBA) create_file_dest='/u02/app/oracle/oradata/CDB2/PDB1';
Pluggable database created.
 
SQL> select con_id#,flags,decode(bitand(flags, 2147483648), 2147483648, 'YES', 'NO') from container$;
 
CON_ID# FLAGS DEC
---------- ---------- ---
1 0 NO
2 3221487616 YES
3 1610874880 NO

Creating the same PDB but without the create_file_dest clause also shows ‘NO’ for this bit (but note the different FLAGS value):

create pluggable database PDB1 admin user admin identified by password role=(DBA);
Pluggable database created.
 
SQL> select con_id#,flags,decode(bitand(flags, 2147483648), 2147483648, 'YES', 'NO') from container$;
 
CON_ID# FLAGS DEC
---------- ---------- ---
1 0 NO
2 3221487616 YES
3 1074003968 NO

I suppose that it is stored elsewhere, because those flags are set only once the PDB is opened.

 


12cR2: Upgrade by unplug/plug in the Oracle Cloud Service

    12.2 is available in the Oracle Public Cloud DBaaS. If you have a 12.1 DBaaS service, there’s no button to upgrade it. I’ll describe the possible upgrade procedures; the first one, and the simplest, is to create a new DBaaS service in 12.2 and unplug/plug the PDBs to it.

    Here is my DBaaS in 12.1

    [oracle@HP121A ~]$ sqlplus / as sysdba
     
    SQL*Plus: Release 12.1.0.2.0 Production on Sat Nov 19 14:47:04 2016
     
    Copyright (c) 1982, 2014, Oracle. All rights reserved.
     
     
    Connected to:
    Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
    and Real Application Testing options
     
    SQL> show pdbs
     
    CON_ID CON_NAME OPEN MODE RESTRICTED
    ---------- ------------------------------ ---------- ----------
    2 PDB$SEED READ ONLY NO
    3 PDB1 READ WRITE NO

    Unplug

    I close the PDB1 and unplug it.

    SQL> alter pluggable database PDB1 close;
     
    Pluggable database altered.
     
    SQL> alter pluggable database PDB1 unplug into '/tmp/PDB1.xml';
     
    Pluggable database altered.

    Copy files

    I’ve opened ssh between the two VMs and copied the xml file:
    [oracle@HP122A tmp]$ scp 141.144.32.166:/tmp/PDB1.xml .
    The authenticity of host '141.144.32.168 (141.144.32.168)' can't be established.
    RSA key fingerprint is 84:e4:e3:db:67:20:e8:e2:f7:ff:a6:4d:9e:ee:a4:08.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '141.144.32.168' (RSA) to the list of known hosts.
    PDB1.xml 100% 6118 6.0KB/s 00:00

    From the xml file I see which files are referenced:
    [oracle@HP121A ~]$ grep path /tmp/PDB1.xml
    <path>/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E/datafile/o1_mf_system_d30owr5v_.dbf</path>
    <path>/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E/datafile/o1_mf_sysaux_d30owr69_.dbf</path>
    <path>/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E/datafile/o1_mf_temp_d30owr6h_.dbf</path>

    and copy them

    [oracle@HP122A tmp]$ scp -r 141.144.32.168:/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E /u02/app/oracle/oradata/HP121A
    o1_mf_temp_d30owr6h_.dbf 100% 20MB 20.0MB/s 00:00
    o1_mf_system_d30owr5v_.dbf 100% 270MB 135.0MB/s 00:02
    o1_mf_sysaux_d30owr69_.dbf 100% 570MB 114.0MB/s 00:05

    Plug

    It’s only one command to plug it into the 12.2 CDB:

    [oracle@HP122A tmp]$ sqlplus / as sysdba
     
    SQL*Plus: Release 12.2.0.1.0 Production on Sat Nov 19 14:50:26 2016
     
    Copyright (c) 1982, 2016, Oracle. All rights reserved.
     
    Connected to:
    Oracle Database 12c EE High Perf Release 12.2.0.1.0 - 64bit Production
     
    SQL> show pdbs
     
    CON_ID CON_NAME OPEN MODE RESTRICTED
    ---------- ------------------------------ ---------- ----------
    2 PDB$SEED READ ONLY NO
     
    SQL> create pluggable database PDB1 using '/tmp/PDB1.xml';
    Pluggable database created.

    At that point, you can drop it from the source (a sketch below), but you will probably just remove the whole service once you are sure the migration is ok.
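
    To drop it on the source while keeping the copied datafiles until you are sure everything is fine, a minimal sketch (run on the 12.1 CDB):

    SQL> drop pluggable database PDB1 keep datafiles;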

    Upgrade

    That was easy, but it was only the transportation of the PDB to another CDB; it cannot be opened so easily in a newer-version CDB. When we open the PDB we get a warning:
    SQL> alter pluggable database PDB1 open;
     
    Warning: PDB altered with errors.

    and have to look at PDB_PLUG_IN_VIOLATIONS:

    SQL> select MESSAGE from pdb_plug_in_violations order by time
     
    MESSAGE
    -----------------------------------------------------------------------------------------------------------
    APEX mismatch: PDB installed version NULL CDB installed version 5.0.4.00.12
    CDB is using local undo, but no undo tablespace found in the PDB.
    CDB parameter compatible mismatch: Previous '12.1.0.2.0' Current '12.2.0'
    Database option APS mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option CATALOG mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option CATJAVA mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option CATPROC mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option CONTEXT mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option DV mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option JAVAVM mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option OLS mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option ORDIM mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option OWM mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option SDO mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option XDB mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option XML mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    Database option XOQ mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
    PDB's version does not match CDB's version: PDB's version 12.1.0.2.0. CDB's version 12.2.0.1.0.

    Each component reports a version mismatch. We have to upgrade them by running catupgrd.sql.
    In 12.2 we have a new shell script, dbupgrade, located in ORACLE_HOME/bin, which calls catctl.pl and catupgrd.sql to make this easier. As with catcon.pl, we have the ‘-c’ argument to run it on PDB1 only:

    [oracle@HP122A tmp]$ $ORACLE_HOME/bin/dbupgrade -c PDB1

    How long does it take? Documentation says that:
    It is easier to apply a patch to one CDB than to multiple non-CDBs and to upgrade one CDB than to upgrade several non-CDBs.
    So this suggests that most of the upgrade work is done at CDB level: the PDBs have only metadata links to it, so their dictionary is mostly virtual.

    More than 3 years after the multitenant architecture was released, there are big doubts about the time it takes to upgrade a PDB plugged from a previous version:

    So I keep the answer for the next blog post.

     

    Cet article 12cR2: Upgrade by unplug/plug in the Oracle Cloud Service est apparu en premier sur Blog dbi services.


12cR2: How long to upgrade a PDB?


In the previous post I described how simple it is to unplug a PDB and plug it into a newer-version CDB. One goal of dictionary separation in the multitenant architecture is to keep system objects in CDB$ROOT only. Knowing that an upgrade does not touch the application metadata and data, does this make a PDB upgrade as fast as a simple refresh of metadata links?

CDB$ROOT upgrade

As a point of comparison I’ve run an upgrade on an empty CDB from 12.1.0.2 to 12.2.0.1 and here is the summary:

Oracle Database 12.2 Post-Upgrade Status Tool 11-19-2016 14:04:51
[CDB$ROOT]  
Component Current Version Elapsed Time
Name Status Number HH:MM:SS
 
Oracle Server UPGRADED 12.2.0.1.0 00:11:19
JServer JAVA Virtual Machine UPGRADED 12.2.0.1.0 00:04:29
Oracle Real Application Clusters UPGRADED 12.2.0.1.0 00:00:00
Oracle Workspace Manager UPGRADED 12.2.0.1.0 00:00:41
OLAP Analytic Workspace UPGRADED 12.2.0.1.0 00:00:14
Oracle OLAP API UPGRADED 12.2.0.1.0 00:00:08
Oracle Label Security UPGRADED 12.2.0.1.0 00:00:05
Oracle XDK UPGRADED 12.2.0.1.0 00:01:01
Oracle Text UPGRADED 12.2.0.1.0 00:00:31
Oracle XML Database UPGRADED 12.2.0.1.0 00:01:33
Oracle Database Java Packages UPGRADED 12.2.0.1.0 00:00:07
Oracle Multimedia UPGRADED 12.2.0.1.0 00:01:22
Spatial UPGRADED 12.2.0.1.0 00:04:46
Oracle Application Express VALID 5.0.0.00.31 00:00:02
Oracle Database Vault UPGRADED 12.2.0.1.0 00:00:15
Final Actions 00:01:50
Post Upgrade 00:00:12
 
Total Upgrade Time: 00:29:17 [CDB$ROOT]

This was running on an Oracle Public Cloud DBaaS with two OCPUs, which means four threads. It takes about 30 minutes to upgrade the system dictionary and all components.
Those are the times we are used to. Since 12c some operations are parallelized to make it faster than in previous versions.

The more components you install, the longer it takes. Even though it is recommended to install all components in a CDB in case a PDB needs them, you may want to think twice about this.

PDB upgrade

When you plug a PDB, you should not have all this work to do. You can expect that the metadata links and data links just work, now pointing to the new version. At most, a quick check or refresh may be necessary to ensure that object types did not change.

At UKOUG TECH16, in 12c Multitenant: Not a Revolution, Just an Evolution, I demo how those links work internally and show that a full catupgrd.sql run on each container is not required for each object. However, the dbupgrade script runs it anyway. Let's see if it is optimized for pluggable databases.

In 12.2 the command is easy:

[oracle@HP122A tmp]$ $ORACLE_HOME/bin/dbupgrade -c PDB1

You can see that it runs the catctl.pl commands that we used in 12.1

Start processing of PDB1
[/u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catctl.pl -c 'PDB1' -I -i pdb1 -n 2 -l /home/oracle /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catupgrd.sql]

Here is what will be run.

Number of Cpus = 2
Database Name = HP122A
DataBase Version = 12.2.0.1.0
Generated PDB Inclusion:[PDB1] CDB$ROOT Open Mode = [OPEN] Components in [PDB1] Installed [APEX APS CATALOG CATJAVA CATPROC CONTEXT DV JAVAVM OLS ORDIM OWM SDO XDB XML XOQ] Not Installed [EM MGW ODM RAC WK]

Summary is here:

Oracle Database 12.2 Post-Upgrade Status Tool 11-19-2016 15:25:15
[PDB1]  
Component Current Version Elapsed Time
Name Status Number HH:MM:SS
 
Oracle Server UPGRADED 12.2.0.1.0 00:08:59
JServer JAVA Virtual Machine UPGRADED 12.2.0.1.0 00:02:16
Oracle Real Application Clusters UPGRADED 12.2.0.1.0 00:00:00
Oracle Workspace Manager UPGRADED 12.2.0.1.0 00:00:27
OLAP Analytic Workspace UPGRADED 12.2.0.1.0 00:00:22
Oracle OLAP API UPGRADED 12.2.0.1.0 00:00:07
Oracle Label Security UPGRADED 12.2.0.1.0 00:00:03
Oracle XDK UPGRADED 12.2.0.1.0 00:00:40
Oracle Text UPGRADED 12.2.0.1.0 00:00:18
Oracle XML Database UPGRADED 12.2.0.1.0 00:01:25
Oracle Database Java Packages UPGRADED 12.2.0.1.0 00:00:03
Oracle Multimedia UPGRADED 12.2.0.1.0 00:01:13
Oracle Application Express VALID 5.0.0.00.31 00:00:02
Oracle Database Vault UPGRADED 12.2.0.1.0 00:00:40
Final Actions 00:01:49
Post Upgrade 00:01:17
 
Total Upgrade Time: 00:23:55 [PDB1]  
Database time zone version is 18. It is older than current release time
zone version 26. Time zone upgrade is needed using the DBMS_DST package.
 
Grand Total Upgrade Time: [0d:0h:25m:0s]

When you compare with the CDB$ROOT upgrade, the gain is very small. We saved 25% of the Oracle Server time, and JVM and XDK were twice as fast. But in the end, that's only 5 minutes.

It is important to understand that the upgrade time depends on the components installed. Here is the percentage of time per component:

[Chart: percentage of upgrade time per component]

About the core of the database, what we know as catalog/catproc, here is the detail showing which phases are run in parallel.
Note that the phase number is important because in 12.2 you can restart a failed upgrade from where it stopped.


------------------------------------------------------
Phases [0-117] Start Time:[2016_11_19 15:00:37] Container Lists Inclusion:[PDB1] Exclusion:[NONE] ------------------------------------------------------
*********** Executing Change Scripts ***********
Serial Phase #:0 [PDB1] Files:1 Time: 36s
*************** Catalog Core SQL ***************
Serial Phase #:1 [PDB1] Files:5 Time: 39s
Restart Phase #:2 [PDB1] Files:1 Time: 1s
*********** Catalog Tables and Views ***********
Parallel Phase #:3 [PDB1] Files:19 Time: 23s
Restart Phase #:4 [PDB1] Files:1 Time: 0s
************* Catalog Final Scripts ************
Serial Phase #:5 [PDB1] Files:6 Time: 15s
***************** Catproc Start ****************
Serial Phase #:6 [PDB1] Files:1 Time: 12s
***************** Catproc Types ****************
Serial Phase #:7 [PDB1] Files:2 Time: 9s
Restart Phase #:8 [PDB1] Files:1 Time: 0s
**************** Catproc Tables ****************
Parallel Phase #:9 [PDB1] Files:70 Time: 48s
Restart Phase #:10 [PDB1] Files:1 Time: 1s
************* Catproc Package Specs ************
Serial Phase #:11 [PDB1] Files:1 Time: 12s
Restart Phase #:12 [PDB1] Files:1 Time: 1s
************** Catproc Procedures **************
Parallel Phase #:13 [PDB1] Files:97 Time: 8s
Restart Phase #:14 [PDB1] Files:1 Time: 1s
Parallel Phase #:15 [PDB1] Files:118 Time: 11s
Restart Phase #:16 [PDB1] Files:1 Time: 1s
Serial Phase #:17 [PDB1] Files:13 Time: 3s
Restart Phase #:18 [PDB1] Files:1 Time: 1s
***************** Catproc Views ****************
Parallel Phase #:19 [PDB1] Files:33 Time: 25s
Restart Phase #:20 [PDB1] Files:1 Time: 0s
Serial Phase #:21 [PDB1] Files:3 Time: 8s
Restart Phase #:22 [PDB1] Files:1 Time: 1s
Parallel Phase #:23 [PDB1] Files:24 Time: 82s
Restart Phase #:24 [PDB1] Files:1 Time: 1s
Parallel Phase #:25 [PDB1] Files:11 Time: 42s
Restart Phase #:26 [PDB1] Files:1 Time: 0s
Serial Phase #:27 [PDB1] Files:1 Time: 0s
Serial Phase #:28 [PDB1] Files:3 Time: 5s
Serial Phase #:29 [PDB1] Files:1 Time: 0s
Restart Phase #:30 [PDB1] Files:1 Time: 0s
*************** Catproc CDB Views **************
Serial Phase #:31 [PDB1] Files:1 Time: 2s
Restart Phase #:32 [PDB1] Files:1 Time: 1s
Serial Phase #:34 [PDB1] Files:1 Time: 0s
***************** Catproc PLBs *****************
Serial Phase #:35 [PDB1] Files:283 Time: 17s
Serial Phase #:36 [PDB1] Files:1 Time: 0s
Restart Phase #:37 [PDB1] Files:1 Time: 0s
Serial Phase #:38 [PDB1] Files:1 Time: 3s
Restart Phase #:39 [PDB1] Files:1 Time: 1s
*************** Catproc DataPump ***************
Serial Phase #:40 [PDB1] Files:3 Time: 49s
Restart Phase #:41 [PDB1] Files:1 Time: 1s
****************** Catproc SQL *****************
Parallel Phase #:42 [PDB1] Files:13 Time: 51s
Restart Phase #:43 [PDB1] Files:1 Time: 0s
Parallel Phase #:44 [PDB1] Files:12 Time: 8s
Restart Phase #:45 [PDB1] Files:1 Time: 1s
Parallel Phase #:46 [PDB1] Files:2 Time: 2s
Restart Phase #:47 [PDB1] Files:1 Time: 1s
************* Final Catproc scripts ************
Serial Phase #:48 [PDB1] Files:1 Time: 5s
Restart Phase #:49 [PDB1] Files:1 Time: 1s
************** Final RDBMS scripts *************
Serial Phase #:50 [PDB1] Files:1 Time: 16s

In the summary, when we compare with the CDB$ROOT upgrade, we don't see the Spatial component that took 4 minutes there, but we do see it in the detail:

***************** Upgrading SDO ****************
Restart Phase #:81 [PDB1] Files:1 Time: 1s
Serial Phase #:83 [PDB1] Files:1 Time: 23s
Serial Phase #:84 [PDB1] Files:1 Time: 4s
Restart Phase #:85 [PDB1] Files:1 Time: 1s
Serial Phase #:86 [PDB1] Files:1 Time: 5s
Restart Phase #:87 [PDB1] Files:1 Time: 0s
Parallel Phase #:88 [PDB1] Files:3 Time: 110s
Restart Phase #:89 [PDB1] Files:1 Time: 0s
Serial Phase #:90 [PDB1] Files:1 Time: 4s
Restart Phase #:91 [PDB1] Files:1 Time: 1s
Serial Phase #:92 [PDB1] Files:1 Time: 4s
Restart Phase #:93 [PDB1] Files:1 Time: 0s
Parallel Phase #:94 [PDB1] Files:4 Time: 30s
Restart Phase #:95 [PDB1] Files:1 Time: 0s
Serial Phase #:96 [PDB1] Files:1 Time: 3s
Restart Phase #:97 [PDB1] Files:1 Time: 1s
Serial Phase #:98 [PDB1] Files:1 Time: 22s
Restart Phase #:99 [PDB1] Files:1 Time: 0s
Serial Phase #:100 [PDB1] Files:1 Time: 3s
Restart Phase #:101 [PDB1] Files:1 Time: 1s
Serial Phase #:102 [PDB1] Files:1 Time: 2s
Restart Phase #:103 [PDB1] Files:1 Time: 1s

So what?

From what we see, even though the multitenant architecture consolidates the system dictionary in only one place – the CDB$ROOT – we get no real gain at upgrade time. In the current implementation (12.2.0.1) the same work is done in all containers, with only minimal optimization for pluggable databases, where we have metadata links instead of full object metadata.
In summary:

  • Upgrading by plug-in or remote clone is faster than upgrading the whole CDB, because the CDB has more containers to process, such as PDB$SEED
  • But upgrading a single PDB, whatever the method is, is not faster than upgrading a non-CDB

I’m talking about upgrade of the container here. Transportable tablespaces/database is a different thing.

More about the Multitenant internals, dictionary separation, metadata links and data links (was called object links in 12.1) at UKOUG TECH 2016 conference next month.


 

Cet article 12cR2: How long to upgrade a PDB? est apparu en premier sur Blog dbi services.

12cR2: Upgrade by remote clone – workaround ORA-17630 in DBaaS


Easier than unplug/plug, you can move pluggable databases with remote cloning. It's the same idea, but you don't have to manage the files yourself: they are shipped through a database link. However, this uses the ‘remote file protocol’ and it fails with a version mismatch:
ORA-17628: Oracle error 17630 returned by remote Oracle server
ORA-17630: Mismatch in the remote file protocol version client server

Remote cloning

I’ll describe the full operation of remote cloning in a future post. This is the error I got when I tried to remote clone from 12.1 to 12.2:
13:43:55 HP122A SQL> create pluggable database PDB1 from PDB1@HP121A@HP121A keystore identified by "Ach1z0#d" relocate;
create pluggable database PDB1 from PDB1@HP121A@HP121A keystore identified by "Ach1z0#d"
*
ERROR at line 1:
ORA-17628: Oracle error 17630 returned by remote Oracle server
ORA-17630: Mismatch in the remote file protocol version client server

Alert.log in target

The error is received from the remote side. There is not a lot in the local alert.log:
create pluggable database PDB1 from PDB1@HP121A@HP121A keystore identified by * relocate
Errors in file /u01/app/oracle/diag/rdbms/hp122a/HP122A/trace/HP122A_ora_29385.trc:
ORA-17628: Oracle error 17630 returned by remote Oracle server
ORA-17630: Mismatch in the remote file protocol version client server

Alert.log in source

More information about versions in the remote alert.log:
Errors in file /u01/app/oracle/diag/rdbms/hp121a/HP121A/trace/HP121A_ora_21344.trc:
ORA-17630: Mismatch in the remote file protocol version client 3 server 2

Patch

Fortunately, version mismatch of remote file protocol has already been a problem in previous versions with other features that have to transport files, and a patch exists to bypass this version checking:

Patch 18633374: COPYING ACROSS REMOTE SERVERS: ASMCMD-8016, ORA-17628, ORA-17630, ORA-06512

And you can download it at https://updates.oracle.com/download/18633374.html

My 12.1.0.2 DBaaS has the July 2016 PSU applied, as well as a merge of patches specific for the cloud:
[oracle@HP121 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
19469538;
24310028;
22366322;
20475845;
18043064;
21132297;
23177536;Database PSU 12.1.0.2.160719, Oracle JavaVM Component (JUL2016)
23054246;Database Patch Set Update : 12.1.0.2.160719 (23054246)

Today, there is no patch to download for this configuration. There is one for the April 2016 PSU, but it still conflicts with patch 24310028.
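Such a conflict can be verified upfront with OPatch before attempting the apply (a minimal sketch; it assumes the candidate patch has been unzipped and that the check is run from its directory):

[oracle@HP121 18633374]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./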

To be able to continue, I've removed the following patches from the 12.1 source:
[oracle@HP121 ~]$ $ORACLE_HOME/OPatch/opatch rollback -id 23177536
[oracle@HP121 ~]$ $ORACLE_HOME/OPatch/opatch rollback -id 24310028

But then, the parameter “encrypt_new_tablespaces”, which was introduced by patch 24310028, is unknown:

SQL> startup
ORA-01078: failure in processing system parameters
LRM-00101: unknown parameter name 'encrypt_new_tablespaces'

You have to remove this one from the SPFILE. Basically, it forces TDE when running in the cloud, even when not specified in the DDL.
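With the instance down, one way to do that is to go through a pfile (a minimal sketch; the pfile path is an assumption):

SQL> create pfile='/tmp/initHP121.ora' from spfile;
-- edit /tmp/initHP121.ora and remove the encrypt_new_tablespaces line
SQL> create spfile from pfile='/tmp/initHP121.ora';
SQL> startup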

So what?

I hope this patch will be included in future DBaaS versions. Currently, the Oracle Public Cloud has no simple button to upgrade a service from 12.1 to 12.2, and the easiest way to do it should be remote cloning of PDBs. But with this version mismatch and a patch to apply, unplug/plug is probably easier.

 

Cet article 12cR2: Upgrade by remote clone – workaround ORA-17630 in DBaaS est apparu en premier sur Blog dbi services.

12cR2: Upgrade by remote clone with TDE in DBaaS


Upgrading from 12.1 to 12.2 is easy in the Oracle Public Cloud DBaaS because you are in multitenant. Here is how to clone a 12.1 PDB to a 12.2 service.

I’ve a service HP121 in 12.1.0.2 with one pluggable database PDB1 and a service HP122 in 12.2.0.1 with an empty CDB (only CDB$ROOT and PDB$SEED containers).

Export TDE key

The Oracle Public Cloud uses Transparent Data Encryption to secure the datafiles. When you move the pluggable databases you need to export/import the encryption keys.

Here is the key:

18:42:58 HP121 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- ---------------------------------------- ---------
FILE /u01/app/oracle/admin/HP121/tde_wallet/ AUTOLOGIN
 
18:42:58 HP121 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AQqCc8XWV09uvxkaw0Bm5XUAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

The instance uses an auto-login wallet and you cannot export the keys from that:

18:42:58 HP121 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

You need to open it with the password:

18:42:58 HP121 SQL>administer key management set keystore close;
keystore altered.
 
18:42:58 HP121 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
18:42:58 HP121 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- ---------------------------------------- ---------
FILE /u01/app/oracle/admin/HP121/tde_wallet/ PASSWORD

And then you can export it:

18:42:58 HP121 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

Import TDE key

I copy the file /tmp/cdb2pdb1.p12 to the destination (scp) and then I can import it, giving the same ‘secret’ identifier. Here again I have to open the wallet with the password, because keys cannot be imported while it is opened in auto-login mode.
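The copy itself is a simple scp (a sketch, with the host names used in this example):

[oracle@HP121 ~]$ scp /tmp/cdb2pdb1.p12 HP122:/tmp/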

18:43:04 HP122 SQL>administer key management set keystore close;
keystore altered.
18:43:04 HP122 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
18:43:04 HP122 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

Database link

We need to create a database link to the source (don’t forget to open the port for the listener):

18:43:04 HP122 SQL>select dbms_tns.resolve_tnsname('//HP121/HP121.demnov.oraclecloud.internal') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//HP121/HP121.DEMNOV.ORACLECLOUD.INTERNAL')
--------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=HP121.demnov.oraclecloud.internal)(CID=
(PROGRAM=oracle)(HOST=HP122.compute-demnov.oraclecloud.internal)(USER=oracle)))(
ADDRESS=(PROTOCOL=TCP)(HOST=10.196.202.47)(PORT=1521)))
 
18:43:04 HP122 SQL>create database link HP121@HP121 connect to system identified by "Ach1z0#d" using '//HP121/HP121.demnov.oraclecloud.internal';
Database link created.
 
18:43:04 HP122 SQL>select host_name from v$instance@HP121@HP121;
 
HOST_NAME
----------------------------------------------------------------
HP121.compute-demnov.oraclecloud.internal

Remote clone

You need to have the source PDB1 opened read-only, and the cloning is only one command:

18:43:09 HP122 SQL>create pluggable database PDB1 from PDB1@HP121@HP121 keystore identified by "Ach1z0#d";
Pluggable database created.

Upgrade

Now that you have the PDB you can open it (because you have imported the TDE key) but the dictionary is still in 12.1 so you have to run:

[oracle@HP122 ~]$ dbupgrade -c PDB1

This is described in the previous post: http://blog.dbi-services.com/12cr2-how-long-to-upgrade-a-pdb/
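Once dbupgrade completes, a quick sanity check could look like this (a minimal sketch):

SQL> alter pluggable database PDB1 open;
SQL> select name, open_mode, restricted from v$pdbs;
SQL> select status, message from pdb_plug_in_violations where status <> 'RESOLVED';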

 

Cet article 12cR2: Upgrade by remote clone with TDE in DBaaS est apparu en premier sur Blog dbi services.

Oracle 12c – Finding the DBID – The last resort


The DBID is a very important piece of information for Oracle databases. It is an internal, uniquely generated number that differentiates databases. Oracle creates this number automatically as soon as you create the database.

During normal operation, it is quite easy to find your DBID. Whenever you start your RMAN session, it displays the DBID.

oracle@oel001:/home/oracle/ [OCM121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Nov 28 10:32:47 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OCM121 (DBID=3827054096)

RMAN>

Or you can just simply select it from your v$database view.

SQL> select DBID from v$database;

DBID
----------
3827054096

But what happens if you have a restore/recovery scenario where you lost your database? In the NOMOUNT state, it is not possible to retrieve the DBID.

SQL> select DBID from v$database;
select DBID from v$database
                 *
ERROR at line 1:
ORA-01507: database not mounted

You can take a look into the alert.log or any other trace file in your DIAG destination, but you will not find a DBID there.

So, if the only things you have left are your RMAN catalog and your datafile copies in the FRA plus archivelogs, then you need the DBID before you can restore/recover your database correctly.

There are three possibilities to get your DBID:

  • You could check your RMAN backup log files, if you have set it up correctly
  • You could connect to your RMAN catalog and query the “DB” table from the catalog owner. Be careful, there might be more than one entry for your DB name, and then it might become difficult to get the correct one.  In my example, I have only one entry
    SQL> select * from db;
    
        DB_KEY      DB_ID REG_DB_UNIQUE_NAME             CURR_DBINC_KEY S
    ---------- ---------- ------------------------------ -------------- -
             1 3827054096 OCM121                                      2 N
  • And as the last resort, you can startup nomount (either with a backup pfile or with the RMAN dummy instance), and afterwards you can dump out the header of your datafile copies in your FRA

Dumping out the first block is usually enough, and besides that, you are not limited to the SYSTEM datafile. You can use any of your datafile copies in your FRA (like SYSAUX, USERS and so on) to dump out the block, as shown in the following example:

-- Dump the first block from the SYSTEM datafile
SQL> alter session set tracefile_identifier = dbid_system;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSTEM.457.926419155' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_6459_DBID_SYSTEM.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the SYSAUX datafile		
SQL> alter session set tracefile_identifier = dbid_sysaux;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSAUX.354.926417851' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7035_DBID_SYSAUX.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the USERS datafile
SQL> alter session set tracefile_identifier = dbid_users;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/USERS.533.926419511' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7064_DBID_USERS.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

As soon as you have your DBID, the rest is straightforward. Connect to your target and RMAN catalog, set the DBID and then run your restore/recovery scripts.

rman target sys/manager catalog rman/rman@rman
set dbid=3827054096
run {
restore spfile from autobackup;
}

run {
restore controlfile ....
}

run {
restore database ....
recover database ....
}

Conclusion

Don’t forget to save your DBID with your RMAN backup jobs somewhere. Recovering a database at 3 o’clock in the morning with a missing DBID might become a nightmare.
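A simple way to do that is to write the DBID into the backup log at the start of every RMAN job. A minimal sketch for the backup shell script (the log path is an assumption):

dbid=$(sqlplus -s / as sysdba <<EOF
set heading off feedback off pages 0
select dbid from v\$database;
EOF
)
echo "$(date) - DBID of ${ORACLE_SID}: ${dbid}" >> /u00/app/oracle/admin/${ORACLE_SID}/log/rman_backup.log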
Cheers,
William

 

 

Cet article Oracle 12c – Finding the DBID – The last resort est apparu en premier sur Blog dbi services.

Encryption in Oracle Public Cloud


Oracle Transparent Data Encryption is available without an extra option on the Oracle Public Cloud: Standard Edition as well as Enterprise Edition (EE, EE-HP, EE-EP, ECS). More than that, the DBaaS enforces TDE for any user tablespace even when it is not specified in the CREATE TABLESPACE statement. If you are not familiar with TDE key management (wallets), then you have probably encountered ORA-28374: typed master key not found in wallet.
Rather than another tutorial on TDE I’ll try to explain it from the errors you may encounter when simply creating a tablespace.

I have created a new pluggable database PDB2 from the command line:

SQL> create pluggable database PDB2 admin user admin identified by "admin";
Pluggable database PDB2 created.
 
SQL> alter pluggable database PDB2 open read write;
Pluggable database PDB2 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------- --------- ----------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
7 PDB2 READ WRITE NO

I go to the PDB2 container and try to create a tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace mytablespace;
 
Error starting at line 1 in command -
create tablespace mytablespace
Error report -
ORA-28374: typed master key not found in wallet
28374. 0000 - "typed master key not found in wallet"
*Cause: You attempted to access encrypted tablespace or redo logs with
a typed master key not existing in the wallet.
*Action: Copy the correct Oracle Wallet from the instance where the tablespace
was created.

So, this message is related to the TDE wallet.

encrypt_new_tablespaces

I didn’t specify any encryption clause in the CREATE TABLESPACE command but it is activated by default by the following parameter:

SQL> show parameter encrypt_new_tablespaces
 
NAME TYPE VALUE
----------------------- ------ ----------
encrypt_new_tablespaces string CLOUD_ONLY

The values can be DDL (the old behavior where encryption must be defined in the CREATE TABLESPACE statement), ALWAYS (AES128 encryption by default), or CLOUD_ONLY which is the same as ALWAYS when the instance is on the Cloud, or as DDL if the instance is on-premises. The default is CLOUD_ONLY.
This parameter was introduced in 12.2 and has been backported to 11.2.0.4 and 12.1.0.2 with bug 21281607, which is applied on any Oracle Public Cloud DBaaS instance.

So, one solution to create our tablespace would be to set encrypt_new_tablespaces to DDL, but as it is recommended to encrypt all user tablespaces, let's continue with encryption.
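For reference, switching back to the DDL-driven behavior is a single statement (a sketch; not what we do here, since we keep encryption on):

SQL> alter system set encrypt_new_tablespaces=DDL scope=both;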

ORA-28374: typed master key not found in wallet

So the error message means that I don't have a master key in the wallet for my newly created PDB, because in multitenant each PDB has its own master key (but there's only one wallet for the CDB).
The wallet is opened:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

But it is empty (I'm still in the PDB2 container):

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

SET KEY

So the idea is to set a key:

SQL> administer key management set key identified by "Ach1z0#d";

but:

Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Ok. An error because the wallet is not opened. Let’s try to open it:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28354: Encryption wallet, auto login wallet, or HSM is already open
28354. 0000 - "Encryption wallet, auto login wallet, or HSM is already open"
*Cause: Encryption wallet, auto login wallet, or HSM was already opened.
*Action: None.

Actually, the wallet is opened. We have seen that the opened wallet is AUTOLOGIN:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

On DBaaS an AUTOLOGIN wallet is used so that the database can be restarted automatically without manual intervention. Without an AUTOLOGIN wallet you have to provide the password at each startup.

But the AUTOLOGIN wallet can only be used to access the tablespaces. When administering the wallet, we need to provide the password manually.

We need to close the AUTOLOGIN one:

SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.

Now that it is closed, we can try to open it with the password:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line : 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Oh… it is opened AUTOLOGIN once again:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

CDB$ROOT

You need to open the wallet with password from CDB$ROOT:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.
 
SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

So here is the right way to start: in CDB$ROOT close the AUTOLOGIN wallet and open it with the password.

PDB

Now ready to go further in the PDB2.


SQL> alter session set container=PDB2;
Session altered.

The wallet is now closed for the PDB:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE CLOSED UNKNOWN SINGLE UNDEFINED 7

Let’s open it manually:

SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

We have no encryption key:

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

Let's do what we wanted to do from the beginning: create an encryption key for our PDB:

SQL> administer key management set key identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-46631: keystore needs to be backed up
46631. 00000 - "keystore needs to be backed up"
*Cause: The keystore was not backed up. For this operation to proceed, the
keystore must be backed up.
*Action: Backup the keystore and try again.

Oh yes. Any change must be backed up. That’s easy:


SQL> administer key management set key identified by "Ach1z0#d" with backup;
Key MANAGEMENT succeeded.

Here we are. The key is there:


SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

All is perfect but the wallet is still opened with the password:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------- ------------ ------------- --------------- -------
FILE OPEN PASSWORD SINGLE NO 7

In order to get back to the initial state, it is sufficient to close it (from the CDB$ROOT):


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
 
Error starting at line 1 in command -
administer key management set keystore close
Error report -
ORA-28389: cannot close auto login wallet
28389. 00000 - "cannot close auto login wallet"
*Cause: Auto login wallet could not be closed because it was opened with
another wallet or HSM requiring a password.
*Action: Close the wallet or HSM with a password.

Ok. The ‘close’ command needs the password, as the wallet was not opened with the AUTOLOGIN one.


SQL> administer key management set keystore close identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

It is immediately automatically re-opened with the AUTOLOGIN one:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- --------------------------------------- ------- ------------ ------------- ---------------- ------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ OPEN AUTOLOGIN SINGLE NO 1

and from the CDB$ROOT I can see all of them:

SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
ATxUk1G7gU/0v3Ygk1MbZj8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.050676000 PM +00:00 27-NOV-16 09.02.18.130705000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 1
AWSs1Gr0WE86vyfWc123xccAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.089346000 PM +00:00 27-NOV-16 09.02.18.722365000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 3
AfwqzZP/Rk+5v5WqiNK5nl0AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.36.43.980717000 PM +00:00 28-NOV-16 08.36.43.980720000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D 5
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

As you can see, I made two attempts with PDB2 while writing this blog post. The previous keys are all still in the wallet.

I check that the AUTOLOGIN wallet is open in PDB2:


SQL> alter session set container=PDB2;
Session altered.
SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE OPEN AUTOLOGIN SINGLE NO 7

And finally I can create my tablespace:


SQL> create tablespace mytablespace;
Tablespace MYTABLESPACE created.

Easy, isn’t it?

If you create your PDB with the DBaaS monitor interface, all of this is done automatically by the ‘create PDB’ button (the manual equivalent is sketched after the list):

  • Close the AUTOLOGIN wallet (from CDB$ROOT)
  • Open the wallet with password
  • Create the pluggable database and open it
  • Open the wallet from the PDB, with password
  • Set the masterkey for the PDB
  • Close the wallet to get it opened with AUTOLOGIN
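A minimal sketch of that manual equivalent, consolidating the commands used above (password and PDB name as in this example):

SQL> alter session set container=CDB$ROOT;
SQL> administer key management set keystore close;
SQL> administer key management set keystore open identified by "Ach1z0#d";
SQL> create pluggable database PDB2 admin user admin identified by "admin";
SQL> alter pluggable database PDB2 open read write;
SQL> alter session set container=PDB2;
SQL> administer key management set keystore open identified by "Ach1z0#d";
SQL> administer key management set key identified by "Ach1z0#d" with backup;
SQL> alter session set container=CDB$ROOT;
SQL> administer key management set keystore close identified by "Ach1z0#d";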
 

Cet article Encryption in Oracle Public Cloud est apparu en premier sur Blog dbi services.

Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE


I have set up a Data Guard environment and followed the instructions from Oracle to create the Standby Redo Logs. The Standby Redo Logs have to be the same size as the Online Redo Logs; if not, the RFS process won't attach them. You should also have at least one more Standby Redo Log group than you have Online Redo Log groups, per thread.

For my single instance, this should be quite straightforward, so I issued the following commands on the primary and the standby.

alter database add standby logfile group 4 size 1073741824;
alter database add standby logfile group 5 size 1073741824;
alter database add standby logfile group 6 size 1073741824;
alter database add standby logfile group 7 size 1073741824;

After setting everything up, I ran the new cool Broker command “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” and was surprised to find that the validation complains about insufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       3                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       3                       Insufficient SRLs

After checking everything on the Primary and the Standby, the number of log groups and their sizes looked ok. I do have 3 Online Redo Log Groups of 1G each, and 4 Standby Redo Log Groups of 1G each.

-- Standby

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1          0 UNUSED           1073741824
         1          3          0 UNUSED           1073741824
         1          2          0 UNUSED           1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5        552 ACTIVE     1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

-- Primary

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1        550 INACTIVE         1073741824
         1          2        551 INACTIVE         1073741824
         1          3        552 CURRENT          1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

 

The only strange thing is that Standby Redo Log Group 7 shows up with Thread 0 instead of Thread 1.
I did not even know that a thread 0 exists. Thread numbering always starts with 1, and in case of RAC you might see Thread 2, 3 or more. But if you want to, you can perfectly well create thread 0 without any issue. For what reason, I don't know.

SQL> alter database add standby logfile thread 0 group 8 size 1073741824;

Database altered.

Ok. Let's correct the Thread 0 thing, and then let's see what “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” shows.

-- On Standby
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-OFF';
Succeeded.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4        553 ACTIVE     1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         1          7          0 UNASSIGNED 1073741824
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-ON';
Succeeded.
		 
-- On Primary

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

And here we go. Now I have sufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       4                       Sufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       4                       Sufficient SRLs

 

Conclusion

Even on a single instance, use the thread number in your create Standby Redo Log statement.

alter database add standby logfile thread 1 group 4 size 1073741824;
alter database add standby logfile thread 1 group 5 size 1073741824;
alter database add standby logfile thread 1 group 6 size 1073741824;
alter database add standby logfile thread 1 group 7 size 1073741824;

Cheers,
William

 

Cet article Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE est apparu en premier sur Blog dbi services.

UKOUG Super Sunday



Today at the UKOUG Super Sunday in Birmingham, I had the opportunity to attend some interesting sessions.

The first presentation was about Oracle RAC internals and its new features in version 12.2.0.1 on Oracle Cloud. The main new features concern the cache fusion, the undo header hash table, the leaf nodes, and the hang manager.

In 12c Release 2, in a RAC environment, Cache Fusion automatically chooses an optimal path: it collects and maintains statistics on the private network and uses this information to find the optimal path – network or disk – to serve blocks. Flash storage can provide better access time to data than the private network in case of high load.

In order to reduce remote lookups, each instance maintains a hash table of recent transactions (active and committed). The Undo Header Hash Table thus improves scalability by eliminating remote lookups.

Flex Cluster and leaf nodes were introduced in 12cR1. With 12cR2, it is now possible to run read-only workloads on instances running on leaf nodes.

Hang Manager was introduced with 12cR2. It identifies sessions holding resources on which other sessions are waiting. Hang Manager can also detect hangs across layers.

The second session was about the use of strace, perf and gdb. It was a very funny presentation with no slides, only technical demos, on how to use strace, perf or gdb without being an expert. The speaker showed us the different analyses we can perform with strace, gdb or perf when running a SQL query against a table in a file system tablespace or an ASM tablespace.

Using those tools allowed us to understand the mechanics of physical reads and asynchronous I/O, and showed us how asynchronous I/O and direct path reads differ between ASM and file system storage.

It also showed us that using strace and gdb is very simple, but not recommended in a production environment.
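As an illustration, tracing the I/O system calls of one server process could look like this (a minimal sketch; the process id is a placeholder and the list of traced calls is an assumption):

-- find the OS process id of the current session
SQL> select p.spid from v$process p, v$session s where p.addr = s.paddr and s.sid = sys_context('userenv','sid');
-- then, from another terminal on the database server:
$ strace -e trace=pread64,pwrite64,io_submit,io_getevents -p <spid>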

The last session was about dba_feature_usage_statistics, and the speaker described the components behind the scenes.

This view, as its name indicates, displays information about database feature usage statistics. It gives an overview of each option and pack that has been used in the database and is currently in use. It also shows when the feature was first used and when it was used for the last time.

It is not very easy to find information in the Oracle documentation about how this view is populated, but the speaker gave us useful details about wri$_dbu_usage_sample, wri$_dbu_feature_usage and wri$_dbu_feature_metadata, the underlying tables that feed the dba_feature_usage_statistics view.

He also showed us a method to manually refresh the dba_feature_usage_statistics view.
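A minimal sketch of the commonly used trick (it relies on an internal, undocumented package, so use it with care):

SQL> exec dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)
SQL> select name, detected_usages, last_usage_date from dba_feature_usage_statistics where detected_usages > 0;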

Tomorrow another day of interesting sessions is waiting for us !

 

 

Cet article UKOUG Super Sunday est apparu en premier sur Blog dbi services.


Multitenant internals – Summary


Today at UKOUG TECH16 conference I’m presenting the internals of the new multitenant architecture: 12c Multitenant: Not a Revolution, Just an Evolution. My goal is to show how it works, that metadata links and object links are not blind magic.
Here are the links to the blog posts I’ve published about multitenant internals.

The dictionary separation, METADATA LINK and OBJECT LINK (now called DATA LINK):
http://blog.dbi-services.com/multitenant-dictionary-what-is-stored-only-in-cdbroot/
http://blog.dbi-services.com/oracle-12c-cdb-metadata-a-object-links-internals/
http://blog.dbi-services.com/oracle-multitenant-dictionary-metadata-links/
http://blog.dbi-services.com/oracle-multitenant-dictionary-object-links/
http://blog.dbi-services.com/multitenant-internals-how-object-links-are-parsedexecuted/
http://blog.dbi-services.com/multitenant-internals-object-links-on-fixed-tables/
An example with the AWR views:
http://blog.dbi-services.com/12c-multitenant-internals-awr-tables-and-views/
How the upgrades should work:
http://blog.dbi-services.com/oracle-multitenant-dictionary-upgrade/
What about shared pool rowcache and library cache:
http://blog.dbi-services.com/oracle-multitenant-dictionary-rowcache/
http://blog.dbi-services.com/12c-multitenant-cursor-sharing-in-cdb/
And how to see when session switches to CDB$ROOT:
http://blog.dbi-services.com/oracle-12cr2-multitenant-containers-in-sql_trace/

If you are in Birmingham, I’m speaking on Monday and Wednesday.


 

Cet article Multitenant internals – Summary est apparu en premier sur Blog dbi services.

UKOUG 2016 Day 2



Today I attended a first session about one of my favorite tools: Upgrade to EM 13c now. The session was presented by Phil Gric from Red Stack Tech.

At the beginning he described the most common mistakes made when implementing Enterprise Manager:

- EM 13c is an enterprise application

- It is a critical part of your infrastructure

- it is designed to help you

- EM 13c is not a glorified db console

- IT managers should not see EM as a job for the DBA

He described the main prerequisites before performing an EM 13c upgrade (for example, disabling optimizer_adaptive_features). He also talked about issues such as the upgrade creating users with the sysman password: we should ensure that the repository password policy accepts such a password.

There is also an issue when upgrading agents on AIX to version 13.2: there is a problem securing the agent due to SHA encryption (Metalink Note 1965676.1).

To complete his presentation, he described the main new features in EM 13c: export and import of incident rules, incident compression, always-on monitoring, more than 300 new verbs in emcli and generally improved functionality, system broadcast, comparison and drift management.

He finally explained why, for him, it is important to regularly upgrade to the latest EM 13c version: it is easy to upgrade, and the longer you wait, the closer you are to the next upgrade :=))

The second presentation was about the 12c upgrade: the good, the bad and the ugly, presented by Niall Litchfield. He talked about his experience of upgrading a very large infrastructure to 12c, composed of more than 100 servers with database versions from 10.1 to 11.2.0.3, both RAC and single instances.

His first advice was to read Mike Dietrich's documentation (Upgrade, Migrate, Consolidate to 12c), and to have a look at the Oracle recommended patch list.

A good reason to upgrade is that support for 11g ends at the end of the year, and extended support is expensive.

The good news after this huge upgrade was that there were no upgrade failures (tens of clusters, hundreds of servers and databases), and a performance benchmark showed a 50% improvement.

The bad and ugly news concerns the number of patches, among them the JSON bundle patches which require database bundle patches. He also advised us to turn off optimizer_adaptive_features (also recommended to be disabled with EM 13c, PeopleSoft and EBS). A last ugly point is the documentation: there is no single place to read it, but many. He also recommended allowing significant time for testing the database and the applications after the upgrade to 12c.

Then I attended a session about Oracle Database 12c on Windows, presented by Christian Shay of Oracle.

He showed us the database certification on 64-bit Windows. In short, Oracle 12.2 is certified on Windows Server 2012, Windows Server 2012 R2, Windows 10 and Windows Server 2016, while Oracle 12.1 is certified on the same versions except Windows Server 2016.

In Windows 8 and Windows Server 2012, Microsoft introduced the Group Managed Service Account (GMSA), i.e. a domain-level account which can be used by multiple servers in that domain to run their services. A GMSA can be the Oracle Home user for Oracle Database Real Application Clusters (Oracle RAC), single instance, and client installations. It has similarities with the ‘oracle’ user on Linux, as you can connect on Windows with this user and perform administrative tasks like creating a database, installing Oracle or upgrading databases.

In Windows 7 and Windows Server 2008 R2, Microsoft introduced virtual accounts. A virtual account can be the Oracle home user for Oracle Database single instance and client installations.

The recommendations are the following: for a DB server (single instance), use a virtual account to avoid password management (12.2); for 12.1, specify a Windows user account during installation. For RAC DB and Grid Infrastructure, use a domain user or a group managed service account; with a GMSA you do not need to provide the password for any database operation.

He also talked about large page support on Windows. When large page support is enabled, the CPU is able to access the Oracle database buffers in RAM more quickly, because it addresses the buffers in 2 MB pages instead of 4 KB increments.

Large pages can be used in two modes: regular or mixed. Regular mode means the whole SGA is attempted to be allocated in large pages; if that amount of large pages is not available, the database will not come up. That is why mixed mode is perhaps better: if the whole SGA cannot be allocated in large pages, the remaining pages are allocated as regular pages and the instance still comes up.

I finished my UKOUG day by attending Franck Pachot's session about 12c Multitenant (not a revolution but an evolution). He clearly explained that we do not have to fear 12c multitenant: since the beginning of Oracle there have been many new features that people feared, but they are now implemented and work correctly. The patch/upgrade optimization is only partially implemented; we will see how 12c multitenant evolves in the next years.

 

 

 

 

 

Cet article UKOUG 2016 Day 2 est apparu en premier sur Blog dbi services.

UKOUG 2016 DAY 3



Today at UKOUG 2016, the Cloud has won against the sun :=)

The first session I attended this morning was presented by Kamil Stawiarski from the ORA-600 company: Securing the database against unauthorized attacks, but the real title was Oracle Hacking Session.

The session was amazing, as usual with Kamil: no slides, only technical demos :=))

He first showed us that, after creating a standard user in an Oracle database with the classic privileges connect, resource and create any index, and using a simple function he wrote, the standard user could obtain the DBA privilege.

The second demonstration was about DirtyCow (a Linux vulnerability that allows a non-privileged local user to escalate to root). He showed us how easy it is to get a root shell under Linux.

In the last demo he showed us how it is possible to read the data of a particular table directly from the datafile, using only one of his C programs and the data_object_id of the table.

He finished his session by asking why so much money is wasted on protecting data, and whether it would not be smarter to spend less money and write correct applications with correct privileges.

The second session was more corporate: Oracle Database 12cR2, the overview by Dominic Giles from Oracle. He told us about Oracle 12cR2 in the cloud. What is available now: Exadata Express Cloud Service and Database Cloud Service. Coming soon: Exadata Cloud Machine.

Then he talked about the new features of Oracle database 12cR2:

Performance: the main idea for 12cR2 is “go faster”. He gave us some examples: a higher compression rate for indexes (subject to a licensing option, of course), which can result in I/O improvements and significant space savings.
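
For illustration only (this was not shown in the session), the 12.2 syntax for the higher index compression level looks like this; the object names are hypothetical and the feature requires the corresponding licensing option:

-- requires the relevant compression licensing option; object names are hypothetical
ALTER INDEX app.sales_pk REBUILD COMPRESS ADVANCED HIGH;
CREATE INDEX app.sales_cust_ix ON app.sales (customer_id) COMPRESS ADVANCED HIGH;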

Security: Oracle 12cR2 introduces online encryption of existing data files. There is also the possibility of full encryption of internal database structures such as the SYSTEM, SYSAUX or UNDO tablespaces. There is also a Database Vault simulation mode, which lets you define and test security protection profiles throughout the application lifecycle.
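
A minimal sketch of the online tablespace encryption syntax (assuming Transparent Data Encryption is licensed and the keystore is already open; tablespace and file names are examples):

-- assumes an open TDE keystore; names are examples
ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256' ENCRYPT
  FILE_NAME_CONVERT = ('users01.dbf', 'users01_enc.dbf');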

Developers: AL32UTF8 is now the default character set for new databases. Object names, for example table or column names, can now be up to 128 bytes long instead of 30.
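
A quick, hypothetical illustration of the longer identifiers (assuming COMPATIBLE is set to 12.2.0 or higher):

-- object names can now exceed the old 30-byte limit (up to 128 bytes)
CREATE TABLE sales_orders_with_a_much_longer_and_more_descriptive_name_than_before (
  id NUMBER
);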

Manageability: the number of PDBs per container database increases from 252 to 4096. PDBs are optimized for RAC. Interestingly, it is possible to perform PDB hot clones, PDB refreshes and PDB relocations with little or no downtime (see the sketch below).
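
A hedged sketch of what these operations look like in 12.2, run from the target CDB root; the PDB names and database links are hypothetical, and prerequisites such as local undo and archivelog mode apply:

-- hot clone: the source PDB stays open read-write
CREATE PLUGGABLE DATABASE pdb_clone FROM pdb_prod@prod_cdb_link;

-- refreshable clone, refreshed from the source every 60 minutes
CREATE PLUGGABLE DATABASE pdb_report FROM pdb_prod@prod_cdb_link
  REFRESH MODE EVERY 60 MINUTES;

-- relocate a PDB into this CDB with minimal downtime
CREATE PLUGGABLE DATABASE pdb_prod FROM pdb_prod@old_cdb_link RELOCATE;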

Availability: a lot of improvements for RAC: RAC reader nodes, ASM flex disk groups, and the Autonomous Health Framework (which identifies issues and notifies with corrective actions). For Active Data Guard, diagnostics, tuning and the SQL Plan Advisor will be available on the standby side, there is no user disconnection on failover, and high-speed block comparison between primary and standby database is supported. And finally there will be the possibility to use SSL redo transport to be more secure.

Finally, I attended the last session of the day, one of the liveliest, essentially because of the speaker’s talent and of course the subject: Upgrade to the next generation of Oracle Database, live and uncensored!

He told us about the different ways to upgrade to 12.1.0.2 or 12.2.0.1, covering subjects like extended support, direct upgrade paths and DBUA.

A new upgrade tool is available: preupgrade.jar executes checks in the source environment, generates detailed recommendations, also generates fixup scripts and, last but not least, is rerunnable :=))
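
As a hedged sketch (the paths and SID are assumptions to adapt to your environment), the jar shipped with the new release is run against the source database, for example:

# environment points to the source database; the jar comes from the new 12.2 home
export ORACLE_HOME=/u00/app/oracle/product/11.2.0.4
export ORACLE_SID=DB11G
/u00/app/oracle/product/12.2.0.1/jdk/bin/java \
  -jar /u00/app/oracle/product/12.2.0.1/rdbms/admin/preupgrade.jar TERMINAL TEXT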

He showed us that the upgrade process is faster and requires less downtime, and that we can run the database upgrade in parallel (by using catctl.pl with the -n 8 option, for example). It handles both non-CDBs and CDBs. During his upgrade from 11.2.0.4 to 12.1.0.2 he interrupted the process by typing CTRL-C … and he proved that the upgrade is rerunnable by restarting catctl.pl with the -R option :=)
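
A hedged sketch of such a parallel, resumable run (the -n and -R options as mentioned in the session; the log directory and paths are assumptions):

cd $ORACLE_HOME/rdbms/admin
# parallel upgrade with 8 worker processes
$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l /tmp/upgrade_logs catupgrd.sql
# after an interruption (e.g. CTRL-C), resume instead of starting from scratch
$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -R -l /tmp/upgrade_logs catupgrd.sql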

He is not a great fan of DBUA for multiple reasons: for him it is hard to debug, the parallel degree defaults to cpu_count, the progress bar is unpredictable and sometimes we have to wait a long time without knowing what is happening in the source database, and we have to be careful with datapatch in 12.1. For me the only real advantage is the automatic time zone upgrade when using DBUA.

Well this was another exciting day at UKOUG 2016, tomorrow is the last day with other interesting sessions and an OEM round table :=)

 

This article UKOUG 2016 DAY 3 first appeared on the dbi services blog.

UKOUG 2016 DAY 4


Today is the last day at UKOUG in Birmingham; the first session I attended this morning was presented by Julian Dyke about installing and upgrading Oracle 12c release 2 Grid infrastructure and RAC.

He had the opportunity to test the installation and upgrade phases during five days at Oracle last spring. The following tests were done:

single instance: install 12.2.0.1, create a database with DBCA, upgrade 12.1.0.2 to 12.2.0.1 with DBUA

RAC: install 12.2.0.1 Grid Infrastructure, install 12.2.0.1 RDBMS software, create ASM disk groups (ASMCA), create a 12.2.0.1 RAC database (DBCA), upgrade 12.1.0.2 Grid Infrastructure to 12.2.0.1 (gridSetup.sh), upgrade a 12.1.0.2 RAC database to 12.2.0.1.

He showed us the main screenshots describing the installation phases and told us that they did not meet many problems during their installation or upgrade runs. Before upgrading the Grid Infrastructure, it is important to run the CVU connected as the grid user, for example:

runcluvfy.sh stage -pre crsinst -upgrade \
  -src_crshome /u00/app/12.1.0.2 -dest_crshome /u00/app/12.2.0.1 \
  -dest_version 12.2.0.1.0 -fixupnoexec

Afterwards, you have the possibility to resolve any detected issues using the generated fixup script.

In his opinion, DBUA is sufficiently robust to use for most upgrades, especially when the upgrade concerns non-critical databases, databases with fast recovery times or databases on virtual machines. He also mentioned that Oracle still recommends using the command-line scripts for upgrades of large or business-critical databases.

He encountered some issues during the Grid Infrastructure upgrade phase, in particular with the memory_target parameter setting: because the ASM and GIMR instances use more memory than in 12.1.0.2, he received the classical ORA-00845 error message. He also encountered problems with invalid objects and had to extend the root file system of his virtual machine.
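
ORA-00845 usually means that /dev/shm is too small for the configured MEMORY_TARGET; a hedged sketch of the typical fix on Linux (the size is an example to adapt, and the change should also be made persistent in /etc/fstab):

# check the current size of the tmpfs backing MEMORY_TARGET
df -h /dev/shm
# enlarge it online (example value)
mount -o remount,size=8g /dev/shm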

Then I attended Franck Pachot’s session about Statistics Gathering, Best Practices and the Statistics Advisor.


His session described his findings and recommendations about how to gather statistics, with a lot of technical demonstrations done in the Cloud. Many cases were shown, for example volatile tables, preferences for partitioned tables, and index statistics gathering.

He showed us the Oracle 12c Release 2 Statistics Advisor, which might be a useful tool; I will check whether it is available in Enterprise Manager 13.2.
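
For reference, a minimal sketch of the 12.2 Optimizer Statistics Advisor API (the task name is hypothetical; function names as documented in DBMS_STATS):

DECLARE
  tname VARCHAR2(128);
BEGIN
  -- create and execute an advisor task
  tname := DBMS_STATS.CREATE_ADVISOR_TASK('MY_STATS_ADV_TASK');
  tname := DBMS_STATS.EXECUTE_ADVISOR_TASK('MY_STATS_ADV_TASK');
END;
/

-- report the findings and recommendations
SELECT DBMS_STATS.REPORT_ADVISOR_TASK('MY_STATS_ADV_TASK') FROM dual;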

He finished by giving us his own recommendations: use the automatic job for most of the tables, customize the statistics gathering for volatile tables, gather statistics on tables right after you load them, and, importantly, customize the maintenance window for the statistics gathering job.
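
A hedged sketch of what such customizations could look like (schema, table and window names are hypothetical):

-- lock statistics on a volatile table so the automatic job does not overwrite representative values
EXEC DBMS_STATS.LOCK_TABLE_STATS('APP','VOLATILE_QUEUE');

-- gather statistics explicitly right after a load
EXEC DBMS_STATS.GATHER_TABLE_STATS('APP','ORDERS_STAGING');

-- lower the staleness threshold for a specific table
EXEC DBMS_STATS.SET_TABLE_PREFS('APP','ORDERS','STALE_PERCENT','5');

-- extend one of the nightly maintenance windows used by the automatic statistics job
EXEC DBMS_SCHEDULER.SET_ATTRIBUTE('SYS.MONDAY_WINDOW','DURATION',NUMTODSINTERVAL(6,'hour'));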

Finally I wanted to attend the OEM round table, but unfortunately the session was canceled :=((


Well, this was a very interesting week with a lot of exchanges and shared experiences with other Oracle DBAs. I hope to come back to UKOUG next year!

This article UKOUG 2016 DAY 4 first appeared on the dbi services blog.

Oracle 12c – RMAN and Unified Auditing – Does it really work?


The new Oracle Unified Auditing feature audits RMAN operations by default as soon as you relink your Oracle binary and start your instance. A quite cool new feature, because it allows me to audit RMAN operations out of the box. For example, someone could create an RMAN backup to ‘/tmp’ and then copy it somewhere else. And I would like to know that. ;-)

Oracle added 5 columns to the unified_audit_trail view just for RMAN, so you can find out which RMAN operations were performed on the database. The new columns are the following.

  • RMAN_SESSION_RECID
  • RMAN_SESSION_STAMP
  • RMAN_OPERATION
  • RMAN_OBJECT_TYPE
  • RMAN_DEVICE_TYPE

According to the Oracle documentation, the column descriptions are the following.

RMAN_SESSION_RECID

RMAN session identifier. Together with RMAN_SESSION_STAMP it uniquely identifies an RMAN job
(note that this is not the same as the user session ID; the value is a RECID in the controlfile that identifies the RMAN job)

RMAN_SESSION_STAMP

Timestamp for the session

RMAN_OPERATION

The RMAN operation executed by the job. One row will be added for each distinct operation within an RMAN session. For example, a backup job would contain BACKUP in the RMAN_OPERATION column.

RMAN_OBJECT_TYPE

Type of objects involved in backup, restore/recover or change/delete/crosscheck commands. It contains one of the following values; if an RMAN command involves more than one of them, preference is given in order, from top to bottom of the list:

  • DB FULL
  • RECVR AREA
  • DB INCR
  • DATAFILE FULL
  • DATAFILE INCR
  • ARCHIVELOG
  • CONTROLFILE
  • SPFILE

RMAN_DEVICE_TYPE

Device involved in the RMAN job. It may be DISK, SBT_TAPE or * (an * indicates that more than one location is involved). For a backup job, it will be the output device type. For other commands (such as restore or crosscheck), it will be the input device type.

Ok. Let’s start with a first test. Just for the record, I am using 12cR1 with the October 2016 PSU here.

First of all, I am activating the “immediate-write” mode, meaning that audit records are written immediately and not queued in the audit buffer first.

SQL> select parameter, value from v$option where parameter like '%Unified%';

PARAMETER              VALUE
---------------------- --------
Unified Auditing       TRUE

-- Modify OUA to use the immediate-write mode

SQL> BEGIN
  2  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
  3  DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  4  DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE,
  5  DBMS_AUDIT_MGMT.AUDIT_TRAIL_IMMEDIATE_WRITE);
  6  END;
  7  /

PL/SQL procedure successfully completed.

SQL> select * from DBA_AUDIT_MGMT_CONFIG_PARAMS where PARAMETER_NAME = 'AUDIT WRITE MODE';

PARAMETER_NAME                   PARAMETER_VALUE        AUDIT_TRAIL
-------------------------------- ---------------------- ----------------------------
AUDIT WRITE MODE                 IMMEDIATE WRITE MODE   UNIFIED AUDIT TRAIL

 

Ok. Cool. So far so good. Let’s start a RMAN backup job now.

 

oracle@dbidg01:/home/oracle/ [DBIT121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Dec 9 15:59:44 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)

RMAN> backup database plus archivelog delete input;
Starting backup at 09-DEC-2016 16:03:41
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=22 RECID=6 STAMP=930153822
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:03:43
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:03:44
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_annnn_TAG20161209T160342_d4okyh1t_.bkp tag=TAG20161209T160342 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2016_12_09/o1_mf_1_22_d4okyfo5_.arc RECID=6 STAMP=930153822
Finished backup at 09-DEC-2016 16:03:44

Starting backup at 09-DEC-2016 16:03:44
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf
input datafile file number=00001 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf
input datafile file number=00003 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf
input datafile file number=00004 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf
input datafile file number=00006 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:03:44
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:05:19
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_nnndf_TAG20161209T160344_d4okyjny_.bkp tag=TAG20161209T160344 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:05:20
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:05:21
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_ncsnf_TAG20161209T160344_d4ol1jnj_.bkp tag=TAG20161209T160344 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-DEC-2016 16:05:21

Starting backup at 09-DEC-2016 16:05:21
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=23 RECID=7 STAMP=930153921
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:05:21
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:05:23
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_annnn_TAG20161209T160521_d4ol1ktz_.bkp tag=TAG20161209T160521 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2016_12_09/o1_mf_1_23_d4ol1kpj_.arc RECID=7 STAMP=930153921
Finished backup at 09-DEC-2016 16:05:23

RMAN>

 

After my RMAN backup had finished, I opened another session and checked the unified_audit_trail, but nothing was there.

SQL> select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3  from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1;

no rows selected

Now I do a clean exit of my RMAN session, and here we go: I have an audit entry saying that an RMAN backup to disk took place. Perfect, this is exactly what I wanted to see.

...
RMAN> exit

Recovery Manager complete.


SQL> select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3  from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1;

EVENT_TIMESTAMP              ACTION_NAME    RMAN_SESSION_RECID RMAN_SESSION_STAMP RMAN_OPERATION       RMAN_OBJECT_TYPE     RMAN_
---------------------------- -------------- ------------------ ------------------ -------------------- -------------------- -----
09-DEC-16 04.08.10.532931 PM RMAN ACTION                    22          930153584 Backup               DB Full              Disk

 

This brings me to an idea. What happens if a hacker logs into my system, starts an RMAN backup, and kills his own RMAN session after the backup has finished? Sounds crazy, but hackers are usually very creative.

Ok. The Hacker logs in now, and because the Hacker is smart, he gives his RMAN backup a TAG, so it is easier to delete it afterwards.

oracle@dbidg01:/home/oracle/ [DBIT121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Dec 9 16:09:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)

RMAN> alter system archive log current;

using target database control file instead of recovery catalog
Statement processed

RMAN> backup archivelog all format '/tmp/%U' TAG 'HACKER';

Starting backup at 09-DEC-2016 16:11:58
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=24 RECID=8 STAMP=930154279
input archived log thread=1 sequence=25 RECID=9 STAMP=930154318
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:11:59
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:12:00
piece handle=/tmp/0ern21qf_1_1 tag=HACKER comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-DEC-2016 16:12:00

 

At this point, still no further entry in the unified_audit_trail.

SQL> r
  1  select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3* from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1

EVENT_TIMESTAMP              ACTION_NAME    RMAN_SESSION_RECID RMAN_SESSION_STAMP RMAN_OPERATION       RMAN_OBJECT_TYPE     RMAN_
---------------------------- -------------- ------------------ ------------------ -------------------- -------------------- -----
09-DEC-16 04.08.10.532931 PM RMAN ACTION                    22          930153584 Backup               DB Full              Disk

Meanwhile, the Hacker copies the data away, and because the Hacker is a good boy, he cleans up everything afterwards. :-)

RMAN> delete noprompt backuppiece tag=HACKER;

using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
14      14      1   1   AVAILABLE   DISK        /tmp/0ern21qf_1_1
deleted backup piece
backup piece handle=/tmp/0ern21qf_1_1 RECID=14 STAMP=930154319
Deleted 1 objects

At the moment, there is still nothing new in the unified_audit_trail. Now, to avoid entries into the unified_audit_trail view, the hacker kills his own session.

oracle@dbidg01:/tmp/ [DBIT121] ps -ef | grep rman | grep -v grep
oracle    8829  2839  0 16:09 pts/1    00:00:00 rman target /
oracle@dbidg01:/tmp/ [DBIT121] kill -9 8829


...
RMAN> Killed

And now the one million dollar question … do we have a new entry or not?

SQL> r
  1  select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3* from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1

EVENT_TIMESTAMP              ACTION_NAME    RMAN_SESSION_RECID RMAN_SESSION_STAMP RMAN_OPERATION       RMAN_OBJECT_TYPE     RMAN_
---------------------------- -------------- ------------------ ------------------ -------------------- -------------------- -----
09-DEC-16 04.08.10.532931 PM RMAN ACTION                    22          930153584 Backup               DB Full              Disk

No, no new entry. This entry is still the one from my regular RMAN backup with the clean exit.

Conclusion

Don’t rely too much on the unified_audit_trail records if you want to audit RMAN backups: the audit entry is only written when the RMAN session exits cleanly.
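
As a complementary check (a sketch, not part of Unified Auditing): the controlfile-based RMAN views should still show the job itself, even when the client session was killed and no audit record was written.

-- controlfile-based history of RMAN jobs
select session_recid, session_stamp, start_time, end_time,
       input_type, output_device_type, status
from   v$rman_backup_job_details
order  by start_time;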

Cheers,
William

This article Oracle 12c – RMAN and Unified Auditing – Does it really work? first appeared on the dbi services blog.
