
Adaptive Plans and cost of inactive branches


Here are the details behind an execution plan screenshot I tweeted recently because the numbers looked odd. It's not a big problem, or maybe not a problem at all, just something surprising. I don't like it when the numbers don't match, so I try to reproduce and find an explanation, just to be sure there is nothing hidden that I misunderstood.

Here is a similar test case joining two small tables DEMO1 and DEMO2 with specific stale statistics.
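The setup is not shown here, but a minimal sketch could look like this (the column name and statistics values are my assumptions, chosen only to resemble the estimations in the plans below):

create table DEMO1 (n number constraint DEMO1PK primary key);
create table DEMO2 (n number constraint DEMOPK primary key);
-- fake "stale" statistics on empty tables: the optimizer estimates
-- 200 rows in DEMO1 and 100K rows in DEMO2 while both are actually empty
exec dbms_stats.set_table_stats(user,'DEMO1',numrows=>200,numblks=>4)
exec dbms_stats.set_table_stats(user,'DEMO2',numrows=>100000,numblks=>460)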

Hash Join

I start by forcing a full table scan to get a hash join:

select /*+ full(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 3212315601
------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 | | | |
|* 1 | HASH JOIN | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 | 1696K| 1696K| 520K (0)|
| 2 | TABLE ACCESS FULL| DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 | | | |
| 3 | TABLE ACCESS FULL| DEMO2 | 0 | 100K| 127 (1)| 0 |00:00:00.01 | 0 | | | |
------------------------------------------------------------------------------------------------------------------------------

The cost of the DEMO1 full table scan is 3. The cost of the DEMO2 full table scan is 127. That's a total of 130 (the cost of the hash join itself is negligible here).

Nested Loop

When forcing an index access, a nested loop will be used:

select /*+ index(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 203 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | 1 | 0 (0)| 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 1 (0)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

The cost of the index access is 1 and, as it is expected to run 200 loops, the total cost is 200. With the full table scan of DEMO1 added, the total is 203.

Adaptive plan

Here is an explain plan to see the initial plan with active and inactive branches:

SQL> explain plan for
2 select /*+ */ * from DEMO1 natural join DEMO2;
SQL> select * from table(dbms_xplan.display(format=>'adaptive'));
Plan hash value: 3212315601
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 200 | 6400 | 130 (1)| 00:00:01 |
| * 1 | HASH JOIN | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 2 | NESTED LOOPS | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 3 | NESTED LOOPS | | | | | |
|- 4 | STATISTICS COLLECTOR | | | | | |
| 5 | TABLE ACCESS FULL | DEMO1 | 200 | 1000 | 3 (0)| 00:00:01 |
|- * 6 | INDEX UNIQUE SCAN | DEMOPK | | | | |
|- 7 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 1 | 27 | 127 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL | DEMO2 | 100K| 2636K| 127 (1)| 00:00:01 |
------------------------------------------------------------------------------------------

The active branches (full table scans) have the correct cost: 127 + 3 = 130.

However, it's not the case with the inactive ones: there are no estimations for the 'INDEX UNIQUE SCAN' and it seems that the 'TABLE ACCESS BY INDEX ROWID' gets its cost from the full table scan (here 127).

It's just an observation here. I have no explanation for it and no idea about the consequences, except the big surprise when you see the numbers. I guess the cost of the inactive branches is meaningless. What is important is that the right cost has been used to determine the inflection point.

With the index access costing 1 per loop, the cost of the nested loop becomes higher than the full table scan (estimated at 127) when there are more than 127 loops. This is what we see in the 10053 trace:
SQL> host grep ^DP DEMO14_ora_19470_OPTIMIZER.trc
DP: Found point of inflection for NLJ vs. HJ: card = 127.34
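If you want to generate such a trace yourself, here is a sketch (the tracefile identifier and the session-level event settings are my assumptions, not taken from the original test):

alter session set tracefile_identifier='OPTIMIZER';
alter session set events '10053 trace name context forever, level 1';
-- parse a new cursor so that the optimizer decisions are dumped
select /*+ */ * from DEMO1 natural join DEMO2;
alter session set events '10053 trace name context off';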

Now, as I have no rows in the tables, the nested loop branch will be activated in place of the hash join. So if we display the plan once it is resolved, we will see the lines with an unexpected cost:

Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | | | 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | | | 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 127 (1)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

I think it’s harmless, just a bit misleading. 127 is not the cost of the index access. It’s the cost of the full table scan.
I had this surprise when trying to understand why the optimizer chose a full scan instead of an index access. This is probably the only reason why I look at the cost: I use hints to force the plan that I think is better, in order to understand where the optimizer thinks it is more expensive.

 

This article Adaptive Plans and cost of inactive branches appeared first on Blog dbi services.


Multitenant thin provisioning: PDB snapshots on ACFS


Database on ACFS is a long story. At first, the main reason for ASM was to bypass a filesystem layer that is not required by the database. ACFS was for the non-database files that had to be accessed by all cluster nodes. But then storage vendors and other providers came with snapshots, compression and thin provisioning, and Oracle had to answer: they implemented those storage features in ACFS and allowed database files on it.

When you create a database on an ODA X5, datafiles are created on an ACFS mount, and there is only one ACFS mount for many databases. You probably want to snapshot at database level, but ACFS snapshots are only at filesystem level. To avoid copying every write on the filesystem when you need a snapshot for only one database, it is implemented this way: at installation, the ACFS mount is created and a snapshot is taken while it is still empty. Then each database created will create its own snapshot. This means that in each snapshot you access only one database. There is no overhead and no additional copies because the master is empty.
Then came multitenant, where you can snapshot at PDB level for thin cloning (create PDB from … snapshot copy). But a multitenant CDB cannot be created on a snapshot: the CDB must be at the root level of the ACFS filesystem. So in ODA X6, an ACFS filesystem is created for each database. But then, when you thin clone a PDB, a snapshot is taken of the whole database filesystem. And this one is not empty: any write will involve an additional copy and overhead.
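To make the 'filesystem level' point concrete, here is roughly how an ACFS snapshot is taken manually (the snapshot name is mine; Oracle does the equivalent automatically when you create a snapshot copy PDB, as we will see below):

SQL> host acfsutil snap create -w my_snapshot /u02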

There's more info about ACFS, copy-on-write (which is actually redirect-on-write), and the performance overhead in the excellent presentation and demo from Ludovico Caldara. Here I'll show the snapshot overhead in multitenant when writing to the master, to the clone, and to the other PDBs.

PDB snapshots on ACFS

I start with a brand new CDB on ACFS with no snapshots:

[oracle@rac1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb3 16G 5.0G 9.9G 34% /
tmpfs 7.3G 631M 6.7G 9% /dev/shm
/dev/xvdb1 477M 118M 330M 27% /boot
/dev/xvdc1 69G 22G 44G 34% /u01
/dev/asm/data-52 25G 6.6G 19G 27% /u02
/dev/asm/fra-401 18G 3.4G 15G 19% /u03
/dev/asm/redo-358 19G 8.2G 11G 44% /u04

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

This is what is created by the Oracle Public Cloud for a RAC DBaaS.

I have a PDB1 pluggable database.

I create another one, PDBx, which will be totally independent.

SQL> create pluggable database PDBx admin user admin identified by "Ach1z0#d";
Pluggable database created.
SQL> alter pluggable database PDBx open;
Pluggable Database opened

I create a thin clone pluggable database PDB2, using PDB1 as the master:

SQL> create pluggable database PDB2 from PDB1 snapshot copy;
Pluggable database created.
SQL> alter pluggable database PDB2 open;
Pluggable Database opened

Here are my pluggable databases:

SQL> select pdb_name,GUID from dba_pdbs;
 
PDB_NAME GUID
-------- --------------------------------
PDB$SEED 3360B2A306C60098E053276DD60A9928
PDB1 3BDAA124651F103DE0531ADBC40A5DD3
PDBX 3BDCCBE4C1B64A5AE0531ADBC40ADBB7
PDB2 3BDCCBE4C1B74A5AE0531ADBC40ADBB7

PDB2 being a snapshot clone, its creation has taken a snapshot of /u02, the ACFS filesystem where the datafiles are:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 3588096 ( 3.42 MB )

Space usage is minimal here as no write happened yet.

datafiles

Here are the datafiles of my CDB, to see whether the PDB2 ones are on the snapshot:

RMAN> report schema;
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 780 SYSTEM YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_system_cwxwcgz4_.dbf
2 260 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_system_cwxwbzrd_.dbf
3 1030 SYSAUX NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_cwxw98jl_.dbf
4 760 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_sysaux_cwxwdof3_.dbf
7 545 UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_cwxwdl6s_.dbf
8 200 UNDOTBS2 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs2_cwxwrw7y_.dbf
9 370 PDB1:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
10 800 PDB1:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
11 5 USERS NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_users_cwxxop2q_.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf
49 370 PDBX:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6688l_.dbf
50 800 PDBX:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy6688r_.dbf
51 5 PDBX:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy6688z_.dbf
52 370 PDB2:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6725s_.dbf
53 800 PDB2:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy67261_.dbf
54 5 PDB2:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy67268_.dbf

The PDB2 datafiles are actually symbolic links to the snapshot:


[oracle@rac1 ~]$ ls -l /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile
/u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile:
total 62484
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_sysaux_cwy67261_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_system_cwy6725s_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
-rw-r----- 1 oracle oinstall 63971328 Sep 6 19:28 o1_mf_temp_cwy67267_.dbf
lrwxrwxrwx 1 oracle oinstall 141 Sep 6 19:28 o1_mf_users_cwy67268_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf

So, you have a snapshot of /u02 which contains all the CDB datafiles, but only the PDB2 datafiles will be read and written on the snapshot (through the symbolic links). The other CDB files are included in the snapshot without any reason: nothing will read or write them. They are there only because ACFS cannot snapshot a folder or a set of files. It works only at filesystem level.

write on master

For the moment, the snapshot is small: the blocks are shared.

If I write 100MB on the master (PDB1), those blocks will be copied in order to preserve the old image for the snapshot:

SQL> alter session set container=PDB1
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 105025536 ( 100.16 MB )

The snapshot size increased by the volume that has been written, which is expected: the old image is required by PDB2.

write on thin clone

If I write to the clone, copy has to happen as well:

SQL> alter session set container=PDB2
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 211275776 ( 201.48 MB )

So, because I've now written 200MB to blocks involved in the snapshot, the snapshot size is 200MB.

However, look at the way I did it, with truncate and insert: I'm writing to the same blocks as I did when writing on PDB1. To be sure, I checked it from DBA_EXTENTS and had the same result in both PDBs:
SQL> select file_id,listagg(block_id,',')within group(order by block_id),blocks from dba_extents where segment_name='DEMO' and segment_type='TABLE' group by file_id,blocks;
 
FILE_ID LISTAGG(BLOCK_ID,',')WITHINGROUP(ORDERBYBLOCK_ID) BLOCKS
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ----------
64 33024,33032,33040,33048,33056,33064,33072,33080,33088,33096,33104,33112,33120,33128,33136,33144 8
64 33152,33280,33408,33536,33664,33792,33920,34048,34176,34304,34432,34560,34688,34816,34944,35072,35200,35328,35456,35584,35712,35840,35968,36096,36224, 128
36352,36480,36608,36736,36864,36992,37120,37248,37376,37504,37632,37760,37888,38016,38144,38272,38400,38528,38656,38784,38912,39040,39168,39296,39424,
39552,39680,39808,39936,40064,40192,40320,40448,40576,40704,40832,40960,41088
64 41216,42240,43264,44288,45312 1024

So why do I have an additional 100MB in my snapshot? Writing to the original blocks should have been sufficient, as they had already been redirected to new ones by the write from PDB1. But because of the ACFS snapshot, the previous image is kept: in addition to the current state of PDB1 and PDB2, the snapshot also keeps the blocks of PDB1 as they were at the time of the PDB2 clone. Who needs that?

Ok. This is not a big issue if we consider that you usually don’t write on the master, because it’s the master.

write on other PDB

Remember that multitenant is for consolidation. You don't use a CDB only for a master and its clones; you may want to host other PDBs. If the snapshot covered only PDB1 and PDB2, writes to other files would not be concerned: no write overhead and no additional storage. However, because the snapshot was made on the whole filesystem, any write to it will copy the blocks, even those that have nothing to do with the thin cloned PDB. Here I'm writing 100MB to PDBx, which has nothing in common with PDB1 or PDB2:

SQL> alter session set container=PDBx
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 311214080 ( 296.79 MB )

This is an additional 100MB. The before image of PDBx has been saved, for no reason, because we will never read this previous image.

snapshot keeps growing

After a few minutes without any user activity, the snapshot has grown further:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 332947456 ( 317.52 MB )

On the /u02 filesystem, there are all the CDB files: SYSTEM, UNDO, controlfile, etc. They have activity and they are copied when written, once again for no reason.

drop thin clone

Only when I drop PDB2 is this space released:

SQL> alter pluggable database pdb2 close;
Pluggable database altered.
SQL> drop pluggable database pdb2 including datafiles;
Pluggable database dropped.

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

So what?

There's no bug here. It works as designed, because an ACFS snapshot is at filesystem level. If you want to use multitenant thin provisioning, the recommendation is to dedicate a CDB to the master and its clones. Any other PDB will have its writes copied to the snapshot for no reason. Writes to common files (UNDO in 12.1, SYSTEM, SYSAUX) will also be concerned. The clones should be short-lived and refreshed frequently. And of course, thin cloning is not for production. Very few snapshot/compression/clone technologies can be used in production. Look at storage vendor solutions for that (XtremIO for example).

 

This article Multitenant thin provisioning: PDB snapshots on ACFS appeared first on Blog dbi services.

Oracle 12cR2 SQL new feature: LISTAGG overflow


LISTAGG was a great feature introduced in 11g: it concatenates rows into a single line with a simple aggregate function. 12cR2 adds an overflow clause to it.

What happens when you have so many rows that the LISTAGG result is too long?

SQL> select listagg(rownum,',')within group(order by rownum) from xmltable('1 to 10000');
select listagg(rownum,',')within group(order by rownum) from xmltable('1 to 10000')
*
ERROR at line 1:
ORA-01489: result of string concatenation is too long

An error at runtime, and we don’t like runtime errors.

If you want to manage the overflow yourself, it's not easy: you have to run a first query that sums the lengths and then calculate how much can fit:

SQL> select v.*,4000-size_current from (
2 select n,
3 sum(length(n||',')) over(order by n rows between unbounded preceding and current row)-1 size_current,
4 sum(length(n||',')) over(order by n rows between unbounded preceding and 1 following)-1 size_next
5 from (select rownum n from xmltable('1 to 10000'))
6 ) v
7 where size_current between 4000-50 and 4000;
 
N SIZE_CURRENT SIZE_NEXT 4000-SIZE_CURRENT
---------- ------------ ---------- -----------------
1012 3952 3957 48
1013 3957 3962 43
1014 3962 3967 38
1015 3967 3972 33
1016 3972 3977 28
1017 3977 3982 23
1018 3982 3987 18
1019 3987 3992 13
1020 3992 3997 8
1021 3997 4002 3
 
9 rows selected.

Here you can see that values above 1020 will not fit in a VARCHAR2(4000).

In 12.2 you can manage the overflow in two ways.

You can choose to raise a runtime error:

SQL> select listagg(rownum, ',' on overflow error)within group(order by rownum) from xmltable('1 to 10000');
select listagg(rownum, ',' on overflow error)within group(order by rownum) from xmltable('1 to 10000')
*
ERROR at line 1:
ORA-01489: result of string concatenation is too long

But you can also choose to truncate the result:

SQL> select listagg(rownum, ',' on overflow truncate '' without count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE''WITHOUTCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,1016,1017,1018,1019,1020,1021,

You may want to add some characters to show that it has been truncated:


SQL> select listagg(rownum, ',' on overflow truncate '...' without count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'...'WITHOUTCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,1016,1017,1018,1019,1020,...

And you may even show the number of values that are not displayed:


SQL> select listagg(rownum, ',' on overflow truncate '...' with count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'...'WITHCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,...(8985)

The nice thing is that the truncation is adapted to the information displayed:


SQL> select listagg(rownum, ',' on overflow truncate 'blah blah blah...' with count)within group(order by rownum) from xmltable('1 to 10000');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'BLAHBLAHBLAH...'WITHCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,blah blah blah...(8988)

The ",blah blah blah…()" takes 20 characters and the count may take up to 24 characters, so the truncated value cannot be larger than 4000-20-24=3956. From the first query we ran, we see that we have to truncate after the value "1012". There's no dynamic evaluation of the count size.

If all the values fit, then it's not truncated. In the first query we saw that values up to 1021 take 3997 characters:


SQL> select listagg(rownum, ',' on overflow truncate 'blah blah blah...' with count)within group(order by rownum) from xmltable('1 to 1021');
 
LISTAGG(ROWNUM,','ONOVERFLOWTRUNCATE'BLAHBLAHBLAH...'WITHCOUNT)WITHINGROUP(ORDERBYROWNUM)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,
103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,
178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,
253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,
328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,
403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,
478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,
553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,
628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,
703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,
778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,
853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,885,886,887,888,889,890,891,892,893,894,895,896,897,898,899,900,901,902,903,904,905,906,907,908,909,910,911,912,913,914,915,916,917,918,919,920,921,922,923,924,925,926,927,
928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,943,944,945,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,976,977,978,979,980,981,982,983,984,985,986,987,988,989,990,991,992,993,994,995,996,997,998,999,1000,1001,10
02,1003,1004,1005,1006,1007,1008,1009,1010,1011,1012,1013,1014,1015,1016,1017,1018,1019,1020,1021

In summary

After the delimiter string you can add:

  • ON OVERFLOW ERROR which is the default. Same behavior as in previous releases.
  • ON OVERFLOW TRUNCATE with a string that is added in case of truncation (default: ‘…’) and optionally WITH COUNT (or WITHOUT COUNT which is the default)

The full syntax is in the documentation

 

This article Oracle 12cR2 SQL new feature: LISTAGG overflow appeared first on Blog dbi services.

Oracle Database 12.2 – PDBaaS


It's official: Larry Ellison made the announcement at the first keynote, and the database product team at Oracle has released the version and the documentation publicly. Oracle Database Exadata Express Cloud Service is the 'Cloud First' environment for 12.2.

Documentation

Documentation is there: Cloud > Platform > Data Management > Exadata Express
The 12.2 new features available in that 'Cloud First' service are documented here.

Cloud First

We knew it: 12.2 comes 'Cloud First', which means that you cannot download it but you can use it on a Cloud Service. This is, in my opinion, a very good idea. We will not upgrade all our databases to 12.2 at once, so it's better to test it first, and cloud services are good for that.
However, the way it is released is quite limited:

  • There is no free trial. You have to pay for the minimum service to test it (175$/month)
  • Features are very limited because the service is a PDB, not a full database

PDBaaS

This Oracle Database Exadata Express Cloud Service is a fully managed service, which means that you are not the database administrator. Oracle manages the system, creates and administers the database. You are a user.
Actually, when you create a service, a Pluggable Database is provisioned for you and you access only this PDB. In addition to that, for security reasons, all features that may interact with the other PDBs or the system are locked down. For example, you cannot use Data Pump because it writes files on the server. All limitations are documented here.
If you wonder how those limitations are implemented, it's a new 12.2 multitenant feature called lockdown profiles, plus the resource manager that can isolate PDB memory. I presented that yesterday at Oracle Open World and there is more information about it in a new book to come.
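As an illustration of the mechanism (my own sketch with hypothetical names, not the actual profile used by the service), a lockdown profile is created in CDB$ROOT and assigned through the PDB_LOCKDOWN parameter:

-- define what sessions in a PDB are not allowed to do
create lockdown profile pdbaas_sandbox;
alter lockdown profile pdbaas_sandbox disable statement=('ALTER SYSTEM');
alter lockdown profile pdbaas_sandbox disable feature=('NETWORK_ACCESS');
-- assign the profile: from now on the restrictions apply to the PDBs
alter system set pdb_lockdown='PDBAAS_SANDBOX';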

Options

Features are limited but you have most options available: In-Memory, Data Mining, Advanced Compression and Hybrid Columnar Compression, Data Redaction, etc. And it's an Exadata behind, so you have SmartScan.

You can think of it as the ‘Schema as a Service’, but with a PDB instead of a schema.

You access it only through SQL*Net (encrypted) and you can move data to and from it using SQL Developer.

Shapes

When you see 'Exadata', 'In-Memory', and all those options, you probably think of a service for a very big database with high CPU resources. But it is not. This service is for evaluation of 12.2, testing, development, and training on very small databases (a few hundred GB), with only one OCPU (which is an Intel core with two threads). It's hard to imagine more than one user on this. And with the maximum memory being 5GB, it's also hard to imagine In-Memory here.

So the goal is clearly to test features, not to run workloads. You can go live with it only if your production is not critical at all (the database is backed up daily).

Express

The ‘Express’ part is the simplicity. Prices are easy to calculate:

  • 175$/month for 20GB of storage and one OCPU. This is the 'X20' service.
  • The next level is the 'X50' service at 750$/month, so 2.5 times the storage for 4.2 times the price. Still one OCPU.
  • The highest level is 'X50IM' at 950$/month, which is the same but with larger memory.

Non-Metered

It is a non-metered service: whether you use it or not, you pay per month. But don't think you can do whatever you want within that month, as data transfer is limited: you can transfer the volume of the database only a few times per month.

So what?

The utilization is simple: you don’t need a DBA. This is the main point: automation and fast provisioning.
Developers will love that. Giving them full options is a good marketing idea: once the application is designed to use In-Memory, Compression, etc., these options will be required for production as well.

Today, developers need more agility and are often slowed down by operations. And that's a major reason why they go to other products that they can install and use themselves easily: Postgres, Cassandra, MongoDB, etc. Oracle Database is too fat for that: look at the time you need to create a database, catalog, catproc, etc. A first answer was the Oracle XE edition, which is easy to install anywhere. Now with this Express Cloud Service, Oracle gives the possibility to provision a small database in minutes, with no further administration required.
Actually, this is the whole idea behind the multitenant architecture: consolidate all those system objects created by catalog/catproc into a common location (CDB$ROOT) and have light PDBs with only user data.

Final remark: currently 12.2 is available only on that service, but there is no doubt that a full 12.2 will come within the next months.

 

This article Oracle Database 12.2 – PDBaaS appeared first on Blog dbi services.

Oracle 12cR2 Long Identifiers


This morning, during Gerald Venzl's presentation of "What's New for Developers in the Next Generation of Oracle Database" at Oracle Open World, one feature was acclaimed by a full room: 12.2 marks the end of identifiers limited to 30 characters.

12.1

We knew it would happen, because in 12.1 all data dictionary views already have 128-byte character strings:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> desc dba_objects
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)

but that’s only the dictionary metadata. Impossible to reach that limit:

SQL> create table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" as select * from dual;
create table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" as select * from dual
*
ERROR at line 1:
ORA-00972: identifier is too long

It is only annoying, as the default column format does not fit on the screen:

SQL> select owner,object_name from dba_objects where object_type='TABLE';
 
OWNER
------------------------------------------------------------------------------------------------------------------------
OBJECT_NAME
------------------------------------------------------------------------------------------------------------------------

12.2

In 12.2 you can create longer identifiers:

SQL> create table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" as select * from dual;
Table created.
SQL> alter table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" add XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX number;
Table altered.

But be careful: the limit is in bytes, not in characters. If we have multibyte characters, the limit can be reached earlier:

SQL> alter table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" add X€XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX number;
alter table "ThisIsAVeryLongNameThatIsAllowedInTwelveTwoC" add X€XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX number
  *
ERROR at line 1:
ORA-00972: identifier is too long
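You can check the byte length of a name with LENGTHB. A quick sketch, assuming an AL32UTF8 database where the euro sign takes 3 bytes:

SQL> select length('X€') chars, lengthb('X€') bytes from dual;

     CHARS      BYTES
---------- ----------
         2          4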

So what?

The goal is not to abuse this and use the longest names possible. But allowing more than 30 characters makes it easier to match table names with Java class names, for example.

Oracle 7

I wanted to show that this 30-character limit is very old, so I ran a 7.3.3 Oracle version that lies on my laptop and displayed the same describe of DBA_OBJECTS:

[screenshot: describe of DBA_OBJECTS on Oracle 7.3.3, showing OBJECT_NAME as VARCHAR2(128)]

Identifiers were 30 characters there too. But look at the object name: those 128 bytes have been there for more than 20 years!

 

This article Oracle 12cR2 Long Identifiers appeared first on Blog dbi services.

Oracle 12cR2 Optimizer Adaptive Statistics


When 12.1 came out, the major issue we encountered after migration was related to the new adaptive features of the optimizer: automatic reoptimization, SQL Plan Directives and the resulting dynamic sampling. Of course, Oracle product managers listen to feedback, provide workarounds or fixes, and make things better for the next release. Let's see what has been announced on this topic for 12.2.

Fixing vs. workarounds

Unfortunately, in most cases, when a problem is encountered, people give it priority only until the issue appears to be "solved", and then close the problem. However, for stability and reliability, this is not enough. There are two phases in problem resolution:

  1. Make broken things work again as soon as possible
  2. Ensure that the solution addresses the root cause and is in the same scope as the problem

If you stop after the first point, you don't have a solution. You have a workaround, and two things will happen sooner or later:

  • The problem will come back again
  • New problems will appear as side effects

12.1

So, when you upgrade to 12c from 11.2.0.4 for example, the easiest way to fix a regression is to set optimizer_features_enable='11.2.0.4'. But when you do that, you have done only the first step. Nothing is fixed. Actually, by doing that you didn't even finish your upgrade job.
I've already blogged about how to fix an adaptive statistics issue while keeping the fix in the same scope as the problem; there are many combinations that depend on your context.

One parameter do not fit all

It's easy to disable all the new adaptive features and claim that the 12c optimizer is full of bugs. However, there are two things I'm sure of:

  • The developers of the optimizer know their stuff at least 100x better than I do
  • They want to bring nice features rather than trying to break things

And they do something very nice: each individual feature can be enabled or disabled by a parameter. So there are a lot of parameters. Some of them are undocumented, just because at release time they are not expected to need a value other than the default, except in special situations guided by the support. But one set of default values cannot fit all environments. Are you doing OLTP or BI? OLTP likes stability, BI likes adaptive optimization. And probably your database has both OLTP and reporting workloads, maybe at the same time. This is the first reason why one set of parameters cannot fit all. There's another one you should think about before blaming the optimizer: maybe they bring features that help to make good applications even better. Maybe the set of default values is not chosen to fit the worst application design…
Let's come back to the OLTP vs. BI example. Adaptive features are enabled by default for BI: you may spend more time on parsing in order to get the optimal execution plan. But then you complain that your OLTP spends more time on parsing… except that you are not supposed to parse on OLTP! The overhead of adaptive features should not be a blocking problem if you parse your queries once and then execute them.

I tend to blog on encountered issues rather than on things that do not raise any problem, because my job is to solve problems rather than stare at what works well. I've encountered a lot of issues with those adaptive features. But I have also seen lots of applications that had no problem at all when upgraded to 12c. When you disable the adaptive features, are you working around an optimizer problem, or an application design problem?

12.2

In 12.1, only optimizer_adaptive_features is documented, but it disables too many features. You may want to disable SQL Plan Directives and their consequences, but you probably want to keep adaptive plans, as they are awesome and less prone to bad side effects. So in 12.2 this parameter has been split into two parameters: OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS.

In addition to that, only OPTIMIZER_ADAPTIVE_PLANS is set to true by default. OPTIMIZER_ADAPTIVE_STATISTICS is false, so by default you will not have the following 12c features: SQL Plan Directives, statistics feedback, performance feedback, and adaptive dynamic sampling for parallel query.
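Both can be switched independently. A sketch (keeping the 12.2 default for plans; re-enabling adaptive statistics is a choice you might make for a reporting workload, not a recommendation from this post):

-- 12.2 default: plans adaptive, statistics not
alter system set optimizer_adaptive_plans=true scope=both;
-- for a BI/reporting workload you may choose to re-enable adaptive statistics
alter system set optimizer_adaptive_statistics=true scope=both;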

Here are the features enabled by OPTIMIZER_ADAPTIVE_PLANS (first value with the parameter at its default of TRUE, second value when set to FALSE):

optimizer_adaptive_plans TRUE FALSE
_optimizer_nlj_hj_adaptive_join TRUE FALSE
_px_adaptive_dist_method CHOOSE OFF
_optimizer_strans_adaptive_pruning TRUE FALSE

If you want more information about them, I’ve written articles about adaptive join, adaptive PX distribution and adaptive star transformation bitmap pruning.

Here are the features enabled by OPTIMIZER_ADAPTIVE_STATISTICS (first value with the parameter at its default of FALSE, second value when set to TRUE):

optimizer_adaptive_statistics FALSE TRUE
_optimizer_dsdir_usage_control 0 126
_optimizer_use_feedback_for_join FALSE TRUE
_optimizer_ads_for_pq FALSE TRUE

As you can see, there is no “_optimizer_gather_feedback” here, so the cardinality feedback coming from 11g is still there when you disable adaptive statistics. You may like it or not, and you may want to disable cardinality feedback as well if you don’t want plans that change.
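If you do choose to disable it, the underscore parameter usually mentioned for the 11g cardinality feedback feature is the following. This is a sketch and an assumption on my side: as with any underscore parameter, validate it with Oracle Support before using it.

SQL> alter system set "_optimizer_use_feedback" = false;
System altered.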

What if you already have some SQL Plan Directives? As “_optimizer_dsdir_usage_control” is 0, they will not be used. And they will be dropped automatically after 53 weeks of no usage.
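To check what is there and when it was last used, here is a sketch using the documented DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS views:

SQL> select d.directive_id, d.type, d.state, d.last_used, d.auto_drop
     from dba_sql_plan_directives d
     join dba_sql_plan_dir_objects o on o.directive_id = d.directive_id
     where o.owner = user;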

 


OOW 2016: database news


Here is some information about what was announced or presented at Oracle Open World. The news is relayed pretty much everywhere, mostly in English, so here is a summary for French-speaking readers.

Oracle Database 12c Release 2

Let’s be clear: the database is not the main topic of Open World. As expected, this is a ‘Cloud First’ release, but the version made public on Monday is a limited one.
If you have used ‘Schema as a Service’, this is somewhat the same idea, except that here it is ‘PDB as a Service’. With multitenant, consolidation by schema is replaced by consolidation by PDB, which has the advantage of virtually presenting a complete database, with its public objects, its multiple schemas, and so on.
So there is no administration access: it is a ‘managed’ service, administered by Oracle.
Multitenant makes it possible to grant DBA rights on a PDB while preventing any interaction with the rest of the system. These are new 12.2 features, among them the ‘lockdown profiles’, which were developed for this purpose.
The service is called ‘Exadata Express Cloud Service’ because it runs on Exadata (hence HCC compression and soon SmartScan). Most options are available as well (In-Memory, Advanced Compression, …).
‘Express’ stands for the ease and speed of provisioning: a few minutes. The goal is that a developer can create an easily accessible database service (over encrypted SQL*Net) in 5 minutes. The idea is that it should be as easy for a developer to create an Oracle database as it is to create Postgres, Cassandra or MongoDB databases.
And of course, if all the options are included, developers will use them and they will become necessary in production.

There will soon be a data center in Europe. For the moment, it is only in the USA. The price is attractive (CHF 170 per month) but the database is quite limited in terms of CPU, storage and memory. It is mainly meant for development and sandboxing.

So for the moment, 12c Release 2 is only available as PDBaaS on the Exadata Express Cloud Service:

[Image: Exadata Express Cloud Service]

Before the end of the year, we should have 12.2 as DBaaS (the non-managed offering we currently know on the Oracle PaaS), and the General Availability version will come afterwards, probably in 2017.

 


Oracle 12cR2: IS_ROLLING_INVALID in V$SQL


In a previous post I published a test case to show when a cursor is no longer shared after a rolling invalidation. Basically, dbms_stats marks the cursor as ‘rolling invalid’ and the next execution marks it as ‘rolling invalid executed’. Looking at 12cR2, there is a little enhancement in V$SQL: an additional column displays those states.

Note that 12cR2 full documentation is not yet available, but you can test this on the Exadata Express Cloud Service.

I set the invalidation period to 5 seconds instead of 5 hours to show the behavior without waiting:

17:43:52 SQL> alter system set "_optimizer_invalidation_period"=5;
System altered.

I’ll run a statement with dbms_sql in order to separate the parse and execute phases:

17:43:53 SQL> variable c number
17:43:53 SQL> exec :c := dbms_sql.open_cursor;
PL/SQL procedure successfully completed.
17:43:53 SQL> exec dbms_sql.parse(:c, 'select (cast(sys_extract_utc(current_timestamp) as date)-date''1970-01-01'')*24*3600 from DEMO' , dbms_sql.native );
PL/SQL procedure successfully completed.

Here is the cursor from V$SQL including the new IS_ROLLING_INVALID column:

17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 0 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 N

The statement is parsed (one parse call + load) but IS_ROLLING_INVALID is N.

Now I execute it:

17:43:53 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
0
PL/SQL procedure successfully completed.
 
17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 1 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 N

Statement has one execution.

I’m now gathering statistics with default rolling invalidation:

17:43:53 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
 
17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 1 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 Y

The cursor is now marked as rolling invalid (IS_ROLLING_INVALID=”Y”). But wait, this is not a “Y”/”N” boolean; there’s another possible value.

I execute the statement again (no parse call, only execution):

17:43:53 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
0
PL/SQL procedure successfully completed.
 
17:43:53 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X

The cursor is now marked as rolling invalid executed (“X”) and this is where the rolling window starts (which I’ve set to 5 seconds instead of 5 hours).

I wait 5 seconds and the cursor has not changed:

17:43:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X
 

I execute it again (no parse call, only re-execute the cursor):

17:43:58 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
0
PL/SQL procedure successfully completed.

For this execution, a new child has been created:

17:43:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X
0 1 0 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 17:43:57 N

So rolling invalidation does not require a parse call. An execution can start the rolling window and set the invalidation timestamp, and the first execution after this timestamp creates a new child cursor.

I’ll now test what happens with parse calls only.

I set a longer rolling window (2 minutes) here:

17:43:58 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
 
17:43:58 SQL> alter system set "_optimizer_invalidation_period"=120;
System altered.

The last child has been marked as rolling invalid but not yet executed in this state:

17:43:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------ ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 17:43:52 X
0 1 0 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 17:43:57 Y

From a new session I open another cursor:

17:43:58 SQL> connect &_user./demo@&_connect_identifier
Connected.
17:43:58 SQL> exec :c := dbms_sql.open_cursor;
PL/SQL procedure successfully completed.

And run several parse calls without execute, one every 10 seconds:

17:43:58 SQL> exec for i in 1..12 loop dbms_sql.parse(:c, 'select (cast(sys_extract_utc(current_timestamp) as date)-date''1970-01-01'')*24*3600 from DEMO' , dbms_sql.native ); dbms_lock.sleep(10); end loop;
PL/SQL procedure successfully completed.

So two minutes later I see that I have a new child created during the rolling window:

17:45:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTI IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- --------- ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 X
0 1 3 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 Y
0 1 9 0 2016-09-25/17:43:53 2016-09-25/17:44:27 25-SEP-16 N

Here, at the third parse call (17:44:27) during the invalidation window, a new child cursor has been created. The old one is still marked as rolling invalid (“Y”), but not ‘rolling invalid executed’ (“X”) because it has not been executed.

So it seems that both parse and execute calls trigger the rolling invalidation, and IS_ROLLING_INVALID displays which one occurred.

An execute will now execute the new cursor:

17:45:58 SQL> exec dbms_output.put_line( dbms_sql.execute(:c) );
 
PL/SQL procedure successfully completed.
 
17:45:58 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTI IS_ROLLING_INVALID
------------- ---------- ----------- ---------- ------------------- ------------------- --------- ------------------
0 1 1 2 2016-09-25/17:43:53 2016-09-25/17:43:53 25-SEP-16 X
0 1 3 1 2016-09-25/17:43:53 2016-09-25/17:43:57 25-SEP-16 Y
0 1 9 1 2016-09-25/17:43:53 2016-09-25/17:44:27 25-SEP-16 N

Of course, when new cursors have been created we can see the reason in V$SQL_SHARED_CURSOR:

17:45:58 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472658232<
/invalidation_window><ksugctm>1472658237</ksugctm></ChildNode>
 
1 <ChildNode><ChildNumber>1</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>1</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472658266<
/invalidation_window><ksugctm>1472658268</ksugctm></ChildNode>
 
2

The last child cursor has been created at 17:44:28 (ksugctm=1472658268) because the invalidation window timestamp (invalidation_window=1472658266) had been reached.

So what?

We love Oracle because it’s not a black box. And it’s good to see that they continue in this way by exposing in V$ views information that can be helpful for troubleshooting.

Rolling invalidation has been introduced for dbms_stats because we have to gather statistics and we don’t want hard parse storms after that.
But remember that invalidation can also occur with DDL such as create, alter, drop, comment, grant, revoke.
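For reference, here is how the three no_invalidate options of dbms_stats map to the invalidation behavior (a sketch on the DEMO table used above):

-- invalidate dependent cursors immediately
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);

-- do not invalidate dependent cursors at all
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>true);

-- rolling invalidation (the default, dbms_stats.auto_invalidate)
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>dbms_stats.auto_invalidate);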

You should avoid running DDL while the application is running. However, we may have to do some of those operations online. It would be nice to have the same rolling invalidation mechanism, and it seems that this will be possible:


SQL> show parameter invalid
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_invalidation string IMMEDIATE
 
SQL> alter session set cursor_invalidation=XXX;
ERROR:
ORA-00096: invalid value XXX for parameter cursor_invalidation, must be from among IMMEDIATE, DEFERRED

That’s interesting. I’ll explain which DDL can use that in a future blog post.

 



Oracle 12c and RMAN automatic datafile creation


A lot of new features popped up with RMAN 12c, like the expansion of multi-section support or simplified cross-platform migration, and many, many more. However, in this post I would like to highlight a small but very helpful new feature which I demonstrated during my session at the SOUG day.

The automatic datafile creation

In earlier versions of Oracle, you might run into situations where you create a new tablespace and some objects in it, and then, all of a sudden, the DB crashes or the datafile is lost without ever having been backed up.

This is where the following command comes into play:

alter database create datafile <NAME>;

In Oracle 12c, however, this is done automatically for you when you use RMAN.

Let’s create a tablespace and afterwards resize the datafile.

SQL> create tablespace dbi datafile '/home/oracle/rman/dbi01.dbf' size 16M;

Tablespace created.

SQL> alter database datafile '/home/oracle/rman/dbi01.dbf' resize 32M;

Database altered.

Now let’s create a table in the new tablespace.

SQL> create table t_dbi tablespace dbi as select * from dba_objects;

Table created.

SQL> select count(*) from t_dbi;

  COUNT(*)
----------
    120130

Afterwards, we simulate an error and then check what Oracle says.

$ echo destroy > /home/oracle/rman/dbi01.dbf
SQL> alter system flush buffer_cache; 

System altered. 

SQL> select count(*) from t_dbi; 
select count(*) from t_dbi 
* 
ERROR at line 1: 
ORA-03135: connection lost contact Process ID: 25538 
Session ID: 387 Serial number: 55345

From the alert.log

Errors in file /u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/OCM121_ora_25904.trc:
ORA-01157: cannot identify/lock data file 11 - see DBWR trace file
ORA-01110: data file 11: '/home/oracle/rman/dbi01.dbf'

OK. The datafile is gone, and we have no backup. Let’s do a preview to see how Oracle would restore the datafile.

RMAN> restore ( datafile 11 ) preview;

Starting restore at 30-SEP-2016 15:46:27
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=355 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=126 device type=DISK

datafile 11 will be created automatically during restore operation
using channel ORA_DISK_1
using channel ORA_DISK_2

List of Archived Log Copies for database with db_unique_name OCM121
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
519     1    529     A 30-SEP-2016 12:23:00
        Name: +FRA/OCM121/ARCHIVELOG/2016_09_30/thread_1_seq_529.454.923931563

recovery will be done up to SCN 7346834
Media recovery start SCN is 7346834
Recovery must be done beyond SCN 7346834 to clear datafile fuzziness
Finished restore at 30-SEP-2016 15:46:28

As you can see in the preview output, Oracle will create the datafile automatically for us. OK. Let’s try it.

RMAN> restore ( datafile 11 );

Starting restore at 30-SEP-2016 15:48:29
using channel ORA_DISK_1
using channel ORA_DISK_2

creating datafile file number=11 name=/home/oracle/rman/dbi01.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 30-SEP-2016 15:48:31

RMAN> recover datafile 11;

Starting recover at 30-SEP-2016 15:49:17
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery
media recovery complete, elapsed time: 00:00:02

Finished recover at 30-SEP-2016 15:49:20

RMAN> alter database datafile 11 online;

Statement processed

Ready. We can now access our table again without running the “alter database create datafile” command during the restore. And the resize to 32M was also done for us; however, the resize happened during the recovery part.

$ ls -l dbi01.dbf
-rw-r----- 1 oracle asmadmin 33562624 Sep 30 15:52 dbi01.dbf

From my point of view, a small, but quite helpful feature.

One last remark: I did my demo with Oracle 12cR1 with the July 2016 PSU. In that version, the RMAN PREVIEW command has a bug which says that you cannot recover into a consistent state, even if you can. After applying the following one-off patch on top of the July 2016 PSU, the RMAN PREVIEW command works as expected.

Patch 20315311: RMAN-5119: RECOVERY CAN NOT BE DONE TO A CONSISTENT STATE

Cheers,

William

Oracle 12cR2: DDL deferred invalidation


In a previous post I described the new V$SQL information about rolling invalidation. I did the example with dbms_stats, which has been able to do rolling invalidation since 11g. But more is coming with 12.2, as you can use rolling invalidation for some DDL.

When you do some DDL on an object, dependent cursors are invalidated. On a busy database that’s something to avoid. I recently saw a 10-minute peak of hanging sessions, and the root cause was an ALTER TABLE ADD COMMENT. This does not change anything in the execution plans, but the cursors are invalidated. And doing that on a central table can trigger a hard parse storm.

For the example, I’ll reduce the rolling invalidation window to 25 seconds as I don’t want to wait hours:
SQL> alter system set "_optimizer_invalidation_period"=25;
System altered.

Currently 12.2 is available only on the Oracle Exadata Express Cloud Service, but please don’t ask me how I was able to set an underscore parameter there. However, you can reproduce the same by waiting 5 hours instead of 25 seconds.

I create a simple table and prepare my query to check cursor information:

SQL> create table DEMO as select * from dual;
Table created.
 
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
no rows selected
 

I run a simple query and check the cursor:

SQL> SELECT * FROM DEMO;
 
D
-
X
 
SQL>
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 1 1 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 N

1 parse call, cursor loaded, hard parsed and executed.

I create an index on the table with the new DEFERRED INVALIDATION syntax:


SQL> create index DEMO on DEMO(dummy) deferred invalidation;
Index created.

I see that it is flagged as rolling invalid:

SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 1 1 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 Y

When I run it, a timestamp is set within the rolling invalidation window (5 hours by default, but here 25 seconds):

SQL> SELECT * FROM DEMO;
 
D
-
X
 
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 X

As you see, this Y/N flag has a third value, which shows that the cursor has been executed after being marked rolling invalid.

I wait 30 seconds:

SQL> host sleep 30

From that point, the invalidation timestamp has been reached so a new execution will create a new child cursor:

SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 X
 
SQL> SELECT * FROM DEMO;
 
D
-
X
 
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 X
SELECT * FROM DEMO 0 1 1 1 2016-09-17/01:04:16 2016-09-17/01:04:45 01:04:45 N

You must be careful here. If you are used to checking the INVALIDATIONS column, you may miss the rolling ones: INVALIDATIONS is for parent cursors, while IS_ROLLING_INVALID is for invalidated child cursors.

Note that, of course, until the invalidation, the new index will not be used by those cursors. So if you create the index to solve a performance issue, you may prefer to invalidate the cursors.

Same can be done with index rebuild:


SQL> alter index DEMO rebuild deferred invalidation;
Index altered.
 
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 X
SELECT * FROM DEMO 0 1 1 1 2016-09-17/01:04:16 2016-09-17/01:04:45 01:04:45 Y
 
SQL> SELECT * FROM DEMO;
 
D
-
X
 
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 X
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:45 01:04:46 X

Of course, rolling invalidation can happen only for the cursors that do not use the index.

With the same restriction, you can do it when you set an index unusable:


SQL> alter index DEMO unusable deferred invalidation;
 
Index altered.
 
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 X
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:45 01:04:46 X
 
SQL> SELECT * FROM DEMO;
 
D
-
X
 
SQL> select sql_text,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%';
 
SQL_TEXT INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE IS_ROLLING_INVALID
------------------- ------------- ---------- ----------- ---------- ------------------- ------------------- ----------- --------------------
SELECT * FROM DEMO 0 1 2 2 2016-09-17/01:04:16 2016-09-17/01:04:16 01:04:16 X
SELECT * FROM DEMO 0 1 3 3 2016-09-17/01:04:16 2016-09-17/01:04:45 01:04:46 X

You get the same behavior if you drop the index: cursors that do not use it are not invalidated immediately.

From my tests, you can add DEFERRED INVALIDATION when you MOVE TABLE, but invalidation is immediate. Only when moving partitions does the rolling invalidation occur.

An alternative to specifying DEFERRED INVALIDATION in the DDL is to set it as the default:


SQL> show parameter cursor_invalidation
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_invalidation string IMMEDIATE
 
SQL> alter session set cursor_invalidation=deferred;
 
Session altered.
 

But remember, not all DDL will do rolling invalidation, even when the syntax is accepted.

 


Oracle 12c and RMAN switch datafile to copy, is it really so easy?


Oracle incrementally updating backups are used quite often because they are easy to set up and restoring a datafile is very fast. It is very fast because you are not really restoring a datafile; you are switching to the copy in case something happens. But how do I switch back to the original destination with minimal downtime and minimal impact on the system?
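For context, here is the typical incrementally updating backup script this post assumes; the tag name ‘incr_update’ is an assumption, and it matters later when cataloging the copy:

RMAN> run {
        recover copy of database with tag 'incr_update';
        backup incremental level 1 for recover of copy with tag 'incr_update' database;
      }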

A quite common scenario is that we have 3 diskgroups, +DATA, +FRA and +REDO with different performance characteristics, like the following:

  • +DATA Diskgroup is on fast storage (10k rpm)
  • +FRA Diskgroup is on medium storage (7200 rpm)
  • +REDO Diskgroup is on very fast storage (15k rpm)

Losing an 8TB datafile

If we now lose an 8TB bigfile datafile on the +DATA diskgroup, what options do we have to restore it? OK, let’s ask the Oracle Data Recovery Advisor first. Oracle came up with the following script.

oracle@oel001:/home/oracle/rman/ [OCM121] cat /u00/app/oracle/diag/rdbms/ocm121/OCM121/hm/reco_3905236091.hm
   # restore and recover datafile
   restore ( datafile 9 );
   recover datafile 9;
   sql 'alter database datafile 9 online';

The script does its job, no question about it, but it means that Oracle would copy 8TB from +FRA to +DATA and afterwards maybe apply an incremental level 1 and some archived logs. If we run this script, we wait about 4 hours (supposing that we copy at 600MB per second, which is very good).

In case your Database has a Standby in a DataGuard configuration, Oracle comes up with the following suggestion.

oracle@oel001:/home/oracle/rman/ [OCM121] cat /u00/app/oracle/diag/rdbms/ocm121/OCM121/hm/reco_4177443733.hm
   # restore from standby and recover datafile
   restore ( datafile 9 from service "sty121" );
   recover datafile 9;
   sql 'alter database datafile 9 online';

Again, it will work, but now Oracle tries to get the 8TB datafile over the network from the standby, which makes it even slower. Unfortunately, “switch datafile to copy” was not built into the Data Recovery Advisor.

OK, let’s do it manually and switch to the datafile copy. That takes only a few minutes, and this is the reason why we have incrementally updating backups: to make it as fast as possible.

RMAN> switch datafile 9 to copy;

datafile 9 switched to datafile copy "+FRA/OCM121/DATAFILE/monster.346.919594959"

RMAN> recover datafile 9;

RMAN> sql 'alter database datafile 9 online';

 

Now we have restored and recovered the 8TB datafile, and users can start working again on that bigfile tablespace. But due to the fact that +FRA has only medium storage, your application might run slower than before.

Be careful: another issue might pop up once you are already on your datafile copy in +FRA. If a backup now kicks in (scheduled by cron or something else), then Oracle has to create another 8TB in +FRA as a new base for its incrementally updating backups, which makes your application even slower. Even worse, you might run out of space.

An easy restore might now end up in a quite complex scenario. So what do we do? First of all, we have to make sure that backups are not scheduled during our restore/recovery; then we can manually create a new datafile copy in +DATA (of course, after the situation which led to the datafile loss has been corrected). In case you are running 12c, you can use the new feature “Multisection Backup for Datafile Copies”.

RMAN> backup section size 1T as copy datafile 9 format ='+DATA' tag clonefile9;

 

Now a small downtime kicks in, when we have to take the datafile offline, switch to our new copy in +DATA, recover it, and take it online again.

RMAN> sql 'alter database datafile 9 offline';

RMAN> switch datafile 9 to copy;

RMAN> recover datafile 9;

RMAN> sql 'alter database datafile 9 online';

 

Ufff… we are ready now, and users can work again with the application, which is on the fast storage in +DATA. But wait a second: if we start our RMAN backup again, Oracle does not recognize the datafile copy in +FRA anymore as a valid copy for incrementally updating backups. So Oracle has to create another 8TB in +FRA.

Now comes the million-dollar question: how can we avoid this? We need to tag the datafile copy in +FRA as a valid starting point for incrementally updating backups.

RMAN> catalog datafilecopy '+fra/ocm121/datafile/MONSTER.346.919594959' level 0 TAG 'incr_update';

cataloged datafile copy
datafile copy file name=+FRA/ocm121/datafile/monster.346.919594959 RECID=33 STAMP=919603336

Oracle has the very useful “catalog” command for situations like this. Take care that you specify “level 0” and the correct “tag”, otherwise the datafile copy will not be recognized.

Now we are really ready, and we can start the RMAN incrementally updating backups again, like we did beforehand.

To summarize it:

  • Take care of your backups during the restore; they might make the situation even worse.
  • Make use of the new feature “Multisection Backup for Datafile Copies”. It can speed up the creation of your datafile copies considerably.
  • Use the “catalog” command to tag your datafile copy correctly. It avoids the creation of another 8TB.

Cheers,

William

 


Oracle 12c – Managing RMAN persistent settings via SQL


RMAN persistent settings can be managed in two different ways.

  • Via the RMAN interface
    – e.g. RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
  • Via SQL
    – e.g. VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');

There are several scenarios when it might be helpful to use the SQL way. I will show 3 of them:

  • Automation
  • Reset to default
  • Rebuilding the RMAN persistent settings after losing all controlfiles (no catalog)

Let’s take a look at the first scenario: for example, when you have an automated way to run SQL against all of your databases and you want to change the RMAN retention from 3 days to 4 days for all of them. Then you could run the following.

SQL> select conf#, name, value from v$rman_configuration where name = 'RETENTION POLICY';

CONF# NAME                             VALUE
----- -------------------------------- ----------------------------------------------------------------------------------------
    1 RETENTION POLICY                 TO RECOVERY WINDOW OF 3 DAYS


SQL> EXECUTE DBMS_BACKUP_RESTORE.DELETECONFIG(CONF# => 1);

PL/SQL procedure successfully completed.

SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 4 DAYS');

PL/SQL procedure successfully completed.


SQL> select conf#, name, value from v$rman_configuration where name = 'RETENTION POLICY';

CONF# NAME                             VALUE
----- -------------------------------- ----------------------------------------------------------------------------------------
    1 RETENTION POLICY                 TO RECOVERY WINDOW OF 4 DAYS

	
-- The new value is, of course, immediately reflected via the RMAN interface as well

RMAN> SHOW RETENTION POLICY;

RMAN configuration parameters for database with db_unique_name OCM121 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;

 

The second useful scenario might be to reset the whole RMAN configuration in one shot. Instead of running several CLEAR commands like “RMAN> CONFIGURE BACKUP OPTIMIZATION CLEAR;”, simply run RESETCONFIG.

SQL> EXECUTE DBMS_BACKUP_RESTORE.RESETCONFIG;

PL/SQL procedure successfully completed.

-- After executing this command, the v$rman_configuration view is empty, which means that all
-- RMAN persistent settings are default.

SQL> select conf#, name, value from v$rman_configuration;

no rows selected

 

And last but not least, to restore the RMAN persistent settings via SQL, in case you have lost all of your controlfiles and no RMAN catalog is in place.

One little side note in case you have an RMAN catalog: the RMAN sync from the controlfile to the catalog is usually unidirectional, meaning that the controlfile is always the master and it syncs the information to the catalog. However, there are exceptions where it is bidirectional. One of them is when you recreate the controlfile manually: RMAN is then able to get the last RMAN persistent settings from the catalog and apply them to the controlfile.

However, if you don’t have a catalog, you can dump the RMAN persistent settings out as SQL simply by backing up the controlfile to trace.

SQL> alter database backup controlfile to trace as '/tmp/cntrl.trc';

Database altered.

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 4 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 14 DAYS');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/nfs/OCM121)''');
-- Configure RMAN configuration record 8
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 9
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO APPLIED ON ALL STANDBY');

And if you run into the severe situation of losing all controlfiles, you can restore the RMAN persistent settings quite quickly. This is especially useful when you have configured complex Media Manager settings.
Cheers,
William

P.S. Managing RMAN persistent settings via SQL is not a 12c feature; it has existed for quite a long time.

 


12c GTT private statistics and cursor invalidation


The short summary of this post is that rolling invalidation does not occur when you gather statistics on Global Temporary Tables in 12c that have session statistics scope (which is the default), and this may cause too many hard parses. I’m sharing all the details, and comments are welcome.

Update 28-OCT-16

I wrote this with a specific case I encountered in mind. But what I said here is too broad: not all cursors are invalidated, only those that have been created on the same session’s private statistics. Thanks to Andrew Sayer (see comments) and to Mark from Oracle Support for their tests with cursors created by other sessions.

When you gather statistics on a table, the goal is to get a new plan if statistics have changed, so you can expect cursor invalidation. However, immediately invalidating all cursors that have a dependency on the table may cause a hard parse storm, and this is why, by default, rolling invalidation occurs: invalidation of each cursor is planned randomly in a time window that follows its next execution. 12c comes with a new feature, global temporary table private statistics, where execution plans are not shared between sessions. And there’s another feature where statistics gathering is automatic when you bulk insert into an empty table.

In both cases, by default, invalidation is not rolling but immediate. Let’s see some examples.
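Before the examples, note that the statistics scope is controlled by the GLOBAL_TEMP_TABLE_STATS table preference. A quick sketch to check and change it (the table name is the one used below):

SQL> select dbms_stats.get_prefs('GLOBAL_TEMP_TABLE_STATS',user,'DEMOGTT1') from dual;
SQL> exec dbms_stats.set_table_prefs(user,'DEMOGTT1','GLOBAL_TEMP_TABLE_STATS','SHARED');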

No GTT -> rolling invalidation

Here is an example with a regular table to show rolling invalidation:

21:14:36 SQL> create
21:14:36 2 table DEMOGTT1 as select * from dual;
Table created.
 
21:14:38 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT1');
PL/SQL procedure successfully completed.
 
21:14:39 SQL> insert into DEMOGTT1 select * from dual;
1 row created.
 
21:14:39 SQL> alter session set optimizer_mode=first_rows;
Session altered.
 
21:14:39 SQL> insert into DEMOGTT1 select * from dual;
1 row created.
 
21:14:39 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 000000008368BB98 3223759815 1 1 N 0
VALID 1 000000008368BB98 3223759815 1 1 N 0
 
21:14:40 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT1',no_invalidate=>null);
PL/SQL procedure successfully completed.
 
21:14:41 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 000000008368BB98 3223759815 1 1 Y 0
VALID 1 000000008368BB98 3223759815 1 1 Y 0
 
21:14:41 SQL> select table_name,scope,last_analyzed from user_tab_statistics where table_name='DEMOGTT1';
 
TABLE_NAME SCOPE LAST_ANA
------------------------------ ------- --------
DEMOGTT1 SHARED 21:14:40

Statistics on a non-GTT are shared, and dbms_stats with the default no_invalidate does rolling invalidation.

GTT with session private stats -> immediate invalidation

Here is the same example with a Global Temporary Table:

21:13:06 SQL> create
21:13:06 2 global temporary
21:13:06 3 table DEMOGTT1 as select * from dual;
Table created.
...
21:13:09 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 000000008096DF10 3223759815 1 1 N 0
VALID 1 000000008096DF10 3223759815 1 1 N 0
 
21:13:10 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT1',no_invalidate=>null);
PL/SQL procedure successfully completed.
 
21:13:11 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
INVALID_UNAUTH 0 000000008096DF10 3223759815 1 1 N 1
INVALID_UNAUTH 1 000000008096DF10 3223759815 1 1 N 1
 
21:13:11 SQL> select table_name,scope,last_analyzed from user_tab_statistics where table_name='DEMOGTT1';
 
TABLE_NAME SCOPE LAST_ANA
------------------------------ ------- --------
DEMOGTT1 SHARED
DEMOGTT1 SESSION 21:13:10

By default, gathered statistics are private to the session, and you see that all cursors have been invalidated immediately. The next execution will need a hard parse.

GTT with shared stats -> no invalidation

When setting shared statistics on the GTT we come back to the 11g behavior:

21:28:52 SQL> create
21:28:52 2 global temporary
21:28:52 3 table DEMOGTT1 as select * from dual;
Table created.
 
21:28:52 SQL> exec dbms_stats.set_table_prefs(user,'DEMOGTT1','GLOBAL_TEMP_TABLE_STATS','SHARED');
PL/SQL procedure successfully completed.
...
21:28:55 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 0000000079782A08 3223759815 1 1 N 0
VALID 1 0000000079782A08 3223759815 1 1 N 0
 
21:28:56 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT1',no_invalidate=>null);
PL/SQL procedure successfully completed.
 
21:28:57 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 0000000079782A08 3223759815 1 1 Y 0
VALID 1 0000000079782A08 3223759815 1 1 Y 0
 
21:28:57 SQL> select table_name,scope,last_analyzed from user_tab_statistics where table_name='DEMOGTT1';
 
TABLE_NAME SCOPE LAST_ANA
------------------------------ ------- --------
DEMOGTT1 SHARED 21:28:56

No invalidation: this is rolling invalidation

GTT with session stats but “_optimizer_use_gtt_session_stats”=false

Here is an example when disabling the private statistics feature:

21:15:36 SQL> create
21:15:36 2 global temporary
21:15:36 3 table DEMOGTT1 as select * from dual;
Table created.
 
21:15:36 SQL> alter session set "_optimizer_use_gtt_session_stats"=false;
Session altered.
...
21:15:38 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 000000007A373A08 3223759815 1 1 N 0
VALID 1 000000007A373A08 3223759815 1 1 N 0
 
21:15:39 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT1',no_invalidate=>null);
PL/SQL procedure successfully completed.
 
21:15:41 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 000000007A373A08 3223759815 1 1 N 0
VALID 1 000000007A373A08 3223759815 1 1 N 0
 
21:15:41 SQL> select table_name,scope,last_analyzed from user_tab_statistics where table_name='DEMOGTT1';
 
TABLE_NAME SCOPE LAST_ANA
------------------------------ ------- --------
DEMOGTT1 SHARED
DEMOGTT1 SESSION 21:15:40

No invalidation here, as in previous versions. But the interesting thing is that I still have session statistics: the setting just disables their usage. But then, there was no invalidation and no rolling invalidation. I’m not sure how to interpret that…

Invalidation with online statistics gathering

In all those examples I’ve used dbms_stats with the default no_invalidate. But in 12c, statistics gathering can occur automatically during a bulk insert. Let’s try that:

...
21:38:50 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 000000007A9D8860 3223759815 1 1 N 0
VALID 1 000000007A9D8860 3223759815 1 1 N 0
 
21:38:51 SQL> truncate table DEMOGTT1;
Table truncated.
21:38:52 SQL> insert /*+ append */ into DEMOGTT1 select * from dual;
1 row created.
 
21:38:53 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
INVALID_UNAUTH 0 000000007A9D8860 3223759815 1 1 N 1
INVALID_UNAUTH 1 000000007A9D8860 3223759815 1 1 N 1
21:38:53 SQL> select table_name,scope,last_analyzed from user_tab_statistics where table_name='DEMOGTT1';
 
TABLE_NAME SCOPE LAST_ANA
------------------------------ ------- --------
DEMOGTT1 SHARED
DEMOGTT1 SESSION 21:38:52

Same behaviour here. The online statistics gathering has gathered private statistics and invalidated all cursors.

NO_INVALIDATE=true

We can explicitly disable invalidation with no_invalidate=>true:

...
21:43:25 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 0000000075873D60 3223759815 1 1 N 0
VALID 1 0000000075873D60 3223759815 1 1 N 0
 
21:43:28 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT1',no_invalidate=>true);
PL/SQL procedure successfully completed.
 
21:43:29 SQL> select object_status,child_number,address,hash_value,parse_calls,executions,is_rolling_invalid,invalidations from v$sql where sql_id='1pmuu9z02day7';
 
OBJECT_STATUS CHILD_NUMBER ADDRESS HASH_VALUE PARSE_CALLS EXECUTIONS IS_ROLLING_INVALID INVALIDATIONS
------------------- ------------ ---------------- ---------- ----------- ---------- ------------------ -------------
VALID 0 0000000075873D60 3223759815 1 1 N 0
VALID 1 0000000075873D60 3223759815 1 1 N 0
 
21:43:29 SQL> select table_name,scope,last_analyzed from user_tab_statistics where table_name='DEMOGTT1';
 
TABLE_NAME SCOPE LAST_ANA
------------------------------ ------- --------
DEMOGTT1 SHARED
DEMOGTT1 SESSION 21:43:28

Here, as requested, private statistics have been gathered but without cursor invalidation. However, I’ll have a new hard parse for my query because private statistics prevent sharing another cursor; but it’s not an invalidation of all cursors. The other sessions will continue to re-use their plans.

So what?

With those new features, we have the famous parsing dilemma again: do we want to avoid too many hard parses and share cursors, with the risk of executing an execution plan that has been optimized for different data? Or do we prefer to optimize each query, at the risk of more CPU consumption and shared pool contention? Given that 12c comes with adaptive dynamic sampling, which can make hard parses longer (sometimes very, very long), all those new features should be gauged carefully.

If you want to avoid hard parses, you should set the preference to SHARED statistics, gather statistics when the GTT is filled with the data you want to optimize for, and then lock them. If you don’t, then you are back to the problem that private statistics try to solve: sharing a plan optimized for a few rows and executing it on thousands.
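A minimal sketch of that approach, assuming the GTT from the examples above:

SQL> exec dbms_stats.set_table_prefs(user,'DEMOGTT1','GLOBAL_TEMP_TABLE_STATS','SHARED');
-- fill the GTT with representative data, then:
SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT1');
SQL> exec dbms_stats.lock_table_stats(user,'DEMOGTT1');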

 


Oracle 12c – When did my row change?


My good old DEPT table has always had 4 rows with 4 different departments. However, I noticed that a new row was inserted into the DEPT table and row 50 popped up, and I would like to know when it happened.

Before:

SQL> select  DEPTNO, DNAME, LOC from DEPT;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

After:

SQL> select  DEPTNO, DNAME, LOC from DEPT;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
        50 IT             L.A.

Unfortunately, I have no auditing in place, and supplemental logging is not activated either. So auditing is not an option, and neither is LogMiner. I’m kind of running out of options to get the timestamp when department 50 was inserted.

My last resort is the ORA_ROWSCN pseudocolumn. According to the Oracle documentation, for each row, ORA_ROWSCN returns the conservative upper bound system change number (SCN) of the most recent change to the row. This pseudocolumn becomes useful when we want to determine approximately when a row was last updated.

ORA_ROWSCN is a pseudocolumn of any table that is not fixed or external. But be careful: it is not 100% precise, because by default Oracle tracks committed SCNs at the level of the block in which the row resides. However, it is better than nothing for my use case.

SQL> select ora_rowscn, DEPTNO, DNAME, LOC from DEPT;

ORA_ROWSCN     DEPTNO DNAME          LOC
---------- ---------- -------------- -------------
   9708226         10 ACCOUNTING     NEW YORK
   9708226         20 RESEARCH       DALLAS
   9708226         30 SALES          CHICAGO
   9708226         40 OPERATIONS     BOSTON
   9708226         50 IT             L.A.

To convert the SCN to a timestamp, we can use the SCN_TO_TIMESTAMP function. It converts the SCN to a timestamp with a precision of +/- 3 seconds.

SQL> select scn_to_timestamp(9708226) as timestamp from dual;

TIMESTAMP
---------------------------------------------------------------------------
27-OCT-16 11.40.07.000000000 AM

Thanks to the ORA_ROWSCN pseudocolumn, I now know at least that the block where department 50 was inserted was changed at 27-OCT-16 11.40.07.

If you want it to be more accurate, you need a feature called Row Level Dependency Tracking. This feature is activated by the ROWDEPENDENCIES keyword during CREATE TABLE.

CREATE TABLE "SCOTT"."DEPT_RLDT" 
   (	"DEPTNO" NUMBER(2,0), 
	"DNAME" VARCHAR2(14 BYTE), 
	"LOC" VARCHAR2(13 BYTE), 
	 CONSTRAINT "PK_DEPT_RLDT" PRIMARY KEY ("DEPTNO")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS" ROWDEPENDENCIES ;

With this feature, each row in the table has a system change number (SCN) that represents a time greater than or equal to the commit time of the last transaction that modified the row.

You cannot change this setting after the table is created; there is no “alter table … ROWDEPENDENCIES”.

This setting is useful primarily to allow parallel propagation in replication environments; however, it can also be used to quickly find out when a row was changed.

As always in life, nothing comes for free. The drawback is that it increases the size of each row by 6 bytes. That’s why the default is NOROWDEPENDENCIES.

But once this feature is enabled, like in the following example for my DEPT_RLDT table, you will get a different SCN for the newly committed row (department 50).

SQL> select ora_rowscn, DEPTNO, DNAME, LOC from DEPT_RLDT;

ORA_ROWSCN     DEPTNO DNAME          LOC
---------- ---------- -------------- -------------
   9701315         10 ACCOUNTING     NEW YORK
   9701315         20 RESEARCH       DALLAS
   9701315         30 SALES          CHICAGO
   9701315         40 OPERATIONS     BOSTON
   9708244         50 IT             L.A.

SQL>

SCN 9701315 for deptno 10,20,30,40 and 9708244 for deptno 50. That’s cool.

SQL> select scn_to_timestamp(9708244) as timestamp from dual;

TIMESTAMP
---------------------------------------------------------------------------
27-OCT-16 11.40.24.000000000 AM

Given the SCN-to-timestamp precision of +/- 3 seconds, it is still not 100% exact, but very close to it. Another important point is that ORA_ROWSCN represents an upper bound SCN: we just know the row has not changed after this SCN, but it may have been changed earlier.
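To get the per-row change time in one query, you can combine both (a sketch; note that SCN_TO_TIMESTAMP raises ORA-08181 for SCNs older than the retained SCN-to-time mapping, roughly the last few days):

SQL> select deptno, ora_rowscn, scn_to_timestamp(ora_rowscn) as last_change
     from DEPT_RLDT
     order by deptno;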

So… does ORA_ROWSCN replace auditing? No, not at all. But it can help to answer questions like the following: please tell me when row xyz was last modified, and recover the table (using flashback techniques) to that point.
Cheers,
William

 

Cet article Oracle 12c – When did my row change? est apparu en premier sur Blog dbi services.

Oracle 12c – When the RMAN Dummy Instance does not start up …


Not too often, but sometimes you might run into a situation where you lose everything: your DB files, your controlfiles and even your spfile. In situations like that, you first need to restore your spfile, then your controlfile, and then the rest.

For restoring the spfile, RMAN has a chicken-and-egg issue. To be able to restore the spfile, RMAN needs at least a running instance, but how do we start the instance without having the spfile? There are several methods to do it; one of them is to let RMAN itself create a dummy instance.

But if you think your situation can't get worse, the following happens: Oracle raises an ORA-04031 error and even the Dummy Instance does not start.

oracle@oel001:/home/oracle/ [rdbms121] export ORACLE_SID=LAX
oracle@oel001:/home/oracle/ [LAX] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Oct 28 11:45:08 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u00/app/oracle/product/12.1.0.2/dbs/initLAX.ora'

starting Oracle instance without parameter file for retrieval of spfile
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of startup command at 10/28/2016 11:45:15
RMAN-04014: startup failed: ORA-04031: unable to allocate 111264 bytes of shared memory 
("shared pool","unknown object","sga heap(1,0)","KEWS sesstat values")

 

The error message means that the SGA allocated for the dummy instance is too small. This is where the environment variable ORA_RMAN_SGA_TARGET comes into play. It sets the SGA size, in megabytes, that RMAN uses to start the Dummy Instance. In the following example it is set to 1024 MB.

oracle@oel001:/home/oracle/ [LAX] export ORA_RMAN_SGA_TARGET=1024
oracle@oel001:/home/oracle/ [LAX]

oracle@oel001:/home/oracle/ [LAX] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Oct 28 13:12:13 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u00/app/oracle/product/12.1.0.2/dbs/initLAX.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area    1073741824 bytes

Fixed Size                     2932632 bytes
Variable Size                293601384 bytes
Database Buffers             771751936 bytes
Redo Buffers                   5455872 bytes

RMAN>

Behind the scenes, Oracle starts a Dummy Instance with the following parameters, and creates all necessary directories in the DIAG destination.

sga_target               = 1G
compatible               = "12.1.0.2.0"
_dummy_instance          = TRUE
remote_login_passwordfile= "EXCLUSIVE"
_diag_adr_trace_dest='/u00/app/oracle/diag/rdbms/dummy/LAX/trace'
core_dump_dest='/u00/app/oracle/diag/rdbms/dummy/LAX/cdump'
db_name='DUMMY' 
  
oracle@oel001:/u00/app/oracle/diag/rdbms/dummy/LAX/ [LAX] pwd
/u00/app/oracle/diag/rdbms/dummy/LAX
oracle@oel001:/u00/app/oracle/diag/rdbms/dummy/LAX/ [LAX] ls
alert  cdump  hm  incident  incpkg  ir  lck  log  metadata  metadata_dgif  metadata_pv  stage  sweep  trace

 

Now it worked and I can retrieve my spfile.

run {
restore spfile to pfile '/tmp/initLAX.ora' for db_unique_name='LAX' from
'+fra/lax/autobackup/2016_10_28/s_894893708.1292.894893713';
}

From now on, I can use the correct parameter file to continue with the other steps. Another option would have been to search through your alert.log for "System parameters with non-default values". Whenever you start up your instance, Oracle dumps the non-default parameter values into the alert.log. Those values can be used to manually create an init.ora file and then the spfile. The drawback of using the values from the alert.log is that they might be very old. If the instance has not been bounced for several months or longer (not so unusual), you miss all the settings changed since then.

...
...
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =51
LICENSE_MAX_USERS = 0
SYS auditing is enabled
NOTE: remote asm mode is local (mode 0x1; from cluster type)
NOTE: Using default ASM root directory ASM
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options.
ORACLE_HOME = /u00/app/oracle/product/12.1.0.2
System name:    Linux
Node name:      oel001
Release:        2.6.32-642.6.1.el6.x86_64
Version:        #1 SMP Tue Oct 4 15:19:03 PDT 2016
Machine:        x86_64
System parameters with non-default values:
  processes                = 300
  _disable_highres_ticks   = TRUE
  event                    = "10720 trace name context forever, level 0x10000000"
  event                    = "10795 trace name context forever, level 2"
  sga_max_size             = 1536M
  use_large_pages          = "ONLY"
  shared_pool_size         = 256M
  _high_priority_processes = "LGWR"
  _highest_priority_processes= "LGWR"
  filesystemio_options     = "SETALL"
  sga_target               = 1536M
  control_files            = "+DATA/LAX/CONTROLFILE/current.265.918392661"
  control_files            = "+FRA/LAX/CONTROLFILE/current.256.918392661"
  control_file_record_keep_time= 32
  db_block_size            = 8192
  compatible               = "12.1.0.2.0"
  log_archive_format       = "%t_%s_%r.dbf"
  db_create_file_dest      = "+DATA"
  db_create_online_log_dest_1= "+DATA"
  db_recovery_file_dest    = "+FRA"
  db_recovery_file_dest_size= 32G
  undo_tablespace          = "UNDOTBS1"
  undo_retention           = 3600
  db_securefile            = "PERMITTED"
  remote_login_passwordfile= "EXCLUSIVE"
  db_domain                = ""
  dispatchers              = "(PROTOCOL=TCP) (SERVICE=LAXXDB)"
  local_listener           = "LAX_LISTENER"
  session_cached_cursors   = 512
  parallel_max_servers     = 80
  audit_file_dest          = "/u00/app/oracle/admin/LAX/adump"
  audit_trail              = "DB"
  cell_offload_processing  = FALSE
  db_name                  = "LAX"
  open_cursors             = 300
  pga_aggregate_target     = 512M
  _disable_directory_link_check = TRUE
  diagnostic_dest          = "/u00/app/oracle"
...
...
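Assuming you have copied those non-default values into a pfile, for example the /tmp/initLAX.ora used above, a minimal sketch to get back to a proper spfile would be:

SQL> create spfile from pfile='/tmp/initLAX.ora';
SQL> startup force nomount;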

In the end, you have different possibilities to restore the spfile. Either with the RMAN Dummy Instance, or via the alert.log. In case of the Dummy Instance, you might need to play around with the ORA_RMAN_SGA_TARGET environment variable.

By the way … with Oracle 10.2.0.5 your chance of hitting the ORA-04031 error during the Dummy Instance startup was much higher, because the default SGA was only about 152 MB.

oracle@oel001:/home/oracle/ [LAX] rman target /

Recovery Manager: Release 10.2.0.5.0 - Production on Fri Oct 28 11:26:09 2016

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u00/app/oracle/product/10.2.0.5/dbs/initLAX.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area     159383552 bytes

Fixed Size                     2094736 bytes
Variable Size                 67111280 bytes
Database Buffers              83886080 bytes
Redo Buffers                   6291456 bytes

RMAN>

 

Cheers,
William

 

 

Cet article Oracle 12c – When the RMAN Dummy Instance does not start up … est apparu en premier sur Blog dbi services.


Oracle 12c – Automatic Control File Backups


By default, automatic control file backups are disabled (even with 12c), maybe for performance reasons.

RMAN> SHOW CONTROLFILE AUTOBACKUP;

RMAN configuration parameters for database with db_unique_name OCM121 are:
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default

And also good to know: the autobackup after structural changes does not occur for databases in NOARCHIVELOG mode. So, if your database is running in NOARCHIVELOG mode, you will never see any impact, regardless of whether controlfile autobackup is on or off.
However, my database is running in ARCHIVELOG mode and at the moment the controlfile autobackup feature is disabled.
But even when the autobackup feature is disabled, RMAN still backs up the current controlfile and the server parameter file whenever a backup command includes datafile 1 of the target database. In an Oracle database, datafile 1 is always part of the SYSTEM tablespace, which contains the data dictionary.
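For example, even with autobackup disabled, a command like the following also writes the current controlfile and spfile into the backup set, because it includes datafile 1:

RMAN> backup datafile 1;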

But I have heard that it is highly recommended to enable automatic controlfile backups. It will ensure that the critical controlfile is backed up regularly following a backup or structural change to the database. Once you configure automatic controlfile backup, RMAN will automatically back up your target database controlfile, as well as the current server parameter file, when any of the following events occurs:

  • Successful completion of either a backup or the copy command
  • After a create catalog command from the RMAN prompt is successfully completed
  • Any structural change to the database that modifies the contents of the control file

Any changes to the physical structure of your database, even if they are made through SQL*Plus, will trigger a controlfile auto backup, e.g.

  • adding a tablespace or data file
  • dropping a data file
  • placing a tablespace offline or online
  • adding an online redo log, and renaming a data file

So, to follow the recommendation, I will enable the automatic backup of the controlfile. At the moment I have no backup of the controlfile, which is not good at all.

RMAN>  list backup of controlfile;

specification does not match any backup in the repository

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

But nothing happens. What is going on here? Is it a bug, a feature or something else?
Hold on. Then … all of a sudden, the backup of the controlfile pops up.

RMAN> list backup of controlfile;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
131     Full    9.89M      DISK        00:00:04     28-OCT-2016 15:42:23
        BP Key: 314   Status: AVAILABLE  Compressed: NO  Tag: TAG20161028T154219
        Piece Name: +FRA/OCM121/AUTOBACKUP/2016_10_28/s_926437339.526.926437343
  Control File Included: Ckp SCN: 10102765     Ckp time: 28-OCT-2016 15:42:19

 

To understand what is happening here, the answer is, like always: it depends. :-) If you are running an Oracle database 11gR2 or higher, then it is a feature. Anything below would be a bug.

Beginning with Oracle 11gR2, the controlfile autobackup deferral feature has been implemented. In order to increase performance, the controlfile autobackup creation after structural changes has been deferred. In previous releases, one controlfile autobackup was created with each DDL command that made structural changes in the database, and we could see a message in the alert.log about the controlfile autobackup creation after each DDL command executed.
This can provoke serious performance problems when multiple structural changes are made together. Starting with Oracle Database 11g Release 2, RMAN takes only one controlfile autobackup when multiple structural changes (for example, adding tablespaces, altering the state of a tablespace or datafile, adding a new online redo log, renaming a file, and so on) have been applied within a specified time.

But what exactly does time mean here? Is it 1 minute, 1 hour, or 1 day?

The deferral time is controlled by an underscore parameter that defaults to 300 seconds (5 minutes). The parameter is the following:

_controlfile_autobackup_delay=300

The minimum value for that parameter is 0, which simulates the behavior before 11gR2. The maximum value in 12c is (1024*1024*1024*2)-1, which is 2147483647 seconds. However, I don't see any practical value in setting it that high.
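If you want to verify the current value of this hidden parameter, the usual X$ lookup (connected as SYS) should do:

SQL> select i.ksppinm name, v.ksppstvl value from x$ksppi i, x$ksppsv v
  2  where i.indx = v.indx and i.ksppinm = '_controlfile_autobackup_delay';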

In 11gR2 or higher, the controlfile autobackups are created by MMON slaves a few minutes (5 minutes per default) after the structural change, which increases performance. It is also expected that no message about the controlfile autobackup creation appears in the alert.log.
However, there will be one MMON slave trace file with the controlfile creation information, named <SID>_m000_<OS_PID>.trc.
Ok. Let's try to simulate the old behavior by setting the autobackup delay to 0 and creating another tablespace afterwards.

SQL> alter system set "_controlfile_autobackup_delay"=0;

System altered.

SQL> create tablespace unpatient2 datafile size 16M;

Tablespace created.

And now the controlfile autobackup is created immediately.

RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
131     Full    9.89M      DISK        00:00:04     28-OCT-2016 15:42:23
        BP Key: 314   Status: AVAILABLE  Compressed: NO  Tag: TAG20161028T154219
        Piece Name: +FRA/OCM121/AUTOBACKUP/2016_10_28/s_926437339.526.926437343
  Control File Included: Ckp SCN: 10102765     Ckp time: 28-OCT-2016 15:42:19

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
132     Full    9.89M      DISK        00:00:05     28-OCT-2016 17:41:16
        BP Key: 315   Status: AVAILABLE  Compressed: NO  Tag: TAG20161029T113939
        Piece Name: +FRA/OCM121/AUTOBACKUP/2016_10_29/s_926509179.514.926509183
  Control File Included: Ckp SCN: 10271426     Ckp time: 28-OCT-2016 17:41:11

So, do I recommend setting the RMAN controlfile autobackup to ON? Yes, absolutely. And do I recommend setting the controlfile autobackup delay to 0? No, probably not. I think that the 5-minute interval is quite a good compromise. You just need to be aware that it exists.

Another hint: you should not rely too much on the view V$RMAN_BACKUP_JOB_DETAILS. In this view, the AUTOBACKUP_DONE column should be populated whenever an autobackup happened, but in my case it is always set to NO.

SQL> select start_time,end_time,status,autobackup_done, AUTOBACKUP_COUNT from
2 V$RMAN_BACKUP_JOB_DETAILS where autobackup_done = 'YES';

no rows selected

There is a patch available from Oracle: “Patch 18074513: V$RMAN_BACKUP_JOB_DETAILS VIEWS COLUMN AUTOBACKUP_DONE DOESNOT GET POPULATED”, but it is not available for every platform and every version.

Better use the RMAN “list backup of controlfile;” command. That one is much more reliable.

Cheers,
William

 

 

 

 

 

Cet article Oracle 12c – Automatic Control File Backups est apparu en premier sur Blog dbi services.

Oracle 12c – Difference between ‘%F’ #default and ‘%F’


Do you see any differences between these two RMAN SHOW commands?

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name OCM121 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

and

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name OCM121 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

 

They are the same, except that the first output has "# default" at the end and the second one doesn't.

Whenever you see "# default" at the end of an RMAN show command, it simply means that this is the default value which comes out of the box; if you change it, even to the same value, the "# default" at the end disappears.

Sometimes, Oracle is funny. :-) The strange thing about it is that '%F' #default and '%F' are not the same. According to the Oracle documentation, the format specification '%F' means the following:

‘%F’ combines the DBID, day, month, year, and sequence into a unique and repeatable generated name.
This variable translates into c-IIIIIIIIII-YYYYMMDD-QQ, where:

  • IIIIIIIIII stands for the DBID. The DBID is printed in decimal so that it can be easily associated with the target database.
  • YYYYMMDD is a time stamp in the Gregorian calendar of the day the backup is generated
  • QQ is the sequence in hexadecimal number that starts with 00 and has a maximum of ‘FF’ (256)

HINT: %F is valid only in the CONFIGURE CONTROLFILE AUTOBACKUP FORMAT command.

Ok. Let’s take a look into the ASM directory, where my controlfile autobackups are located.

ASMCMD> pwd
+fra/ocm121/autobackup/2016_10_29
ASMCMD> ls -l
Type        Redund  Striped  Time             Sys  Name
AUTOBACKUP  UNPROT  COARSE   OCT 29 11:00:00  Y    s_926509179.514.926509183
AUTOBACKUP  UNPROT  COARSE   OCT 29 12:00:00  Y    s_926511850.517.926511853

Regarding the format specification '%F', I see no DBID, no time stamp and also no sequence number. It looks like the format specification does not apply to autobackups located in ASM.

So … with the RMAN default ('%F'; # default), RMAN sends the autobackup to the flash recovery area if one is used, and in my case it sends it to ASM. However, the file name has nothing to do with the '%F' described in the documentation. I get something totally different.

Ok. Let's change the controlfile autobackup format to exactly the same value as before and check what happens. Afterwards, let's quickly force a controlfile autobackup creation.

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
new RMAN configuration parameters are successfully stored

SQL> ALTER SYSTEM SET "_controlfile_autobackup_delay"=0;

System altered.

SQL> create tablespace DBI datafile size 16M;

Tablespace created.

A new controlfile autobackup should have been created right away, but in the ASM directory we see none.

ASMCMD> ls -l
Type        Redund  Striped  Time             Sys  Name
AUTOBACKUP  UNPROT  COARSE   OCT 29 11:00:00  Y    s_926509179.514.926509183
AUTOBACKUP  UNPROT  COARSE   OCT 29 12:00:00  Y    s_926511850.517.926511853

Where did Oracle put my new controlfile autobackup? I just changed the same value to the same value. After a little bit of looking around, I noticed that a new controlfile autobackup was created in the good old $ORACLE_HOME/dbs/

oracle@oel001:/u00/app/oracle/product/12.1.0.2/dbs/ [OCM121] ls -rtl c-*
-rw-r----- 1 oracle asmadmin 10387456 Oct 31 10:03 c-3827054096-20161031-00

Furthermore, the file name now looks exactly as documented, including the DBID, the time stamp and so on.

So, with '%F' non-default, Oracle will put it in $ORACLE_HOME/dbs on UNIX and in %ORACLE_HOME%\Database on Windows. In the end, the following configurations look the same, but they are totally different.

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

Now you could say: I don't care if Oracle puts it into ASM or into $ORACLE_HOME/dbs. If you are running RAC, you should care, because it makes a major difference. If one host crashes, you can't access the controlfile autobackups stored locally on it anymore. And what about crosschecks? If you run a crosscheck from host B, it does not see the controlfile autobackups from host A, and after a while you will end up with a big mess.
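And if you have already changed the format and want the true default (including the "# default" flag) back, do not re-enter '%F'; clear the configuration instead:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;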

Conclusion: sometimes, the same value is not the same value. ;-) And for RAC and RAC One Node, better stay with the default, which is '%F'; # default.
Cheers,
William

 

Cet article Oracle 12c – Difference between ‘%F’ #default and ‘%F’ est apparu en premier sur Blog dbi services.

Oracle 12cR2 PL/SQL new feature: TNSPING from the database


Database links are resolved with the server TNS_ADMIN configuration (sqlnet.ora and tnsnames.ora). You can use tnsping to check the resolution, but that supposes you are on the server and have set the same environment as the one which started the database.
In 12.2 you have a new package to check that: DBMS_TNS. It's the kind of little new feature that makes our life easier.

The easy way to verify a connection string is to use tnsping. Here is an example with an EZCONNECT resolution:

[oracle@SE122 ~]$ tnsping //10.196.234.38/CDB1.opcoct.oraclecloud.internal
TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 08-NOV-2016 17:45:34
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))
OK (0 msec)

The full connection description is displayed here before contacting the listener.

This resolution is valid only with a specific TNS configuration (which is here /u01/app/oracle/product/12.2.0/dbhome_1/network/admin). However, you may have different configurations (using the TNS_ADMIN environment variable) and if it’s not set consistently, you may have different results.
Basically:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database

DBMS_TNS

So here is this new package:

SQL> desc dbms_tns
FUNCTION RESOLVE_TNSNAME RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 TNS_NAME                       VARCHAR2                IN

And you can run it when connected to the database to see how the name is resolved:

SQL> select dbms_tns.resolve_tnsname('&_connect_identifier') from dual;
old 1: select dbms_tns.resolve_tnsname('&_connect_identifier') from dual
new 1: select dbms_tns.resolve_tnsname('//10.196.234.38/CDB1.opcoct.oraclecloud.internal') from dual
 
DBMS_TNS.RESOLVE_TNSNAME('//10.196.234.38/CDB1.OPCOCT.ORACLECLOUD.INTERNAL')
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))

The resolution is done without attempting to contact the listener. This IP address does not exist on my network:

select dbms_tns.resolve_tnsname('//10.1.1.1/XX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//10.1.1.1/XX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XX)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.1.1)(PORT=1521)))

As you can see, the client identification is sent here (PROGRAM and HOST).

Demo

I'll use this new feature to prove my assumption above about which environment is used when connecting locally or through a dynamic or static service.

I create 3 directories with different names for the SERVICE_NAME in order to see which one is used:


mkdir -p /tmp/tns_lsnr ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_lsnr/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_lsnr/tnsnames.ora
mkdir -p /tmp/tns_sess ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_sess/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_sess/tnsnames.ora
mkdir -p /tmp/tns_inst; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_inst/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_inst/tnsnames.ora

In addition, I’ll need a listener configuration with a static service, let’s call it STATIC:


cat > /tmp/tns_lsnr/listener.ora <<END
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$HOSTNAME)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=$ORACLE_HOME)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
END

Here’s a summary of the different configurations:


$ tail /tmp/tns*/*
 
==> /tmp/tns_inst/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_inst/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

==> /tmp/tns_lsnr/listener.ora <==
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=SE122.compute-opcoct.oraclecloud.internal)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=/u01/app/oracle/product/122EE)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))

==> /tmp/tns_lsnr/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES

==> /tmp/tns_lsnr/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

==> /tmp/tns_sess/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES

==> /tmp/tns_sess/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

I start the listener and the instance each with their own environment, and set the session's environment to the third one:


export TNS_ADMIN=/tmp/tns_lsnr ; lsnrctl start
export TNS_ADMIN=/tmp/tns_inst ; sqlplus / as sysdba <<< startup
export TNS_ADMIN=/tmp/tns_sess

Now it’s time to use this new DBMS_TNS when connecting locally, through the dynamic service (CDB1) and through the static service (STATIC):


SQL> connect system/oracle
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected locally the TNS_ADMIN from my shell environment running sqlplus is used.


SQL> connect system/oracle@//localhost/CDB1
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected to dynamic service, the TNS_ADMIN used to startup the instance is used.


SQL> connect system/oracle@//localhost/STATIC
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected to static service, the TNS_ADMIN used to startup the listener is used.

So what?

You should use a consistent environment setting in order to be sure that all sessions will use the same name resolution. But if you have a doubt about it, DBMS_TNS can help to troubleshoot. It’s better than DBMS_SYSTEM.GET_ENV as it does the name resolution rather than just showing the environment variables.
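For comparison, here is how you would read the raw environment variable with DBMS_SYSTEM.GET_ENV (an undocumented but well-known procedure):

SQL> variable tns_admin varchar2(200)
SQL> exec dbms_system.get_env('TNS_ADMIN', :tns_admin);
SQL> print tns_admin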

Want to know quickly where all database links are going? Here it is:

SQL> select username,dbms_tns.resolve_tnsname(host) from cdb_db_links;

 

Cet article Oracle 12cR2 PL/SQL new feature: TNSPING from the database est apparu en premier sur Blog dbi services.

Oracle 12cR2 multitenant: Local UNDO


Pluggable Databases are supposed to be isolated, containing the whole of user data and metadata. This is the definition of the dictionary separation coming with the multitenant architecture: only system data and metadata are at CDB level; user data and metadata are in separate tablespaces belonging to the PDB. This is what makes unplug/plug possible: because PDB tablespaces contain everything, you can transport their datafiles from one CDB to another.
However, if they are so isolated, can you explain why

  • You cannot flashback a PDB?
  • You need an auxiliary instance for PDB Point-In-Time recovery?
  • You need to put the PDB read-only before cloning it?


There is something that is not contained in your PDB but is at CDB level, and which contains user data. The UNDO tablespace is shared:

(Screenshot: the UNDO tablespace shared at CDB level)

You cannot flashback a PDB because doing so requires rolling back the ongoing transactions at the time you flashback to. That information was in the UNDO tablespace at that time, but is not there anymore.

It's the same idea with Point-In-Time recovery of a PDB. You need to restore the UNDO tablespace to get the UNDO records from that point in time. But you cannot restore it in place, because it's shared with other PDBs that need the current information. This is why you need an auxiliary instance for PDB PITR in 12.1.

Cloning a PDB cannot be done with ongoing transactions because their UNDO is not in the PDB. This is why it can be done only when the PDB is read-only.

12.2 Local UNDO

In 12.2 you can choose to have one UNDO tablespace per PDB, in local undo mode, which is the default in DBCA:

(Screenshot: DBCA showing local undo mode as the default)

With local undo PDBs are truly isolated even when opened with ongoing transactions:

(Screenshot: PDBs truly isolated with local undo, even with ongoing transactions)

Look at the ‘RB segs’ column from RMAN report schema:

[oracle@OPC122 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Tue Nov 8 18:53:46 2016
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=901060295)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    880      SYSTEM               YES     /u02/app/oracle/oradata/CDB1/system01.dbf
3    710      SYSAUX               NO      /u02/app/oracle/oradata/CDB1/sysaux01.dbf
4    215      UNDOTBS1             YES     /u02/app/oracle/oradata/CDB1/undotbs01.dbf
5    270      PDB$SEED:SYSTEM      NO      /u02/app/oracle/oradata/CDB1/pdbseed/system01.dbf
6    560      PDB$SEED:SYSAUX      NO      /u02/app/oracle/oradata/CDB1/pdbseed/sysaux01.dbf
7    5        USERS                NO      /u02/app/oracle/oradata/CDB1/users01.dbf
8    180      PDB$SEED:UNDOTBS1    NO      /u02/app/oracle/oradata/CDB1/pdbseed/undotbs01.dbf
9    270      PDB1:SYSTEM          YES     /u02/app/oracle/oradata/CDB1/PDB1/system01.dbf
10   590      PDB1:SYSAUX          NO      /u02/app/oracle/oradata/CDB1/PDB1/sysaux01.dbf
11   180      PDB1:UNDOTBS1        YES     /u02/app/oracle/oradata/CDB1/PDB1/undotbs01.dbf
12   5        PDB1:USERS           NO      /u02/app/oracle/oradata/CDB1/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    33       TEMP                 32767       /u04/app/oracle/oradata/temp/temp01.dbf
2    64       PDB$SEED:TEMP        32767       /u04/app/oracle/oradata/temp/temp012016-10-04_11-34-07-330-AM.dbf
3    100      PDB1:TEMP            100         /u04/app/oracle/oradata/CDB1/PDB1/temp012016-10-04_11-34-07-330-AM.dbf

You have an UNDO tablespace in ROOT, in PDB$SEED and in each user PDB.

If you have a database in shared undo mode, you can move to local undo mode while in 'startup migrate' mode. PDBs will have an UNDO tablespace created automatically when they are opened. You can also create an UNDO tablespace in PDB$SEED.

Yes, in 12.2, you can open the PDB$SEED read/write for this purpose:


18:55:59 SQL> alter pluggable database PDB$SEED open read write force;
 
Pluggable database altered.
 
18:56:18 SQL> show pdbs;
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ WRITE NO
         3 PDB1                           READ WRITE NO
18:56:23 SQL> alter pluggable database PDB$SEED open read only force;
 
Pluggable database altered.

But remember this is only allowed for local undo migration.
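Putting it together, here is a minimal sketch of the migration itself, assuming a CDB currently in shared undo mode:

SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup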

The recommendation is to run in local undo mode, even in Single-Tenant.

More about it in the Oracle Database 12c Release 2 Multitenant book.

 

Cet article Oracle 12cR2 multitenant: Local UNDO est apparu en premier sur Blog dbi services.

Oracle 12cR2 multitenant containers in SQL_TRACE


In multitenant, your session can switch between containers. For example, since 12.1, a common user can switch explicitly between CDB$ROOT and any PDB with 'ALTER SESSION SET CONTAINER'. Any user connected to a PDB will also have its session switching implicitly when querying through metadata links and data links (the new name for object links). In 12.1 there was no way to trace this. This is fixed in 12.2.

I set sql_trace and get the tracefile name:

SQL> select value tracefile from v$diag_info where name='Default Trace File';
 
TRACEFILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_6307.trc
 
SQL> alter session set sql_trace=true;
Session altered.

The container ID is CON_ID=1 because I’m connected to the root:


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:08:17.968360+02:00 (CDB$ROOT(1))
*** SESSION ID:(14.25101) 2016-09-04T16:08:17.968399+02:00
*** CLIENT ID:() 2016-09-04T16:08:17.968405+02:00
*** SERVICE NAME:(SYS$USERS) 2016-09-04T16:08:17.968410+02:00
*** MODULE NAME:(sqlplus@VM115 (TNS V1-V3)) 2016-09-04T16:08:17.968415+02:00
*** ACTION NAME:() 2016-09-04T16:08:17.968420+02:00
*** CLIENT DRIVER:(SQL*PLUS) 2016-09-04T16:08:17.968425+02:00
*** CONTAINER ID:(1) 2016-09-04T16:08:17.968430+02:00

In 12.1 you had no more information about the container in the trace file. This is improved in 12.2.

Explicit ALTER SESSION SET CONTAINER

I’ll run a simple query, then change to container PDB (which is CON_ID=3 here) and run again a query:

SQL> select * from dual;
 
D
-
X
 
SQL> alter session set container=PDB;
Session altered.
 
SQL> select * from dual;
 
D
-
X

The lines starting with '***' followed by a timestamp are not new. But now we also have the container name (here CON_NAME=PDB) and the container ID (CON_ID=3):

SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.397448+02:00 (PDB(3))
*** CONTAINER ID:(3) 2016-09-04T16:09:54.397527+02:00

You get those lines for each ALTER SESSION SET CONTAINER, and you have the CON_NAME and CON_ID of the PDB: (PDB(3))

Implicit switch though data link

I'm still in PDB and I'll query a data link view: DBA_PDBS. Data link views (previously called 'object link' views) query data from CDB$ROOT even when you are in a PDB. DBA_PDBS shows information about pluggable databases, which is stored in CDB$ROOT (because it must be available before the PDB is opened).

SQL> select count(*) from dba_pdbs;
 
COUNT(*)
----------
1
 

The execution of the query had to switch to CDB$ROOT (CON_ID=1) to get the rows and switch back to PDB (CON_ID=3):


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))
*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

If you look at the detail you will see that my query is parsed in my container:

=====================
PARSING IN CURSOR #139807307349184 len=29 dep=0 uid=0 oct=3 lid=0 tim=203051393258 hv=2380449338 ad='896cae38' sqlid='3cngtnf6y5jju'
select count(*) from dba_pdbs
END OF STMT
PARSE #139807307349184:c=0,e=53,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393256

I think the following is to check that the tables behind the data link view are valid in the PDB, even if we don't want to query them. This is only a parse call:

=====================
PARSING IN CURSOR #139807307295488 len=46 dep=1 uid=0 oct=3 lid=0 tim=203051393450 hv=1756598280 ad='7b5dfd58' sqlid='5ucyn75nb7408'
SELECT * FROM NO_OBJECT_LINK("SYS"."DBA_PDBS")
END OF STMT
PARSE #139807307295488:c=0,e=26,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=810534000,tim=203051393449
CLOSE #139807307295488:c=0,e=7,dep=1,type=1,tim=203051393490

Then when I execute my query:

EXEC #139807307349184:c=0,e=246,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393539

my session switches to root:

*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))

and the recursive query is parsed and executed in CDB$ROOT:
=====================
PARSING IN CURSOR #139807307379504 len=170 dep=1 uid=0 oct=3 lid=0 tim=203051393687 hv=1291428476 ad='895c6940' sqlid='g34kja56gm8mw'
SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) */ CON_ID FROM NO_OBJECT_LINK("SYS"."DBA_PDBS") "DBA_PDBS" WHERE "DBA_PDBS"."CON_ID"=0 OR "DBA_PDBS"."CON_ID"=3
END OF STMT
PARSE #139807307379504:c=0,e=44,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393685
EXEC #139807307379504:c=0,e=48,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393790
FETCH #139807307379504:c=0,e=20,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,plh=2042216988,tim=203051393826
STAT #139807307379504 id=1 cnt=1 pid=0 pos=1 obj=0 op='RESULT CACHE 8p3h095ufc042f32tf05b23qf3 (cr=0 pr=0 pw=0 str=1 time=18 us)'
STAT #139807307379504 id=2 cnt=0 pid=1 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 str=0 time=0 us cost=2 size=16 card=1)'
STAT #139807307379504 id=3 cnt=0 pid=2 pos=1 obj=161 op='TABLE ACCESS BY INDEX ROWID CONTAINER$ (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=11 card=1)'
STAT #139807307379504 id=4 cnt=0 pid=3 pos=1 obj=163 op='INDEX UNIQUE SCAN I_CONTAINER2 (cr=0 pr=0 pw=0 str=0 time=0 us cost=0 size=0 card=1)'
STAT #139807307379504 id=5 cnt=0 pid=2 pos=2 obj=36 op='INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=5 card=1)'
CLOSE #139807307379504:c=0,e=4,dep=1,type=1,tim=203051393959

Note that the result cache is used for optimization and that the query is run with NO_OBJECT_LINK() to prevent further data link switches, if any.

Then, my session switches back to my PDB:

*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

and execution of my query finishes:

FETCH #139807307349184:c=0,e=375,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=2,plh=1333657383,tim=203051393981
STAT #139807307349184 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=0 pr=0 pw=0 str=1 time=544 us)'
STAT #139807307349184 id=2 cnt=1 pid=1 pos=1 obj=0 op='DATA LINK FULL DBA_PDBS (cr=0 pr=0 pw=0 str=1 time=525 us cost=1 size=1300 card=100)'
FETCH #139807307349184:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1333657383,tim=203051394259
CLOSE #139807307349184:c=0,e=10,dep=0,type=1,tim=203051397922

you see that the execution plan is explicit: ‘DATA LINK FULL’ in 12.2 (it was FIXED TABLE FULL X$OBLNK$ in 12.1)

_diag_cdb_logging

This new behaviour is controlled by an underscore parameter:

SQL> alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp;
alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp
*
ERROR at line 1:
ORA-00096: invalid value THISISMYWAYTOGETHELP for parameter _diag_cdb_logging,
must be from among long, short, off

By default in 12.2 the parameter is set to SHORT and the traces are written as above.
SQL> alter system set "_diag_cdb_logging"=SHORT;

If you set it to OFF, you have the same behavior as in 12.1: a '*** CONTAINER ID:' line is displayed for an explicit SET CONTAINER, but no more information.
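To switch to the LONG format:

SQL> alter system set "_diag_cdb_logging"=LONG;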

When set to LONG you get the CON_UID which may be useful for traces that cover plug/unplug operations:

SQL> select con_id,name,dbid,con_uid,guid from v$containers;

    CON_ID NAME           DBID    CON_UID GUID
---------- -------- ---------- ---------- --------------------------------
         1 CDB$ROOT  893728006          1 3817ED090B9766FDE0534440E40ABD67
         2 PDB$SEED 1943618461 1943618461 3A29D20830E760B7E053734EA8C047BB
         3 PDB      4128224117 4128224117 3A2C965DE81E15A8E053734EA8C023AC
 
SQL> host grep "^\*\*\*" &tracefile
*** 2016-09-04T16:50:43.462870+02:00 (PDB(3/4128224117))
*** CONTAINER ID:(3) 2016-09-04T16:50:43.463067+02:00
*** 2016-09-04T16:50:43.493035+02:00 (CDB$ROOT(1/1))
*** 2016-09-04T16:50:43.495053+02:00 (PDB(3/4128224117))

If you want more information about CON_ID, CON_UID, GUID, and a lot more about multitenant, the book Oracle Database 12c Release 2 Multitenant (Oracle Press) by Anton Els, Vit Spinka and Franck Pachot goes into all the details.

 

Cet article Oracle 12cR2 multitenant containers in SQL_TRACE est apparu en premier sur Blog dbi services.
