I think people see databases as hard disks: a store everything, persist everything (it might be needed for debugging later) kind of place. Absolutely everything, no discrimination whatsoever. Yes, databases are there to store (store is the magic word), to remember everything that a program or a person will query later. But this 'store everything in the database' approach has a cost.
What I mean by cost is this: when a table contains large CLOBs holding data such as XML that is frequently updated by an XML-update-intensive application (one using the database for object persistence, for example), a lot of redo is generated, and all of it has to be archived. Log switches pile up, and things can easily get out of control if there is a high volume of OLTP transactions constantly updating that CLOB.
The Test
I tested this excessive redo generation on an Oracle 11g test database with a 5 MB redo log size, against a CLOB column that I updated 10,000 times, once with LOGGING and once with NOLOGGING. My test produced the following results; the test script is at the end of this post.
CLOBs with LOGGING: 18 log switches, 88 MB of redo
CLOBs with NOLOGGING: 2 log switches, 12 MB of redo
That is quite a bit of redo!
I have been reading on this blog http://www.liberidu.com/blog/?p=264 about Oracle 11g Secure Files and how this new feature can help us "deduplicate" the redundant information (XML) we store in CLOBs and achieve better performance with less redo logging. Other 11g features, such as automatic compression and deduplication of LOBs, are also said to help. Secure Files seems to be the way to go if you are storing XML in CLOBs.
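As a rough illustration only (not the exact DDL from that post), a Secure Files CLOB with deduplication and compression could be declared along these lines; the table and segment names are made up, and deduplication and compression assume the 11g SecureFiles prerequisites (an ASSM tablespace, plus the Advanced Compression option licence):
--- ### SKETCH ONLY: hypothetical table using 11g SecureFiles storage
create table xml_docs
(
doc_id number primary key,
doc clob
)
lob(doc) store as securefile xml_docs_seg
(
deduplicate     -- store identical LOB values only once
compress medium -- compress the LOB data
cache           -- keep LOB blocks in the buffer cache
)
/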
My conclusion is: if you have large CLOBs storing XML for persistence of your application objects and they are updated frequently, you are better off not LOGGING them; otherwise, be ready for surprises.
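If the table already exists, the LOB storage attributes can be switched in place. Here is a sketch against the test table used further down (keep in mind that changes to a NOLOGGING LOB cannot be recovered from the archived redo, so this is a trade-off, not a free lunch):
--- ### SKETCH: switch an existing LOB segment to NOCACHE NOLOGGING
alter table thelobshouse modify lob (theclob) (nocache nologging)
/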
The test script
Use it on a test database only. The redo log files must be quite small to make the effects visible; mine were 5 MB.
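If your test instance has larger redo logs, you can add a few small groups (and drop the big ones afterwards) so the switches show up quickly. The path and group number below are just placeholders for your own test instance:
--- ### SKETCH: add a small 5 MB redo log group on a test instance
alter database add logfile group 4 ('/u01/oradata/TESTDB/redo04a.log') size 5m
/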
--- ### CREATE THE TABLE
--- TO TRY THE NOLOGGING VERSION OF THE CLOB, DROP AND RECREATE
--- THE TABLE BELOW WITH ITS LOB STORAGE PARAMETERS CHANGED TO
--- NOCACHE NOLOGGING (the NOLOGGING variant is shown right after this CREATE TABLE)
create table thelobshouse
(
house_id number primary key,
theclob clob
)
lob(theclob) store as theclob_seg (cache logging)
/
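For the NOLOGGING run mentioned in the comments above, the same table is recreated with NOCACHE NOLOGGING on the LOB segment:
--- ### NOLOGGING VARIANT (drop the table above first)
drop table thelobshouse
/
create table thelobshouse
(
house_id number primary key,
theclob clob
)
lob(theclob) store as theclob_seg (nocache nologging)
/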
--- ### INSERT 1 TEST RECORD
insert into thelobshouse values( 1, ' lobb ')
/
commit
/
--- ### CREATE THE PROCEDURE BELOW WHICH UPDATES THE CLOB
---
CREATE OR REPLACE PROCEDURE trashmylobz(p_id IN NUMBER, p_text IN VARCHAR2)
AS
  l_clob CLOB;
BEGIN
  -- lock the row holding the CLOB we are about to hammer
  SELECT theclob
  INTO   l_clob
  FROM   thelobshouse
  WHERE  house_id = p_id
  FOR UPDATE;

  FOR i IN 1 .. 10000
  LOOP
    -- dbms_lob.writeappend(l_clob, LENGTH(p_text), p_text);
    UPDATE thelobshouse
    SET    theclob = RPAD(p_text, 200000, p_text)
    WHERE  house_id = p_id;
  END LOOP;

  COMMIT;
END;
/
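The commented-out dbms_lob.writeappend call hints at an alternative that appends to the locked LOB locator instead of re-writing the whole column on each iteration. Here is a sketch of that variant (the procedure name is mine, not part of the original test):
--- ### SKETCH: piecewise-append variant using the locator selected FOR UPDATE
CREATE OR REPLACE PROCEDURE trashmylobz_append(p_id IN NUMBER, p_text IN VARCHAR2)
AS
  l_clob CLOB;
BEGIN
  SELECT theclob
  INTO   l_clob
  FROM   thelobshouse
  WHERE  house_id = p_id
  FOR UPDATE;

  FOR i IN 1 .. 10000
  LOOP
    -- append p_text to the end of the locked CLOB
    dbms_lob.writeappend(l_clob, LENGTH(p_text), p_text);
  END LOOP;

  COMMIT;
END;
/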
---
--- ### RUN THE trashmylobz PROCEDURE TO UPDATE THE CLOB
---
exec trashmylobz(1, ' * ');
---
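Right after the exec, a quick check of how much redo the current session has generated so far (a stripped-down alternative to script 2 below) can be run from the same SQL*Plus session:
--- ### SKETCH: redo generated by the current session, in MB
SELECT sn.name, ROUND(st.value/1048576, 2) mb
FROM   v$mystat st, v$statname sn
WHERE  st.statistic# = sn.statistic#
AND    sn.name = 'redo size'
/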
Scripts you will need for the observations
1. You can use the Redo Log Generation and Log Switch SQL script found on this blog to see the effect of this test on your redo logs (a simplified equivalent is sketched after the session-redo script below). Get the script here: http://kubilaykara.blogspot.com/2008/02/redo-log-generation.html
2. Use the script below to find out how much redo your session has generated.
set lines 500
column username format a10
column program format a20
SELECT se.sid, se.username,
       TO_CHAR(se.logon_time,'dd-mon-yy hh24:mi') logtime,
       se.program, (value/1048576) MB
FROM   gv$sesstat st, gv$statname sn, gv$session se
WHERE  sn.name = 'redo size'
AND    sn.statistic# = st.statistic#
AND    st.sid = se.sid
AND    se.sid = ( SELECT sid FROM v$session
                  WHERE audsid = SYS_CONTEXT('userenv','sessionid'))
AND    value > 0
/
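In case that log-switch script link ever goes away, a simplified (not identical) per-hour count can be pulled from v$log_history as sketched below; the pivoted DAY/00-23 layout in the output further down comes from the linked script, not from this sketch:
--- ### SKETCH: log switches per day and hour from v$log_history
SELECT TO_CHAR(first_time, 'DD-MON-RR') day,
       TO_CHAR(first_time, 'HH24')      hour_of_day,
       COUNT(*)                         switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'DD-MON-RR'), TO_CHAR(first_time, 'HH24')
ORDER  BY MIN(first_time)
/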
Anyway, here is what happened to my redo logs when the above test was run on a test database with a 5 MB redo log size.
NOLOGGING case
(Using script 2 above)
SID USERNAME LOGTIME PROGRAM MB
---------- ---------- --------------- -------------------- ----------
123 KUBI 01-feb-08 13:50 sqlplus@kongo (TNS V1-V3) 13.8038025
Look at hour 14 on 01-FEB-08: only 2 redo log switches and 13 MB of redo in the NOLOGGING case.
DAY 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
--------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
25-JAN-08 10 0 1 0 1 0 1 0 1 0 1 0 0 1 1 1 0 1 0 1 0 1 8 0
26-JAN-08 10 0 1 0 1 0 7 0 1 0 2 1 2 0 2 0 1 1 1 0 1 1 1 1
27-JAN-08 9 1 0 1 0 1 6 1 0 1 2 0 2 0 2 0 1 0 2 0 1 0 2 1
28-JAN-08 9 0 1 0 1 0 1 0 1 1 0 1 2 0 1 0 1 0 1 0 1 0 7 1
29-JAN-08 9 1 0 1 0 1 1 0 1 0 1 0 3 0 1 0 1 1 0 1 0 1 7 1
30-JAN-08 10 1 0 1 1 0 1 0 1 0 1 0 3 0 1 0 1 1 0 1 0 1 7 1
31-JAN-08 11 1 0 1 0 1 1 0 1 0 1 0 0 1 1 2 0 1 1 0 1 0 9 1
01-FEB-08 8 0 1 1 0 1 0 1 0 1 0 1 2 1 2 0 0 0 0 0 0 0 0 0
LOGGING case
(Using script 2 above)
SID USERNAME LOGTIME PROGRAM MB
---------- ---------- --------------- -------------------- ----------
125 KUBI 01-feb-08 13:45 sqlplus@kongo 88.6263351
Look at hour 14 on 01-FEB-08: 18 (20 minus the 2 from the NOLOGGING run) redo log switches and 88 MB of redo in the LOGGING case.
DAY 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
--------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
25-JAN-08 10 0 1 0 1 0 1 0 1 0 1 0 0 1 1 1 0 1 0 1 0 1 8 0
26-JAN-08 10 0 1 0 1 0 7 0 1 0 2 1 2 0 2 0 1 1 1 0 1 1 1 1
27-JAN-08 9 1 0 1 0 1 6 1 0 1 2 0 2 0 2 0 1 0 2 0 1 0 2 1
28-JAN-08 9 0 1 0 1 0 1 0 1 1 0 1 2 0 1 0 1 0 1 0 1 0 7 1
29-JAN-08 9 1 0 1 0 1 1 0 1 0 1 0 3 0 1 0 1 1 0 1 0 1 7 1
30-JAN-08 10 1 0 1 1 0 1 0 1 0 1 0 3 0 1 0 1 1 0 1 0 1 7 1
31-JAN-08 11 1 0 1 0 1 1 0 1 0 1 0 0 1 1 2 0 1 1 0 1 0 9 1
01-FEB-08 8 0 1 1 0 1 0 1 0 1 0 1 2 1 20 0 0 0 0 0 0 0 0 0
20 Log Switches, with Logging Enabled! That is trashing!
Further Reading
* Oracle 11g Secure Files LOBs
* Thanks to Marco Gralike, who pointed this out as well.
Also read Jonathan Gennick's great article on OTN about deduplication and SecureFiles here: http://www.oracle.com/technology/oramag/oracle/07-sep/o57securefiles.html