
Saturday, 20 January 2007

Shared_pool latching

Trying to get an answer for this from gurus like Tom Kyte and Jonathan Lewis:
SHARED POOL LATCHING in the LIBRARY CACHE. Why is it bad?


The shared pool is where the SQL code and the data dictionary live. Isn't this structure 'mostly' READ-ONLY? Why latch/lock it? Why do you care to latch? What could get corrupted in the library cache? Why does Oracle have to "protect" it, as you say above?
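
For reference, a quick way to see what actually lives in the shared pool (library cache, sql area, dictionary cache and so on) is to query V$SGASTAT:

SQL> SELECT name, bytes FROM v$sgastat WHERE pool = 'shared pool' ORDER BY bytes DESC;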

To stop latching/parsing and the bad scalability that comes with them, Tom Kyte says we must use bind variables. Yes, I agree, but why latch the shared pool at all? What is that other "nasty" user going to do to your SQL statement in the shared pool, apart from sharing the SQL with you?
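
To make the parsing point concrete, a minimal sketch (EMP is just the classic SCOTT demo table, nothing from this discussion):

SQL> SELECT ename FROM emp WHERE empno = 7369; -- new cursor, hard parse under the library cache latches
SQL> SELECT ename FROM emp WHERE empno = 7499; -- different text, so yet another hard parse
SQL> VARIABLE empno NUMBER
SQL> EXEC :empno := 7369
SQL> SELECT ename FROM emp WHERE empno = :empno; -- one shared cursor, reused for every value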

The Shared Pool is not like the Data Buffer, where the data itself lives. In the Data Buffer I understand why latching/locking (shared locks, exclusive locks during updates and selects) is bad and why it inhibits concurrency: 'Consistent Gets', the ACID theory of transactions, locks, transactions trying to concurrently modify the same data, phantom reads, dirty reads and so on all come to mind. Fine.
But why in the SHARED POOL? It is just SQL text; what is going to get corrupted? Why does it have to be concurrent and use latches (serialization devices)?
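
You can actually watch this contention on your own system; a quick look at the relevant latch statistics:

SQL> SELECT name, gets, misses, sleeps FROM v$latch WHERE name IN ('shared pool', 'library cache');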

Awaiting a response from:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1513005503967#58524800346079273

http://jonathanlewis.wordpress.com/2007/01/05/bind-variables/

Monday, 15 January 2007

Recovering after SHUTDOWN ABORT, or CRASH during OS hot-backup

What do you do if the database crashes during an OS hot backup?
When this happens the tablespaces are left in an inconsistent state, the database will not OPEN and a STARTUP will give an error similar to:
ORA-01113: file 12 needs media recovery
ORA-01110: data file 12: '/oracle/dbs/tbs_41.f'

In this situation what you can do is limited to two things:

1. Recover the database.
2. Mount the database, then run ALTER DATABASE END BACKUP


1. Recover the database.

Before you use the following quick steps, make sure you also read the link provided at the bottom of this post. Use this option too when you are not sure whether a RESTORE was made after the crash, perhaps by the backup operator!? If there was a restore, the RECOVER command will bring the database up to date as well.

1. Mount your database. For example, enter:

SQL> STARTUP MOUNT

2. Recover your database as normal. For example, enter:

SQL> RECOVER DATABASE

3. Use the V$BACKUP view to confirm that no datafiles are still in active backup mode:

SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------

0 rows selected.

4. Open the database for business.

SQL> ALTER DATABASE OPEN;


2. Mount the database, then run ALTER DATABASE END BACKUP.

When you get the above STARTUP error message the database will probably already be in MOUNT state. You can either carry on with the commands below immediately after the message, or do a SHUTDOWN ABORT followed by a STARTUP MOUNT. To take all tablespaces out of backup mode simultaneously, the database must be in MOUNT state:

1. Mount but do not open the database. For example, enter:

SQL> STARTUP MOUNT

2. If performing this procedure manually (that is, not as part of a crash recovery script), query the V$BACKUP view to list the datafiles of the tablespaces that were being backed up before the database was restarted:

SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
        12 ACTIVE                  20863 25-NOV-00
        13 ACTIVE                  20863 25-NOV-00
        20 ACTIVE                  20863 25-NOV-00

3 rows selected.

3. Issue the ALTER DATABASE END BACKUP statement to take all datafiles currently in backup mode out of backup mode. For example, enter:

SQL> ALTER DATABASE END BACKUP;

You can use this statement only when the database is mounted but not open. If the database is open, use ALTER TABLESPACE … END BACKUP or ALTER DATABASE DATAFILE … END BACKUP for each affected tablespace or datafile; that is what normally happens at the end of a hot backup.
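
For example, on an open database (the tablespace name is illustrative; the datafile is the one from the error above):

SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER DATABASE DATAFILE '/oracle/dbs/tbs_41.f' END BACKUP;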

4. Open the database for business.

SQL> ALTER DATABASE OPEN;

After this you should take a new backup of the database immediately!
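
For context, this is roughly what the OS hot backup itself does (tablespace name illustrative); a crash between the two ALTER statements is exactly what leaves datafiles ACTIVE in V$BACKUP:

SQL> ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles at the OS level here
SQL> ALTER TABLESPACE users END BACKUP;
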
Source and further reading at this Oracle link:
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96572/osbackups.htm#9817

Friday, 12 January 2007

DBMS_STATS package

Use DBMS_STATS to gather optimizer statistics into the Oracle data dictionary. DBMS_STATS is the preferred and recommended method, as it is more complete than the ANALYZE command, which (read about it on http://asktom.oracle.com) will soon be deprecated. DBMS_STATS gathers more information about the objects it works on.

1. To gather table statistics with DBMS_STATS

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'KUBILAY', tabname => 'PART_USER')

2. To gather table statistics with DBMS_STATS, using an estimate percentage

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'KUBILAY', tabname => 'USERS', estimate_percent =>50)

3. To gather schema statistics with estimate percent

SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('KUBILAY', estimate_percent =>50)

4. To gather a partition’s statistics in full

SQL> EXECUTE dbms_stats.gather_table_Stats( ownname => 'SCOTT', tabname => 'SOME_TABLE', partname => 'P0705');

* To gather statistics on a partition without calculating statistics for its indexes (i.e. if the table partition is really big, there are lots of indexes and you want to skip them), use this:


BEGIN
   dbms_stats.gather_table_stats(
      ownname          => 'SCOTT',
      tabname          => 'SOME_TABLE',
      partname         => 'SOME_PARTITION',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      cascade          => FALSE,        -- skip the partition's indexes
      degree           => DBMS_STATS.AUTO_DEGREE,
      granularity      => 'PARTITION'   -- statistics for this partition only
   );
END;
/


This works exclusively on the partition specified (granularity=>'PARTITION') and ignores all of the partition's indexes (cascade=>FALSE)!
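
To verify it only touched that partition, check LAST_ANALYZED afterwards (same hypothetical table as above):

SELECT partition_name, last_analyzed, num_rows
FROM user_tab_partitions
WHERE table_name = 'SOME_TABLE';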



The Oracle Way

Statistics gathering is faster and neater, with more accurate estimates, if you use the Oracle-recommended procedures and parameters below. These parameters have a mind of their own: they decide how much to estimate and whether or not to run the operation in parallel.

1. On Schema with DBMS_STATS

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname =>'MY_SCHEMA', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, DEGREE => DBMS_STATS.AUTO_DEGREE);

2. On Table with DBMS_STATS

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SESSIONS', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, DEGREE => DBMS_STATS.AUTO_DEGREE);

I find that using the Oracle-recommended values (i.e. degree=>DBMS_STATS.AUTO_DEGREE) always makes the statistics gathering faster and more effective.
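
A quick way to check that it worked and when statistics were last gathered:

SELECT table_name, num_rows, last_analyzed
FROM user_tables
ORDER BY last_analyzed;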


The documentation for the full package is here: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461

Monday, 8 January 2007

Add Oracle range partitions quickly

Find the partition with the maximum HIGH_VALUE for each table and see whether you need to add new RANGE partitions to your date-partitioned tables (there is an ALTER TABLE example at the end of this post).

I hear Oracle 11g will do this automatically. About time!

USER's partitions

SELECT table_name, partition_name, high_value
FROM user_tab_partitions a
WHERE partition_position IN
  (SELECT MAX(partition_position)
   FROM user_tab_partitions b
   WHERE b.table_name = a.table_name
   GROUP BY table_name);


You can enhance this script further: logged in as SYSDBA, investigate all partitions in your database and find the maximum partition HIGH_VALUE for all user schemas (except SYS and SYSTEM) as follows:

DBA (All) partitions

SELECT a.table_owner,
a.table_name ,
a.partition_position ,
b.high_value
FROM dba_tab_partitions b,
(SELECT table_owner ,
table_name ,
MAX(partition_position) partition_position
FROM
(SELECT p.table_owner ,
p.table_name ,
p.partition_position,
p.high_value
FROM dba_tab_partitions p
WHERE p.table_owner NOT IN ('SYS','SYSTEM')
ORDER BY p.partition_position DESC,
p.table_name
)
GROUP BY table_owner,
table_name
) a
WHERE a.table_owner = b.table_owner
AND a.table_name = b.table_name
AND a.partition_position = b.partition_position
ORDER BY a.table_owner;
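
Once you know the maximum HIGH_VALUE, adding the next range partition is straightforward; a sketch with a made-up table, partition name and boundary date:

ALTER TABLE some_table
ADD PARTITION p0702 VALUES LESS THAN (TO_DATE('01-02-2007','DD-MM-YYYY'));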

Saturday, 6 January 2007

Change timestamp

Changing the timestamp of a file in Linux:

$ touch -t 200605101200 myfile.txt

Email publishing is cool on Blogger

This thing got me back into blogging!

Just forward your email to the address you create in Blogger and it gets published automatically. Now that is a cool use of SQL INSERT, wow.