
Tuesday, 13 November 2007

SQL*Plus takes 100% of CPU time

This is a funny bug which I experienced on Linux x86 boxes, where SQL*Plus would just hang and do nothing...


That is, if the Linux x86 box has an uptime of more than 248 days, then SQL*Plus just hangs!

I experienced it on a Linux x86 Oracle 10g Client installation, where the client logged in every day via SQL*Plus to remote boxes to do things. Suddenly it stopped. I checked top on the box where the Oracle 10g client was installed and saw SQL*Plus eating the box alive! How strange!?
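To check whether a box has crossed the threshold, you can compare its uptime in seconds against 248 days (a quick sketch; 2^31 clock ticks at 100 ticks per second is roughly 248 days, which is where the limit comes from):

# 248 days = 248 * 86400 = 21427200 seconds
awk '{ if ($1 > 21427200) print "uptime over 248 days - bug territory"; else print "uptime OK: " $1 " seconds" }' /proc/uptime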

Check the system call trace:

strace $ORACLE_HOME/bin/sqlplus -V 2>&1 |less

you will see lots of gibberish...


Checking Metalink, I immediately found Bug 4612267.

Note: 338461.1: SQL*Plus With Instant Client 10.2.0.1 Hangs, When System Uptime Is More Than 248 Days

Oracle Support confirmed this and asked me to patch the installation by applying a one-off patch they supplied on top of the 10.2.0.1.0 Linux Client.

Wednesday, 7 November 2007

Table of Dates

Use a PL/SQL collection TYPE to generate a date list as a database table, which you can then use as a time series table in your data warehouse.


1. Create a DATE_TABLE type.

SQL> create or replace TYPE "DATE_TABLE" AS TABLE OF DATE;

2. Create the DATE_RANGE function to manipulate (load) the above type.



CREATE OR REPLACE FUNCTION date_range(from_dt IN DATE, to_dt IN DATE)
RETURN date_table AS
  -- collection holding one element per day in the range
  a_date_table date_table := date_table();
  cur_dt       DATE := from_dt;
BEGIN
  WHILE cur_dt <= to_dt
  LOOP
    -- append the current day to the collection
    a_date_table.extend;
    a_date_table(a_date_table.COUNT) := cur_dt;
    cur_dt := cur_dt + 1; -- adding 1 to a DATE advances it one day
  END LOOP;
  RETURN a_date_table;
END date_range;
/


More about this function here:

http://www.adp-gmbh.ch/ora/sql/table_cast.html

3. Use the TYPE as a date table which you can then join with other tables. (Using TO_DATE with an explicit format mask avoids relying on the session NLS_DATE_FORMAT.)


SELECT column_value DAYS
FROM TABLE(CAST(date_range(TO_DATE('18-FEB-1981', 'DD-MON-YYYY'),
                           TO_DATE('25-FEB-1981', 'DD-MON-YYYY')) AS date_table))
/

DAYS
-------------------------
18-FEB-81
19-FEB-81
20-FEB-81
21-FEB-81
22-FEB-81
23-FEB-81
24-FEB-81
25-FEB-81



Then use the "Date Table" to join it with SCOTT.EMP and get the employees who were hired during the period:


SELECT d.DAYS, e.ename, e.hiredate
FROM emp e,
     (SELECT column_value DAYS
      FROM TABLE(CAST(date_range(TO_DATE('18-FEB-1981', 'DD-MON-YYYY'),
                                 TO_DATE('25-FEB-1981', 'DD-MON-YYYY')) AS date_table))) d
WHERE d.DAYS = e.hiredate(+)
/



DAYS ENAME HIREDATE
------------------------- ---------- -------------------------
18-FEB-81
19-FEB-81
20-FEB-81 ALLEN 20-FEB-81
21-FEB-81
22-FEB-81 WARD 22-FEB-81
23-FEB-81
24-FEB-81
25-FEB-81

Tuesday, 6 November 2007

Change the location of archivelogs

Use one of the following commands to change, online, the location of your archivelogs.



1. How to change the current location for archivelogs:

SQL> alter system set log_archive_dest_1='LOCATION=/u20/oracle/data1';

System altered.


If you want to use the Flash Recovery Area (FRA) as your archivelog destination, use the command in step 2 below. It is really wise to use the FRA together with RMAN, as RMAN will delete obsolete archivelogs from it based on the backup retention policy; otherwise you have to set up manual ways of deleting old, obsolete archivelogs.
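For instance, with a retention policy in place, RMAN can clean the obsolete archivelogs out of the FRA for you (a sketch, assuming a 7-day recovery window):

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> DELETE NOPROMPT OBSOLETE;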

2. How to Use Flash Recovery Area (FRA) as your archivelog location:

(First set up flash recovery area)
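If the FRA is not configured yet, set its size and location first (a sketch; adjust the size and path to your environment):

SQL> alter system set db_recovery_file_dest_size=10G scope=both;
SQL> alter system set db_recovery_file_dest='/usr/local/oracle/flash_recovery_area' scope=both;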

SQL> alter system set  log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' ;

System altered.


The parameter value USE_DB_RECOVERY_FILE_DEST is very good. It keeps things tidy. Alongside the autobackup and backupset directories in the FRA, it will also create a directory called archivelog and put the archivelogs in there!

Given the FRA as:

/usr/local/oracle/flash_recovery_area

You get:


oracle@machine:/usr/local/oracle/flash_recovery_area/TESTDW0$ ls -lrt
total 12
drwxr-x--- 9 oracle oinstall 4096 2007-11-07 07:45 backupset
drwxr-x--- 9 oracle oinstall 4096 2007-11-07 07:45 autobackup
drwxr-x--- 4 oracle oinstall 4096 2007-11-07 07:45 archivelog


Very cool...

NOTE:

When you decide to use the FRA for your archivelogs, you might want to move the existing archivelogs into the FRA after you have set the destination as above. When you do that, you will need to let RMAN know the new location of the archivelogs. You can achieve that with the command:

RMAN> catalog archivelog '/your/new/location/thearchivelogfile'; 


The docs describe this here: http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmcatdb002.htm#sthref941

Otherwise you will probably get an error like:

... specification does not match any archive log in the recovery catalog


This means that the controlfile does not know about the new location of your archivelogs.

Friday, 2 November 2007

Innovation in Oracle 11g Database

I attended the Ascot, UK (26/09/2007) presentation by Andrew Sutherland, Vice President of Technology for Oracle EMEA, and here is a summary of the notes I took at that event.



Oracle 11g Innovations


Real Application Testing: test workloads are typically simulated and are not accurate and complete representations of the real world.

Database Replay: the ability to accurately and realistically rerun actual production workloads, including online user and batch workloads as well as concurrency dependencies and timing, on a test system. There is no need to understand the application and write scripts to simulate load. DBAs can now test without the need to duplicate the application infrastructure/hardware. It cuts down testing cycles and the time taken for tests.

Used for:

  • Database upgrades, patches, parameter and schema changes, etc.
  • Configuration changes, such as conversion from a single instance to RAC, ASM, etc.
  • Storage, network and interconnect changes; operating system and hardware migrations, patches, upgrades, parameter changes.

SQL Performance Analyzer: a tool measuring the impact of environment changes on SQL execution plans in the database. It is a kind of tool which integrates the previous SQL Tuning Sets (STS) and SQL Tuning Advisor (STA) tools. It produces a report outlining the benefit to the workload introduced by the change. So it gives a picture of the SQL execution plans before and after the change, including, separately, the statements which regressed. The advantage of this tool, compared to the home-grown approaches that DBAs have evolved over the years, is that it takes the number of executions into consideration while measuring the impact. The Oracle 11g SQL Performance Analyzer takes all these factors into account while predicting the overall performance improvement and regressions, and allows the Oracle 11g database administrator to fix the SQL or fix the regression through SQL Plan Baselines.

It can assess the impact of changes such as:

  • Database upgrade
  • Configuration changes to the operating system, hardware, or database
  • Database initialization parameter changes
  • Schema changes, for example, adding new indexes or materialized views
  • Gathering optimizer statistics
  • SQL tuning actions, for example, creating SQL profiles

New Datatypes

XML, 3-D data, RFID (Radio Frequency ID), MP3, JPEG….


Binary XML (no need to store the XML as a CLOB anymore): "… If you're mostly just going to pump the XML back out, binary seems to be the way to go. …" Binary XML allows you to store XML totally unparsed but still accessible via XPath. No encoding checks are done on loading. With no (or at least reduced) parsing, validation or conversions, you will have much less IO and much less CPU, meaning much faster load times. In combination with SecureFiles this is a killer!

XMLIndex is a new way to improve the access performance of your XML. In the past, your options were XPath function-based indexes (using extract or extractvalue) or Oracle Text. Both of those have some limitations that XMLIndex tries to address.

SecureFiles, the new BLOBs, with performance on a par with Unix file handling. This feature introduces a completely reengineered large object (LOB) data type to dramatically improve performance, manageability, and ease of application development. The new implementation also offers advanced, next-generation functionality such as intelligent compression, de-duplication of LOB segments and transparent encryption.

Managing Data Growth

Data Growth Facts:

The DW is bogged down: the amount of data a system stores increases faster than the cost of disks decreases, so buying cheap disks will not solve the data growth issue.

The 30 GB databases that counted as the largest 30 years ago have now become 300 TB databases.
Data Growth Solution:

The ability to compress in OLTP environments as well, with better compression ratios: Advanced Compression (not gzip). The compression algorithm in Oracle works with symbol tables; it is a logical compression rather than a physical one! Before, compression was only for DWs; now it is for OLTP too.

ILM (Information Lifecycle Management): keep old, not-so-old and current data on different storage media, using partitioning and advanced compression.

SecureFiles: more efficiency with BLOBs.
Automatic partitioning: adds partitions on the go.

More partitioning options: in 10g you could only do range-list and range-hash combinations; now you can do many more.

Higher Quality of Service

All performance figures have increased: better, faster. Faster backups, with the ZLIB algorithm for compression, and parallel backup of the same datafile.
MVs become "CUBE"-organized MVs. The PIVOT operator instead of DECODE. The new MODEL clause, and MV refresh without logs.
MAA (Maximum Availability Architecture)

Data Recovery Advisor: RMAN advice, repairs and previews, in RMAN and in the OEM GUI.
Flashback Transaction, with dependencies.

Total Recall (Flashback Data Archive, "Time Travel") with SQL such as "AS OF" (a quick example follows this list). The main difference between Flashback and Total Recall is that with Total Recall data is permanently stored in an archive tablespace and only ages out after a user-defined retention time. Oracle Total Recall enables administrators to easily maintain historical archives of changed data for compliance and business intelligence. It provides a secure, efficient, easy-to-use and application-transparent solution for long-term access to changed data.
Oracle Active Data Guard - no need to stop continuous redo apply, like in 10g, when reading the standby copy.
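As an illustration of the "AS OF" style of query (a sketch against the standard SCOTT schema; with Total Recall the same syntax can reach back as far as the archive retention allows):

SQL> select ename, sal
     from scott.emp as of timestamp (systimestamp - interval '1' hour)
     where empno = 7839;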

By the way, here is what is installed by default when you install Oracle 11g:

Overview of new components during Oracle 11g Release 1 (11.1) Linux x86 install:

Oracle Apex (HTMLDB) is installed by default with 11g.
Oracle Configuration Manager is an available install option.
Oracle Database Vault is an available install option.
Oracle Real Application Testing is installed by default.
Oracle SQL Developer is installed by default.
Oracle Warehouse Builder is installed by default (basic features only).
Oracle XML DB is installed by default for all database instances.
Oracle Ultra Search is integrated within the database, so I suppose it installs by default.
Oracle HTTP Server is available as an option on separate media.
Oracle Data Mining installs by default in Oracle Enterprise Edition.


Summary

Oracle Real Application Testing;
Oracle Advanced Compression;
Oracle Total Recall; and,
Oracle Active Data Guard.

Detailed Oracle 11g New Features Guide
http://download.oracle.com/docs/cd/B28359



Monday, 17 September 2007

Oracle 11g Database Editions

Oracle XE, SE One, SE and EE Editions Matrix for Oracle 11g:

  • What does the Enterprise Edition include?
  • What is Oracle Standard Edition and what can I get with it?
  • How many CPUs can I have with Oracle Standard Edition?
  • Can I install RAC with Standard Edition One?
  • What is Oracle Enterprise Edition and what does it include?

Here is the link with the answers: http://www.oracle.com/database/product_editions.html



Do you want to see how much Oracle licenses cost? Here is the Oracle Online Store:

https://oraclestore.oracle.com


Friday, 7 September 2007

Redo Log Generation

See how much redo log you are generating per day. This is good to use when you see your archivelog directories filling up unusually fast.

This will not show you where the problem is, though; it will just give you the number of log switches and the average MB generated per day so that you can compare day by day. If you do start seeing unusually high log switches, and thus a high volume of archivelog generation, a good place to start is looking in STATSPACK for transactions which do lots of UPDATE/DELETE operations.
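Another quick check is which current sessions have generated the most redo (a sketch joining v$sesstat to v$statname; the 'redo size' statistic is cumulative per session):

SQL> SELECT s.sid, s.username, st.value redo_bytes
     FROM v$session s, v$sesstat st, v$statname sn
     WHERE st.sid = s.sid
     AND st.statistic# = sn.statistic#
     AND sn.name = 'redo size'
     ORDER BY st.value DESC;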

I found this very nice SQL script on the Oracle website at this URL: http://www.oracle.com/technology/oramag/code/tips2005/062705.html

Daily Count and Size of Redo Log Space (Single Instance)

SQL> SELECT A.*,
            Round(A.Count# * B.AVG# / 1024 / 1024) Daily_Avg_Mb
     FROM
     (
       SELECT To_Char(First_Time, 'YYYY-MM-DD') DAY,
              Count(1) Count#,
              Min(RECID) Min#,
              Max(RECID) Max#
       FROM v$log_history
       GROUP BY To_Char(First_Time, 'YYYY-MM-DD')
       ORDER BY 1 DESC
     ) A,
     (
       SELECT Avg(BYTES) AVG#,
              Count(1) Count#,
              Max(BYTES) Max_Bytes,
              Min(BYTES) Min_Bytes
       FROM v$log
     ) B;

Sample Output:


DAY COUNT# MIN# MAX# DAILY_AVG_MB
---------- ---------- ---------- ---------- ------------
2007-09-05 1 700 700 50
2007-09-04 1 699 699 50
2007-09-03 3 696 698 150
2007-09-02 2 694 695 100
2007-09-01 3 691 693 150
2007-08-31 4 687 690 200
2007-08-30 3 684 686 150
2007-08-29 2 682 683 100
2007-08-28 3 679 681 150
2007-08-27 3 676 678 150
2007-08-26 3 673 675 150

Hourly archivelog switches

There is another one which I find quite useful. I found it on the internet and modified it to show a window of the last 7 days of archivelog generation (switches) per hour. I scripted it in Unix so that the shell sends me an email once a day with this log information. Very useful monitoring stuff!

SQL>SELECT to_date(first_time) DAY,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23"
from
v$log_history
where to_date(first_time) > sysdate -8
GROUP by
to_char(first_time,'YYYY-MON-DD'), to_date(first_time)
order by to_date(first_time)
/


The output is like this:

DAY 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
--------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
31-AUG-07 1 1 1 1 2 1 1 1 1 1 1 2 0 1 0 1 0 1 1 0 1 0 1 1
01-SEP-07 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 1 0 1 1 1 1 1 1
02-SEP-07 0 1 1 1 1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 2 1 1 1 1
03-SEP-07 1 2 2 1 1 1 1 1 1 1 1 2 1 1 1 1 1 2 1 1 2 1 1 2
04-SEP-07 1 1 2 1 1 2 1 1 1 2 1 1 2 1 2 1 2 1 2 1 2 1 2 2
05-SEP-07 2 1 2 1 2 2 1 2 1 2 2 1 2 2 1 2 2 2 1 2 2 2 2 2
06-SEP-07 2 1 2 2 2 2 1 2 2 2 2 2 1 2 2 1 0 2 0 0 0 0 1 0
07-SEP-07 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0


Thursday, 23 August 2007

Upgrade an Oracle 10g database to Oracle 11g using dbua

Oracle suggests several ways of upgrading a database from a previous version to 11g R1:

  • Database Upgrade Assistant (DBUA)
  • Manual upgrade using SQL scripts and utilities
  • Export and Import utilities
  • CREATE TABLE AS ... SQL statement.

In this post I am trying to upgrade my Oracle installation and a database on my Ubuntu 7.10 Desktop from Oracle 10g (10.2.0.1) to Oracle 11g (11.1.0.6.0) using DBUA. During this very messy job I hit a couple of problems, looked around in blogs, and here is how I got it all sorted.

1. Install the new Oracle software in a separate directory in a new home

I downloaded the Oracle 11g (11.1.0.6.0) version from Oracle here, then I kept my old $ORACLE_BASE directory as /usr/local/oracle and created a new $ORACLE_HOME on my Desktop machine like this: /usr/local/oracle/product/11.1.0.6.0/. I installed the Oracle 11g version in there using OUI (Oracle Universal Installer). I hit lots of errors, as I am using an uncertified OS version (Ubuntu 7.10). I implemented different workarounds, most of them just ignoring all the errors :-) It still installs!

Blogs which I found very useful while doing this were:


    http://www.pythian.com/blogs/549/installing-oracle-11g-on-ubuntu-linux-704


    http://www.dizwell.com/prod/node/1046

    http://advait.wordpress.com/2007/09/07/upgrading-to-oracle-database-11g/



Although the bloggers above indicate gentle ways of dealing with installing Oracle 11g on an uncertified OS (like Ubuntu), I followed the crude way and "Ignored All" warning messages until I had completed the installation of the Oracle 11g software. Next, in step 2, I talk about upgrading my existing Oracle 10g database using dbua (Database Upgrade Assistant).



2. Upgrading existing databases to the New Oracle 11g (11.1.0.6.0) version using dbua



Now it is time to do the upgrade. After setting all environment variables (ORACLE_HOME and ORACLE_BASE) correctly to point to the new Oracle 11g software (don't forget oratab), something like the sketch below, I fired up dbua (Database Upgrade Assistant) and started upgrading my database.
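A minimal environment setup might look like this (a sketch; the paths follow the ones used above, and the oratab line assumes the database name TESTDB01 shown later):

export ORACLE_BASE=/usr/local/oracle
export ORACLE_HOME=/usr/local/oracle/product/11.1.0.6.0
export PATH=$ORACLE_HOME/bin:$PATH
# and point the database's entry in /etc/oratab at the new home:
# testdb01:/usr/local/oracle/product/11.1.0.6.0:N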

ADVAIT'S ORACLE BLOG (http://advait.wordpress.com/2007/09/07/upgrading-to-oracle-database-11g/) explains the command line method for upgrading databases from Oracle 10g 10.2.x.x versions to Oracle 11g 11.1.0.6.0. Very good. Well done Advait!

It solved all the critical upgrade issues which I encountered using dbua. The timezone version 4 bug, identified as note 396387.1 on Metalink, is especially well explained in Advait's blog; it hits most of the 10.2.x.x versions.

I ignored lots of INVALID object warnings from dbua and went ahead with the upgrade. I had a bumpy ride with the Enterprise Manager configuration not succeeding (it said failed); I skipped that as well, since I could do it later. I managed to upgrade and got the Oracle 11g prompt:


SQL*Plus: Release 11.1.0.6.0 - Production on Thu Dec 6 22:40:21 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
SQL> conn / as sysdba
Connected.
SQL> select name from v$database;
NAME
---------
TESTDB01



Friday, 27 July 2007

RMAN Incrementally Updated Backups

In this post I will explore RMAN incremental backups. Incremental backups are best used in data warehouse environments, where there is a lot of data to back up and keeping multiple copies on disk takes space.


There are a few alternatives Oracle provides to optimize backups of large databases, and the incrementally updated backup strategy is one of them. Actually, it is the Oracle-recommended backup policy!

You can NOT use OS methods to take incremental backups; only RMAN will let you take them.

Incrementally updated backups allow you to apply database changes incrementally (i.e. every day) to an image copy (not zipped) full backup! That is, you keep updating the level 0 full backup with new data incrementally all the time. This way you don't have to keep old full backups, spread over days, separately with their individual archivelogs and controlfiles. You don't even have to take full backups periodically any more! You take only one full backup (level 0) at the beginning and keep updating it every day incrementally with the new changes in the database. By applying the changes to the last backup copy, you are bringing the backup copy forward in time. This is also called rolling forward the backup copy.

In my case I used this backup methodology to reduce the amount of disk space used for backups, as I didn't need to keep an individual full backup of the data warehouse for each day. I took only one full backup (level 0) at the beginning, and my retention policy thereafter was 7 days. In this case the retention policy is the length of time I kept the incremental backups before applying them to the full backup.

To improve the performance of incrementally updated backups, another feature called block change tracking was also introduced in Oracle 10g. This records the changed blocks of the datafiles in a separate file in the database called the change tracking file. When the time for a backup comes, RMAN reads the change tracking file to find out which blocks have changed, instead of scanning whole datafiles. This makes the backup much faster.

So, before running the incrementally updated backup script, I enable change tracking in the database like this:

1. First check to see if change tracking is already enabled by querying the V$ view like this:

SQL> conn / as sysdba

Connected

SQL> SELECT STATUS||FILENAME||BYTES FROM V$BLOCK_CHANGE_TRACKING
SQL> /

STATUS||FILENAME||BYTES
--------------------------------------------------------------------------------
DISABLED


2. Then enable Change Tracking.

SQL> alter database enable block change tracking using file '/usr/local/oracle/testdw0/rman_change_track.dbf';

Database altered


The script I used to take an incrementally updated backup is rman_incr_backup.sh, below. This is a script which uses a non-default RMAN location, as I explicitly indicate where the backups should be stored on disk. This version doesn't use the FRA (Flash Recovery Area) either.


#!/bin/bash
# ########################################
# Unix controls
trap cleanup 1 2 3 15
cleanup()
{
echo "Caught CTRL-C Signal ... exiting script."
exit 1
}
# Oracle Variables
export ORACLE_SID=testdw0
export ORACLE_BASE=/usr/local/oracle
export ORACLE_HOME=/usr/local/oracle/product/10.2.0.1
export PATH=$PATH:${ORACLE_HOME}/bin
# RMAN INCREMENTALLY UPDATED BACKUPS (Window of 24 hours)
rman target=/ << EOF
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/usr/local/oracle/backups/rman_bkps/dwbacks/atbckp_cntrlfile_testdw0%F';
run {
ALLOCATE CHANNEL RMAN_BACK_CH01 TYPE DISK FORMAT '/usr/local/oracle/backups/rman_bkps/dwbacks/databasefiles_%d_%u_%s_%T';
CROSSCHECK BACKUP;
# Roll the previous level 1 increment into the image copy (a no-op on the first run)
RECOVER COPY OF DATABASE with TAG 'testdw0_incr_update';
# Take the new level 1 increment against the tagged image copy;
# the first run creates the level 0 image copy instead
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY with TAG 'testdw0_incr_update' DATABASE;
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP as compressed backupset ARCHIVELOG ALL format '/usr/local/oracle/backups/rman_bkps/dwbacks/archivelogs_%d_%u_%s_%T' DELETE INPUT;
CROSSCHECK BACKUP;
DELETE NOPROMPT OBSOLETE;
DELETE NOPROMPT EXPIRED BACKUP;
RELEASE CHANNEL RMAN_BACK_CH01;
}
EXIT;
EOF
#########################################

Friday, 20 July 2007

RMAN Recovery

In this post I will try to explain how a database recovery is done on a new host using the latest RMAN full hot backup.

The recovery is done on a new host with the same directory structure, the same database name and the same DBID. Let's assume that you have used the RMAN hot backup script provided in this blog, which takes backups of the database using a retention policy with a recovery window of 2 days. The full script is here:

http://www.databasesystems.info/2008/02/rman-hot-backup-script.html


That is, it will only keep the backups and archivelogs necessary to recover to any point in time in the last 2 days. During recovery, RMAN will use the latest backups and archivelogs from the backup directory.


1. On the new host, connect to RMAN after you set ORACLE_SID to the new SID


oracle@NEWHOST:~ . oraenv
ORACLE_SID = [testdw0] ? testdb0



oracle@NEWHOST:~$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jun 20 15:00:13 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database (not started)

RMAN>





2. Set the DBID to the DBID of the database you want to restore and recover. You can find the DBID in backup logs or in the autobackup controlfile name.
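If the source database is still available, you can also read the DBID directly from it (a quick check; the value below matches the one used in this example):

SQL> select dbid from v$database;

      DBID
----------
 347812949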






RMAN> set DBID 347812949

executing command: SET DBID

RMAN>




3. Start the database in nomount. It will fail to start up properly with LRM-00109, as below, because you don't have an init.ora file for your instance yet.


RMAN> startup force nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/usr/local/oracle/product/10.2.0.1/dbs/inittestdb0.ora'

starting Oracle instance without parameter file for retrival of spfile

Oracle instance started
Total System Global Area 159383552 bytes
Fixed Size 1218268 bytes
Variable Size 54528292 bytes
Database Buffers 100663296 bytes
Redo Buffers 2973696 bytes


RMAN>



4. Indicate the location of your controlfile autobackups.

This location contains the controlfile autobackup, which has the spfile bundled into it as well.



RMAN> set controlfile autobackup format for device type disk to '/usr/local/oracle/backups/rman_cntrl/autobackup_control_file%F';

executing command: SET CONTROLFILE AUTOBACKUP FORMAT
using target database control file instead of recovery catalog

RMAN>


5. Restore the init.ora file.

It is perfectly valid at this point, after you restore the init.ora, to edit it to suit your needs: change directory locations for controlfiles etc...



RMAN> run
{
allocate channel c1 type disk;
restore spfile to pfile '/usr/local/oracle/product/10.2.0.1/dbs/inittestdb0.ora' from autobackup;
shutdown abort;
};

allocated channel: c1
channel c1: sid=36 devtype=DISK

Starting restore at 20-JUN-07

channel c1: looking for autobackup on day: 20070620
channel c1: autobackup found: /usr/local/oracle/backups/rman_cntrl/autobackup_control_filec-347812949-20070620-02
channel c1: SPFILE restore from autobackup complete
Finished restore at 20-JUN-07

Oracle instance shut down

RMAN>



6. Restart the instance with the proper init.ora file you restored in step 5.




RMAN> startup force nomount;

Oracle instance started

Total System Global Area 608174080 bytes

Fixed Size 1220844 bytes
Variable Size 192941844 bytes
Database Buffers 411041792 bytes
Redo Buffers 2969600 bytes

RMAN>



7. Restore the controlfile

First indicate where the controlfile you want to use is.




RMAN> set controlfile autobackup format for device type disk to '/usr/local/oracle/backups/rman_cntrl/autobackup_control_file%F';

executing command: SET CONTROLFILE AUTOBACKUP FORMAT


Now restore the controlfile.





RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;

Starting restore at 20-JUN-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: looking for autobackup on day: 20070620
channel ORA_DISK_1: autobackup found: /usr/local/oracle/backups/rman_cntrl/autobackup_control_filec-347812949-20070620-02
channel ORA_DISK_1: control file restore from autobackup complete
output filename=/usr/local/oracle/testdb0/control01.ctl
output filename=/usr/local/oracle/testdb0/control02.ctl
output filename=/usr/local/oracle/testdb0/control03.ctl
Finished restore at 20-JUN-07

released channel: ORA_DISK_1

RMAN>




8. Now you have a controlfile, mount the database





RMAN> ALTER DATABASE MOUNT;

database mounted

RMAN>



9. Indicate to RMAN where your hot backups are

This is where you tell RMAN where your backups are. Try changing the backup files' directory and re-running this command with the appropriate location, and you will see that it will 'find' the files and update the controlfile or catalog accordingly.




RMAN> CATALOG START WITH '/usr/local/oracle/backups/rman_bkps/hot_backups/';

searching for all files that match the pattern /usr/local/oracle/backups/rman_bkps/hot_backups/
no files found to be unknown to the database

RMAN>



10. Restore the database




RMAN> restore database;

Starting restore at 20-JUN-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /usr/local/oracle/testdb0/system01.dbf
restoring datafile 00002 to /usr/local/oracle/testdb0/undotbs01.dbf
restoring datafile 00003 to /usr/local/oracle/testdb0/sysaux01.dbf
restoring datafile 00004 to /usr/local/oracle/testdb0/users01.dbf
restoring datafile 00005 to /usr/local/oracle/testdb0/ts1.dbf
restoring datafile 00006 to /usr/local/oracle/testdb0/apex.dbf
restoring datafile 00007 to /usr/local/oracle/testdb0/FLOW_1.dbf
channel ORA_DISK_1: reading from backup piece /usr/local/oracle/backups/rman_bkps/hot_backups/databasefiles_TESTDB0_s3iko9hm_899_20070620
channel ORA_DISK_1: restored backup piece 1
piece handle=/usr/local/oracle/backups/rman_bkps/hot_backups/databasefiles_TESTDB0_s3iko9hm_899_20070620 tag=TAG20070620T104510
channel ORA_DISK_1: restore complete, elapsed time: 00:03:16
Finished restore at 20-JUN-07

RMAN>



11. Recover the database; it will automatically apply archivelogs and then stop (it will fail, like below, on the last log)

This is normal, as Oracle will always try to roll the database forward to the latest log; it will always ask for the next archivelog and here will stop on the last available one. At this point we will assume that we are doing an incomplete recovery and will open the database with the RESETLOGS option. You could avoid this error by first finding out which archivelog sequence is the last one and then recovering up to that point, since RMAN does not do cancel-based recovery. Nevertheless, here is what happens if you just say RECOVER DATABASE at this point:





RMAN> recover database;

Starting recover at 20-JUN-07
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=333
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=334
channel ORA_DISK_1: reading from backup piece /usr/local/oracle/backups/rman_bkps/hot_backups/archivelogs_TESTDB0_s5iko9p1_901_20070620
channel ORA_DISK_1: restored backup piece 1
piece handle=/usr/local/oracle/backups/rman_bkps/hot_backups/archivelogs_TESTDB0_s5iko9p1_901_20070620 tag=TAG20070620T104905
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
archive log filename=/usr/local/oracle/arch/testdb0_1_333_614535884.arc thread=1 sequence=333
archive log filename=/usr/local/oracle/arch/testdb0_1_334_614535884.arc thread=1 sequence=334
unable to find archive log
archive log thread=1 sequence=335
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 06/20/2007 15:22:07
RMAN-06054: media recovery requesting unknown log: thread 1 seq 335 lowscn 1793924713

RMAN>


You could have avoided seeing this ugly error message at the end of your recovery if you only knew up to which archivelog sequence number (usually the last sequence number available) you want to recover. You can obtain this last sequence number by issuing the command:




RMAN> list backup of archivelog from time='sysdate-7';


This would have shown you the last available sequence number from your archivelog backups, and then you could recover more gracefully to that sequence number like this (here 335, the sequence RMAN reported missing above; UNTIL SEQUENCE is non-inclusive, so this applies logs up to and including 334):



RMAN> recover database until sequence 335 thread 1;


Then next, you would open the database as in step 12 below.


12. Then you have to open the database with the RESETLOGS option

You have to, since you are doing an incomplete recovery from full hot backups with no online redo logs. The redo logs are not present, and this step creates them and resets the log sequence.




RMAN> alter database open resetlogs;

database opened

RMAN>


You are done! You now have the database fully operational on the new host, from an RMAN full hot backup.


Friday, 8 June 2007

LAST_VALUE analytic functions

Oracle analytic functions are great! There are a few of them which I particularly like and would like to post here. I always need to check tables for the last entry of a record.

LAST_VALUE

The LAST_VALUE analytic function returns the last value of an ordered set of values. Think of a table with data like this:


SQL> select * from tab1;

        ID     ENDEKS ISIM
---------- ---------- ----------
         1       1001 AA
         1       1002 CV
         1       1003 FC
         2       1001 AA
         3       1001 KK
         3       1000 LL
         2       1002 ZZ
         2       1010 MM
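
If you want to reproduce the example, a minimal setup might look like this (a sketch; column types assumed from the data shown):

SQL> create table tab1 (id number, endeks number, isim varchar2(10));
SQL> insert into tab1 values (1, 1001, 'AA');
SQL> insert into tab1 values (1, 1002, 'CV');
SQL> insert into tab1 values (1, 1003, 'FC');
SQL> insert into tab1 values (2, 1001, 'AA');
SQL> insert into tab1 values (3, 1001, 'KK');
SQL> insert into tab1 values (3, 1000, 'LL');
SQL> insert into tab1 values (2, 1002, 'ZZ');
SQL> insert into tab1 values (2, 1010, 'MM');
SQL> commit;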


Suppose all you wanted to see is the last entry for each ID at any point in time; then an Oracle analytics SQL query like this would do.
SQL> select x.id,max(endeks), isim from
(
SELECT ID,
last_value(endeks) over(PARTITION BY ID order by endeks asc) endeks,
last_value(isim) over(PARTITION BY Id ) isim
FROM tab1
order by 1 , 2
) X
group by x.id, isim
order by 1
/

     ID MAX(ENDEKS) ISIM
---------- ----------- ----------
         1        1003 FC
         2        1010 MM
         3        1001 KK


Same answer with MAX and Oracle Analytics

It is possible to get the same answer with Oracle analytics and the MAX function. If your table is very big, use this one, as it is much more efficient: it goes through your table only once. Use AUTOTRACE on both statements to see the difference.
SQL> select id, endeks, isim from
(
select a.*, max(endeks) over (partition by id) last_message
from tab1 a
)
where endeks = last_message
/

        ID     ENDEKS ISIM
---------- ---------- ----------
         1       1003 FC
         2       1010 MM
         3       1001 KK

Saturday, 14 April 2007

Configuring Oracle Standby Database on Ubuntu Linux

In this post I will write up the steps I took to configure a Physical Standby Database on two Ubuntu hosts (Breezy Badger or Dapper Drake) running Oracle 10g R2. For all this to work, the following assumptions must hold true:

  • Both hosta and hostb must have the same operating system and version.
  • The directory structure for datafiles on both hosts must be the same.



    1. Create the Primary Database.

    I have created a small database on hosta called testdb1 to use as the Primary Database using dbca. Then I took a cold backup of this database.

    2. Move backups of Primary Database to Standby Database on hostb.

    Next I moved the backup datafiles to hostb, into their corresponding locations, with scp like this:

    scp /u10/oradata/testdb1/* oracle@hostb:/u10/oradata/testdb1/

    3. Create the Standby Database control file.

    Then I went back to hosta, started up the testdb1 Primary Database, and created a special control file. It is important to create the controlfile after you shut down the Primary Database cleanly and take the backup, not before. This controlfile will be used on hostb to start up the Standby Database.
    Log in to Primary Database on hosta as SYSDBA and run the command:

    SQL> alter database create standby controlfile as '/tmp/sbycontrol.ctl';

    NOTE: This file is binary. Use a different filename from the controlfiles of the Primary Database, and watch out not to overwrite the current Primary Database controlfiles.

    Then move the created controlfile to hostb. You can choose any location on hostb. Later, you will manually edit the initialization parameter file of the Standby Database to indicate the chosen location.


    SQL> !scp /tmp/sbycontrol.ctl oracle@hostb:/u10/oradata/testdb1/

    4. Create the initialization parameter file for the Standby Database.

    Create a pfile from the spfile of the Primary Database, to be used later as the initialization parameter file to start up the Standby Database. The initialization parameter file you create from the Primary Database will be manually modified later to introduce the Standby Database parameters.
    Log in to Primary Database on hosta as SYSDBA and run the command:

    SQL> create pfile='/tmp/inittestdb1.ora' from spfile;

    Then copy the file to $ORACLE_HOME/dbs on hostb, to the default location. You will edit the file later on hostb.

    SQL> !scp /tmp/inittestdb1.ora oracle@hostb:$ORACLE_HOME/dbs

    5. Set up networking configuration between Primary and Standby Databases.

    You have to make sure that both databases will be able to communicate using their own service names pointing to each other. Create service names on both hosta and hostb which will connect to each other and ship archived logs.
    Edit the tnsnames.ora file on hosta and hostb and add the service name TO_STANDBY to both.

    in tnsnames.ora on hosta add:

    TO_STANDBY =(DESCRIPTION =(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = hosta)(PORT = 1521)))(CONNECT_DATA =(SERVICE_NAME = testdb1)))

    in tnsnames.ora on hostb add:

    TO_STANDBY =(DESCRIPTION =(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = hostb)(PORT = 1521)))(CONNECT_DATA =(SERVICE_NAME = testdb1)))


    You must also create a password file on the standby as well as on the primary database, to enable logins from each other. The passwords must be the same.
    on hosta and hostb do

    $orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=yourpassword

    6. Edit the pfile created for Standby Database

    In step 4 above an initialization parameter file was created from the Primary Database. Now it is time to make that file the initialization parameter file of the Standby Database and add the parameters necessary to start it up in standby mode.

    The file created earlier should be in $ORACLE_HOME/dbs directory like:
    /usr/local/oracle/product/10.2.0.1/dbs/inittestdb1.ora

    Depending on the method of synchronization (redo transport) you choose between the Primary Database and the Standby Database, certain parameters will be set in this file.

    For my installation I used Standby Logs on the Standby Database and Archived Log Transmission.

    I edited the Standby Database initialization parameter file as below; the standby-related parameters are what I added.


    *.compatible='10.2.0.1.0'
    *.control_files='/u10/oradata/testdb1/sbycontrol.ctl'…
    *.log_archive_dest_1='LOCATION=/u01/arch'
    *.log_archive_dest_2='LOCATION=/u11/arch'
    *.standby_archive_dest='LOCATION=/u00/sbylogs'
    *.standby_file_management='AUTO'
    *.remote_archive_enable='TRUE'
    *.log_archive_format='testdb1_%t_%s_%r.arc'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/usr/local/oracle/admin/testdb1/udump'
    ...

    Once you have edited the Standby Database initialization parameter file, the next step is to make an spfile out of it and start up the Standby Database.

    7. Create spfile for Standby Database and start it up

    Log in to Standby Database on hostb as SYSDBA and run the command:

    SQL> create spfile from pfile='/usr/local/oracle/product/10.2.0.1/dbs/inittestdb1.ora';

    Next, start up the Standby Database on hostb with NOMOUNT:

    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area 167772160 bytes
    Fixed Size 1218316 bytes
    Variable Size 62916852 bytes
    Database Buffers 100663296 bytes
    Redo Buffers 2973696 bytes

    And then MOUNT the Standby Database as standby!

    SQL> alter database mount standby database;
    Database altered.

    Now the Standby Database has started: it has got a controlfile, but it hasn't got the standby logs where redo data will be applied as it arrives from the Primary Database via the archivelogs. Next is to add these standby logs on hostb to the standby database testdb1.
    NOTE: If you want to configure fail-over between the two hosts, where each can assume the role of Primary Database / Standby Database, then it is a good idea to add these standby logs to the Primary Database as well. I don't explain how to do that here; in this post I have only explained how to configure "a physical standby database for a primary database". It is best to read the Oracle documentation for that. There is a good explanation of it here:
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm

    8. Add Standby Logs to your Standby Database

    I added 11 logs using statements like the following.
    Log in to Standby Database on hostb as SYSDBA and run the command:

    SQL> alter database add standby logfile '/u00/sbylogs/sbylog07.log' size 5M;

    SQL> alter database add standby logfile '/u00/sbylogs/sbylog011.log' size 5M;

    It is important, and there is a good argument for it, to choose twice the number of primary database redo logs as standby logs on the standby database, for reasons of archiving contention: during log shipping you don't want the system to get bottlenecked because redo log switches outpace what archiving can cope with.
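    You can verify the standby logs you added with a quick query (a sketch):

    SQL> select group#, bytes, status from v$standby_log;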

    9. On hosta on the Primary Database start shipping logs

    Log in to Primary Database on hosta as SYSDBA and run the command:
    alter system set log_archive_dest_3='SERVICE=TO_STANDBY ARCH';
    alter system set log_archive_dest_state_3=enable scope=both;
    This will start the process of shipping the archivelogs to the standby database from a brand new log archive destination pointing at the standby database; note the service name used, TO_STANDBY.
    NOTE: At this point it is best to have windows open tailing the alert.log of both standby and primary, to see what is happening and to spot errors.
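    Besides the alert logs, you can confirm shipping and apply on the standby with a query like this (a sketch; the APPLIED column should turn to YES as logs are applied):

    SQL> select sequence#, applied from v$archived_log order by sequence#;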

    10. Put the Standby Database in recovery mode (Last Step)

    The last step is to put the Standby Database in recovery mode. That is, the standby database in the "physical" configuration stays mounted, in recovery mode; it is not open for querying.

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    That is it! You have a standby database on hostb!

    If you want to test and see your system working, for example by changing something in the primary database on hosta and seeing the change being applied to the standby database on hostb, you will need to open the standby database; a small test sketch follows. Steps on how to stop the standby database from receiving archivelogs from the primary database, and how to put it back into recovery mode, are below.
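    A minimal test might look like this (a sketch; test_sby is a hypothetical table):

    -- On the primary (hosta):
    SQL> create table test_sby (c1 number);
    SQL> insert into test_sby values (1);
    SQL> commit;
    SQL> alter system switch logfile;

    -- Then open the standby read-only, as described next, and check:
    SQL> select count(*) from test_sby;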
    Switch from log shipping mode to read only standby database
    Cancel Redo Apply:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    Open the database for read-only access:

    SQL> ALTER DATABASE OPEN;

    Now you can query the standby database and see the changes you made in the primary database. You can NOT do DML on the standby database! That is, this is a one-way sync from primary to standby.

    Change the standby database from being open for read-only access back to performing Redo Apply. Terminate all active user sessions on the standby database; you may need to kill all sessions except yours.
    Restart Redo Apply. To start Redo Apply, issue the following statement:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    Now the standby database is in recovery/standby mode again and is receiving archivelogs from the primary database.

    In a future post I will write on how to create a physical standby database using RMAN and how to configure Oracle Data Guard Broker for automatic switchover/failover operations and High Availability solutions.



    References:

    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm
    http://www.adp-gmbh.ch/ora/data_guard/create_physical_standby_db.html

    Using Data Guard to do fail-over and other cool features here:
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14230/cli.htm#i1005573



    Thursday, 5 April 2007

    Oracle Table Compression

    Oracle employs a very nice compression algorithm at the database block level which is quite efficient and can yield a threefold compression ratio in most cases.

    Table compression is highly recommended for data warehouse environments, where the operations tend to be read-heavy and batch. It is not recommended for OLTP environments or on tables with high OLTP-like activity, because of the cost compression adds to DML operations. This is true for Oracle 10g; Oracle 11g has gone wild and offers OLTP compression as well!

    I found this quite nice, short explanation of what Oracle table compression is in the Oracle documentation:

    "...Oracle table compression feature compresses data by eliminating duplicate values in a database block. Compressed data stored in a database block (also known as disk page) is self-contained. That is, all the information needed to re-create the uncompressed data in a block is available within that block. Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table..."

    The benefits associated with Table Compression are:
    • Save space
    • Faster backups
    • Query Performance

    1. Find out the candidate table to compress

    Usually NUM_ROWS will reveal the table in need of compression. In data warehouses these are usually fact tables. The fact tables tend to have lots of redundant data in columns such as keys, dates and status flags. The compression algorithm uses a symbol table structure to record this redundancy numerically rather than physically and repeatedly at the table level. That is, compression at the 'relational table' level rather than the byte level.

    Something like this:

    Table A uncompressed:

    a1 a2
    x  z
    x  z
    x  m
    x  z
    y  m

    Table A, when compressed, becomes (each distinct value stored once in the block's symbol table, with its occurrence count):

    a1  count   a2  count
    x   4       z   3
    y   1       m   2


    My worked example:

    As an example we will use table X, the largest table, which I will partition and compress all of its partitions except the current partition. The reason we don't compress the current partition is that in the current month's partition there will be loads of ETL activity (DML), and we do not want to introduce high CPU usage because of compression. Compression is no good when there is lots of DML.
    The size of the table before compression is 6 GB:

    SQL> select segment_name, sum(bytes)/1024/1024 MB from user_Segments where segment_name like 'X'
    group by segment_name
    SQL> /

    SEGMENT_NAME MB
    ------------------------------ ----------
    X 6179

    2. List the partitions of the candidate table and decide which to compress

    You can list the partition names for the tables you are interested to compress like this:

    SQL> set pages 49999
    SQL> l

    1* select table_name, partition_name, compression from user_tab_partitions where table_name ='X'

    SQL> /

    TABLE_NAME                     PARTITION_NAME                 COMPRESS

    ------------------------------ ------------------------------ --------

    ...

    X P0201 DISABLED
    X P0202 DISABLED

    ...
    Then you can write an SQL query to build the commands which will move and compress the partitions. Just altering the property of the table to COMPRESS => ENABLED doesn't compress the existing rows in the table; it will compress only the new rows which are added to the table. To save space we must compress what is there, and leave only the current partition and future partitions uncompressed. You must remember NOT to compress the current partition and the future partitions, as the script below will generate commands to compress all partitions. To find out the current partition, look at PARTITION_POSITION in USER_TAB_PARTITIONS: the highest-positioned partition is the current partition. You can also order by PARTITION_POSITION, as below, and skip the first line.
    Generate PARTITION COMPRESS commands with the following script.


    SQL> select 'ALTER TABLE ' || table_name || ' MOVE PARTITION ' || partition_name || ' TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;' from user_tab_partitions where table_name = 'X'
    order by partition_position desc;

    'ALTERTABLE'||TABLE_NAME||'MOVEPARTITION'||PARTITION_NAME||'TABLESPACENEW_TABLESPACECOMPRESSNOLOGGINGPARALLEL;'
    ---------------------------------------------------------------------------------------------------
    ALTER TABLE X MOVE PARTITION P0207 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;
    ALTER TABLE X MOVE PARTITION P0107 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;

    WARNING: It is very important to compress the tables into a NEW tablespace, as compressing them in place doesn't reorganize the extents in the datafiles in a way that lets the datafiles be resized and shrunk later on.

    3. If there are bitmap indexes on the candidate table you must first mark them unusable


    WARNING: To be able to apply the commands below, the table in question must not have usable bitmap indexes; if it does, you must mark them as unusable, otherwise you get an ORA-14646 error:

    ORA-14646: Specified alter table operation involving compression cannot be performed
    in the presence of usable bitmap indexes

    See here for more info :http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#i1006787

    You will have to rebuild the indexes after you have finished compressing the partitions.
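    Marking an index unusable is straightforward (a sketch; my_bitmap_index is a hypothetical index name):

    SQL> alter index my_bitmap_index unusable;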

    4. Run the compression commands

    For each partition you wish to compress, you are now ready to run commands like:

    ALTER TABLE X MOVE PARTITION P0207 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;
    ALTER TABLE X MOVE PARTITION P0107 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;

    You can choose to use NOLOGGING for this operation, as it otherwise generates lots of redo and will affect your archive logging, and PARALLEL to make it faster.

    After compressing all partitions on the X table the savings realised are as follows:

    A 41% saving!

    -- Before Compression

    SEGMENT_NAME MB
    ------------------------------ ----------
    X 6179

    -- After Compression

    SEGMENT_NAME MB
    ------------------------------ ----------
    X 3670


    5. You must rebuild the BITMAP INDEXES marked UNUSABLE in step 3 above.


    You will have to rebuild all bitmap (and other, if any) indexes marked unusable in the previous steps; to do it quickly you can use NOLOGGING and PARALLEL. Similarly to before, write an SQL statement querying the USER_IND_PARTITIONS table to generate statements like the following for all indexes (see the sketch below).
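    A generator query in the same spirit as the one for the table partitions might look like this (a sketch, picking up just the index partitions left UNUSABLE):

    SQL> select 'ALTER INDEX ' || index_name || ' REBUILD PARTITION ' || partition_name || ' NOLOGGING;'
         from user_ind_partitions
         where status = 'UNUSABLE';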

    ALTER INDEX my_bitmap_index REBUILD PARTITION P1001 NOLOGGING;

    ALTER INDEX my_bitmap_index REBUILD PARTITION P1002 NOLOGGING;

    ...
    This is a one-off operation; you won't have to do it again for the table, e.g. when you add a new partition.

    6. You must rebuild ALL LOCAL INDEXES in the partitioned tables as their state has now become UNUSABLE

    After the compression of the table partitions, all local indexes associated with the table become UNUSABLE. As the Oracle documentation categorically states:

    If you use the ALTER TABLE ... MOVE statement, the local indexes for a partition become unusable. You have to rebuild them afterwards. Otherwise you will get errors in partition maintenance operations like this:

    ERROR at line 1:
    ORA-01502: index 'my_schema.my_X_table_index' or partition of such index is in unusable state

    You can do that by writing something like this and change the table_name=X for each table.

    SQL> select 'alter table X modify partition ' || partition_name || ' rebuild unusable local indexes;' from user_tab_partitions where table_name='X';

    NOTE: You will also have to rebuild indexes whenever you COMPRESS/NOCOMPRESS any partitions. That is, when you use either of these two operations, data moves, and indexes break and have to be rebuilt.

    7. Check to see the space you have gained.

    After moving the table partition segments to their new tablespace and rebuilding the necessary indexes, you ought to gain some space. You will realize that the space the objects occupied before compression has decreased: you require less space to store them, compressed, in their new tablespace. After you move everything from the old tablespace into the new, and you are sure the old tablespace is empty, you can then drop the old tablespace and its datafiles.

    Well, the only approach to 'claim' space back after a table compression operation seems to be the use of ALTER TABLE ... MOVE TABLESPACE 'YOURNEWTABLESPACE' into a 'new' tablespace. Although we can claim physical space back from the database only by resizing datafiles, and we could have 'moved' the tables within their 'own' tablespace, that is not recommended.

    Moving the tables within the same tablespace is not a good idea, because you are never sure if the reorganization of the table segments will actually move the HWM of the datafile down enough for you to resize the datafile (see the asktom.oracle.com links below for more info).

    There might be 'that' extent still hanging at the end of the tablespace which will prevent you from resizing the datafiles. Although you can fish that 'hanging at the end' extent out and reorganize the segment, you are better off moving everything to a new tablespace.

    Use DBA_SEGMENTS before you decide to drop a tablespace, to see if any segments are left in it. Alternatively, you can choose to resize the datafiles to very small files.

    I have also been reading lately that in Oracle 11g you can get Advanced Compression in OLTP environments as well! It is an option which you have to purchase separately with Enterprise Edition, though.

    References:

    Forum Questions to Asktom :

    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7873209745771#240368600346159666

    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:54178027703899

    Oracle Docs :

    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#sthref196

    ANALYZE to compute statistics

    I use the ANALYZE command to calculate statistics quickly and to see what effect the new statistics will have on the execution plan of a query.

    The statistics calculation method recommended by many is actually the DBMS_STATS package. Search this blog for examples of DBMS_STATS.

    Analyze a table with a percent estimate:

    ANALYZE TABLE X ESTIMATE STATISTICS SAMPLE 10 PERCENT;

    Analyze a table partition:

    ANALYZE TABLE X PARTITION (Z) COMPUTE STATISTICS;

    Analyze a table partition using a percent estimate:

    ANALYZE TABLE X PARTITION (Z) ESTIMATE STATISTICS SAMPLE 10 PERCENT;
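    For completeness (my addition), you can also compute statistics on the whole table in full, although on a big table this takes much longer than sampling:

    ANALYZE TABLE X COMPUTE STATISTICS;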


    Tuesday, 27 March 2007

    Oracle Apex Install

    A cool and straightforward tutorial to use when installing Apex.

    There is a better version of this technology nowadays, which will appear in Oracle 11g I hear. It is called the PL/SQL Gateway. It is like everything you need to build and host a web application will be in Oracle! Very nice, I like the idea, so is it bye bye application servers, bye bye web servers? Hello mighty Oracle! Well, it seems the future where we will be able to write that valuable piece of SQL code under an umbrella on a beach somewhere in Barcelona is near.

    Oracle 11g comes with Apex preinstalled, so you won't have to do this.

    http://www.oracle.com/technology/obe/obe10gdb/install/htmldbinst/htmldbinst.htm



    Home made (DIY) quick installation:

    1. Download apex.

    2. Unzip apex in your chosen directory

    3. Switch to the apex directory after unzipping.

    4. From the apex directory, start SQL*Plus and connect to the database as SYSDBA:

    $ sqlplus sys/SYS_password as sysdba

    5. Run

    SQL>@apexins password tablespace_apex tablespace_files tablespace_temp images

    Where:

    password: the password you choose for the Apex admin account / owner.
    tablespace_apex: the name of the tablespace for the Apex user.
    tablespace_files: the name of the tablespace for the Apex Files user.
    tablespace_temp: the name of the temporary tablespace.
    images: the virtual directory for images on the web server; define it as /i/
    This is how I ran it, after creating a tablespace called APEX:

    SQL>@apexins kubilay APEX APEX TEMP /i/

    If for some reason the installation fails, you can remove it like this and try again:

    $ sqlplus sys/SYS_password as sysdba

    Execute the following commands
    SQL> drop user FLOWS_030000 CASCADE;

    SQL> drop user FLOWS_FILES cascade;
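    Note (my addition): the FLOWS_030000 schema name is specific to Apex 3.0 and changes with the Apex version. A quick way to check which one you have, assuming DBA privileges:

    SQL> SELECT username FROM dba_users WHERE username LIKE 'FLOWS%';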

    Visit my Oracle Apex mock applications


    Saturday, 24 March 2007

    Get DDL with dbms_metadata

    Did you ever want to get the Data Definition Language (DDL) of an object in Oracle from SQL*Plus?


    There is a package called DBMS_METADATA which gives you exactly that. Use the package to get the DDL in SQL*Plus by passing the object type (i.e. TABLE, INDEX...) as the first parameter and the object name as the second.

    SQL> set long 50000

    SQL> select dbms_metadata.get_ddl( 'TABLE', 'MY_TABLE' ) from dual;
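    You can also drive it from the data dictionary. A sketch (my addition) which generates the DDL for every index in the current schema, again with LONG set as above:

    SQL> SELECT dbms_metadata.get_ddl('INDEX', index_name) FROM user_indexes;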


    Wednesday, 21 March 2007

    Monitor Dynamic SGA components

    Use this SQL script when logged in as SYSDBA to see what happens to the SGA components over time, and how they grow and shrink.

    I found it very useful monitoring the SHARED_POOL usage over time.


    SELECT component || ' - ' || oper_type component_status,
           initial_size,
           target_size,
           final_size,
           status,
           TO_CHAR(start_time, 'dd-mon-yyyy hh24:mi') start_time,
           end_time
    FROM v$sga_resize_ops
    ORDER BY end_time;

    Wednesday, 7 March 2007

    SPFILE or PFILE startup

    I have found this very useful SQL to identify what type of initialization parameter file an Oracle instance has started with.



    Found this on Thomas Kyte's site at http://asktom.oracle.com; most useful information, as ever!

    Run the following as SYSDBA.

    SELECT DECODE(COUNT(*), 1, 'spfile', 'pfile')
    FROM V$SPPARAMETER
    WHERE ROWNUM = 1
    AND ISSPECIFIED = 'TRUE';
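    A quicker alternative (my addition, a standard SQL*Plus command): if the instance started with an spfile, the VALUE column shows its path; if it started with a pfile, the value is empty.

    SQL> SHOW PARAMETER spfile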



    Install Oracle 10g R2 on Ubuntu (Breezy Badger and Dapper Drake)

    I have found the http://www.dizwell.com website very helpful when I was trying to install Oracle 10g R2 on Ubuntu.

    The instructions are clear and well tested. I have used them a couple of times to install Oracle on Ubuntu.


    I strongly recommend it: a step by step installation guide for Oracle 10g R2 on Ubuntu (Breezy Badger and Dapper Drake).


    Great website: http://www.dizwell.com/prod/node/52?page=0%2C0

    Friday, 2 February 2007

    Search anywhere in a string with INSTR function



    Suppose you have a table X with the following data and you want to search for a particular string in its columns.


    SQL> SELECT * FROM X;

    REGISTRATION_ORIGIN
    -------------------
    aaa.bbbbb
    ccccccc.dd
    f.eeeeeeee

    You can change this data using

    SQL> SELECT REPLACE(SUBSTR(REGISTRATION_ORIGIN, 0, INSTR(REGISTRATION_ORIGIN, '.')), '.', ' ') FROM X
    /

    to this

    registration_origin
    -------------------
    aaa
    ccccccc
    f
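    Note that the REPLACE above turns the '.' into a trailing space. An equivalent form (my addition) which drops the '.' altogether by stopping one character before it:

    SQL> SELECT SUBSTR(registration_origin, 1, INSTR(registration_origin, '.') - 1) FROM X;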

    Saturday, 20 January 2007

    Shared_pool latching

    Trying to get an answer to this from gurus like Thomas Kyte and Jonathan Lewis.
    SHARED POOL LATCHING in LIBRARY CACHE. Why is it bad?


    The shared pool is where the SQL code and the data dictionary live. Isn't this structure 'mostly' READ-ONLY? Why latch/lock? Why do you care to latch? What will be corrupted in the library cache? Why does Oracle have to 'protect' it, as you say above?

    To stop latching/parsing and the bad scalability which comes with it, Tom Kyte says we must use bind variables; yes, I agree, but why latch the shared pool? What will that other 'nasty' user do to your SQL statement in the shared pool apart from sharing the SQL with you?

    The Shared Pool is not like the Data Buffer where data lives. In the Data Buffer I understand why latching/locking (shared locks, exclusive locks during updates and selects) is bad and why it inhibits concurrency: 'consistent gets', the ACID theory of transactions, locks, transactions trying to concurrently modify the same data, phantom reads, dirty reads etc. all come to mind, fine.
    But why in the SHARED POOL? It is just SQL text; what is going to be corrupted? Why does it have to be concurrent and use latches (serialization devices)?

    Awaiting response from:

    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1513005503967#58524800346079273

    http://jonathanlewis.wordpress.com/2007/01/05/bind-variables/

    Monday, 15 January 2007

    Recovering after SHUTDOWN ABORT, or CRASH during OS hot-backup

    What do you do if the database crashes during an OS hot backup?
    When this happens the tablespaces are going to be in an inconsistent state, the database will not OPEN, and a STARTUP will give an error similar to:

    ORA-01113: file 12 needs media recovery
    ORA-01110: data file 12: '/oracle/dbs/tbs_41.f'

    In this situation what you can do is limited to two things:

    1. Recover the database.
    2. Mount the database, then run ALTER DATABASE END BACKUP


    1. Recover the database.

    Before you use the following quick steps, make sure you also read the link provided at the bottom of this page. Use this option when you are not sure whether a RESTORE was made after the crash, perhaps by the backup operator!? If there was a restore, the RECOVER command will bring the database up to date as well.

    1. Mount your database. For example, enter:

    SQL> STARTUP MOUNT

    2. Recover your database as normal. For example, enter:

    SQL> RECOVER DATABASE

    3. Use the V$BACKUP view to confirm that there are no active datafiles:

    SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';

    FILE# STATUS CHANGE# TIME
    ---------- ------------------ ---------- ---------

    0 rows selected.

    4. Open the database for business.

    SQL> ALTER DATABASE OPEN;


    2. Mount the database, then run ALTER DATABASE END BACKUP.

    When you get the above STARTUP error message, the database will probably be in MOUNT state already. You can choose to carry on with the commands below immediately after the message, or do a SHUTDOWN ABORT and then STARTUP MOUNT. To take tablespaces out of backup mode simultaneously, the database must be in MOUNT state:

    1. Mount but do not open the database. For example, enter:

    SQL> STARTUP MOUNT

    2. If performing this procedure manually (that is, not as part of a crash recovery script), query the V$BACKUP view to list the datafiles of the tablespaces that were being backed up before the database was restarted:

    SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';

    FILE# STATUS CHANGE# TIME
    ---------- ------------------ ---------- ---------
    12 ACTIVE 20863 25-NOV-00
    13 ACTIVE 20863 25-NOV-00
    20 ACTIVE 20863 25-NOV-00

    3 rows selected.

    3. Issue the ALTER DATABASE END BACKUP statement to take all datafiles currently in backup mode out of backup mode. For example, enter:

    SQL> ALTER DATABASE END BACKUP;

    You can use this statement only when the database is mounted but not open. If the database is open, use ALTER TABLESPACE ... END BACKUP or ALTER DATABASE DATAFILE ... END BACKUP for each affected tablespace or datafile; that is what happens during the hot backup.
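    For example (my addition; USERS is a hypothetical tablespace name), on an open database you would run:

    SQL> ALTER TABLESPACE users END BACKUP;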

    4. Open the database for business.

    SQL> ALTER DATABASE OPEN;

    After this you should take a new backup of the database immediately!
    Source and further reading at this Oracle link: http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96572/osbackups.htm#9817

    Friday, 12 January 2007

    DBMS_STATS package

    Use DBMS_STATS to gather Oracle optimizer statistics. DBMS_STATS is the preferred and recommended method, as it is more complete than the ANALYZE command, which (read on http://asktom.oracle.com) will soon be deprecated. DBMS_STATS gathers more information about the objects it works on.

    1. To gather table statistics with DBMS_STATS

    SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'KUBILAY', tabname => 'PART_USER')

    2. To gather table statistics with DBMS_STATS with estimate percent

    SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'KUBILAY', tabname => 'USERS', estimate_percent =>50)

    3. To gather schema statistics with estimate percent

    SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('KUBILAY', estimate_percent =>50)

    4. To gather a partition’s statistics in full

    SQL> EXECUTE dbms_stats.gather_table_Stats( ownname => 'SCOTT', tabname => 'SOME_TABLE', partname => 'P0705');

    * To gather statistics on a partition without calculating statistics for indexes, i.e. if the table partition is really big and there are lots of indexes and you want to skip them, use this:


    EXECUTE dbms_stats.gather_table_stats
    (
    ownname=>'SCOTT',
    tabname=>'SOME_TABLE',
    partname=>'SOME_PARTITION',
    estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, -- let Oracle choose the sample size
    cascade=>FALSE,                                -- do NOT gather stats on the indexes
    degree=>DBMS_STATS.AUTO_DEGREE,                -- let Oracle choose the parallel degree
    granularity=>'PARTITION'                       -- statistics for this partition only, no global stats
    );


    This will work exclusively on the specified partition and will ignore all of the partition's indexes!
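    If you later want statistics on the indexes you skipped, a hypothetical follow-up (my addition; owner, index and partition names are placeholders) gathers them per index partition with DBMS_STATS.GATHER_INDEX_STATS:

    EXECUTE dbms_stats.gather_index_stats(ownname => 'SCOTT', indname => 'SOME_INDEX', partname => 'SOME_PARTITION');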



    The Oracle Way

    Faster and neater, with more accurate estimates, if you use the Oracle recommended procedures and parameters below. These parameters have a mind of their own: they decide how much they should sample and whether they should run the operation in parallel or not.

    1. On Schema with DBMS_STATS

    SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname =>'MY_SCHEMA', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, DEGREE => DBMS_STATS.AUTO_DEGREE);

    2. On Table with DBMS_STATS

    SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SESSIONS', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, DEGREE => DBMS_STATS.AUTO_DEGREE);

    I find that using the Oracle recommended values (i.e. degree => DBMS_STATS.AUTO_DEGREE) always makes the statistics gathering faster and more effective.


    The documentation for the full package is here: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461

    Monday, 8 January 2007

    Add Oracle range partitions quickly

    Find the partition with the maximum HIGH_VALUE and see if you need to add new RANGE partitions to tables which use date partitioning.

    I hear Oracle 11g will do this automatically with interval partitioning. About time!

    USER's partitions

    select table_name, partition_name, high_value
    from user_tab_partitions a
    where partition_position in (
    select max(partition_position) from user_tab_partitions b
    where b.table_name = a.table_name
    group by table_name);


    You can enhance this script further by investigating all partitions in your database when logged in as SYSDBA, to find the maximum partition HIGH_VALUE for all user schemas (except SYS and SYSTEM) in your database, as follows:

    DBA (All) partitions

    SELECT a.table_owner,
    a.table_name ,
    a.partition_position ,
    b.high_value
    FROM dba_tab_partitions b,
    (SELECT table_owner ,
    table_name ,
    MAX(partition_position) partition_position
    FROM
    (SELECT p.table_owner ,
    p.table_name ,
    p.partition_position,
    p.high_value
    FROM dba_tab_partitions p
    WHERE p.table_owner NOT IN ('SYS','SYSTEM')
    ORDER BY p.partition_position DESC,
    p.table_name
    )
    GROUP BY table_owner,
    table_name
    ) a
    WHERE a.table_owner = b.table_owner
    AND a.table_name = b.table_name
    AND a.partition_position = b.partition_position
    ORDER BY a.table_owner;
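    Once you know the current maximum HIGH_VALUE, adding the next range partition is a one-liner. A sketch (my addition; the table, partition name and boundary date are hypothetical):

    ALTER TABLE my_table ADD PARTITION p200702
    VALUES LESS THAN (TO_DATE('01-03-2007', 'DD-MM-YYYY'));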

    Saturday, 6 January 2007

    Change timestamp

    Changing the timestamp of a file in Linux. The argument to -t is in [[CC]YY]MMDDhhmm format, so the example below sets the timestamp to 12:00 on 10 May 2006:

    $ touch -t 200605101200 myfile.txt

    Email Publishing is cool on blogger.

    This thing put me back in blogging!

    Just forward your email to the email address you create in blogger and it gets published automatically. Now that is a cool use of SQL INSERT, wow.