
Saturday, 14 April 2007

Configuring Oracle Standby Database on Ubuntu Linux

In this post I will describe the steps I took to configure a Physical Standby Database on two Ubuntu hosts (Breezy Badger or Dapper Drake) running Oracle 10g R2. For all of this to work, the following assumptions must hold true:

  • Both hosta and hostb must have the same operating system and version.
  • The directory structure for datafiles on both hosts must be the same.



1. Create the Primary Database.

    I have created a small database on hosta called testdb1 to use as the Primary Database using dbca. Then I took a cold backup of this database.
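A minimal sketch of the cold backup, assuming all the datafiles, online redo logs and controlfiles of testdb1 live under /u10/oradata/testdb1 (as in this setup):

    SQL> shutdown immediate;
    SQL> exit
    $ # with the instance down, the files under /u10/oradata/testdb1 form a consistent cold backup
    $ ls -l /u10/oradata/testdb1/

Once the files have been copied across in step 2, the Primary Database can be started back up.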

    2. Move backups of Primary Database to Standby Database on hostb.

Next, I moved the backup datafiles to hostb, into the same locations as on hosta, with scp like this:

    scp /u10/oradata/testdb1/* oracle@hostb:/u10/oradata/testdb1/

    3. Create the Standby Database control file.

Then I went back to hosta, started up the testdb1 Primary Database, and created a special control file. It is important to create this controlfile after you have shut down the Primary Database cleanly and taken the backup, not before. This controlfile will be used on hostb to start up the Standby Database.
    Log in to Primary Database on hosta as SYSDBA and run the command:

SQL> alter database create standby controlfile as '/tmp/sbycontrol.ctl';

NOTE: This file is binary. Use a different filename than that of the Primary Database controlfiles and watch out not to overwrite the current Primary Database controlfiles.

    Then move the created controlfile to hostb. You can choose any location on hostb. Later, you will manually edit the initialization parameter file of the Standby Database to indicate the chosen location.


    SQL> !scp /tmp/sbycontrol.ctl oracle@hostb:/u10/oradata/testdb1/

    4. Create the initialization parameter file for the Standby Database.

Create a pfile from the spfile of the Primary Database to be used later as the initialization parameter file to start up the Standby Database. This initialization parameter file will later be modified by hand to introduce the Standby Database parameters.
    Log in to Primary Database on hosta as SYSDBA and run the command:

SQL> create pfile='/tmp/inittestdb1.ora' from spfile;

    Then copy the file to $ORACLE_HOME/dbs on hostb, to the default location. You will edit the file later on hostb.

    SQL> !scp /tmp/inittestdb1.ora oracle@hostb:$ORACLE_HOME/dbs

    5. Set up networking configuration between Primary and Standby Databases.

    You have to make sure that both databases will be able to communicate using their own service names pointing to each other. Create service names on both hosta and hostb which will connect to each other and ship archived logs.
    Edit the tnsnames.ora file on hosta and hostb and add the service name TO_STANDBY to both.

In tnsnames.ora on hosta add (note that it points at hostb, the standby, which is where the primary will ship logs):

TO_STANDBY =(DESCRIPTION =(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = hostb)(PORT = 1521)))(CONNECT_DATA =(SERVICE_NAME = testdb1)))

In tnsnames.ora on hostb add (pointing back at hosta, for when the roles are reversed):

TO_STANDBY =(DESCRIPTION =(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = hosta)(PORT = 1521)))(CONNECT_DATA =(SERVICE_NAME = testdb1)))


You must also create a password file on both the Standby and the Primary Database to enable SYSDBA logins from each other. The password in both files must be the same.
On hosta and hostb do:

$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=yourpassword
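Before moving on, it is worth sanity-checking the networking from each host; a quick check, assuming the listeners are already running on port 1521 (yourpassword is the password you set with orapwd above):

    $ tnsping TO_STANDBY
    $ sqlplus sys/yourpassword@TO_STANDBY as sysdba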

    6. Edit the pfile created for Standby Database

In step 4 above an initialization parameter file was created from the Primary Database. Now it is time to make that file the initialization parameter file of the Standby Database and add the parameters necessary to start it up in standby mode.

    The file created earlier should be in $ORACLE_HOME/dbs directory like:
    /usr/local/oracle/product/10.2.0.1/dbs/inittestdb1.ora

Depending on the method of synchronization (redo transport) you choose between the Primary Database and the Standby Database, certain parameters will be set in this file.

    For my installation I used Standby Logs on the Standby Database and Archived Log Transmission.

I edited the Standby Database initialization parameter file as below; the standby-specific entries are the ones I added.


*.compatible='10.2.0.1.0'
*.control_files='/u10/oradata/testdb1/sbycontrol.ctl'...
*.log_archive_dest_1='LOCATION=/u01/arch'
*.log_archive_dest_2='LOCATION=/u11/arch'
*.standby_archive_dest='LOCATION=/u00/sbylogs'
*.standby_file_management='AUTO'
*.remote_archive_enable='TRUE'
*.log_archive_format='testdb1_%t_%s_%r.arc'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/usr/local/oracle/admin/testdb1/udump'
    ...

Once you have edited the Standby Database initialization parameter file, the next step is to make an spfile out of it and start up the Standby Database.

    7. Create spfile for Standby Database and start it up

    Log in to Standby Database on hostb as SYSDBA and run the command:

SQL> create spfile from pfile='/usr/local/oracle/product/10.2.0.1/dbs/inittestdb1.ora';

Next, start up the Standby Database on hostb with NOMOUNT:

    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area 167772160 bytes
    Fixed Size 1218316 bytes
    Variable Size 62916852 bytes
    Database Buffers 100663296 bytes
    Redo Buffers 2973696 bytes

    And then MOUNT the Standby Database as standby!

    SQL> alter database mount standby database;
    Database altered.

Now the Standby Database has started: it has a controlfile, but it does not yet have standby logs, where redo data is applied as it arrives from the Primary Database via its archived logs. The next step is to add these standby logs on hostb to the testdb1 Standby Database.
NOTE: If you want to configure fail-over between the two hosts, where each can assume the role of Primary Database / Standby Database, then it is a good idea to add these standby logs to the Primary Database as well. I don't explain how to do that here; in this post I have only explained how to configure a physical standby database for a primary database. It is best if you read the Oracle documentation for that. There is a good explanation here:
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm

    8. Add Standby Logs to your Standby Database

I added 11 logs using statements like the following.
    Log in to Standby Database on hostb as SYSDBA and run the command:

SQL> alter database add standby logfile '/u00/sbylogs/sbylog07.log' size 5M;
...
SQL> alter database add standby logfile '/u00/sbylogs/sbylog011.log' size 5M;

There is a good argument for choosing twice the number of Primary Database redo logs as standby logs on the Standby Database, for reasons of archiving contention: during log shipping you don't want the system to become bottlenecked because redo log switches outpace archiving.
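To size and count the standby logs, it helps to look at the online redo logs of the Primary Database first; a quick look at the standard V$LOG view, run as SYSDBA on hosta (standby logs should be at least as large as the largest online redo log; I used 5M to match this small test database):

    SQL> select group#, thread#, bytes/1024/1024 MB from v$log;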

9. On the Primary Database on hosta, start shipping logs

    Log in to Primary Database on hosta as SYSDBA and run the command:
SQL> alter system set log_archive_dest_3='SERVICE=TO_STANDBY ARCH';
SQL> alter system set log_archive_dest_state_3=enable scope=both;

This starts shipping the archived logs to the standby database from a brand new log archive destination pointing at it; note the service name TO_STANDBY we created earlier.
NOTE: At this point it is best to have windows open on the alert.log files of both the standby and the primary, to see what is happening and to spot errors.
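Besides tailing the alert logs, you can ask the Standby Database itself which archived logs have arrived; a minimal check against the standard V$ARCHIVED_LOG view, run as SYSDBA on hostb (APPLIED will stay NO until Redo Apply is started in step 10):

    SQL> select sequence#, first_time, next_time, applied from v$archived_log order by sequence#;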

    10. Put the Standby Database in recovery mode (Last Step)

The last step is to put the Standby Database in recovery mode. In the "Physical" configuration the standby database stays mounted in recovery mode; it is not open for querying.

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    That is it! You have a standby database on hostb!

If you want to test the system, for example by changing something in the primary database on hosta and seeing the change applied to the standby database on hostb, you will need to open the standby database. The steps to stop the standby database from receiving archived logs from the primary database, and to put it back into recovery mode, are below.
Switch from log shipping mode to a read-only standby database.
Cancel Redo Apply:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    Open the database for read-only access:

    SQL> ALTER DATABASE OPEN;

Now you can query the standby database and see the changes you made in the primary database. You can NOT do DML on the standby database; this is a one-way synchronization from primary to standby.
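If you want a concrete test, a minimal sketch might look like this, where my_dg_test is a hypothetical table name; the log switch forces the change out as an archived log so it can be shipped and applied.

On hosta (primary):

    SQL> create table my_dg_test (id number);
    SQL> insert into my_dg_test values (1);
    SQL> commit;
    SQL> alter system switch logfile;

On hostb, once the shipped log has been applied and the standby has been opened read-only as above:

    SQL> select * from my_dg_test;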

To change the standby database from being open for read-only access back to performing Redo Apply, terminate all active user sessions on the standby database; you may need to kill all sessions except yours.
Then restart Redo Apply by issuing the following statement:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Now the standby database is in recovery (standby) mode and is receiving archived logs from the primary database.

In a future post I will write on how to create a physical standby database using RMAN, and how to configure Oracle Data Guard Broker for automatic switchover/failover operations and High Availability solutions.



    References:

    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm
    http://www.adp-gmbh.ch/ora/data_guard/create_physical_standby_db.html

    Using Data Guard to do fail-over and other cool features here:
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14230/cli.htm#i1005573



    Thursday, 5 April 2007

    Oracle Table Compression

Oracle employs a very nice compression algorithm at the database block level which is quite efficient and can yield a threefold compression ratio in most cases.

Table compression is highly recommended for Data Warehouse environments, where operations tend to be read-heavy and batch-oriented. It is not recommended for OLTP environments, or for tables with heavy OLTP-like activity, because of the cost DML operations incur on compressed tables. This is true for Oracle 10g; Oracle 11g has gone wild and offers OLTP compression as well!

I found this quite nice short explanation in the Oracle documentation of what Oracle table compression is:

    "...Oracle table compression feature compresses data by eliminating duplicate values in a database block. Compressed data stored in a database block (also known as disk page) is self-contained. That is, all the information needed to re-create the uncompressed data in a block is available within that block. Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table..."

The benefits associated with Table Compression are:
• Space savings
• Faster backups
• Better query performance
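To see what enabling compression looks like in practice, here is a minimal sketch; sales_history and sales are hypothetical names:

    SQL> -- new table, compressed from the start, loaded with a direct-path CTAS
    SQL> create table sales_history compress as select * from sales;

    SQL> -- or compress an existing table by rebuilding it
    SQL> alter table sales_history move compress;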

    1. Find out the candidate table to compress

Usually NUM_ROWS will reveal the tables in need of compression; a shortlist query is sketched after the illustration below. In Data Warehouses these are usually the fact tables, which tend to have lots of redundant data in columns such as keys, dates and status flags. The compression algorithm uses a symbol table structure to record this redundancy numerically rather than physically and repeatedly at the table level. That is, compression at the 'relational table' level rather than the byte level.

Something like this:

Table A uncompressed:

    a1   a2
    --   --
    x    z
    x    z
    x    m
    x    z
    y    m

Table A when compressed becomes, conceptually, a symbol table of distinct values and their occurrence counts:

    a1   count   a2   count
    --   -----   --   -----
    x    4       z    3
    y    1       m    2

    My worked example:

As an example we will use table X, the largest table, which I will partition and whose partitions I will compress, all except the current one. The reason we don't compress the current partition is that the current month's partition will see loads of ETL activity (DML), and we do not want to introduce high CPU usage because of compression. Compression is no good when there is lots of DML.
The size of the table before compression is 6GB:

SQL> select segment_name, sum(bytes)/1024/1024 MB from user_segments where segment_name like 'X'
group by segment_name
SQL> /

SEGMENT_NAME                           MB
------------------------------ ----------
X                                    6179

    2. List the partitions of the candidate table and decide which to compress

You can list the partition names for the tables you are interested in compressing like this:

    SQL> set pages 49999
    SQL> l

    1* select table_name, partition_name, compression from user_tab_partitions where table_name ='X'

SQL> /

TABLE_NAME                     PARTITION_NAME                 COMPRESS
------------------------------ ------------------------------ --------

...

X                              P0201                          DISABLED
X                              P0202                          DISABLED

    ...
Then you can write an SQL query to build the commands which will move and compress the partitions. Just altering the table property to COMPRESS => ENABLED doesn't compress the existing rows in the table; it will only compress new rows added to the table. To save space we must compress what is already there and leave only the current partition and future partitions uncompressed. You must remember NOT to compress the current partition or future partitions, as the script below will generate commands to compress all partitions. To find the current partition, look at PARTITION_POSITION in USER_TAB_PARTITIONS: the highest-positioned partition is the current one. You can also order by PARTITION_POSITION as below and skip the first line; a query that isolates the current partition is sketched right after this paragraph.
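A sketch of a query that identifies the current (highest-positioned) partition of X, so you can exclude it from the generated commands:

    SQL> select partition_name from user_tab_partitions
         where table_name = 'X'
         and partition_position = (select max(partition_position)
                                   from user_tab_partitions where table_name = 'X');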
    Generate PARTITION COMPRESS commands with the following script.


SQL> select 'ALTER TABLE '||table_name||' MOVE PARTITION '||partition_name||' TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;' from user_tab_partitions where table_name ='X'
order by partition_position desc;

'ALTERTABLE'||TABLE_NAME||'MOVEPARTITION'||PARTITION_NAME||'TABLESPACENEW_TABLESPACECOMPRESSNOLOGGINGPARALLEL;'

---------------------------------------------------------------------------------------------------
ALTER TABLE X MOVE PARTITION P0207 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;
ALTER TABLE X MOVE PARTITION P0107 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;

WARNING: It is very important to move the tables to a NEW tablespace when compressing, as compressing them in place doesn't reorganize the extents in the datafiles in a way that allows the datafiles to be resized and shrunk later on.

    3. If there are bitmap indexes on the candidate table you must first mark them unusable


WARNING: To be able to apply the commands below, the table in question must not have usable bitmap indexes; if it does, you must mark them as unusable, otherwise you get an ORA-14646 error:

ORA-14646: Specified alter table operation involving compression cannot be performed
in the presence of usable bitmap indexes

See here for more info: http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#i1006787

You will have to rebuild the indexes after you have finished compressing the partitions.
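Marking a bitmap index unusable is a one-liner; a sketch using my_bitmap_index, the example index name used further below:

    SQL> alter index my_bitmap_index unusable;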

    4. Run the compression commands

For each partition you wish to compress you are now ready to run commands like:

    ALTER TABLE X MOVE PARTITION P0207 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;
    ALTER TABLE X MOVE PARTITION P0107 TABLESPACE NEW_TABLESPACE COMPRESS NOLOGGING PARALLEL;

You can choose to use NOLOGGING for this operation, as it generates lots of redo and would otherwise affect your archive logging, and PARALLEL to make it faster.

    After compressing all partitions on the X table the savings realised are as follows:

    A 41% saving!

    -- Before Compression

    SEGMENT_NAME MB
    ------------------------------ ----------
    X 6179

    -- After Compression

    SEGMENT_NAME MB
    ------------------------------ ----------
    X 3670


5. You must rebuild the BITMAP INDEXES marked UNUSABLE in step 3 above.


You will have to rebuild all bitmap indexes (and any others) marked unusable in the previous steps; to do it quickly you can use NOLOGGING and PARALLEL. Similarly, write an SQL statement querying the USER_IND_PARTITIONS table to generate statements like the following for all indexes; a generator query is sketched after the examples below.

    ALTER INDEX my_bitmap_index REBUILD PARTITION P1001 NOLOGGING;

    ALTER INDEX my_bitmap_index REBUILD PARTITION P1002 NOLOGGING;

    ...
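A sketch of a generator query for the rebuild statements, driven off USER_IND_PARTITIONS and filtered to the partitions that are actually unusable:

    SQL> select 'ALTER INDEX '||index_name||' REBUILD PARTITION '||partition_name||' NOLOGGING;'
         from user_ind_partitions where status = 'UNUSABLE';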
This is a one-off operation; you won't have to do it again for the table, for example when you add a new partition.

    6. You must rebuild ALL LOCAL INDEXES in the partitioned tables as their state has now become UNUSABLE

After the compression of the table partitions, all local indexes associated with the table become UNUSABLE. As the Oracle documentation categorically indicates:

If you use the ALTER TABLE ... MOVE statement, the local indexes for a partition become unusable. You have to rebuild them afterwards. Otherwise you will get errors in partition maintenance operations like this:

ERROR at line 1:
ORA-01502: index 'my_schema.my_X_table_index' or partition of such index is in unusable state

You can do that by writing something like this, changing table_name='X' for each table:

SQL> select 'alter table X modify partition '||partition_name||' rebuild unusable local indexes;' from user_tab_partitions where table_name='X';

NOTE: You will also have to rebuild indexes whenever you COMPRESS/NOCOMPRESS any partitions. That is, when you use either of these two operations, data moves, indexes break and have to be rebuilt.

    7. Check to see the space you have gained.

After moving the table partition segments to their new tablespace and rebuilding the necessary indexes you ought to gain some space. The space the objects occupied before compression will have decreased; you will require less space to store them, compressed, in their new tablespace. After you have moved everything from the old tablespace into the new one, and you are sure the old tablespace is empty, you can drop the old tablespace and its datafiles.

Well, the only approach to 'claim' space back after a table compression operation seems to be the use of ALTER TABLE ... MOVE TABLESPACE on a 'new' tablespace. Although we can claim physical space back from the database only by resizing datafiles, and we could have 'moved' the tables within their 'own' tablespace, that is not recommended.

Moving the tables within the same tablespace is not a good idea, because you are never sure whether the reorganization of the table segments will actually move the HWM of the datafile down so that you can resize the datafile. (See the asktom.oracle.com links below for more info.)

There might be 'that' extent still hanging at the end of the tablespace which will prevent you from resizing the datafiles. Although you can fish that 'hanging at the end' extent out and reorganize the segment, you are better off moving everything to a new tablespace.

Use DBA_SEGMENTS before you decide to drop a tablespace, to see if any segments are left in it. Alternatively, you can choose to resize the datafiles to very small files.
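A sketch of that leftover check, with OLD_TBS standing in for the name of the tablespace you are about to drop:

    SQL> select owner, segment_name, segment_type from dba_segments
         where tablespace_name = 'OLD_TBS';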

I have also been reading lately that in Oracle 11g you can get Advanced Table Compression in OLTP environments as well! It is an option which you have to purchase separately with Enterprise Edition though.

    References:

    Forum Questions to Asktom :

    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7873209745771#240368600346159666

    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:54178027703899

    Oracle Docs :

    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#sthref196

    ANALYZE to compute statistics

I use the ANALYZE command to calculate statistics quickly and to see what effect the new statistics will have on the execution plan of a query.

    The statistics calculation method recommended by many is actually the DBMS_STATS package. Search this blog for examples of DBMS_STATS.
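For completeness, a DBMS_STATS sketch roughly equivalent to the ANALYZE examples below, gathering statistics for table X in the current schema with a 10 percent sample:

    SQL> exec dbms_stats.gather_table_stats(ownname => user, tabname => 'X', estimate_percent => 10);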

    Analyze a table with a percent estimate:

    ANALYZE TABLE X ESTIMATE STATISTICS SAMPLE 10 PERCENT;

    Analyze a table partition:

    ANALYZE TABLE X PARTITION (Z) COMPUTE STATISTICS;

    Analyze a table partition using a percent estimate:

    ANALYZE TABLE X PARTITION (Z) ESTIMATE STATISTICS SAMPLE 10 PERCENT;