Friday, April 3, 2009
I recently came across installations where every DB link carried the suffix US.ORACLE.COM or REGRESS.RDBMS.DEV.US.ORACLE.COM, which was a bit annoying for users, so here is how we went about fixing it.
As per the documentation, whenever we create a DB link the db_domain is automatically appended to its name. So the following SQL should help determine how all DB links get named:
select name,value from v_$parameter where name IN ('db_name', 'db_domain');
db_name = MYDB
db_domain = NULL
So in our case db_domain was NULL; then where do US.ORACLE.COM and REGRESS.RDBMS.DEV.US.ORACLE.COM come from? Let's check the global name. In case you don't know, GLOBAL_NAME = db_name.db_domain.
select global_name from global_name;
global_name = MYDB.US.ORACLE.COM
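So any link created now inherits that suffix. A quick illustration (the link name and the TNS alias remote_db here are hypothetical):
SQL> create database link mylink connect to scott identified by tiger using 'remote_db';
SQL> select db_link from user_db_links;
db_link = MYLINK.US.ORACLE.COM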
Hmm, so the global name does show a domain. Now, how do I fix it?
Option 1 :
Rename the global name with the desired domain name, e.g. MYWORLD:
alter database rename global_name to MYDB.MYWORLD;
Please note that with the above ALTER statement you MUST specify a domain, else it won't work (you cannot drop the domain portion this way).
Option 2 :
In case you don't want any domain at all, update the global name to db_name without db_domain:
UPDATE GLOBAL_NAME SET GLOBAL_NAME = 'MYDB';
commit;
Now cross check the changes,
select global_name from global_name ;
global_name = MYDB
Now you can create a DB link as any user and it will be named as expected, i.e. with the desired suffix or without one.
While we are on the topic of DB links, please note that GLOBAL_NAME <> GLOBAL_NAMES, i.e.
Global Name is the global name of the database, determined by db_name.db_domain.
GLOBAL_NAMES is an initialization parameter which determines whether a DB link must be named after the global name of the database it connects to. You can set its value to TRUE or FALSE (the default). If you have a replication environment, it is in general a good thing to set it to TRUE.
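You can check and change the parameter like this (a quick sketch; scope=both assumes you are running with an spfile):
SQL> show parameter global_names
SQL> alter system set global_names = true scope = both;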
Monday, March 23, 2009
Tip#20 Session Marked for Kill Forever
I had a situation recently where our junior DBA accidentally killed an MV refresh job session, and the killed session stayed 'marked for kill' forever, i.e. it wouldn't release its resources (in this case a lock), so all subsequent refreshes were failing too.
The lock-holding session can be determined by:
select sid from v$lock where id1 = (select object_id from dba_objects where object_name = '<OBJECT_NAME>' and owner = '<OWNER>');
Also, any subsequent attempt to kill/disconnect the session using ALTER SYSTEM results in ORA-00031: session marked for kill. So the only way to release the resources is to kill the process at OS level and let PMON do the cleanup.
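For example (the SID and serial# here are made up):
SQL> alter system kill session '132,4562';
ERROR at line 1:
ORA-00031: session marked for kill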
Now, since the session was already killed and we were not yet on 11g (which gives you the OS PID after killing as well), I tried getting the OS PID using the following SQL (I also suggest reading Metalink Doc ID 1020720.102):
SQL> SELECT spid FROM v$process WHERE NOT EXISTS ( SELECT 1 FROM v$session WHERE paddr = addr) AND spid IS NOT NULL;
Unfortunately this didn't help, as the PID returned was not the job process. You can check on Linux using:
% ps -ef | grep <spid>
I was expecting a process named something like ora_j00x_<sid>, but I didn't find one, so the search for the OS PID was still on.
Since we still had an entry in v$session with 'KILLED' status, we decided to get its logon time and see if we could match that with an OS PID started at the same time with a command of the form ora_j00x_<sid>. Fortunately we hit the bull's eye: only one OS PID matched both the time and the expected command, so we could easily nail it.
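A sketch of how we matched them (the grep pattern is illustrative; adjust it for your instance):
SQL> select sid, serial#, to_char(logon_time, 'DD-MON-YYYY HH24:MI:SS') logon_time from v$session where status = 'KILLED';
% ps -ef | grep ora_j0
Having found the one job process whose start time matched the logon time, we killed it at OS level,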
% kill <spid>
and after a few moments we could see the lock was released!
Wednesday, March 11, 2009
Tip#19 Oracle Date Insight
Note: partial text from Metalink Note 69028.1.
Oracle DATE values are always stored in 7 bytes, excluding the length byte, within a datafile. These bytes store the century, year, month, day, hour, minute, and second details respectively. The following is the definition of Oracle's internal DATE storage structure:
BYTE  Meaning
----  -------
1     Century -- stored in excess-100 notation
2     Year    -- "  "
3     Month   -- stored in 0 base notation
4     Day     -- "  "
5     Hour    -- stored in excess-1 notation
6     Minute  -- "  "
7     Second  -- "  "
Note that the century and year components are stored in 'excess-100' format, which means that 100 must be deducted from the byte's value. If a negative number results, we have a BC date, at which point we take the absolute value. Also, to avoid ever having a zero byte, 1 is added to the hour, minute and second bytes; therefore 1 must be deducted to get the correct value.
For example, take the following date:
11-MAR-2009 13:08:00
we would expect this date to be stored internally as follows:
120,109,3,11,14,9,1
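Working it out byte by byte: century 20 is stored as 20 + 100 = 120; year 9 as 9 + 100 = 109; month 3 and day 11 as-is; hour 13 as 13 + 1 = 14; minute 8 as 8 + 1 = 9; second 0 as 0 + 1 = 1.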
Let's confirm this,
SQL> create table test1 as select sysdate sd from dual;
SQL> select to_char(sd, 'DD-MON-YYYY HH24:MI:SS'), dump(sd) from test1;
Result:
TO_CHAR(SD,'DD-MON-YYYYHH24:MI:SS') : 11-MAR-2009 13:08:00
DUMP(SD) : Typ=12 Len=7: 120,109,3,11,14,9,1
Let's try using the DUMP() function to do the same thing with TO_DATE now. Issue the following statement:
SQL> SELECT dump(to_date('11-MAR-2009 13:08:00', 'DD-MON-YYYY HH24:MI:SS'))
FROM dual;
Result: Typ=13 Len=8: 217,7,3,11,13,8,0,0
Note the different "Typ=" values to understand why we are seeing these results. The datatype returned is 13 and not 12, the external DATE datatype. This occurs because we rely on the TO_DATE function! External datatype 13 is an internal C structure whose length varies depending on how the C compiler represents the structure; that is why the "Len=" value is 8 and not 7. Type 13 is not part of the published 3GL interfaces for Oracle and is used for date calculations, mainly within PL/SQL operations.
Note that the same result can be seen when DUMPing the value SYSDATE.
Using deductive logic, we can derive the following storage format for type 13 data:
Byte 1 - Base 256 year modifier
2 - Base 256 year
3 - Month
4 - Day
5 - Hours
6 - Minutes
7 - Seconds
8 - Unused
For AD dates, the year and base 256 modifier are stored in base 0 notation and we must add the modifier to the year to obtain the true year. For BC dates, the year and base 256 modifier are stored in excess-255 notation. We must subtract the modifier from the year to obtain the true year.
For our year 2009 we read this as Byte 1 + Byte 2 * 256, in other words 217 + 7 * 256 = 2009.
Oracle is capable of handling dates from 01-JAN-4712 BC 00:00:00 to 31-DEC-9999 AD 23:59:59, or in terms of Julian Days: 1 through 5373484.
The Julian Day number is a count of days elapsed since Greenwich mean noon on 1 January 4712 B.C. in the Julian proleptic calendar. The Julian Date is the Julian Day number followed by the fraction of the day elapsed since the preceding noon.
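You can see the Julian Day number for any date with the 'J' format mask, e.g. for the date used above (this should return 2454902):
SQL> select to_char(date '2009-03-11', 'J') julian_day from dual;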
Monday, February 16, 2009
Tip#18 Rebuilding A Table
The DBA may have to rebuild a table after maintenance, e.g. after a huge delete you would like to lower the High Water Mark (HWM). There are several options to do it:
Option 1: Export and Import the table
Option 2: CTAS - Create table as Select
Option 3: Alter table MOVE (8.1.6 and above)
Option 4: Alter table SHRINK (10g and above)
The first two options are fairly straightforward, so I won't go into the details here.
ALTER TABLE MOVE :
This one is my preferred option compared to the first two (if you are not already on 10g or above), and I would like to point out a few facts.
The ALTER TABLE MOVE command was introduced in 8.1.6. It allows a table to be moved from one tablespace to another, reorganises the table, and provides the ability to modify the INITIAL storage parameter. Also note that we don't lose grants (unlike CTAS); above all, it is easy and fast.
It does have a drawback though: when you use it, all the indexes on the table become unusable and you have to do an ALTER INDEX REBUILD for each of them.
E.g. I have a large table MBS with 50 million rows, and due to some process change I now need to delete 25 million of them. After such a huge delete I would obviously like to lower the HWM to avoid scanning those empty blocks below it.
Since I don't want to change the tablespace or the INITIAL parameter, I would simply execute this:
Alter table MBS move; -- optionally: tablespace test_tbs pctfree 20 pctused 40
Since the indexes on the table will become unusable, I will also need to find them:
SELECT index_name FROM user_indexes WHERE table_name = 'MBS';
Once you know the indexes on the table, just rebuild each of them as shown below (a PL/SQL loop to rebuild them all follows the example):
Alter index pk_mbs rebuild; -- add UNRECOVERABLE to avoid generating redo
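If there are many indexes, a small PL/SQL block saves typing (a sketch; the table name MBS is from the example above):
begin
  for r in (select index_name from user_indexes
            where table_name = 'MBS' and status = 'UNUSABLE') loop
    execute immediate 'alter index ' || r.index_name || ' rebuild';
  end loop;
end;
/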
ALTER TABLE SHRINK :
If you are on 10g or above, both of the above-mentioned steps (table move and index rebuild) can be done in a single pass using the SHRINK option:
alter table MBS enable row movement;
alter table MBS shrink space cascade;
alter table MBS disable row movement;
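To verify that the space actually came back, you can compare the segment size before and after (a quick check; the output will of course depend on your data):
SQL> select segment_name, round(bytes/1024/1024) size_mb from user_segments where segment_name = 'MBS';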
Just to compare the ALTER TABLE options:
1.) deallocate unused : reclaims unused space above the HWM and does not lower the HWM.
2.) move : reclaims unused space below the HWM and lowers the HWM.
3.) shrink : reclaims unused space both below and above the HWM and lowers the HWM.
This blog was intended to just give you an overview; for more details on these commands please refer to the documentation. Also, as usual, always take a backup before and after the changes.
Wednesday, July 30, 2008
Tip#17 How to find out 32/64 Bit
Find out whether the OS is 32-bit or 64-bit
Linux : getconf LONG_BIT
Solaris : /usr/bin/isainfo -kv
AIX : getconf -a | grep KERN
HPUX : /usr/bin/getconf KERNEL_BITS
Windows : Start>All Programs>accessories> System Tools>System Information> System summary
Find out whether Oracle Software is 32 bit or 64 bit.
Method # 1
cd $ORACLE_HOME/bin/
file oracle
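The output varies by platform, but on 64-bit Linux it looks something like this (illustrative):
oracle: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), ...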
Method # 2
sqlplus / as sysdba (look for "Connected to:"; a 64-bit installation shows "64bit Production" in the banner)
Method # 3
If both directories $ORACLE_HOME/lib32 and $ORACLE_HOME/lib exist then it is 64-bit, else it is 32-bit.
Monday, July 7, 2008
Tip#16 Fix Migrated/Chained rows
What do they mean?
Row chaining is what happens when a row cannot fit into a single Oracle block ever, because it is just too big. Oracle therefore has to break it up into several smaller row pieces, store each piece in a separate block, and link (or “chain”) from one block to another so that the entire set of pieces can be retrieved when you query the table for the row.
Row migration is what happens when a row grows in size and can no longer fit into its original Oracle block: Oracle will move it (or ‘migrate’ it) into a completely new block into which it can fit.
How to Find them?
Assuming you have up-to-date stats for your tables, run the following SQL to find out if you have any tables with a high percentage of chained or migrated rows:
SELECT owner, table_name, num_rows, chain_cnt,
round((chain_cnt * 100 / num_rows),2) pct, pct_free, pct_used
FROM dba_tables WHERE chain_cnt > 0
AND owner NOT IN ('SYS', 'SYSTEM')
ORDER BY 5 desc ;
If you have any tables with a large number of chained/migrated rows, then follow the steps below to fix them.
How do I fix them?
1. Connect to the database as the owner of the table having chained rows and run the utlchain.sql script, which is in $ORACLE_HOME/rdbms/admin. It creates a table called CHAINED_ROWS.
2. Analyze the table. Suppose the table having the issue is called MBS:
SQL>analyze table MBS list chained rows into chained_rows;
3. The above statement loads the chained-row information into the table created in step 1:
SQL>select * from chained_rows;
4. To fix the rows, copy them into a temp table, delete them from the original table, and then copy them back from the temp table (if the table is referenced by foreign keys, disable those constraints first):
create table my_tmp AS select * from mbs where rowid IN (select head_rowid from chained_rows where table_name = 'MBS');
DELETE FROM MBS WHERE rowid IN (select head_rowid from chained_rows where table_name = 'MBS');
INSERT INTO MBS SELECT * FROM my_tmp;
COMMIT;
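To confirm the fix, clear the CHAINED_ROWS table and re-run the analysis; migrated rows should now be gone (rows that are genuinely too big for a single block will still be listed):
SQL> truncate table chained_rows;
SQL> analyze table MBS list chained rows into chained_rows;
SQL> select count(*) from chained_rows where table_name = 'MBS';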
Any suggestions to avoid it?
Increase PCTFREE for those tables to avoid the problem in the future.
Thursday, July 3, 2008
Tip#15 DBFile Sequential and Scattered Reads
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Most people confuse these events with each other as they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential reads, except that the session is reading multiple data blocks and scatters them into different, discontiguous buffers in the SGA. This statistic normally indicates disk contention on full table scans. Rarely, data from a full table scan fits into a contiguous buffer area; those waits would then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';
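(Note that average_wait in v$system_event is reported in centiseconds.) To see who is waiting on these events right now, you can also query v$session_wait; for these two events P1 is the file#, P2 the block# and P3 the number of blocks:
select sid, event, p1 file#, p2 block#, p3 blocks
from v$session_wait
where event in ('db file sequential read', 'db file scattered read');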