Wednesday, July 30, 2008

Tip#17 How to find out 32/64 Bit

Find out whether the OS is 32-bit or 64-bit

Linux : getconf LONG_BIT

Solaris : /usr/bin/isainfo -kv

AIX : getconf -a | grep KERN

HPUX : /usr/bin/getconf KERNEL_BITS

Windows : Start > All Programs > Accessories > System Tools > System Information > System Summary


Find out whether the Oracle software is 32-bit or 64-bit.

Method # 1
cd $ORACLE_HOME/bin/
file oracle

Method # 2
sqlplus / as sysdba (Look for "Connected to:")
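
The check in Method #2 can also be run as a query: the banner that SQL*Plus prints on connect is exposed in v$version (a sketch; the exact banner wording varies by release, but 64-bit installations typically include '64bit Production'):

```sql
-- Sketch: on 64-bit Oracle software the banner usually contains '64bit'
SELECT banner
FROM   v$version
WHERE  banner LIKE 'Oracle%';
```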

Method # 3
If both the $ORACLE_HOME/lib32 and $ORACLE_HOME/lib directories exist, the installation is 64-bit; otherwise it is 32-bit.

Monday, July 7, 2008

Tip#16 Fix Migrated/Chained rows

What do they mean?

Row chaining is what happens when a row can never fit into a single Oracle block because it is simply too big. Oracle therefore has to break it up into several smaller row pieces, store each piece in a separate block, and link (or “chain”) the blocks together so that the entire set of pieces can be retrieved when you query the table for the row.

Row migration is what happens when a row grows in size and can no longer fit into its original Oracle block: Oracle will move it (or ‘migrate’ it) into a completely new block into which it can fit.
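
As a toy illustration of guaranteed chaining (a sketch using a hypothetical table DEMO_CHAIN, assuming a default 8 KB block size), a row of roughly 12 KB cannot fit in one block:

```sql
-- Three full VARCHAR2(4000) values (~12 KB) cannot fit in one 8 KB block,
-- so the inserted row must be chained across multiple blocks.
CREATE TABLE demo_chain (a VARCHAR2(4000), b VARCHAR2(4000), c VARCHAR2(4000));

INSERT INTO demo_chain
VALUES (RPAD('x', 4000, 'x'), RPAD('x', 4000, 'x'), RPAD('x', 4000, 'x'));
COMMIT;

ANALYZE TABLE demo_chain COMPUTE STATISTICS;

-- chain_cnt should now be non-zero for this table
SELECT chain_cnt FROM user_tables WHERE table_name = 'DEMO_CHAIN';
```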

How to Find them?

Assuming you have up-to-date statistics on your tables, run the following SQL to find any tables with a high percentage of chained or migrated rows:

SELECT owner, table_name, num_rows, chain_cnt,
       round((chain_cnt * 100 / num_rows), 2) pct,
       pct_free, pct_used
FROM dba_tables
WHERE chain_cnt > 0
AND owner NOT IN ('SYS', 'SYSTEM')
ORDER BY 5 DESC;

If you have any tables with a large number of chained/migrated rows, follow the steps below to fix them.

How do I fix them?

1. Connect to the database as the owner of the table with the chained rows and run the utlchain.sql script located in $ORACLE_HOME/rdbms/admin. It creates a table called CHAINED_ROWS.

2. Analyze the table. Suppose the table with the issue is called MBS:

SQL>analyze table MBS list chained rows into chained_rows;

3. The statement above loads the chained-row information into the table created in step 1:

SQL>select * from chained_rows;

4. To fix the rows, copy them into a temporary table, delete them from the original table, and then insert them back from the temporary table:

CREATE TABLE my_tmp AS
SELECT * FROM mbs
WHERE rowid IN (SELECT head_rowid FROM chained_rows WHERE table_name = 'MBS');


DELETE FROM MBS WHERE rowid IN (select head_rowid from chained_rows where table_name = 'MBS');


INSERT INTO MBS SELECT * FROM my_tmp;

COMMIT;

Any Suggestion to avoid it?

Increase PCTFREE on those tables so that updated rows have room to grow within their blocks, avoiding future migration.
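
A sketch of that change for the MBS table from above (20 is an illustrative value, and MBS_PK is a hypothetical index name). Note that a new PCTFREE only affects blocks formatted after the change, so an ALTER TABLE ... MOVE is one way to apply it to existing rows:

```sql
-- Reserve more free space in each block for future row growth
ALTER TABLE mbs PCTFREE 20;

-- Rewrite the table so existing rows are laid out with the new PCTFREE;
-- MOVE leaves indexes UNUSABLE, so rebuild them afterwards.
ALTER TABLE mbs MOVE;
ALTER INDEX mbs_pk REBUILD;
```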

Thursday, July 3, 2008

Tip#15 DBFile Sequential and Scattered Reads

Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Most people confuse the two because they think about how data is read from disk; instead, think about how data is read into the SGA buffer cache.

db file sequential read:

A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.

db file scattered read:

Similar to db file sequential read, except that the session is reading multiple data blocks and scattering them into different, discontiguous buffers in the SGA. This event normally indicates disk contention on full table scans. Rarely, data from a full table scan fits into a contiguous buffer area; those waits then show up as sequential reads instead of scattered reads.

The following query shows average wait time for sequential versus scattered reads:

select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';
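
To see which sessions are waiting on these events right now, a sketch against v$session_wait (for both events, P1 is the file number and P2 the starting block number):

```sql
SELECT sid, event, p1 file#, p2 block#, seconds_in_wait
FROM   v$session_wait
WHERE  event IN ('db file sequential read', 'db file scattered read');
```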

Monday, June 30, 2008

Tip#14 Resumable Space Allocation

In the event of space allocation failures, rather than returning the error to the user and stopping the operation, the transaction can be temporarily suspended and corrective action taken. After the error condition is corrected, the suspended operation automatically resumes. This feature is called resumable space allocation. The affected statements are called resumable statements.


When the execution of a resumable statement is suspended, the system issues a Resumable Session Suspended alert. A suspended operation is automatically aborted if the error condition is not fixed within the time-out period. By default, the time-out period is two hours. You can enable resumable space allocation either at the system level, by setting the RESUMABLE_TIMEOUT initialization parameter to a nonzero value, or at the session level, by issuing the ALTER SESSION ENABLE RESUMABLE statement. A resumable statement can be suspended and resumed multiple times during execution.


Which statements are resumable?


• DML statements, including INSERT INTO ... SELECT from external tables, are resumable.
• DDL statements, including CREATE TABLE ... AS SELECT, are resumable.
• SELECT statements that run out of temporary space are also candidates for resumable execution.

A resumable statement can be suspended under any of these conditions:

• Space quota exceeded (e.g. The user has exceeded his or her assigned space quota in the tablespace),
• Maximum extents reached (e.g. The number of extents in a table or index equals the number of maximum extents defined on the object.)
• Out of space (e.g. The operation cannot acquire any more extents for an index in a tablespace.)
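
A minimal sketch of turning it on (the timeout and name are illustrative). While a statement is suspended, the DBA_RESUMABLE view shows who is waiting and on what error:

```sql
-- Session level: suspend instead of failing, give up after 1 hour
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'nightly load';

-- Or system level: make all sessions resumable by default
ALTER SYSTEM SET resumable_timeout = 3600;

-- Check suspended statements and the error that suspended them
SELECT session_id, status, name, error_msg
FROM   dba_resumable;
```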

Wednesday, May 28, 2008

Tip#13 Big Vs Little Endian

Little Endian means that the low-order byte of the number is stored in memory at the lowest address, and the high-order byte at the highest address. (The little end comes first.) For example, a 4-byte LongInt

Byte3 Byte2 Byte1 Byte0

will be arranged in memory as follows:
Base Address+0 Byte0
Base Address+1 Byte1
Base Address+2 Byte2
Base Address+3 Byte3

Linux, Windows use "Little Endian" byte order.


Big Endian means that the high-order byte of the number is stored in memory at the lowest address, and the low-order byte at the highest address. (The big end comes first.) Our LongInt would then be stored as:

Base Address+0 Byte3
Base Address+1 Byte2
Base Address+2 Byte1
Base Address+3 Byte0

Solaris, HPUX, Apple Mac use "Big Endian" byte order.


In Oracle 10g, the following SQL tells you which byte order each supported platform uses:

select * from v$transportable_platform order by platform_id;

To transport tablespaces between operating systems with the same byte order (endianness), no conversion is needed; in other cases it is, and the conversion can be done using RMAN. E.g. I need to transport a tablespace from Linux (Little Endian) to Solaris (Big Endian):

RMAN> convert tablespace XXX to
platform 'Solaris[tm] OE (64-bit)'
db_file_name_convert '/app/oracle/oradata/mbs',
'/app/oracle/rman_bkups';

Now this file can be copied over to the target Solaris system, and the rest of the steps are easy.

Monday, April 28, 2008

Tip#12 STATUS column of dba_undo_extents

I am sure you have seen loads of information on how to resize/drop/recreate an UNDO tablespace, but one point to note when dropping the (ex-default/previous) UNDO tablespace is to check the dba_undo_extents view to see whether any undo extents are still in use by a session or needed for flashback. The following SQL tells you whether you can go ahead and drop the ex-default/previous UNDO tablespace (assuming the tablespace I want to drop is UNDOTBS2):

SELECT tablespace_name, status, COUNT (*)
FROM SYS.dba_undo_extents
WHERE tablespace_name = 'UNDOTBS2'
GROUP BY tablespace_name, status;

What is of interest to us is the STATUS column of dba_undo_extents, which has three possible values: ACTIVE, EXPIRED, and UNEXPIRED. What do they mean?

ACTIVE means that the extent contains active transactions.

EXPIRED means that the extent is not required at all, after considering the undo retention period.

UNEXPIRED means that the extent does not contain any active transactions, but it holds undo that is still required for the flashback option (after considering the undo retention period).

So if all the extents are EXPIRED, go ahead and drop the tablespace; as usual, make sure you take a proper backup before and after.
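
For completeness, the usual switch-and-drop sequence looks something like this sketch (names, path, and size are illustrative):

```sql
-- Create the replacement undo tablespace
CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/app/oracle/oradata/mbs/undotbs1_01.dbf' SIZE 2G;

-- Point the instance at it; new transactions start using it
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS1';

-- Once every extent in the old tablespace shows EXPIRED, drop it
DROP TABLESPACE undotbs2 INCLUDING CONTENTS AND DATAFILES;
```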

Wednesday, April 9, 2008

Tip#11 SQL*Plus - Ignore Blank Lines in the Script

Recently I came across a situation where, during a database deploy, one of our view scripts was failing in SQL*Plus because of a blank line inside the script. To fix it, we had to set the sqlblanklines setting in SQL*Plus to ignore blank lines, and that did the trick for us.

SQL> create or replace view xx
2 as
3 select
4 sysdate as x
5
SQL> from
SP2-0042: unknown command "from" - rest of line ignored.
SQL> dual
SP2-0042: unknown command "dual" - rest of line ignored.
SQL> set sqlblanklines on
SQL> create or replace view xx
2 as
3 select
4 sysdate as x
5
6 from
7 dual
8 ;

View created.

SQL>

Friday, April 4, 2008

Tip#10 Dump to text file using UTL_FILE

Dump the data from a table/SQL query to a text file (CSV or pipe-delimited, etc.) on the server using UTL_FILE.

Note :
A) Specify the directories accessible to the UTL_FILE functions in the initialization file using the UTL_FILE_DIR parameter, e.g.: UTL_FILE_DIR = /app/oracle/my_dir
FYI, UTL_FILE_DIR = * turns off directory access checking and makes any directory accessible to UTL_FILE (NOT RECOMMENDED).

EDIT: if you are on a release newer than 9i, you can use a DIRECTORY object instead of UTL_FILE_DIR.

B) Since we have data with special characters, we open the file with FOPEN_NCHAR to make sure we don't lose them in the text dump.

C) I have just tested it on Redhat Linux - Oracle 9.2.0.6

This is the function I use (adapted to my requirements, but thanks to Tom Kyte for getting me started):

CREATE OR REPLACE FUNCTION dump_to_file (
p_query IN VARCHAR2,
p_separator IN VARCHAR2 DEFAULT '|',
p_dir IN VARCHAR2,
p_filename IN VARCHAR2
)
RETURN NUMBER
IS
l_filehandle UTL_FILE.file_type;
l_thecursor INTEGER DEFAULT DBMS_SQL.open_cursor;
l_columnvalue VARCHAR2 (32767);
l_col_hdr VARCHAR2 (32767);
l_status INTEGER;
l_colcnt NUMBER DEFAULT 0;
l_separator VARCHAR2 (10) DEFAULT '';
l_cnt NUMBER DEFAULT 0;
col_cnt PLS_INTEGER;
rec_tab DBMS_SQL.desc_tab;
col_num NUMBER;
BEGIN
-- open a writable file in the UTL_FILE_DIR directory
l_filehandle := UTL_FILE.fopen_nchar (p_dir, p_filename, 'w', 32767);

-- parse the sql we got
DBMS_SQL.parse (l_thecursor, p_query, DBMS_SQL.native);

-- execute the sql cursor
l_status := DBMS_SQL.EXECUTE (l_thecursor);

-- describe the columns: the l_colcnt OUT parameter gives us the number of columns,
-- and rec_tab is a PL/SQL table with column details such as name, length, schema, precision, and scale
DBMS_SQL.describe_columns (l_thecursor, l_colcnt, rec_tab);

-- we need to print the column names as the first line
l_separator := '';
l_col_hdr := '';


FOR i IN 1 .. l_colcnt
LOOP
-- we define which columns we need in the fetch
DBMS_SQL.define_column (l_thecursor, i, l_columnvalue, 4000);
-- print the column name in the file
l_col_hdr := l_col_hdr || l_separator || rec_tab (i).col_name;
l_separator := p_separator;
END LOOP;

UTL_FILE.put_nchar (l_filehandle, l_col_hdr);
UTL_FILE.new_line (l_filehandle);

-- let's print the data now
LOOP
EXIT WHEN (DBMS_SQL.fetch_rows (l_thecursor) <= 0);
l_separator := '';

FOR i IN 1 .. l_colcnt
LOOP
DBMS_SQL.COLUMN_VALUE (l_thecursor, i, l_columnvalue);
UTL_FILE.put_nchar (l_filehandle, l_separator || l_columnvalue);
l_separator := p_separator;
END LOOP;

UTL_FILE.new_line (l_filehandle);
l_cnt := l_cnt + 1;
END LOOP;

DBMS_SQL.close_cursor (l_thecursor);
UTL_FILE.fclose (l_filehandle);
RETURN l_cnt;
EXCEPTION
WHEN UTL_FILE.invalid_operation
THEN
UTL_FILE.fclose (l_filehandle);
raise_application_error (-20061, 'Dump To File Error: Invalid Operation');
WHEN UTL_FILE.invalid_filehandle
THEN
UTL_FILE.fclose (l_filehandle);
raise_application_error (-20062, 'Dump To File Error: Invalid File Handle');
WHEN UTL_FILE.write_error
THEN
UTL_FILE.fclose (l_filehandle);
raise_application_error (-20063, 'Dump To File Error: Write Error');
WHEN OTHERS
THEN
UTL_FILE.fclose (l_filehandle);
RAISE;
END dump_to_file;


How to use the above function?

To generate a pipe ('|') delimited file called emp.txt in /app/oracle/my_dir for my query, I execute something like the following:

DECLARE
l_rows NUMBER := 0;
l_dir VARCHAR2 (100) := '/app/oracle/my_dir';
BEGIN
l_rows := dump_to_file ('select * FROM scott.emp ', '|', l_dir, 'emp.txt');
DBMS_OUTPUT.put_line ('Number of rows dumped to emp.txt : ' || l_rows);
END;

EDIT:
if you want to use a DIRECTORY object instead of UTL_FILE_DIR, just create one as follows,

create or replace directory my_dir as '/app/oracle/my_dir';

then use the above code, passing 'MY_DIR' (the directory object name) as the directory parameter instead of the OS path in l_dir.


Tuesday, April 1, 2008

Tip#9 Some OS utilities (Unix)

NOHUP :

nohup (no hangup) is a UNIX command that lets a command keep running until completion even after you log off. For long-running processes, this is a nice way to execute them. In my case, it allows me to log into a client server remotely, start a process with nohup, and then disconnect. Another benefit of nohup is that, by default, a log of all output is recorded in the file nohup.out in the current directory. e.g.

nohup sqlplus user/pwd @sqlscript &

Note : & at the end is for running the process in the background

SKILL:

Sometimes I run into situations where a process is consuming a lot of CPU and memory, but I don't want to kill it; I just wish I could 'pause' it, and that's where skill really helps me. E.g. process ID 1234 is taking too many resources and I need to get other 'important' things done, so I execute the following command:

skill -STOP 1234

Once I have finished those other 'important' things, I can resume the process with the following command:

skill -CONT 1234

The nice feature is that you can pass not just a PID but also a user, terminal ID, or command name, which makes skill more powerful than I initially thought. E.g. you can 'pause' all rman processes on the DB server with

skill -STOP rman

SNICE:

The snice command is similar to skill, but instead of stopping a process it lowers the process's priority. E.g. for the same heavy process 1234, I can decrease its priority with something like this:

snice +4 -p 1234

Note: +4 means we are increasing the nice value, and the higher the nice value, the lower the priority.

This utility is quite useful in reducing priorities.