JDBC Thin Vs OCI (Thick)
Loads of info is available on what each driver is and which one to use, etc. I'll just put two links which say it all:
Horse's Mouth : Oracle Document Link
Generally the Thin driver is the best choice. In most cases it is as fast or faster than the OCI driver (from 10.1.0), has almost exactly the same set of features, and is easier to administer.
In a few cases the OCI driver has slightly better performance. The OCI driver supports a few Oracle features better than the Thin driver.
The Thin driver is easier to administer since it does not require installation of the OCI C libraries.
The Thin driver will work on any machine that has a suitable Java VM, whereas with the OCI driver you must install the proper OCI C libraries for each machine.
We recommend using the Thin driver unless you must have one or more of the OCI only features, or until it is clear that the small performance gain provided by the OCI driver is worth the extra effort.
Thin Vs Thick Performance Test : Test Link
Final Word :
"The Thin driver clearly outperforms the OCI driver for every type of operation except executions of CallableStatement objects. On a Unix platform, my experience has been that the CallableStatement numbers are tilted even more in favor of the OCI driver. Nonetheless, you can feel completely comfortable using the Thin driver in almost any setting. The Thin driver has been well-tuned by Oracle's JDBC development team to perform better than its OCI counterpart."
Monday, August 9, 2010
Wednesday, August 4, 2010
Tip#27 Listener Log File Too Big
Over a period of time you will quite often see that listener.log in $ORACLE_HOME/network/log has grown to several GB in size. So how do we manage it?
You do not have to stop the listener in order to clean up the log file; just do the following:
$> lsnrctl
LSNRCTL> set current_listener listener_name
LSNRCTL> show log_file
LSNRCTL> set log_file xxx.log
LSNRCTL> show log_file
LSNRCTL> exit
$> remove or back up the old listener.log
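Put together, here is a minimal sketch of the whole rotation as a script (the listener name LISTENER and the new log file name are illustrative; adjust names and paths for your environment):
#!/bin/sh
# Point the running listener at a fresh log file, then archive the old one.
lsnrctl <<'EOF'
set current_listener LISTENER
show log_file
set log_file listener_new.log
show log_file
exit
EOF
# The old log is no longer in use, so it is safe to move it aside.
mv $ORACLE_HOME/network/log/listener.log $ORACLE_HOME/network/log/listener.log.$(date +%Y%m%d)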
Thursday, October 1, 2009
Tip#26 OMS generates lot of core.xxx files
Recently we had a disk-full issue on our Grid Control OMS server. At first we thought it must just be a few backup copies lying around. After cleaning the disk of unwanted backups we got some breathing space (at least that's what we thought!), but soon found it was full again within an hour.
Digging further, we found that the OMS was generating a HUGE number of core dump files (core.xxx), and at a very brisk pace too. Metalink Doc ID 419999.1 says it is because "the access_log for the http server of the OMS is over 2Gb in size and this is causing the http server to core dump," and that to fix it we need to stop all OMS processes, remove the log file, and start the OMS processes back up. So basically,
/opmn/bin/opmnctl stopall
rm /Apache/Apache/logs/access_log
/opmn/bin/opmnctl startall
After doing that, everything seemed to be back to normal. To avoid the same in future, as suggested in the Metalink note, consider rotating the Apache logs on a regular basis as part of routine maintenance of the system; this can be done while the OMS is down during monthly maintenance tasks, as in the sketch below.
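A minimal sketch of such a monthly rotation ($OMS_HOME is an illustrative variable; point it at your OMS installation directory):
#!/bin/sh
# Rotate the OMS http server access_log during a maintenance window.
$OMS_HOME/opmn/bin/opmnctl stopall
mv $OMS_HOME/Apache/Apache/logs/access_log $OMS_HOME/Apache/Apache/logs/access_log.$(date +%Y%m%d)
$OMS_HOME/opmn/bin/opmnctl startall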
Wednesday, July 15, 2009
Tip#25 Fix for ORA-38029 : object statistics are locked
Recently, while analyzing tables in one of our test databases, we got ORA-38029: object statistics are locked. It turns out that in 10gR2, when you import (imp or impdp) a table without data, i.e. structure only, Oracle locks the table statistics (ref: Metalink Doc ID 433240.1).
You can see a list of all tables with locked statistics in a schema (MBS in our case) by running the following query:
select table_name, stattype_locked from dba_tab_statistics where owner = 'MBS' and stattype_locked is not null;
To generate unlock statements for all such tables in the schema, you can use the following,
select 'exec DBMS_STATS.UNLOCK_TABLE_STATS ('''|| owner ||''','''|| table_name ||''');' from dba_tab_statistics where owner = 'MBS' and stattype_locked is not null;
Or unlock each table individually with the following,
exec DBMS_STATS.UNLOCK_TABLE_STATS('owner','table name');
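If there are many locked tables, here is a minimal sketch that unlocks them all in one pass (the schema name MBS is taken from the queries above; the OS-authenticated connect string is illustrative):
sqlplus -s / as sysdba <<'EOF'
begin
  -- loop over every table in the schema whose statistics are locked
  for t in (select owner, table_name
              from dba_tab_statistics
             where owner = 'MBS'
               and stattype_locked is not null) loop
    dbms_stats.unlock_table_stats(t.owner, t.table_name);
  end loop;
end;
/
EOF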
Wednesday, June 17, 2009
Tip#24 Fix for ORA-12545 on RAC
We recently had a case on a 10gR2 RAC where some clients were getting ORA-12545, i.e. ORA-12545: Connect failed because target host or object does not exist. TNS and listener entries were all fine and TNSPING was also working, so it had to be something else.
On digging further and googling a bit, it turned out that the root cause was that the client was being redirected to the physical server hostname instead of the virtual address. Because of that, the client had to resolve the server hostname and in some cases could not (e.g. no entry in its DNS or hosts file), hence the error. Here is the link where it is nicely explained: Link.
Hence two things needed to be fixed:
1) The client should be pointed (redirected) to the virtual address and not the real host name
2) The client should be able to resolve the virtual address
Point 1 can be fixed by setting the LOCAL_LISTENER parameter on all RAC nodes. Here is how.
Add an entry in tnsnames.ora on each node, e.g. on node 1 (the HOST value should be the node's virtual hostname; <node1-vip> below is just a placeholder):
LISTENER_NODE1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <node1-vip>)(PORT = 1521))
  )
Then set the local listener for each node,
alter system set LOCAL_LISTENER='LISTENER_NODE1' scope=both sid='SID1';
Repeat the above steps on each node with its respective host and TNS name.
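To verify the change on each node, a quick sketch of the checks (run as a DBA user on the node):
sqlplus -s / as sysdba <<'EOF'
show parameter local_listener
EOF
# The listener should now publish the instance against the VIP:
lsnrctl services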
Point 2 can be fixed by either adding DNS entries for the virtual host names or, if that is not possible, adding an entry to the client's hosts file, e.g. /etc/hosts on Linux or windows\system32\drivers\etc\hosts on Windows.
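If you go the hosts-file route, the entry for the node 1 VIP could look like this (the IP address and names are illustrative):
192.0.2.11   node1-vip   node1-vip.example.com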
Wednesday, June 3, 2009
Tip#23 2>&1 and Argument list too long
What does 2>&1 mean?
Sometimes I call my SQL scripts via a shell script, which generally has this kind of statement in it,
$SCRIPTDIR/run_me.sh > $LOGDIR/log_me.log 2>&1
What the above statement does for me is run the run_me.sh script from my script folder and log the details in a file called log_me.log in my log folder. What I am interested in here is the 2>&1 part. First, let me define the three standard data streams in Linux/Unix, i.e. STDIN, STDOUT, and STDERR:
STDIN : Standard input usually comes from the keyboard or from another program
STDOUT : The program usually prints to standard output
STDERR : The program sometimes prints to standard error
In Linux/Unix, the built-in file descriptor numbers for them are 0, 1, and 2, in that order.
The command above first redirects standard output into my log file, which is where I want to dump anything my script writes out, and then redirects standard error into standard output (you have to put an & in front of the destination when it is a file descriptor rather than a file name). Note that the order matters: redirections are processed left to right, so writing 2>&1 before the > redirection would send standard error to the terminal, not the log file.
So effectively, all output from this command is logged into my log file, and in case of any issue I can always look at the log to find the trouble spot.
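A small demonstration of why the order matters (demo.sh is a made-up two-line script):
$ cat demo.sh
#!/bin/sh
echo "to stdout"
echo "to stderr" >&2
$ ./demo.sh > out.log 2>&1    # out.log gets both lines
$ ./demo.sh 2>&1 > out.log    # "to stderr" prints to the terminal; out.log gets only the stdout line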
--------------------------------------------------------------------------------------------
Sometimes a directory fills up with lots of files, e.g. a dump directory with lots of trace files. In such a case, when I tried the following,
/bin/rm *.trc
I got the following error message:
bash: /bin/rm: Argument list too long
Solution:
find . -name "*.trc" | xargs /bin/rm
Depending on the number of files, after a while all the files will be erased.
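One caveat: plain xargs can mis-handle file names containing spaces or newlines. With GNU find and xargs, the null-separated variant is safer:
find . -name "*.trc" -print0 | xargs -0 /bin/rm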
Tip#22 Fix for ORA-01882
ORA-01882: timezone region %s not found
Recently we got this error while running the following via TOAD:
select * from dba_scheduler_jobs;
The error message itself turns out to be not very informative:
01882, 00000, "timezone region %s not found"
// *Cause: The specified region name was not found.
// *Action: Please contact Oracle Customer Support.
In short, the error occurs because seven time zone region IDs changed in time zone file version 3 and above; if the client still has old version 2 time zone data that uses one of these IDs, the error is raised.
In our case the server's time zone files had already been upgraded, so running the same query directly on the DB server worked fine. So we had to fix it on the client side:
1) Download patch 5731535 for a 10.2.0.x client (10.2.0.1 to 10.2.0.3)
2) Copy the two .dat files located at Client_patch\5731535\files\oracore\zoneinfo, plus the readme.txt file, into the %ORACLE_HOME%\oracore\zoneinfo directory on your Oracle client
3) Restart TOAD
Ref : Metalink Doc ID: 414590.1 and 417893.1
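To see which time zone file version the database itself is using (the client-side .dat files need to match it), you can run the following on the server; this is a sketch using an OS-authenticated connection:
sqlplus -s / as sysdba <<'EOF'
select * from v$timezone_file;
EOF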