After trigger causing performance bottleneck (Windows XP, Oracle 10g)
After trigger causing performance bottleneck [message #506012] Fri, 06 May 2011 02:03
a_oracle
Messages: 98
Registered: November 2010
Member
Here is the issue:
There is a trigger 'TRG A', an after insert/update/delete, for-each-row trigger on table B; it fires for any DML operation on table B and inserts a row into table A.
For a certain quarter of year 2010 we ran a script that inserted data into table B. This created a huge amount of data in table A (nearly 4,997,886 rows).
Now the issue is: when I run the script to insert data into table B, it takes a huge amount of time, probably because an insert into table A happens for each row. Moreover, I just looked at the structure of table A and there are no indexes and no partitions either.
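(For illustration only, a minimal sketch of what such a per-row audit trigger typically looks like; the names and columns below are made up, not the actual TRG A:)

CREATE OR REPLACE TRIGGER trg_a
AFTER INSERT OR UPDATE OR DELETE ON b
FOR EACH ROW
BEGIN
  -- one audit row for every row changed in B; columns are placeholders
  INSERT INTO a (audit_id, changed_at, operation)
  VALUES (audit_seq.NEXTVAL, SYSDATE,
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D' END);
END;
/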
Can you please advise a suitable approach to start with for this issue?
I am going to analyze table A, as we found out that this table has never been analyzed. But I need to know whether any other checks should be done as well.
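(For the analyze, a minimal sketch using DBMS_STATS; the schema name is a placeholder:)

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',   -- placeholder schema
    tabname => 'A',
    cascade => TRUE);         -- harmless here; would also cover any indexes
END;
/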
Re: After trigger causing performance bottleneck [message #506014 is a reply to message #506012] Fri, 06 May 2011 02:10
Michel Cadot
Messages: 68641
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
alter session set sql_trace=true;
<your statement>
alter session set sql_trace=false;

and see what is done in the trace file.
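(Once the statement finishes, the trace file lands in user_dump_dest and can be formatted with tkprof; the file names below are placeholders:)

tkprof orcl_ora_12345.trc trace_report.txt sys=no sort=exeela

sort=exeela lists statements by elapsed execution time, which should show whether the time is going to the trigger's inserts into table A.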

Regards
Michel

Re: After trigger causing performance bottleneck [message #506015 is a reply to message #506012] Fri, 06 May 2011 02:14
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
If all your trigger code does is insert into A, without first looking up the row in A, then having no indexes is actually best for performance: no index would assist the insert logic, and you avoid the cost of maintaining indexes.

Triggers are expensive. If you need them, then you need them and you pay the price. Decide how important it is to insert into table A as part of the same transaction event. This is a typical auditing solution for many. Consider yourself lucky as you have learned something about the performance cost of triggers.

I use triggers all the time. I am happy to pay the performance price in most of my databases because of what triggers let me do. But some databases have super-high transaction rate requirements, and those databases cannot use triggers (nor INSTEAD OF triggers). The reason, of course, is the cost. If I am not mistaken, triggers inhibit parallel DML as well. Consider all this as just another choice you can make, a form of tradeoff: you can have whatever you want, as long as you are willing to pay for it.

One possibility is that your script is inefficient. Many developers like to "build rows" as they go. Thus they will do something like:

1) insert a row with a key but mostly empty columns.
2) update one group of columns in the row
3) update another group of columns in the row
4) update yet another group of columns in the row

It seems they find it easier to think this way. Unfortunately, this kind of "row building" process creates four rows in an audit table instead of one. If your process is doing something like this, fix it. Use set-based SQL (try analytics, subquery factoring (e.g. the WITH clause), and other features) to build the row in a single INSERT statement. That would reduce the number of rows you are working with in table A.
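(A minimal sketch of the single-INSERT idea; all table and column names here are made up:)

-- build the complete row once instead of insert-then-update
INSERT INTO b (id, col1, col2, col3)
WITH src AS (
  SELECT s.id, s.val1, s.val2, s.val3
  FROM staging s
)
SELECT id, val1, val2, val3
FROM src;

Each row then fires the audit trigger once instead of four times.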

Good luck, Kevin

[Updated on: Fri, 06 May 2011 02:19]

Re: After trigger causing performance bottleneck [message #506074 is a reply to message #506015] Fri, 06 May 2011 07:22
John Watson
Messages: 8929
Registered: January 2010
Location: Global Village
Senior Member
Quote:
1) insert a row with a key but mostly empty columns.
2) update one group of columns in the row
3) update another group of columns in the row
4) update yet another group of columns in the row

Another issue: this is going to cause an awful row migration problem as the rows get bigger, unless the table has been defined with a well-calculated PCTFREE.
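(For illustration; the value below is made up and would need to be calculated from the real row growth. Note that a new PCTFREE only affects blocks formatted after the change; a MOVE rebuilds the table so existing rows pick it up, and any indexes would need a rebuild afterwards:)

ALTER TABLE a PCTFREE 30;   -- leave 30% of each block free for row growth
ALTER TABLE a MOVE;         -- rebuild so existing blocks honor the new PCTFREE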