how to tune 100s of transactions over single table [message #567159] Mon, 24 September 2012 13:29
kamilp
Hi,

I am trying to find a way to tune and optimize server performance in the following situation. There are 100s of sessions inserting records into one table. The sessions are communication threads in a Java application; each thread receives messages that are to be stored in the table. Each message must be committed, and then an ACK is sent to the remote client. Two problems arise, of course: heavy ITL contention on the table and lots of very small transactions. I can adjust the Java application, but I can't do much about the design. I was thinking about some "caching" - if the messages were stored in memory and bulk-inserted into the database by a single thread, the performance would be much higher. However, there would be possible loss of data - a message could be lost from the memory cache after the client had already received its ACK.
So I wonder if you have any advice on this. I would appreciate any help.

Regards.
Re: how to tune 100s of transactions over single table [message #567161 is a reply to message #567159] Mon, 24 September 2012 13:37
Michel Cadot
Oracle version? with 4 decimals.
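For example, the full version appears in v$version:

select banner from v$version;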

Regards
Michel
Re: how to tune 100s of transactions over single table [message #567162 is a reply to message #567161] Mon, 24 September 2012 13:44
BlackSwan
Does the table have a Primary Key (PK)?
Does the PK come from a SEQUENCE?
Re: how to tune 100s of transactions over single table [message #567167 is a reply to message #567159] Mon, 24 September 2012 15:45
kamilp
> Oracle version? with 4 decimals.
11.2.0.1, but I think this issue is the same in any 10g or 11g version

> Does table have Primary Key (PK)?
yes

> does PK contain SEQUENCE?
no, the PK is a NUMBER(38) generated by Java application logic
The table is:

ID          NOT NULL  NUMBER(38)
PHEADER               VARCHAR2(64)
MSG                   LONG RAW
TSTMP                 NUMBER(38)
METADATAID            NUMBER(38)


MSG contains short texts, approx. 100-200 bytes long on average

Re: how to tune 100s of transactions over single table [message #567181 is a reply to message #567167] Mon, 24 September 2012 23:47
Michel Cadot
Why do you use a LONG RAW to store such short text?
The first step should be to fix that and use VARCHAR2.

What is the kind of tablespace you use?
Post the result of (in text inline, no image, no attached file):

select extent_management, segment_space_management
from dba_tablespaces
where tablespace_name in (select tablespace_name
                          from dba_segments
                          where segment_name = '<your table>')
/


Before that, please read the OraFAQ Forum Guide and "How to use [code] tags" to make your code easier to read.
Make sure that lines of code do not exceed 80 characters when you format.
Indent the code, use code tags and align the columns in results.
Use the "Preview Message" button to verify.

Regards
Michel
Re: how to tune 100s of transactions over single table [message #567208 is a reply to message #567181] Tue, 25 September 2012 05:08
kamilp
> Why do you use a LONG RAW to store some text of short length?
Because sometimes there are large messages - a few MB.


> What is the kind of tablespace you use?

EXTENT_MANAGEMENT  SEGMENT_SPACE_MANAGEMENT
-----------------  ------------------------
LOCAL              AUTO
Re: how to tune 100s of transactions over single table [message #567213 is a reply to message #567208] Tue, 25 September 2012 05:32
Michel Cadot
Quote:
Because sometimes there are large messages - a few MB.


Then it should be CLOB.
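For reference, a hedged migration sketch (hypothetical table name MSG_TAB): ALTER TABLE can convert a LONG RAW in place, but only to a BLOB; getting a CLOB from raw bytes would additionally need a character-set conversion (for example with DBMS_LOB.CONVERTTOCLOB) or an application change to store text instead of raw.

-- hypothetical table name; Oracle converts LONG RAW in place to BLOB
alter table msg_tab modify (msg blob);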

OK, do you experience any actual performance problems? If so, post the figures (what you did and what you got that says there is a performance problem).

Regards
Michel
Re: how to tune 100s of transactions over single table [message #567214 is a reply to message #567213] Tue, 25 September 2012 05:53
rleishman
The best way to reduce contention on hot blocks is to get different sessions to write to different blocks.

An index on a current date or an incrementing number will have all new rows trying to write to the same physical index leaf block. You can reduce this by converting the index to a REVERSE KEY index. Each transaction that writes a different index value will then write to a different block, so you fix your hot-block problem, but you may create a cache-miss problem. Note that you can only scan a reverse key index with = and IN, so it is no good for date-range queries.
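A minimal sketch of the conversion, assuming hypothetical names (MSG_PK_IX for the existing PK index, MSG_TAB / TSTMP for the table and an incrementing column):

-- convert the existing index to a reverse key index in place
alter index msg_pk_ix rebuild reverse;

-- or create a new index as reverse key from the start
create index msg_tstmp_ix on msg_tab (tstmp) reverse;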

It's a bit more difficult with a table. In a full table, new rows want to go at the end. Some alternatives that come to mind:
- Hash partition the table on the PK. Different PK values will write to different partitions, and therefore to different blocks. Keep your partition count a power of 2. If you have hundreds of concurrent transactions, 128 hash partitions might be suitable (see the sketch after this list).
- Create the table in an Index Cluster. Make the cluster index the PK column with a REVERSE KEY. I have never tried this, and would only consider it after you had rejected the first idea.
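A hedged sketch of the hash-partitioned variant (hypothetical name MSG_TAB; columns taken from the definition posted earlier; assumes the LONG RAW has been migrated to a LOB first, since partitioned tables cannot contain LONG RAW columns):

create table msg_tab (
  id         number(38) not null,
  pheader    varchar2(64),
  msg        blob,          -- LONG RAW is not allowed in a partitioned table
  tstmp      number(38),
  metadataid number(38),
  constraint msg_tab_pk primary key (id)
)
partition by hash (id) partitions 128;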

Ross Leishman
Re: how to tune 100s of transactions over single table [message #567237 is a reply to message #567214] Tue, 25 September 2012 08:57
kamilp
Ross - I completely forgot about the reverse key index! Great idea. I will run performance tests and see what happens with the range scans.
Re: how to tune 100s of transactions over single table [message #567245 is a reply to message #567237] Tue, 25 September 2012 10:11
Michel Cadot
You didn't tell us what the key is, just that it is "generated by Java application logic", so maybe a reverse key index will not be useful for you and will not change anything.

Please answer my questions.

Regards
Michel

Re: how to tune 100s of transactions over single table [message #567506 is a reply to message #567245] Tue, 02 October 2012 17:24
alan.kendall@nfl.com
Kamil,
Here are a few ideas that I have implemented to reduce "enq: TX - row lock contention" on inserts:
1) Delete as many indexes as possible from the table you are inserting into.
2) Disable all triggers on this table.
3) Disable foreign keys from this table to other tables.
4) Do the inserts on one RAC node to reduce contention.
5) Look at your event waits to make sure you are not waiting on something else like "latch free" or "log buffer space".
6) In 11g, turn the result cache off on the table you are inserting into: "alter table scott.emp result_cache (mode manual);"
7) Consider inserting into a table without any indexes, then on one node insert the rows 10 at a time with a commit into the main table with a cursor. Construct your delete from the smaller table based on the row already being committed in the main table, to avoid loss of data (see the sketch at the end of this post).
8) Consider making your apps servers collect batches of 20 inserts: the first apps server inserts its batch with a commit at 5 seconds after the minute, the next apps server commits its batch at 10 seconds after the minute, and so on. If any apps server goes over 20 pending inserts, it wakes up and inserts with a commit.
9) Consider making the table a hash-partitioned table.
10) Cache the table and its indexes into the KEEP buffer pool.
11) Look at what is hogging memory, because those other objects will cause the table's blocks to be aged out (LRU) and cause inserts to lock more:
SELECT o.object_type, i.instance_name db,
       COUNT(*)*8192/1024/1024 meg_in_memory,
       o.owner||'.'||o.object_name object_in_memory
  FROM dba_objects o, gv$bh bh, gv$instance i
 WHERE o.data_object_id = bh.objd
   AND bh.status <> 'free'
   AND bh.inst_id = i.inst_id
 GROUP BY o.owner||'.'||o.object_name, o.object_type, i.instance_name
HAVING COUNT(*) > 1024*100
 ORDER BY COUNT(*);

OBJECT_TYPE DB       MEG_IN_MEMORY OBJECT_IN_MEMORY
----------- -------- ------------- -----------------------------------------
TABLE       NWEBP2             854 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP3             856 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP4             858 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP4            1416 CONTENT.NFL_DOCBASE_PROPERTIES1_S
TABLE       NWEBP2            1435 CONTENT.NFL_DOCBASE_PROPERTIES1_S
TABLE       NWEBP1            1533 ALFC.NFL_ABSTRACT_CONTENT
TABLE       NWEBP1            1536 CONTENT.NFL_DOCBASE_PROPERTIES1_S
TABLE       NWEBP1            2813 PROFILE.REGISTRATIONS
TABLE       NWEBP4            2813 PROFILE.REGISTRATIONS
TABLE       NWEBP3            3184 PROFILE.REGISTRATIONS
TABLE       NWEBP2            3632 PROFILE.REGISTRATIONS

Many times you can purge a table that is hogging memory, or index it better, to relieve the impact on your more important tables.
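A hedged sketch of idea 7 above (hypothetical names: MSG_STAGE for the unindexed staging table, MSG_TAB for the main table; assumes the MSG values fit PL/SQL limits or have been migrated to a LOB as discussed earlier in the thread):

declare
  -- only pick staging rows not yet committed into the main table,
  -- so a rerun after a crash cannot hit a duplicate-key error
  cursor c is
    select s.id, s.pheader, s.msg, s.tstmp, s.metadataid
      from msg_stage s
     where not exists (select 1 from msg_tab t where t.id = s.id)
     order by s.id;
  type t_rows is table of c%rowtype;
  v_rows t_rows;
begin
  -- clean up leftovers from a previous run that already reached the main table
  delete from msg_stage s
   where exists (select 1 from msg_tab t where t.id = s.id);
  commit;

  open c;
  loop
    fetch c bulk collect into v_rows limit 10;   -- 10 rows per transaction
    exit when v_rows.count = 0;
    forall i in 1 .. v_rows.count
      insert into msg_tab values v_rows(i);
    commit;                                      -- rows are now safe in the main table
    -- delete from staging only rows already committed in the main table,
    -- so a crash between the two commits can duplicate work but not lose data
    forall i in 1 .. v_rows.count
      delete from msg_stage
       where id = v_rows(i).id
         and exists (select 1 from msg_tab t where t.id = v_rows(i).id);
    commit;
  end loop;
  close c;
end;
/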