Tuesday, September 13, 2011

Performance Tuning Methodology

I recently had an opportunity to do a bit of performance tuning on a newly deployed production application and thought I would share a methodology for tackling some of the low-hanging fruit, sort of the 80/20 rule of Siebel performance tuning.  My experience is that with Siebel 7.8 and higher, on Oracle 10 and higher, most performance issues are Siebel configuration issues.  Of those, most fall into one of two categories:
  • Missing Indexes
  • Sort Specs
When customizing Siebel, you will frequently create new relationships between logical objects via a new foreign key.  There should always be a corresponding index for that foreign key on the M side of the 1:M or M:M linked table.  Typically it is just a single Id column, but if for some reason there are multiple columns (perhaps a join spec and a join constraint), make sure all of the columns from the child table are part of the index.
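As a sketch, suppose a custom 1:M link stores the parent Id in a custom column X_PARENT_ACCT_ID on the child extension table S_CONTACT_X (both the column and the index names here are hypothetical). The supporting index DDL would look something like:

```sql
-- Hypothetical: index the foreign key column on the M side of the link.
CREATE INDEX SIEBEL.S_CONTACT_X_M1
    ON SIEBEL.S_CONTACT_X (X_PARENT_ACCT_ID);

-- If the join also uses a constraint column (join spec + join constraint),
-- include every joined column from the child table in the index:
CREATE INDEX SIEBEL.S_CONTACT_X_M2
    ON SIEBEL.S_CONTACT_X (X_PARENT_ACCT_ID, X_TYPE_CD);
```

In practice you would define the index on the table object in Siebel Tools and apply it to the database, but the resulting DDL is equivalent to the above.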

Be aware that all the perfectly planned indexes in the world will frequently be ignored if there is a sort spec on a query.  The sort essentially takes precedence and any index that optimizes the sort will usually be used to the exclusion of other indexes that perhaps optimize what a user is doing on that view.  I frequently see performance problems on visibility views (All/All Across) without any query refinement at all.  When this occurs, it is usually because of the All Mode Sort user property settings.  If you are seeing performance problems on an All view, try changing the settings of that property for the BC to fix the issue.
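For reference, the setting in question is a business component user property in Siebel Tools; the BC name below is illustrative, and the value is one of the documented options (TRUE, FALSE, or Normal):

```
Business Component:  Account
User Property Name:  All Mode Sort
Value:               FALSE    (suppress the sort spec under All/All Across visibility)
```

Test the change carefully: removing the sort changes the order records appear in for users on those views.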

Here is a general methodology for identifying and fixing performance issues.

  • For the OM component having the issue, change the Server Configuration event logging level to 4 for these events:
    • SQL Parse and Execute
    • Object Manager SQL Log
  • Execute the operation that performs the slow query
  • Open the corresponding OM log and find the SQL statement representing the slow query
  • Finding the statement can be done in a couple of ways, but I use the following:
    • Query for this string in the log '***** SQL Statement Execute Time'
    • Look at the seconds for this line and the subsequent '***** SQL Statement Initial Fetch Time' to see a large value
  • Copy the preceding SQL statement into a SQL editor such as Toad or Benthic (or whatever suits your fancy), swapping out the bind variables
  • Run an explain plan on the statement
    • Look for a line that says Table Access Full.  If you see such a line, note the table being accessed that way and look back at the where clause to see how the SQL statement joins to that table.  Then check in Tools whether there is an index on that table covering the columns used in the join.
    • If indexes are not an issue, but there is an Order By in the SQL statement, try commenting out the Order By and rerunning the explain plan to see how it changes.  If you see the explain plan change significantly (the cost goes down), then confirm that you really need the sort spec in the particular scenario you are in.
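The last two steps can be sketched in SQL against Oracle. The query itself is hypothetical; in practice you would paste the slow statement copied from the OM log, with literals substituted for the bind variables:

```sql
-- Hypothetical slow query from the OM log, binds replaced with literals.
EXPLAIN PLAN FOR
SELECT T1.ROW_ID, T1.NAME
  FROM SIEBEL.S_ORG_EXT T1,
       SIEBEL.S_ORG_EXT_X T2
 WHERE T2.PAR_ROW_ID = T1.ROW_ID
   AND T2.ATTRIB_03 = 'Some Value'
 ORDER BY T1.NAME;   -- try commenting this out and re-running

-- Display the plan; look for TABLE ACCESS FULL operations and compare
-- the plan cost with and without the ORDER BY.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the full scan is on the table you join through a new foreign key, that points back at a missing index; if the cost drops sharply without the ORDER BY, that points at the sort spec.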
This is really just meant to be a way to find low-hanging-fruit performance issues.  It is important to configure with performance in mind (especially when using script or workflow algorithms).  Other sources of performance bottlenecks include (but are not limited to):
  • Synchronous interface round trips
  • File System operations
  • Network bandwidth  (especially if using a VPN)
  • Memory or CPU bound servers

Comments:

  1. Even if we have a "not equal to" clause in the query, well-planned indexes are ignored, resulting in performance degradation.

  2. This is true. Also using 'Or'. I have sometimes found that this can be avoided with a clever use of a sub/super LOV. For instance if you have a status with 6 values, a user may often wish to just get open activities and so run a query where status <> 'Cancelled' and status <> 'Done'. You could instead offer a supertype (or 'status group') populated automatically from the status that groups the 6 values into 3 like values, with one of them being 'Closed'. When analyzing the system, you might want to take out very frequent queries or PDQs and look for alternatives to having the queries written the way they are.
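     In SQL terms (table and column names here are illustrative), the idea is to trade inequality predicates for a single equality:

     ```sql
     -- NOT-equal predicates tend to defeat index use:
     SELECT * FROM S_EVT_ACT
      WHERE STATUS <> 'Cancelled' AND STATUS <> 'Done';

     -- An equality test on a derived "status group" column is index-friendly
     -- (X_STATUS_GROUP is hypothetical, kept in sync from STATUS):
     SELECT * FROM S_EVT_ACT
      WHERE X_STATUS_GROUP = 'Open';
     ```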

  3. Hello, all great comments on this blog. Note that you can use the Germain Customer Validation tool for Siebel to help detect all these issues (e.g. bad eScript, slow workflow, missing indexes, etc.): http://germainsoftware.com

  5. What does "Initial Fetch Time" mean? I can't understand it; can somebody tell me?
