Tuesday, September 13, 2011

Performance Tuning Methodology

I recently had an opportunity to do a bit of performance tuning on a newly deployed production application and thought I would share a methodology for tackling some of the low-hanging fruit, sort of the 80/20 rule of Siebel performance tuning.  My experience is that with Siebel 7.8 and higher, on Oracle 10 and higher, most performance issues are Siebel configuration issues.  Of those, most of the issues fall into one of two categories:
  • Missing Indexes
  • Sort Specs
When customizing Siebel, you will frequently create new relationships between logical objects via a new foreign key.  There should always be a corresponding index for that foreign key on the M side of the 1:M or M:M linked table.  Typically it is just a single Id column, but if for some reason there are multiple columns (perhaps a join spec and a join constraint), make sure all of the columns from the child table are part of the index.
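
For illustration, this is roughly what such a supporting index boils down to at the database level.  The table, column and index names here are hypothetical, and in practice you would define the index in Siebel Tools and apply it from there so it remains in the repository:

-- Hypothetical example: a custom foreign key column X_REGION_ID added to
-- S_ORG_EXT needs a supporting index (adjust the schema owner as needed).
CREATE INDEX SIEBEL.CX_ORG_EXT_IDX1
    ON SIEBEL.S_ORG_EXT (X_REGION_ID);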

Be aware that all the perfectly planned indexes in the world will frequently be ignored if there is a sort spec on a query.  The sort essentially takes precedence, and any index that optimizes the sort will usually be used to the exclusion of other indexes that might optimize what the user is actually doing on that view.  I frequently see performance problems on visibility views (All/All Across) without any query refinement at all.  When this occurs, it is usually because of the All Mode Sort user property settings.  If you are seeing performance problems on an All view, try changing that property on the BC to fix the issue.
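
For reference, the property in question is a business component user property.  Here is a minimal sketch; the documented values are TRUE, FALSE and Normal, but test which one gives acceptable behavior for your data volumes and verify the exact semantics on Bookshelf for your version:

Business Component User Property
    Name:  All Mode Sort
    Value: FALSE      (or TRUE / Normal; confirm the behavior against Bookshelf before relying on it)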

Here is a general methodology for identifying and fixing performance issues.

  • For the OM component having the issue, change the Server Configuration event logging level to 4 for these events:
    • SQL Parse and Execute
    • Object Manager SQL Log
  • Execute the operation that performs the slow query
  • Open the corresponding OM log and find the SQL statement representing the slow query
  • Finding the statement can be done in a couple of ways, but I use the following:
    • Search the log for the string '***** SQL Statement Execute Time'
    • Look at the seconds reported on that line and on the subsequent '***** SQL Statement Initial Fetch Time' line for a large value
  • Copy the preceding SQL statement into a SQL editor such as Toad or Benthic or whatever suits your fancy, substituting literal values for the bind variables
  • Run an explain plan on the statement (a sketch of this step follows the list)
    • Look for a line that says TABLE ACCESS FULL.  If you see such a line, look at the table being accessed that way and look back at the where clause to see how the SQL statement is joining to that table.  Then look in Tools to see if there is an index on that table for the columns used in the join.
    • If indexes are not an issue, but there is an Order By in the SQL statement, try commenting out the Order By and rerunning the explain plan to see how it changes.  If the explain plan changes significantly (the cost goes down), then confirm that you really need the sort spec in the particular scenario you are in.
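
Here is a minimal sketch of that explain plan step in an Oracle session (Toad, SQL*Plus, etc.).  The SELECT below is a hypothetical stand-in for the statement copied from the OM log, with a literal substituted for the bind variable; the schema owner and the custom column are assumptions.

-- Hypothetical stand-in for the statement copied from the OM log
EXPLAIN PLAN FOR
SELECT T1.ROW_ID, T1.NAME
  FROM SIEBEL.S_ORG_EXT T1            -- adjust the schema owner as needed
 WHERE T1.X_REGION_ID = '1-ABC123'    -- hypothetical custom foreign key column
 ORDER BY T1.NAME;

-- Display the plan (Oracle 9.2 and higher); look for TABLE ACCESS FULL
-- operations and compare the cost with and without the ORDER BY.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);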
This is really just meant to be a way to find low-hanging-fruit performance issues.  It is important to configure with performance in mind (especially when using script or workflow algorithms).  Other sources of performance bottlenecks include (but are not limited to):
  • Synchronous interface round trips
  • File System operations
  • Network bandwidth  (especially if using a VPN)
  • Memory or CPU bound servers

Monday, September 12, 2011

eScript Framework on 8.1

Converting the eScript framework to 8.1 proved a bit troublesome for me, as the Siebel strong type engine has apparently dropped support for prototyping Siebel objects, such as a Business Service.  This makes the implementation a bit less clean: without being able to declare a prototype of the Log or Frame objects on application start, we are left making every framework function a child of the Application object.  That being the case, I consolidated the Frame and Log objects from the 7.8 framework into a single Utility object, since there was no longer much advantage in separating them.  Instead of the elegant 7.8 calls:

Log.Stack("MyMethod",1);
Log.Step("Log the time as "+Frame.Timestamp("DateTimeMilli"),3);
Log.Vars("VariableName", varValue,3)
Log.Unstack("",1);

we instead do this:

TheApplication().logStack("Write",this)
TheApplication().Utility.logStep("Log the time as "+
    TheApplication().Utility.Timestamp("DateTimeMilli"));
TheApplication().Utility.logVars("VariableName", varValue)
TheApplication().Unstack("");

Oh well.  To mitigate this somewhat, I have added a number of enhancements since the initial series of posts, which I will try to discuss sometime soon.
  • The log level is now tied automatically to the function doing the logging (Stack/Unstack vs. variables, for instance), so there is no need for the numeric last parameter to the logging functions (though it is still accepted as an override)
  • Added support for Unix file systems
  • Standardized the identification of record Ids in the log (by passing the 'this' reference, the framework appends the row Id for methods with Write, Delete and Invoke in the name)
To implement the basic framework in 8.1, you need something like this in the Application Start event:
        this.Utility = TheApplication().GetService("ETAR Utilities");
        this.Utility.Init();
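
For what it is worth, here is a hedged sketch of what the Application-level wrappers might look like, declared in the Application's server script so that they become methods callable via TheApplication().  The delegation targets (this.Utility.logStack and this.Utility.logUnstack) are assumptions based on the calls shown above; the real framework's method names may differ.

// Sketch only: functions declared in the Application's general/declarations
// section become callable as TheApplication().logStack(...) etc. from any
// other script.  They simply delegate to the cached Utility service that
// Application_Start attached to the Application object.
function logStack(sMethod, oThis)
{
    this.Utility.logStack(sMethod, oThis);   // assumed Utility method name
}

function Unstack(sReturn)
{
    this.Utility.logUnstack(sReturn);        // assumed Utility method name
}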

Here is the Declarations section:

var gsOutPutFileName;
var gsFileName;
var gsLogMode;
var giIndent = 2; //Indent child prop sets this many spaces to the right for each level down.
var giPSDepth = 0; // How deep in the property set tree, what level
var gaFunctionStack = new Array(); //used in debugStack function to store called functions
var giStackIndex = 0; //Where in the function stack the current function resides
var gsIndent = ''; //used in debug methods to identify stack indents
var giLogBuffer = 0;
var giLogLines = 0;
var gsLogPath = "";
var gsLogCache = "";
var gsLogSession = "";
var giErrorStack = 0;
var ge = new Object();
var gStack = new Object();
var gCurrentLogLvl;

The Utilities business service is a cached service in Tools.  Its Init function looks like this:

giErrorStack = 0;
ExtendObjects();
gsLogMode = GetSysPref("Framework Log Mode");
gsLogMode = (gsLogMode == "" ? "FILE" : gsLogMode.toUpperCase());
gsLogSession = TimeStamp("DateTimeMilli");

if (TheApplication().GetProfileAttr("ETAR User Log Level") != "")
    gCurrentLogLvl = TheApplication().GetProfileAttr("ETAR User Log Level");
else gCurrentLogLvl = GetSysPref("CurrentLogLevel");
giLogBuffer = GetSysPref("Log Buffer");
gsLogPath = GetSysPref("Framework Log Path");
var os = "";
try {
     os = Clib.getenv("OS");                 // returns null if the variable is not set
     if (os == null) os = "UNIX Based";
} catch(e) { os = "UNIX Based"; }
try {
   gsFileName = "Trace-"+TheApplication().LoginName()+"-"+gsLogSession+".txt"
  //A Windows OS indicates a thick client. Assume the path is the dynamicly
  //determined Siebel_Home\Log directory, or ..\log
  if (os.substring(0, 7) == "Windows") {
//  gsLogPath = gsLogPath.replace(/\\$/, "");  //Remove trailing backslash if used
//  gsLogPath = gsLogPath.replace(/\x47/, "\\");  //switch invalid OS directory seperators
    gsLogPath = "..\\Log\\";
    gsOutPutFileName = gsLogPath+gsFileName;
  } else {
    gsLogPath = gsLogPath.replace(/\/$/, "");  //Remove trailing slash if present
    gsLogPath = gsLogPath.replace(/\\/g, "/");  //switch invalid OS directory separators
    gsLogPath = gsLogPath+"/";
    gsOutPutFileName = gsLogPath+gsFileName;
 }
} catch(e) {
  gsLogPath = "";
  gsOutPutFileName = gsFileName;
}
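
GetSysPref itself is not shown here.  Below is a minimal sketch of one way to write it, querying system preferences through the business object layer; the 'System Preferences' BO/BC and the Name/Value field names are assumptions to check against your repository, and the framework's real version may well cache the values.

function GetSysPref(sPrefName)
{
    // Sketch only: look up a value from Administration - System Preferences.
    var sValue = "";
    var boPref = TheApplication().GetBusObject("System Preferences");
    var bcPref = boPref.GetBusComp("System Preferences");

    bcPref.SetViewMode(AllView);
    bcPref.ClearToQuery();
    bcPref.ActivateField("Value");
    bcPref.SetSearchSpec("Name", sPrefName);
    bcPref.ExecuteQuery(ForwardOnly);
    if (bcPref.FirstRecord())
        sValue = bcPref.GetFieldValue("Value");

    return (sValue);
}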

Tuesday, March 29, 2011

Tools Bleg

Don't get me wrong. I love Siebel Tools. Compared to other enterprise systems where development for the most part involves modifying script, Siebel has a very elegant development platform. OK, all that being said, after developing in Tools for over eleven years (odd writing that), there are some things I would love to do better to make my development experience more efficient. So to that end I thought I would put some thoughts out into the cloud to see if anyone has thought of a workaround for any of these items:
  • Column Preferences. Is it just me, or does the Tools client not save preferences the way the Siebel client does? Rearranging columns usually works, but changed widths do not seem to be saved.
  • PDQs. The idea of Bookmarks is nice, but I hate the fact that drilling down or using them loses the context of my explorer pane when I go back. PDQs on every object like within the Siebel Client (and the ability to set default PDQs for each view) would do wonders.
  • Drilldowns. Speaking of drilldowns, is it really necessary for drilling down to collapse the rest of my explorer pane, hence refreshing all the queries on other objects?
  • Expose Tab Order on Applets. I am tempted to try this one out myself one day because it seems doable. Who knows.
  • Applet Wizard. Not for creating a new applet; that part is fine. But to synchronize an applet with its BC down the road when I want to add a new field, a wizard would be a much easier way to add a new column than adding a control or list column and then adding it to the web template.
  • Allow sync of Meta Data needed by Tools without Remote Sync. This might be a bit more out there, but I find it annoying that Users (Help > About Record) and LOVs cannot be synced with a 'Get'. I know you can get them with a remote sync, but more and more, a lot of clients do not use Remote or use it so infrequently that it is not emphasized, and it is a pain to keep my remote client in sync with the server in a development environment anyway. This might sound minor, but like I said, it annoys me.
I have mainly limited this list to just applying functionality that already exists in the Siebel Client or to exposing data which I am pretty sure is there to be exposed. Not really trying to create a forum for adding "New" features. I may add to this list in the future, but feel free to add your own wishes/solutions in comments.

Thursday, January 20, 2011

Building a BI Developer's SuperView

Another limitation I find irritating when it comes to building BI templates is how basic the sample file generator is. My main beef is that it just takes the first 10 records in the Integration Object and spits them out. If you have a complicated IO with child ICs, it is possible, and even likely, that those first ten records do not have the child detail records you need to test your report output. There are some ways around this, like hard coding a search spec on the BC against a thick client partial compile to generate a file with the data you want, but that seems so inelegant. My other gripe with this feature is that the report developer once again either needs to have a Siebel thick client or access to the Siebel Server file system to actually get the xml file produced. It seems like the whole point of all the BI Administration views is to avoid having to go to the file system. What to do...

Caveat emptor: the configuration steps below are meant to give you an idea. I am posting this after I finished, to highlight what I recall as the important pieces, so not every step is included. You will need to create the new custom table CX_TMPL (or use another), create all links, applets, and view objects, make the BO/screen changes, and deploy them.

I built a view with the same IO BC-based applet as the vanilla view on top, and child applets for both attachments and a new object that is essentially a search spec. First, the attachment BC. This is a new BC that you can copy from an existing attachment BC, changing the names around. Here is mine, called 'Sample IO Attachment', based on S_FILE_ATT. Use the field name prefix 'Sample' instead of whichever prefix is used on the BC you are copying (be sure to set the user property DefaultPrefix to 'Sample' too):
  • IO Id: Column = PAR_ROW_ID, Force Active = Y, Predefault Value = Parent: 'Repository Integration Object.Id', Text Length = 15, Type = DTYPE_ID
  • Parent Key: Column = X_PARENT_KEY, Predefault Value = Parent: 'Repository Integration Object.Name', Text Length = 100, Type = DTYPE_TEXT


The Search Spec applet is based on a custom BC, 'Report IO Sample File Template', based on the new table, CX_TMPL (I use this table for other things too so I type spec each record):
  • Name: Column = NAME, Predefault Value = Field: "Id", Text Length = 100, Type = DTYPE_TEXT
  • Parent Id: Join = S_INT_OBJ, Column = ROW_ID, Force Active = Y, Predefault Value = Parent: "Repository Integration Object.Id", Text Length = 15, Type = DTYPE_ID
  • Parent Name: Column = PARENT_FLD, Predefault Value = Parent: "Repository Integration Object.Name", Text Length = 50, Type = DTYPE_TEXT
  • Search Specification: Column = CONSTRAINT, Force Active = Y, Text Length = 250, Type = DTYPE_TEXT
  • Type: Column = TYPE, Predefault Value = SAMPLE_IO_CONSTRAINT, Text Length = 30, Type = DTYPE_TEXT
  • Number of Records: Column = LN_NUM, Text Length = 10, Type = DTYPE_INTEGER


The join, S_INT_OBJ, uses the join specification Parent Name = NAME (the BC's Parent Name field against the NAME column on S_INT_OBJ). Using the name instead of the Id allows the search specs to remain visible after repository moves.

You will also need the following Named Method User Property:

"GenerateConstrainedData", "INVOKESVC", "Report IO Sample File Template", "Workflow Process Manager", "RunProcess", "'ProcessName'", "'Export Sample IO To File'", "SearchConstraint", "[Search Specification]", "'IOName'", "[Parent Name]", "Path", "'..\XMLP\Data'", "Object Id", "[Parent Id]", "PageSize", "[Number of Records]"

This user property backs the button you will need to place on the applet based on this BC. On that applet (based on class CSSSWEFrameListBase), add a button that invokes the method 'GenerateConstrainedData'. No additional script should be needed there.

Create a Service Flow Workflow Process called 'Export Sample IO To File'


Here are the Process Properties:

  Name              In/Out  Data Type
  FileName          In      String
  IOName            In      String
  PageSize          In      String
  Path              In      String
  SearchConstraint  In      String
  SiebelMessage     None    Hierarchy
  ViewMode          In      String


The first 'Echo' step is a Business Service based on Workflow Utilities, Echo method. This step sets up all the variables used later in the process. Here are the arguments:

  I/O     Argument          Type              Value/Property Name
  Input   IOName            Process Property  IOName
  Input   PageSize          Process Property  PageSize
  Input   Path              Process Property  Path
  Input   SearchConstraint  Process Property  SearchConstraint
  Input   ViewMode          Process Property  ViewMode
  Output  FileName          Expression        IIF([&FileName] is not null, [&FileName], [&IOName])


The next 'Export IO' step is a Business Service based on EAI Siebel Adapter, QueryPage method. This step queries the integration object. Here are the arguments:

  I/O     Argument             Type              Value/Property Name
  Input   OutputIntObjectName  Process Property  IOName
  Input   PageSize             Process Property  PageSize
  Input   SearchSpec           Process Property  SearchConstraint
  Input   ViewMode             Process Property  ViewMode
  Output  SiebelMessage        Output Argument   SiebelMessage


The next 'Write to File' step is the Business Service, EAI XML Write to File, WriteEAIMsg method. This step writes the property set out as an XML document to the file system. Here are the arguments:

  I/O    Argument       Type              Value/Property Name
  Input  FileName       Expression        [&Path]+"\"+[&Process Instance Id]+"_"+[&FileName]+".xml"
  Input  SiebelMessage  Process Property  SiebelMessage
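
Incidentally, if you want to sanity check these two EAI steps outside of the workflow, a rough eScript equivalent is below. The IO name, search spec and output path are hypothetical placeholders; this is just a sketch of the same QueryPage and WriteEAIMsg calls, not part of the configuration above.

var svcQuery = TheApplication().GetService("EAI Siebel Adapter");
var svcWrite = TheApplication().GetService("EAI XML Write to File");
var psIn = TheApplication().NewPropertySet();
var psOut = TheApplication().NewPropertySet();
var psWriteIn = TheApplication().NewPropertySet();
var psWriteOut = TheApplication().NewPropertySet();

// Query the integration object with a constrained search spec and row limit
psIn.SetProperty("OutputIntObjectName", "Sample Account IO");    // hypothetical IO name
psIn.SetProperty("SearchSpec", "[Account.Name] LIKE 'A*'");      // hypothetical search spec
psIn.SetProperty("PageSize", "10");
svcQuery.InvokeMethod("QueryPage", psIn, psOut);

// Write the resulting SiebelMessage out as an XML sample file
psWriteIn.SetProperty("FileName", "..\\XMLP\\Data\\Sample.xml"); // hypothetical path
psWriteIn.AddChild(psOut.GetChild(0));                           // the SiebelMessage hierarchy
svcWrite.InvokeMethod("WriteEAIMsg", psWriteIn, psWriteOut);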


The final 'Attach' step is another Business Service, this one custom. The basic logic, first described in Oracle document 477534.1, is to create an attachment record from a file on the file system (I have made some improvements which I will perhaps discuss another day). Here are the arguments:

  I/O    Argument                  Type              Value/Property Name
  Input  AttBusinessComponent      Literal           Sample IO Attachment
  Input  AttachmentFieldName       Literal           SampleFileName
  Input  BusinessObject            Literal           Repository Integration Object
  Input  File                      Expression        [&Path]+"\"+[&Process Instance Id]+"_"+[&FileName]+".xml"
  Input  ObjectId                  Process Property  Object Id
  Input  PrimaryBusinessComponent  Literal           Repository Integration Object

Wednesday, January 19, 2011

BI - Upload Limitation

I have recently been designated the BI technical resource on my project, so I am looking at the BI capabilities (on 7.8) for the first time. Despite a fairly complicated and mistake-laden patch upgrade, which I do not even want to get into, it is a pretty powerful tool, much better architected than Actuate. Anyway, there are also some pretty glaring limitations in how it is administered that require so little effort to fix that I decided to just go ahead and fix them.

My main beef is that the architecture requires your BI report developer to have access to both the BI file system and the Siebel Server file system. I suppose you could set this up in a way that minimizes security risk, but it just seems so unnecessary. Essentially, to upload a new BI Report Template, the developer creates a record in the BI Report Template administration view, attaches the two template files (an RTF and an XLF) and clicks the upload button. So far, so good. The problem is that these template files must also exist in a specific place in the Siebel Server file system in order to generate a report, but the code behind that button does not take the extra step of copying the files to where they need to go. Also, there is an existing product defect where modifications to an existing report record require the developer to go into the BI file system and delete the entire directory containing that report template. So that is where I step in.

First I added two new system preferences indicating the locations of the BI and Siebel home directories. There is a way to grab environment variables through script, but I did not feel like investigating this, so let's call that phase II (a sketch follows the table below). For example, here are my two:


  Name                 Value
  BIHomeDirectory      E:\OraHome
  SiebelHomeDirectory  E:\sea78\siebsrvr
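
For the record, the phase II version would not be much work. Here is a hedged sketch using Clib.getenv; SIEBSRVR_ROOT and ORACLE_HOME are assumptions about how the server environment is set up, so verify them before relying on this.

// Sketch only: fall back to environment variables when the system
// preferences have not been populated.
var sSiebelHome = Frame.GetSysPref("SiebelHomeDirectory");
if (sSiebelHome == null || sSiebelHome == "")
    sSiebelHome = Clib.getenv("SIEBSRVR_ROOT");   // assumed to be set for the server process
var sBIHome = Frame.GetSysPref("BIHomeDirectory");
if (sBIHome == null || sBIHome == "")
    sBIHome = Clib.getenv("ORACLE_HOME");         // assumed BI Publisher Oracle home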


Then, we need to trap the call that uploads the template files. This call is performed from 'Report Template BC' by the 'Upload' method. We need to always delete the directory before this upload is called. We also want to delete the existing template file from the Siebel server file system. Here is a script to place in the PreInvokeMethod event to accomplish that (it also references the Log and Frame objects from the framework):

switch (MethodName) {
    case "Upload":
        try {
            Log.StartStack("Business Component", this.Name() + ".PreInvoke", MethodName, 1);
            this.WriteRecord();

            var sReturn, sCommand;
            var sSiebel = Frame.GetSysPref("SiebelHomeDirectory") + "\\XMLP\\TEMPLATES";
            var sPath = Frame.GetSysPref("BIHomeDirectory");
            var sFile = this.GetFieldValue("ReportTmplFileName")
                + "." + this.GetFieldValue("ReportTmplFileExt");

            sPath = sPath + "\\XMLP\\XMLP\\Reports\\SiebelCRMReports\\"
                + this.GetFieldValue("Report Name");
            Log.stepVars("BI Report Path", sPath, 3);

            sCommand = 'rmdir "' + sPath + '" /S /Q';
            sReturn = Clib.system(sCommand);
            Log.stepVars("Command", sCommand, "Success?", sReturn, 3);

            sCommand = 'del "' + sSiebel + '\\' + sFile + '"';
            sReturn = Clib.system(sCommand);
            Log.stepVars("Command", sCommand, "Success?", sReturn, 3);
        } catch(e) {
            Log.RaiseError(e);
        } finally {
            Log.Unstack("", 1);
        }
        break;
}
return (ContinueOperation);

OK. That addresses the product defect for updates. Now the second part is to copy the template files to the Siebel server file system once they are uploaded. The following script can be added to the InvokeMethod event:

switch (MethodName) {
    case "Upload":
        try {
            Log.StartStack("Business Component", this.Name() + ".Invoke", MethodName, 1);

            var sReturn, sCommand;
            var sSiebel = Frame.GetSysPref("SiebelHomeDirectory") + "\\XMLP\\TEMPLATES";
            var sPath = Frame.GetSysPref("BIHomeDirectory");
            var sFile = this.GetFieldValue("ReportTmplFileName")
                + "." + this.GetFieldValue("ReportTmplFileExt");

            sPath = sPath + "\\XMLP\\XMLP\\Reports\\SiebelCRMReports\\"
                + this.GetFieldValue("Report Name");
            Log.stepVars("Source Path", sPath, "Target Path", sSiebel, "File to copy", sFile, 3);

            sCommand = 'copy "' + sPath + '\\' + sFile + '" "' + sSiebel + '\\' + sFile + '"';
            sReturn = Clib.system(sCommand);
            Log.stepVars("Command", sCommand, "Success?", sReturn, 3);
        } catch(e) {
            Log.RaiseError(e);
        } finally {
            Log.Unstack("", 1);
        }
        break;
}

And there you go.

Friday, January 7, 2011

ADM - List Of Values

There are plenty of posts on support web discussing the issues with migrating LOVs, but for my own sanity, I thought I would summarize all of the relevant issues in one place.

First, we need to address the defects. These are documented on support in Document 731411.1 but I will summarize here:
(1) Go to BC 'List Of Values Child (UDA)'
(2) Add a new field 'Parent Type' based on Join 'Parent LOV' and Column 'TYPE' with Text Length '30'.
(3) Expand the pickmap for the 'Parent' field. Replace pickmap field 'Type' with 'Parent Type' and uncheck the Constrain flag.
(4) Go to the integration object 'UDA List Of Values'
(5) Find the Integration component 'List Of Values Child (UDA)'
(6) Add a new field to the integration component with Name = 'Parent Type', Data Type = 'DTYPE_TEXT', Length = '30', Type = 'Data', External Name = 'Parent Type', External Data Type = 'DTYPE_TEXT', External Length = '30', External Sequence = '38', XML Tag = 'ParentType'
(8) Compile changes.
The SR then goes into some more detail on why, after all that, it still does not quite work. To understand, we need to see that the LOV ADM Integration Object is hierarchical in one dimension. That is, there is the LOV_TYPE record and then there are the value records. But LOVs are frequently hierarchical in two dimensions, by virtue of the Parent value. What I mean is that a given LOV value record will always have one 'parent' record, its type or technical parent, and may have a second parent record, its functional parent, if you will.

ADM loads the first, technical parent in the standard way, through the relationships of the Integration Object. To load the functional parent though, ADM must run in two passes, the first to create all the parent and child records, and the second to relate them. This is necessary because we cannot guarantee the sequence in which LOV value records will be placed in the extract file. If the value records do not already exist in the target, and the parent sorts alphabetically (or however else we chose to sort the records) after the child, the import would fail if ADM did not take this approach. ADM achieves the two passes by virtue of the ADM Data Type explorer. You will notice that the explorer does not actually specify the foreign key fields of an object to link them to each other. Its only purpose is to run ADM in multiple passes. But the twist is that ADM will actually process dependent data types set up in the explorer in reverse order, importing the children before the parent. I personally find this confusing from a terminology perspective. Perhaps a better way of naming these Data Types is to use 'LOV-2ndPass' instead of 'LOV-HierParent' and 'LOV-1stPass' instead of 'LOV-HierChild'. This way, when we set up the search specifications for an ADM Export session, it is clear what we are trying to do.

OK, one more wrinkle to throw into the mix (just when you thought it was all making sense). There is actually a third parent relationship involved. That is the records that populate the S_LOV_REL table. I will be honest; I do not use the LOV explorer view that often and I don't really know what the point of this table is. In theory it can make LOVs M:M but I just don't think this is practical. Nevertheless, there are some vanilla uses of LOVs where these records are in fact used that way. The one that comes to mind is in payments, where the PAYMENT_TYPE_CODE values are children of the PAYMENT_METHOD_CODE and there are S_LOV_REL records created to store the relationships. The same issue applies when migrating these relationships. The related value must exist prior to the relationship being built.

One final note. I think the whole practice of not deleting LOVs is well intended but more likely to cause confusion than to solve anything. Here is why. Users can and will just change the Name/Value of a value record to something else, in which case any sense of history is lost anyway. There are no foreign key relationships to LOVs, so business data using these values is unaffected regardless. But others may disagree, so this step is completely optional. I remove the no-delete properties from the 'List Of Values Child (UDA)' BC and Integration Component. (I also allow deletes from the GUI, but that is a separate issue.) So my migration methodology is to synchronize values between environments for an initial release. You would take a different approach on a point release, where values are likely to have been added directly to production and therefore may not exist in your DEV and TEST environments.

Anyway, what are we trying to do? Quite simply, we are trying to create all the value records in pass 1, and then we need to relate them to each other in pass 2. I have already discussed how to group LOVs together for a release. This is where I diverge from Siebel's example, because I am trying to think of real-life scenarios where I am deploying releases, not just one LOV_TYPE. When creating the ADM Project/Session, here are the session items I use:


  Data Type    Child Delete  Deployment Filter
  LOV-2ndPass  Y             [Release] = '1.1'
  LOV-1stPass  N             [Release] = '1.1' AND [List Of Values Relationship.Name] IS NULL

What this means is that the first pass includes all LOV_TYPE records that have been marked for this release and all LOV value records related to them. The second part of the expression basically just ensures that no relationship records are included in the first pass. When ADM attempts to set the parent value on a child, it may not be able to find it, so it will log a warning and move on. In the second pass, ADM will load all the relationship records and set the parent values that it missed on the first pass. I have also set Child Delete to true on the second pass, so that this job effectively synchronizes the value records for the type records marked.