Thursday, January 20, 2011

Building a BI Developer's SuperView

Another limitation I find irritating when it comes to building BI templates is how basic the sample file generator is. My main beef is that it just takes the first 10 records in the Integration Object and spits them out. If you have a complicated IO with child ICs, it is possible, even likely, that those first ten records do not have the child detail records you need to test your report output. There are ways around this, like hard coding a search spec on the BC in a thick client partial compile to generate a file with the data you want, but that seems so inelegant. My other gripe with this feature is that the report developer once again needs either a Siebel thick client or access to the Siebel Server file system to actually get the XML file produced. It seems like the whole point of all the BI Administration views is to avoid having to go to the file system. What to do...

Caveat emptor: the configuration steps below are meant to give you an idea. I am posting this after the fact to highlight what I recall as the important pieces, so not every step is included. You will need to create the new custom table CX_TMPL (or use another), create all the links, applets, and view objects, make the BO/Screen changes, and deploy them.

First I built a view with the same IO BC based applet as the vanilla view on top, and child applets for both attachments and a new object which is essentially a search spec. Let's start with the attachment BC. This is a new BC which you can copy from an existing attachment BC, changing the names around. Here is mine, called 'Sample IO Attachment', based on S_FILE_ATT. Use the field name prefix 'Sample' instead of whichever prefix is used on the BC you are copying (be sure to set the User Property DefaultPrefix to 'Sample' too):
Name | Join | Column | Force Active | Predefault Value | Text Length | Type
IO Id |  | PAR_ROW_ID | Y | Parent: 'Repository Integration Object.Id' | 15 | DTYPE_ID
Parent Key |  | X_PARENT_KEY |  | Parent: 'Repository Integration Object.Name' | 100 | DTYPE_TEXT


The search spec applet is based on a custom BC, 'Report IO Sample File Template', built on the new table CX_TMPL (I use this table for other things too, so I type spec each record):
Name | Join | Column | Force Active | Predefault Value | Text Length | Type
Name |  | NAME |  | Field: "Id" | 100 | DTYPE_TEXT
Parent Id | S_INT_OBJ | ROW_ID | Y | Parent: "Repository Integration Object.Id" | 15 | DTYPE_ID
Parent Name |  | PARENT_FLD |  | Parent: "Repository Integration Object.Name" | 50 | DTYPE_TEXT
Search Specification |  | CONSTRAINT | Y |  | 250 | DTYPE_TEXT
Type |  | TYPE |  | SAMPLE_IO_CONSTRAINT | 30 | DTYPE_TEXT
Number of Records |  | LN_NUM |  |  | 10 | DTYPE_INTEGER


The join, S_INT_OBJ, is based on the specification of Parent Name = NAME. Using name instead of Id allows the search specs to remain visible after repository moves.

You will also need the following Named Method User Property:

"GenerateConstrainedData", "INVOKESVC", "Report IO Sample File Template", "Workflow Process Manager", "RunProcess", "'ProcessName'", "'Export Sample IO To File'", "SearchConstraint", "[Search Specification]", "'IOName'", "[Parent Name]", "Path", "'..\XMLP\Data'", "Object Id", "[Parent Id]", "PageSize", "[Number of Records]"

This user property enables the button you will need to place on the applet based on this BC. On that applet (based on class CSSSWEFrameListBase), add a button which invokes the method 'GenerateConstrainedData'. No additional script should be needed there.

Create a Service Flow Workflow Process called 'Export Sample IO To File'.


Here are the Process Properties:

Name | In/Out | Data Type
FileName | In | String
IOName | In | String
PageSize | In | String
Path | In | String
SearchConstraint | In | String
SiebelMessage | None | Hierarchy
ViewMode | In | String


The first 'Echo' step is a Business Service based on Workflow Utilities, Echo method. This step sets up all the variables used later in the process. Here are the arguments:

I/O | Argument | Type | Value/Property Name
Input | IOName | Process Property | IOName
Input | PageSize | Process Property | PageSize
Input | Path | Process Property | Path
Input | SearchConstraint | Process Property | SearchConstraint
Input | ViewMode | Process Property | ViewMode
Output | FileName | Expression | IIF([&FileName] is not null, [&FileName], [&IOName])


The next 'Export IO' step is a Business Service based on EAI Siebel Adapter, QueryPage method. This step queries the integration object. Here are the arguments:

I/O | Argument | Type | Value/Property Name
Input | OutputIntObjectName | Process Property | IOName
Input | PageSize | Process Property | PageSize
Input | SearchSpec | Process Property | SearchConstraint
Input | ViewMode | Process Property | ViewMode
Output | SiebelMessage | Output Argument | SiebelMessage
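
To make the QueryPage semantics concrete, here is a rough eScript equivalent of what this step does. This is illustrative only, not part of the configuration above; the IO name and search spec shown are made-up placeholders:

var svc = TheApplication().GetService("EAI Siebel Adapter");
var inputs = TheApplication().NewPropertySet();
var outputs = TheApplication().NewPropertySet();
// Placeholder IO name and search spec
inputs.SetProperty("OutputIntObjectName", "Account Interface");
inputs.SetProperty("SearchSpec", "[Account.Name] LIKE 'A*'");
inputs.SetProperty("PageSize", "10");
svc.InvokeMethod("QueryPage", inputs, outputs);
// outputs now holds the SiebelMessage hierarchy that the next step writes to file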


The next 'Write to File' step is the Business Service, EAI XML Write to File, WriteEAIMsg method. This step writes the property set out as an XML document to the file system. Here are the arguments:

I/O | Argument | Type | Value/Property Name
Input | FileName | Expression | [&Path]+"\"+[&Process Instance Id]+"_"+[&FileName]+".xml"
Input | SiebelMessage | Process Property | SiebelMessage


The final 'Attach' step is another Business Service, this one custom. The basic logic, attaching a file from the file system to a business component, is first described in Oracle document 477534.1 (I have made some improvements which I will perhaps discuss another day). Here are the arguments:

I/O | Argument | Type | Value/Property Name
Input | AttBusinessComponent | Literal | Sample IO Attachment
Input | AttachmentFieldName | Literal | SampleFileName
Input | BusinessObject | Literal | Repository Integration Object
Input | File | Expression | [&Path]+"\"+[&Process Instance Id]+"_"+[&FileName]+".xml"
Input | ObjectId | Process Property | Object Id
Input | PrimaryBusinessComponent | Literal | Repository Integration Object
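
If you want to smoke test the workflow without going through the applet button, you can invoke it directly through the Workflow Process Manager business service, passing the same inputs the Named Method user property does. This is just a sketch, not part of the configuration above; the IO name, search spec, and row Id values are placeholders:

var svc = TheApplication().GetService("Workflow Process Manager");
var inputs = TheApplication().NewPropertySet();
var outputs = TheApplication().NewPropertySet();
inputs.SetProperty("ProcessName", "Export Sample IO To File");
inputs.SetProperty("IOName", "Account Interface");            // placeholder IO name
inputs.SetProperty("SearchConstraint", "[Name] LIKE 'A*'");   // placeholder search spec
inputs.SetProperty("PageSize", "25");
inputs.SetProperty("Path", "..\\XMLP\\Data");
inputs.SetProperty("Object Id", "0-123");                     // placeholder Repository Integration Object row Id
svc.InvokeMethod("RunProcess", inputs, outputs);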

Wednesday, January 19, 2011

BI - Upload Limitation

I have recently been designated the BI technical resource on my project, so I am looking at the BI capabilities (on 7.8) for the first time. Despite a fairly complicated and mistake-laden patch upgrade which I do not even want to get into, it is a pretty powerful tool, much better architected than Actuate. Anyway, there are also some pretty glaring limitations in how it is administered that require so little effort to fix, I decided to just go ahead and fix them.

My main beef is that the architecture requires your BI report developer to have access to both the BI file system and the Siebel Server file system. I suppose you could set this up in a way that minimizes security risk, but it just seems so unnecessary. Essentially, to upload a new BI Report Template, the developer creates a record in the BI Report Template administration view, attaches the two template files (an RTF and an XLF) and clicks the upload button. So far, so good. The problem is that these template files must also exist in a specific place in the Siebel Server file system to generate a report, but the code behind that button does not take the extra step of copying the files to where they need to go. Also, there is an existing product defect where modifications to an existing report record require the developer to go into the BI file system and delete the entire directory containing that report template. So that is where I step in.

First I added two new System Parameters indicating the locations of the BI and Siebel home directories. There is a way to grab environment variables through script, but I did not feel like investigating this, so let's call that phase II (a rough sketch of what that might look like follows the table below). For example, here are my two:


Name | Value
BIHomeDirectory | E:\OraHome
SiebelHomeDirectory | E:\sea78\siebsrvr
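
For what that phase II might look like, Siebel eScript does expose Clib.getenv(), so something along these lines could eventually replace the system parameters. This is only a sketch; the environment variable names are placeholders for whatever is actually defined on your server:

// Placeholder environment variable names; use whatever is set on your server
var sSiebelHome = Clib.getenv("SIEBEL_HOME");
var sBIHome = Clib.getenv("BIP_HOME");
if (sSiebelHome == null || sBIHome == null) {
   // Fall back to the system parameters if the variables are not defined
   sSiebelHome = Frame.GetSysPref("SiebelHomeDirectory");
   sBIHome = Frame.GetSysPref("BIHomeDirectory");
}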


Then, we need to trap the call that uploads the template files. This call is performed from 'Report Template BC' by the 'Upload' method. We need to always delete the directory before this upload is called. We also want to delete the existing template file from the Siebel server file system. Here is a script to place in the PreInvokeMethod event to accomplish that (there are also some references to the Log and Frame helper objects):

switch (MethodName) {
   case "Upload":
      try {
         Log.StartStack("Business Component", this.Name() + ".PreInvoke", MethodName, 1);
         this.WriteRecord();
         var sReturn, sCommand;
         var sSiebel = Frame.GetSysPref("SiebelHomeDirectory") + "\\XMLP\\TEMPLATES";
         var sPath = Frame.GetSysPref("BIHomeDirectory");
         var sFile = this.GetFieldValue("ReportTmplFileName")
            + "." + this.GetFieldValue("ReportTmplFileExt");

         sPath = sPath + "\\XMLP\\XMLP\\Reports\\SiebelCRMReports\\"
            + this.GetFieldValue("Report Name");
         Log.stepVars("BI Report Path", sPath, 3);

         // Delete the existing template directory from the BI home
         sCommand = 'rmdir "' + sPath + '" /S /Q';
         sReturn = Clib.system(sCommand);
         Log.stepVars("Command", sCommand, "Success?", sReturn, 3);

         // Delete the existing template file from the Siebel server file system
         sCommand = 'del "' + sSiebel + '\\' + sFile + '"';
         sReturn = Clib.system(sCommand);
         Log.stepVars("Command", sCommand, "Success?", sReturn, 3);
      } catch(e) {
         Log.RaiseError(e);
      } finally {
         Log.Unstack("", 1);
      }
      break;
}
return (ContinueOperation);

OK, that addresses the product defect for updates. Now the second part is to copy the template files to the Siebel server file system once they have been uploaded. The following script can be added to the InvokeMethod event:

switch (MethodName) {
   case "Upload":
      try {
         Log.StartStack("Business Component", this.Name() + ".Invoke", MethodName, 1);
         var sReturn, sCommand;

         var sSiebel = Frame.GetSysPref("SiebelHomeDirectory") + "\\XMLP\\TEMPLATES";
         var sPath = Frame.GetSysPref("BIHomeDirectory");
         var sFile = this.GetFieldValue("ReportTmplFileName")
            + "." + this.GetFieldValue("ReportTmplFileExt");

         sPath = sPath + "\\XMLP\\XMLP\\Reports\\SiebelCRMReports\\"
            + this.GetFieldValue("Report Name");
         Log.stepVars("Source Path", sPath, "Target Path", sSiebel, "File to copy", sFile, 3);

         // Copy the uploaded template file from the BI home to the Siebel server file system
         sCommand = 'copy "' + sPath + '\\' + sFile + '" "' + sSiebel + '\\' + sFile + '"';
         sReturn = Clib.system(sCommand);
         Log.stepVars("Command", sCommand, "Success?", sReturn, 3);
      } catch(e) {
         Log.RaiseError(e);
      } finally {
         Log.Unstack("", 1);
      }
      break;
}

And there you go.

Friday, January 7, 2011

ADM - List Of Values

There are plenty of posts on the support web discussing the issues with migrating LOVs, but for my own sanity, I thought I would summarize all of the relevant issues in one place.

First, we need to address the defects. These are documented on support in Document 731411.1, but I will summarize here:
(1) Go to the BC 'List Of Values Child (UDA)'.
(2) Add a new field 'Parent Type' based on Join 'Parent LOV' and Column 'TYPE' with Text Length '30'.
(3) Expand the pickmap for the 'Parent' field. Replace pickmap field 'Type' with 'Parent Type' and uncheck the Constrain flag.
(4) Go to the integration object 'UDA List Of Values'.
(5) Find the integration component 'List Of Values Child (UDA)'.
(6) Add a new field to the integration component with Name = 'Parent Type', Data Type = 'DTYPE_TEXT', Length = '30', Type = 'Data', External Name = 'Parent Type', External Data Type = 'DTYPE_TEXT', External Length = '30', External Sequence = '38', XML Tag = 'ParentType'.
(7) Compile the changes.
The SR then goes into some more detail on why, after all that, it still does not quite work. To understand, we need to see that the LOV ADM Integration Object is hierarchical in one dimension. That is, there is the LOV_TYPE record and then there are the value records. But LOVs are frequently hierarchical in two dimensions, by virtue of the Parent value. What I mean is that a given LOV value record will always have one 'parent' record, its type or technical parent, and may have a second parent record, its functional parent, if you will.

ADM loads the first, technical parent in the standard way, through the relationships of the Integration Object. To load the functional parent though, ADM must run in two passes: the first to create all the parent and child records, and the second to relate them. This is necessary because we cannot guarantee the sequence in which LOV value records will be placed in the extract file. If these value records do not exist in the target already, and the parent sorts alphabetically (or however else we choose to sort the records) after the child, ADM would error out if it did not take this approach. ADM achieves the two passes by means of the ADM Data Type explorer. You will notice that the explorer does not actually specify the foreign key fields that link objects to each other; its only purpose is to run ADM in multiple passes. The twist is that ADM processes dependent data types set up in the explorer in reverse order, importing the children before the parent. I personally find this confusing from a terminology perspective. Perhaps a better way of naming these Data Types is to use 'LOV-2ndPass' instead of 'LOV-HierParent' and 'LOV-1stPass' instead of 'LOV-HierChild'. This way, when we set up the search specifications for an ADM Export session, it is clear what we are trying to do.

OK, one more wrinkle to throw into the mix (just when you thought it was all making sense). There is actually a third parent relationship involved: the records that populate the S_LOV_REL table. I will be honest; I do not use the LOV explorer view that often and I don't really know what the point of this table is. In theory it can make LOVs M:M, but I just don't think this is practical. Nevertheless, there are some vanilla uses of LOVs where these records are in fact used that way. The one that comes to mind is in payments, where the PAYMENT_TYPE_CODE values are children of the PAYMENT_METHOD_CODE and there are S_LOV_REL records created to store the relationships. The same issue applies when migrating these relationships: the related value must exist prior to the relationship being built.

One final note. I think the whole practice of not deleting LOVs is well intended, but it is more likely to cause confusion than to solve anything. Here is why. Users can and will just change the Name/Value of a value record to something else, in which case any sense of history is lost anyway. There are no foreign key relationships to LOVs, so business data using these values is unaffected regardless. But others may disagree, so this step is completely optional. I remove the no-delete properties from the 'List Of Values Child (UDA)' BC and Integration Component. (I also allow deletes from the GUI, but that is a separate issue.) So my migration methodology is to synchronize values between environments for an initial release. You would take a different approach on a point release, where values are likely to have been added directly to production and therefore may not exist in your DEV and TEST environments.

Anyway, what are we trying to do? Quite simply, we are trying to create all the value records in pass 1, and then we need to relate them to each other in pass 2. I have already discussed how to group LOVs together for a release. This is where I diverge from Siebel's example, because I am trying to think of real-life scenarios where I am deploying releases, not just one LOV_TYPE. When creating the ADM Project/Session, here are the session items I use:


Data Type | Child Delete | Deployment Filter
LOV-2ndPass | Y | [Release] = '1.1'
LOV-1stPass | N | [Release] = '1.1' AND [List Of Values Relationship.Name] IS NULL

What this means is that the first pass includes all LOV_TYPE records that have been marked for this release and all LOV value records related to them. The second part of the expression basically just ensures that no relationship records are included in the first pass. When ADM attempts to set the parent value on a child, it may not be able to find it, so it will log a warning and move on. In the second pass, ADM will load all the relationship records and set the parent values that it missed on the first pass. I have also set Child Delete to true on the second pass so that this job effectively synchronizes the value records for the type records marked.