Monday, November 15, 2010

A Basic Interface - Integration Object User Props

I know this is meant to be a basic interface without much complexity, but let's be realistic about the requirements we are likely to get. Even a simple upsert of something as basic as a service request is likely to require a bit of digging into Bookshelf so that the interface can mimic basic GUI functionality. I will discuss some of the most commonly used User Properties necessary to implement even an advanced interface. When in doubt about the syntax of any of these properties, look for an example in the Tools flat view.

PICKLIST

This is the most common Integration Component Field User Property you will see; it tells the EAI Siebel Adapter to validate the picklist in the interface. This property is generally created by the wizard, so I bring it up only because validating the picklist here allows for several different ways of interpreting a picklist field value, as described by some of the user properties below.

PicklistUserKeys

In the GUI, when you type an account name in the Account field on another BC that has a picklist of Accounts, and there is more than one record matching that name (with different locations), a pick applet will pop open with the constrained list of accounts having that name. The GUI is letting a user decide which of the multiple records returned was meant to be picked. An interface does not have that luxury, so the PicklistUserKeys Integration Component Field User Property is provided to mimic this action. The value of this property should be a comma separated list of fields representing the logical key of the picklist record to look up. These fields must all be present in the integration component (though their values can be null). The 'PICKLIST' user property must also exist for the field where this property is used and its value must be 'Y'.
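As an illustration, the user properties on a hypothetical account-picking integration component field might look like this (the picklist field names are assumptions for the example):

Name               Value
PICKLIST           Y
PicklistUserKeys   Account, Account Location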

Ignore Bounded PickList

When a picklist is validated in the interface and the value passed is not found, the EAI Siebel Adapter stops processing and returns an error. If the data is expected to sometimes be missing, though, you may want the foreign key to just be left blank. For instance, maybe the service request in our example is tied to an order via a back office order number, but the order was never loaded. Add this user property with a value of 'Y' in combination with the PICKLIST user property with a value of 'Y'. The EAI Siebel Adapter will look up the record by the user key provided (this can also be used in combination with PicklistUserKeys) but, if it is not found, will set the field to blank in the integration object before applying the data. Keep in mind that this property will only work as expected if the Picklist object the underlying BC uses to constrain the field has No Insert set to True; otherwise, the EAI Siebel Adapter will try to insert a record. Also note that there is a typo in Bookshelf: there should be spaces between the words of the property name.
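To make the combination concrete, here is a sketch of the properties on a hypothetical back office order number field (names assumed for illustration):

Name                      Value
PICKLIST                  Y
Ignore Bounded PickList   Y

With this pair in place, if no order matches the passed number, the field is simply blanked instead of erroring out the whole message.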

FieldDependency

In the GUI, it is easy to determine the order in which fields are picked, either by training or by sequencing the fields in a particular way during applet design. This ordering can establish which fields will be used to constrain the value of another field, frequently in a hierarchical picklist. In EAI, we achieve this result through this user property. It can be used multiple times with a sequence number, just like other BC and applet user properties. The value is an integration component field name. Siebel claims that pickmapped constraints are automatically taken into account, and that may typically be the case, but I have seen times when it does not work, so this is a good fallback.
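For instance, in a hypothetical State/City hierarchical picklist (field names assumed), the user property on the 'City' integration component field might be:

Name                Value
FieldDependency 1   State

This tells the EAI Siebel Adapter to resolve 'State' before picking 'City', mimicking the order a user would follow in the GUI.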

Friday, October 29, 2010

A Basic Interface - Web Service Workflow

Just about every interface consists of two basic components: the integration object(s) and the workflow or business service. I will demonstrate a workflow approach which will give you more opportunity to customize down the road.

It is here that we begin to differentiate the integration by the communication mechanism. Because I am designating this integration as a Web Service, that will drive the type of data this workflow will expect as an input and output. The workflow I build will eventually be exposed as a WSDL to be consumed by an external program. That WSDL should have the definition of the message it is expecting; in this case, the XSD, or definition, of the Integration Object we just built. We accomplish this by setting the input process property to a Data Type of 'Integration Object' and actually specifying the integration object we built in the Integration Object attribute of the process property.
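As a sketch, the relevant process properties might look like the following (the IO name and the In/Out settings are assumptions for illustration):

Name          In/Out   Data Type            Integration Object
IncomingXML   In       Integration Object   Service Request IO
SRNumber      Out      String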


You can also see my placeholder for the SR Number that I want to return to the external system in the response message. The 'IncomingXML' property is already in the format needed to be passed to the EAI Siebel Adapter, so there is no conversion necessary. And we are assuming that the data being passed is exactly as it should be applied. You will create the following steps, which I will explain (other than Start and End, which are self-explanatory):
The 'Upsert SR' step is a Business Service step calling 'EAI Siebel Adapter'. Now here is another design decision to be made. Each of the available methods determines exactly how the data should be applied, but there are two broad approaches. If we were to use the Execute method, the 'operation' attribute that exists on each component of the IO would be used to determine how the data should be applied. This gives more control to the calling system (or to a data map, which I will discuss later). The other set of methods is essentially one size fits all, applying all the data uniformly. I will use the latter approach here and set the method to 'Upsert'. There is only one component in my IO, so if it exists, it will be updated; otherwise it will be inserted. The input arguments for this step are the IncomingXML message from the external system and a parameter telling the EAI Siebel Adapter to create the Status Object.
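A plausible setup for those input arguments ('SiebelMessage' and 'StatusObject' are the argument names I would expect the adapter to take, so treat the exact spelling as an assumption):

Input Argument   Type               Value / Property Name
SiebelMessage    Process Property   IncomingXML
StatusObject     Literal            True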

There is one Output Argument. We no longer care about the input message at this point because it will have been applied so we just overwrite it with the return, which in this case will be the status key.
The last step in the WF is another Business Service step, this time calling the 'GetProperty' method of the 'PRM ANI Utility Service'. This business service has a plethora of useful methods for manipulating property sets. This particular method will extract the value of a field from an integration object instance. Here are the inputs:
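(The original post showed these inputs as a screenshot. A plausible reconstruction, with the argument names assumed, would be:)

Input Argument   Type               Value / Property Name
SiebelMessage    Process Property   IncomingXML
Property Name    Literal            SR Number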
The output is to set the process property 'SRNumber' with the Output Argument, 'Property Value'. When the return message is sent back to the calling system, this property will exist with the generated SR Number.

Simulating/troubleshooting this WF from within Tools is difficult as built, so I sometimes add a bypass step off the start branch to read the integration object from a file. I may talk about this later but want to keep this post pretty straightforward. So for now, this workflow can just be deployed, checked in and activated.

A Basic Interface - Building the Integration Object

I am not sure how easy it will be to summarize EAI in a couple of blog posts as there are definitely a lot of ifs and buts in the design process. Nevertheless, I think it would be useful to show how to build a basic interface using a couple of different techniques. Frequently your client's enterprise architecture will drive which to use.

Integration generally takes one of three forms:
  • Query - Returns a data set of source data to be displayed in the target system
  • Schema Update - Takes a hierarchical data structure and applies it to the target system
  • Functional Action - Triggers a service to perform some set of business rules
There is perhaps some overlap here, and any of these can be inbound or outbound to Siebel, but this is a general way of categorizing your interfaces. And within each there are several different ways to implement more specific requirements.

Regardless of approach, the basic component of most interfaces is the structure of how data is viewed or applied. Let's say we need to Upsert a Service Request. A Schema Update assumes a hierarchical organization of data using the Integration Object data structure. Bookshelf provides extensive instruction on how these are built and configured to achieve certain goals so I will only touch on the highlights.

First, create an Integration Object in Siebel: from the Tools File menu, choose New Object Wizard, then the EAI tab, then Integration Object. In the wizard, select the Project and choose 'EAI Siebel Wizard' from the second dropdown, and click Next. For the purpose of this example, we can just use the Service Request business object as the source object, and the root BC will be Service Request. Enter a name of your choosing and click Next. In the next wizard page, deselect all child objects for which there are no fields to set; in this case that will be all of them except the root, since the more objects and fields in the message, the longer it will take the various architecture components to parse and translate it. Click Next, then Finish on the next page.

Your Integration Object has been created. The next step is to verify the user keys. An integration object needs to have a valid user key in order to do an upsert. This basically specifies which key fields to use to find a record to update. In my example for Service Request, a key was not generated by the wizard so I will create one. Navigate to Integration Component Key in the explorer under the Service Request Integration Component. Create a new record, provide a name, set the sequence number to 1 and the key type to 'User key'. Create a child record in Integration Component Key Fields, provide a name and set the Field Name to 'Id'.

Another optional step we will use in this example is the Status Key. After creating a service request, I want to return the service request number to the external system as verification of success, and so this SR can be referenced later by the customer. To do this we use the Status Key. This is basically a structure of the data set we wish to return from the EAI Siebel Adapter call and pass back to the calling system. A Status Key can be specified for each Integration Component, so the final data set is the structure of all the keys combined hierarchically. In this case, navigate to Integration Component Key in the explorer under the Service Request Integration Component, create another new record, provide a name ('StatusKey'), set the sequence number to 1 and the key type to 'Status key'. Create a child record in Integration Component Key Fields, provide a name and set the Field Name to 'SR Number'.

Finally, while not absolutely necessary, you should inactivate all fields you are not using in each Integration Component. For an inbound upsert to Siebel, the calling system does not need to provide all the fields that are active in the IO schema, but if a field is active in the IO, then the external system could send that data element, which may have undesired effects depending on the interface. Make sure all fields used in the key are activated, as well as all fields being passed from the external system. Unlike a BC field's length, the length property of an Integration Component Field is more important: when an XSD is generated and provided to the external system, this property will frequently be used by the web development tool to validate the data entered into that field. You can also change the XML Tag attribute to a label recognized by the external system (so long as spaces are removed).

One thing to keep in mind is that if an insert is desired, then the calling system should just pass a constant to the user key field, 'Id' so that Siebel will not find a record and a new one will be created. A value like '_New_Record_' is a safe value because the '_' will never be part of a generated row id.
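To tie this together, here is a sketch of what an inbound insert message might look like; the element names depend entirely on your IO name and XML Tag settings, so treat them as assumptions:

<?xml version="1.0" encoding="UTF-8"?>
<SiebelMessage MessageType="Integration Object" IntObjectName="Service Request IO" IntObjectFormat="Siebel Hierarchical">
  <ListOfServiceRequestIo>
    <ServiceRequest>
      <Id>_New_Record_</Id>
      <Abstract>Scanner is broken</Abstract>
      <Severity>2-High</Severity>
    </ServiceRequest>
  </ListOfServiceRequestIo>
</SiebelMessage>

Because Id carries the constant, the user key will not match an existing record and a new SR is created.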

Tuesday, August 17, 2010

Common (or not) eScript Syntax Errors

I would love to post a comprehensive list of gotchas, but then that would make them not gotchas if you know what I mean as I would know them all. So instead, I will mention what sidelined me for several hours last night and hope to spur some discussion about what other people have come across. If I think of others over time, I will try to update this post.

Space after the function name. I had copied and pasted some functions from somewhere else in my client's repository, and the functions had no space between the name and the opening parenthesis of the passed variable declarations. I was not (and I guess still am not) aware of a limitation in this regard, but I saw all sorts of strange behavior afterward. Namely, the calls to these functions seemed to be ignored, which took me a long time to realize. They seem to work fine in their original home elsewhere in the repository, so this may be related to context, but suffice to say this is something to think about when troubleshooting.

Friday, July 23, 2010

My Barcode Promised Land

The effort of trial and error, traversing dead ends, and determining what I could not do, led me eventually to what I could. Let me start by saying that if I was a Siebel engineer (completely unaware of what constraints they had to work with) I would have provided an Application level method called something like BarcodeScan that could be trapped. I could then put a runtime event on it and trigger a workflow when I was done. But then again, I also would not have coded in the limitations I mentioned earlier.

Barring all that, I still needed a couple of basic things:
  • Hook to trigger additional functional logic
  • Do lookups on Serial Numbers
Additionally, it would be nice to:
  • Minimize the number of clicks
  • Do lookups on the child record of a BC
  • Parse the input so that I could do different stuff based on the type of data
Given those must-haves and nice-to-haves, I decided to hack the business service, trap the methods in question and just do my own thing. I should mention that my initial approach was more from a wrapper perspective than a replace perspective. That is, I thought I could just trap the method, do my stuff, then continue with the vanilla method. Here is the problem though. Since everything that happens in the vanilla method threads occurs out of the GUI context, I cannot leverage any Active... methods. Therefore, to do something as simple as updating the record returned by the vanilla lookup, I would have to requery for it in my own objects to get it in focus. Well, if I am requerying for it, what is the point of doing the same query twice? I can just do my own query once in the active object and then trigger my post events.

Let me start by walking through the most important Must-Have

Hook to trigger additional functional logic
I have sort of hinted at how this was achieved in general. Once I realized that the 'HTML FS Barcoding Tool Bar' service was getting called, I modified the server script on this service to log when its methods are called. The important method here is 'ProcessData', which is the one method called regardless of the processing mode in use. At this point you have the barcode data and the Entry mode. You can also determine what view you are on via ActiveViewName. I trapped the Find, New and Update methods in the PreInvokeMethod event to store the current processing mode in a profile attribute:
// TheApp is assumed to be set elsewhere, e.g. var TheApp = TheApplication();
switch (MethodName) {
    case "Find":
    case "New":
    case "Update":
        // Remember the current processing mode for later use in ProcessData
        TheApp.SetProfileAttr("BarcodeProcessMode", MethodName);
        break;
}
With these three fields, the View, Process Mode, and Entry Mode, I can query the FS Barcode Mappings BC for a unique record.

var boBCMappings = TheApp.GetBusObject("FS Barcode Mappings");
var bcBCMappings = boBCMappings.GetBusComp("FS Barcode Mappings");
with (bcBCMappings) {
    ClearToQuery();
    SetViewMode(AllView);
    ActivateField("Field");
    ActivateField("Applet BC");
    SetSearchSpec("View", sView);
    SetSearchSpec("Entry Mode", sEntryMode);
    SetSearchSpec("Process Mode", sProcessMode);
    ExecuteQuery(ForwardOnly);
    var bFound = FirstRecord();

    if (bFound) {
    ...
What I want to get from that record for now is the lookup field. I also need to know the active BC to do the lookup in. Again, I cannot use ActiveBusComp or ActiveApplet, so I just added a join from the FS Barcode Mappings BC to the repository S_APPLET table, based on the applet name already stored in the admin BC, and added a joined field based on S_APPLET.BUSCOMP_NAME. I still feel like there is a better way to do it, but that is where I am at right now. Anyway, from the admin record I have a BC to instantiate, a field to set a search spec on, and the text value of the search spec.
var sField = GetFieldValue("Field");
var sBusComp = GetFieldValue("Applet BC");

var boObject = TheApp.ActiveBusObject();
var bcObject = boObject.GetBusComp(sBusComp);
with (bcObject) {
    ClearToQuery();
    SetViewMode(AllView);
    ActivateField(sField);
    SetSearchSpec(sField, sLogicalKey);
    ExecuteQuery(ForwardOnly);
    bFound = FirstRecord();

    if (bFound) {
    ...
My client has multiple barcode processes so all this could be happening in different places. So the last step is to add some logic to branch out my hook. I am using the BC for now but we could make this more robust:
switch (sBusComp) {
    case "Service Request":
        ProcessSR();
        break;

    case "Asset Mgmt - Asset":
        ProcessAsset();
        break;
}

The Dead Ends of Barcode Hacking

Most technical blog posts are about solutions. Since this series on Barcodes is also about my journey, I thought it might be interesting to also talk about what I tried out but did not work. Who knows, maybe I can save someone the effort of trying these. Or perhaps the patterns I am finding through these dead ends will help someone head off into a totally new direction as it has helped me.

Auto Enabling
So the first thing I thought would be cool would be to auto enable the Barcode toolbar, and the natural place to do this seemed to be the Application Start event. After a lot of trial and error, my application kept crashing after trying to invoke the 'Active' method. The 'Active' method receives as inputs the Active View Name and Active Applet Name. The startup page is not actually instantiated yet when the Application Start event executes, so even hard coding a startup page into the input property set results in an application crash. So Application Start is not the right place.

Applet Context
When trying to call various barcode service methods through script, many of them require the applet name as an input parameter. Trying to use ActiveApplet, though, results in an error you would typically receive when you are not in a GUI context, such as when using EAI. ActiveViewName does work, so it is only the applet. I think what is happening is that when you click a toolbar button, even though an applet appears to remain in focus (based on the color pattern of the applets), focus is actually on the toolbar, and hence ActiveApplet does not work. Well, that is my theory anyway.

Default to Find Mode
My client will mainly be using the Find process mode, so I thought that if I could not auto enable the toolbar, I could at least default it to Find mode once it is enabled. So I trapped the Active method on the business service and called the Find method from the InvokeMethod event after the Active method runs. But this does not quite work; if I click the Enable button twice, though, it does. It appears that this is a context issue. It is as if GUI context has been returned to the user prior to the Find script executing.

I noticed that a series of barcode events trigger anyway when the Application starts. I therefore tried triggering my auto enable scripts from the tail end of one of these events, again through the InvokeMethod event, but again ran into the context issue.

SWE From Script
The interesting thing to me is that the input parameters to all of these methods are a series of SWE Commands, Methods and parameters. It seems as though another browser thread or frame is being used where SWE commands are the language Siebel uses to initiate the logic. There is probably a way to call a SWE command directly through script but I am not aware of it. What I am thinking is to use SWE command to refresh the context of the GUI thread after a Barcode method has been called, then to explicitly call a followup method. I cannot do this directly as the results of the second method call appear to get lost as the context has been returned to the GUI before the second call.

Thursday, July 22, 2010

Hacking the 'HTML FS Barcoding Tool Bar' Business Service

In case you were curious what happens in the black box, once the Barcode toolbar is up and running, here is a dump of the Input and Output property sets from each Method that is called:

When the application starts up, the 'IsBarcodeEnabled' method is called about 15 times, is passed an empty property set and returns:
01 Prop 01: IsBarcodeEnabled / 1

Also on startup, the 'ResetButton' method is called, which appears to set which buttons on the toolbar are turned on or off and which are active. Resetting them makes the enable button active and off, and the process mode buttons inactive and off, as you can see from the outputs. Here are the Inputs:
01 Prop 01: SWECmd / InvokeMethod
01 Prop 02: SWEMethod / ResetButton
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 1
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01 Prop 01: NEW_ENABLED / N
01 Prop 02: ACTIVE_ENABLED / Y
01 Prop 03: ACTIVE_CHECKED / N
01 Prop 04: UPDATE_CHECKED / N
01 Prop 05: FIND_ENABLED / N
01 Prop 06: FIND_CHECKED / N
01 Prop 07: NEW_CHECKED / N
01 Prop 08: UPDATE_ENABLED / N

The control keys are then determined. First the 'GetStartKeyCode' method is called with these Inputs:
01 Prop 01: SWECmd / InvokeMethod
01 Prop 02: SWEMethod / GetStartKeyCode
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 2
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01 Prop 01: KeyCode / 220

Lastly, the End key via the 'GetEndKeyCode' method with these Inputs:
01 Prop 01: SWECmd / InvokeMethod
01 Prop 02: SWEMethod / GetEndKeyCode
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 3
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01 Prop 01: KeyCode / 220

Clicking the enable button triggers the 'Active' method, which has these Inputs:
01 Prop 01: SWEActiveView / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Active
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 22
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

and these Outputs:
01 Prop 01: OPTION0 / Service Request
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: OPTION2 / Repair
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: ACTIVE_CHECKED / Y
01 Prop 06: OPTION3 / Pick Ticket
01 Prop 07: UPDATE_CHECKED / N
01 Prop 08: OPTION6 / Serial #
01 Prop 09: FIND_ENABLED / Y
01 Prop 10: Check / 1
01 Prop 11: OPTIONS_LENGTH / 7
01 Prop 12: OPTION4 / Order
01 Prop 13: FIND_CHECKED / Y
01 Prop 14: OPTION5 / Product
01 Prop 15: NEW_CHECKED / N
01 Prop 16: OPTION1 / Asset #
01 Prop 17: UPDATE_ENABLED / Y

Clicking the Find button gives you these Inputs:
01 Prop 01: SWEActiveView / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Find
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 11
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01 Prop 01: OPTION0 / Service Request
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: OPTION2 / Repair
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: ACTIVE_CHECKED / Y
01 Prop 06: OPTION3 / Pick Ticket
01 Prop 07: UPDATE_CHECKED / N
01 Prop 08: OPTION6 / Serial #
01 Prop 09: FIND_ENABLED / Y
01 Prop 10: Check / 1
01 Prop 11: OPTIONS_LENGTH / 7
01 Prop 12: OPTION4 / Order
01 Prop 13: FIND_CHECKED / Y
01 Prop 14: OPTION5 / Product
01 Prop 15: NEW_CHECKED / N
01 Prop 16: OPTION1 / Asset #
01 Prop 17: UPDATE_ENABLED / Y

Clicking the New button (on the toolbar) gives you these Inputs:
01 Prop 01: SWEActiveView / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / New
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 23
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01 Prop 01: OPTION0 / Serial Number Entry
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: ACTIVE_ENABLED / Y
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: UPDATE_CHECKED / N
01 Prop 06: FIND_ENABLED / Y
01 Prop 07: Check / 1
01 Prop 08: OPTIONS_LENGTH / 1
01 Prop 09: FIND_CHECKED / N
01 Prop 10: NEW_CHECKED / Y
01 Prop 11: OPTIONS_LENGTH / 7

Clicking the Update button (on the toolbar) gives you these Inputs:
01 Prop 01: SWEActiveView / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Update
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 24
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01 Prop 01: OPTION0 / Asset
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: ACTIVE_ENABLED / Y
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: UPDATE_CHECKED / Y
01 Prop 06: FIND_ENABLED / Y
01 Prop 07: Check / 1
01 Prop 08: OPTIONS_LENGTH / 1
01 Prop 09: FIND_CHECKED / N
01 Prop 10: NEW_CHECKED / N
01 Prop 11: UPDATE_ENABLED / Y

And perhaps the most important one, scanning the data. This executes the 'ProcessData' method and would occur after the second end control character is received from the scanner. The Inputs are:
01 Prop 01: OPTION / Service Request
01 Prop 02: BARCODE / 2-7144002

And these Outputs:
01 Prop 01: Applet Name / Service Request List Applet

Keep in mind that in many cases, the actual property values are based on data pulled from the 'FS Barcode Mappings' BC.

Spelunking in the Barcode Cavern

My new client would like to use a Barcode scanner for a whole variety of Field Service applications:
  • Shipping Label to look up an RMA Order and update some fields
  • Asset Label to look up or create an RMA Order Line Item and update some fields
  • Asset Label to look up a Repair record and update some fields

Siebel Bookshelf and Supported Platforms provide some basic information. There are a couple of approaches to using a Barcode scanner:
  • Treat it like any data entry device. In other words, you prepare your record (click new, Clear to Query, etc.), click into a field, scan your barcode, the scanner copies the translated barcode value to the field, then you do what you want (save the record, execute query, etc).
  • Use the Barcode ToolBar. This has some basic modes (New, Update, Find) and an administration area that ties a View to one or more modes and a field. So when you navigate to a view, Siebel (when the barcode toolbar is turned on through an object manager parameter) checks to see if any barcode admin records exist for that view and the currently selected mode. If so, these appear in a dropdown in the toolbar that a user can select a value from. If the user then scans something, the application "processes" the barcode depending on the mode, either doing a query based on a specified field, updating a field on the current record, or creating a new record and populating a specified field (an example admin record is sketched just after this list).
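For illustration, an admin record behind a Find scenario might look like this (column labels approximated from the FS Barcode Mappings BC; the values are assumptions):

View                            Process Mode   Entry Mode        Field
All Service Request List View   Find           Service Request   SR Number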
This sounds groovy until you hear about some of the limitations and start thinking about a more realistic process. So here are the limitations as I understand them:
  • Only some Barcode Types (think fonts) are supported.
  • The processing can only occur in the primary BC of the BO, or the Parent BC in a master detail view.
  • Serial Numbers cannot be looked up (I am still investigating why this is but I am guessing it has to do with them possibly not being unique).
  • Only barcode scanners that support using customizable control characters before and after the scanned input will work
  • A single input value is taken (so no splitting of a concatenated value)
  • You basically have to tell the toolbar what value to expect (again, no intelligent parsing)
Prototyping:
  • Ensure you have the Field Service and Barcode license keys
  • In the Field Service cfg file (if using a thick client), set the ShowBarcodeToolbar parameter to TRUE. Intuitively enough, this will make the Barcode toolbar appear in your app upon restart.
  • Click the enable button (far right hand button) on the toolbar
  • As you navigate to a view, the application will perform a query of the 'FS Barcode Mappings' BC, or S_BC_ENTRY_TRGT table, for admin records corresponding to the current view and the currently selected processing mode (the three buttons to the left of the dropdown in the toolbar each correspond to a different mode). If you think about it, this is sort of similar to how Actuate reports are tied to views, except you can actually administer this a bit in the GUI.
  • We can mimic a barcode scan by using <ctrl-\>, followed by the translated value we are trying to scan (SR number for instance), followed by another <ctrl-\>
  • If you want to use a different control character than <ctrl-\> (because maybe that one is already taken or something), these are set on the 'HTML FS Barcoding Tool Bar' business service as User Properties. I will leave them be.
So in my real life example, I will:
  1. Navigate to All Service Requests
  2. Click Enable on toolbar
  3. Click the rightmost of the left-side buttons on the toolbar, 'Find'
  4. Leave the dropdown as 'Serial Number'
  5. Hit <ctrl-\>
  6. Type in an SR # I can see in the list
  7. Hit <ctrl-\> again
  8. The Application should query for the SR # I entered
I am now going to dive into figuring out a better way to customize this behavior. I'll be back.

Wednesday, July 21, 2010

About defaults, picks, maps and SetField events

That is an eclectic list of things in the title, and no, I do not intend to talk about them all in detail, other than to discuss a bit about how they interact and some of the design implications they may have. So let me start with another list:
  • Pick Maps do not cascade
  • Fields set by a pick map cause a SetFieldValue event
  • Defaults do not cause a SetFieldValue event
  • On Field Update Set BC User Prop will trigger a SetFieldValue event
  • SetFieldValue event triggers a Pick
  • Setting a field to itself does not trigger a SetFieldValue event
So those are the important findings I had to deal with when implementing a seemingly simple requirement. My client had a contact type and sub type. The contact type should be denormalized from the related account's type. Finally, they wanted to set the contact sub type dynamically to a different value depending on the contact type. By dynamically, I mean not hard coded, so it can be changed without a release.

Let me put all that in functional terms by providing an example. The Account Type has a static LOV with values 'Bank' and 'Government'. The Contact can potentially be created as a child of an account, inheriting information from the account record, and triggering Parent expression default values, or can be created from the Contact screen without an account, but with the option to set the account later. When an account is specified for a contact, the contact type will be set to match the account type, otherwise the contact type should be set to 'Other'. If the Contact type is 'Bank', the contact sub type should get set to 'Retail', and if the contact type is 'Government', the sub type should be set to 'HUD'. So the basic configuration we started with was to put the desired 'dynamic' sub type value in the Low column on the LOV table. Then set up the pick map for contact type as such:

Field              Picklist Field
Contact Type       Value
Contact Sub Type   Low

It would be convenient to just set the pick map similarly on Account Type as:

Field              Picklist Field
Account Type       Value
Contact Type       Value

But the first rule above states this will not work because pick maps do not cascade. This makes some sense, as you could conceivably end up with some circular logic. Another convenient option, in the case where the contact is created as a child of an account, would be to predefault the Contact Type to the Account Type. But again, according to the rules above, a predefault will not trigger a SetFieldValue event and hence no pick map.

So in order to trigger the pick map on Contact Type, we need to trigger a SetFieldValue event on this field. What to do. Oh, and I did not want to use script. My solution had a couple of dimensions.
  1. When a contact is created on the Contact Screen and the account is picked, I am going to trigger a set field value on the Account Type by creating a joined field on the Contact BC called Account Type, and add this field to the Account pick map. So this will trigger my SetFieldValue event. I then will add an 'On Field Update Set' BC User property to the Contact BC so that when the joined Account Type field is updated, set the Contact Type to the Account Type. Using a User Property will then trigger the SetFieldValue event on Contact Type which will then trigger the pick map to set the Contact Sub Type. So far so good.
  2. My approach on the scenario when a Contact is created as a child of an Account is not as clean. The problem here is that predefaults do not trigger SetFieldValue events, and in this case, all the account information will already have been set via predefault, so there is no field being explicitly set by a user to trigger the user property. So I had to get creative. What I did was similar to above, but I placed identical user properties on the Contact First and Last Name fields. Since these are required fields that are typically entered first, they will trigger the user properties to set the contact type and sub type. In order to minimize the UI impacts of this admittedly kludgy design, I wanted the visible Contact Type in the applet to default correctly to the Account Type from the parent record. This means that when the user sets the First Name (or the Last), the Contact Type will already have the correct value, so the user property would essentially set it to itself. The last rule above states this will not trigger the SetFieldValue event. To get around this I create two user properties in sequence, the first to set the Contact Type to null, and the second to set it back to the Account Type. Because I am putting the properties on both the First and Last Name (to accommodate different users' field population sequences), I also want to add a conditional to the user properties to not execute if the Sub Type has already been set.
What does all this leave us with? In addition to the pick map on the Account field mentioned first, here are the On Field Update Set user properties on the Contact BC:
  1. "Account Type", "Contact Type", "[Account Type]"
  2. "First Name", "Contact Type", "", "[Contact Sub Type] IS NULL"
  3. "First Name", "Contact Type", "[Account Type]", "[Contact Sub Type] IS NULL"
  4. "Last Name", "Contact Type", "", "[Contact Sub Type] IS NULL"
  5. "Last Name", "Contact Type", "[Account Type]", "[Contact Sub Type] IS NULL"
I am going to leave it there, but this actually gets even more complicated. Because a contact can be created from a pick applet from a service request, I also had to account for predefaulting the account to the SR's account and the impact this would have on predefaulting Contact Type and Sub Type. If anyone would like to see how this is done, here is where to start.

Wednesday, July 7, 2010

Expectations and Changes

When doing a Siebel project, there will always be a balancing act between managing client expectations and delivering everything the customer wants. I am not even trying to finesse when I say managing client expectations. From the way I put it in that sentence, you may have inferred I meant not delivering what the customer wants. But that is not really the case, as frequently the client does not necessarily know what they want, or their understanding of what they want evolves as they understand the capabilities and implications of a CRM strategy/product.

We see this unfold in different ways on different projects. In a green field implementation (new to Siebel), Phase I is typically a data model implementation where the majority of the development work revolves around building views. Now there is obviously a lot that goes on behind the scenes, but from a client's point of view, we are mostly showing them views, and using a view as a way to communicate the concepts of a data model. That is, the view becomes the way to communicate relationships and attributes. The presence or absence of a field on a view becomes a visual indicator of whether a logical attribute exists or not in our build out. An attribute expressed as a single value field in an applet provides a visual cue that a user can only enter one value. Because the views provide extensive visual reinforcement, it is easy for stakeholders to identify gaps through the testing and acceptance process by saying: aha, I do not see this field, or I need to enter more than one of that value, or there needs to be a view linking these two objects.

Integration based projects tend not to have the same issues when integrating to a legacy system as there are typically a pair of technical architect types that are fairly knowledgeable about the preexisting data models of each application. The project is mainly a matter of synchronizing these efforts. Testing and user acceptance though can again identify visually when a field or record set is blank to recognize that a gap exists.

Where I am leading with all this is the nature of an automation oriented project. Automation is by its nature typically new. Perhaps the steps have existed, but the mechanisms we are using to automate, to add speed to the process, have never existed before. This adds some expectation management issues that are a bit different than in other types of projects. The types of changes necessary have an added dimension. Gaps in the specifications will likely be caught during the testing phase, such as a field not being populated or a decision branch executing on the wrong condition. The added dimension is time and frequency. For instance, a popular way to automate processes is to add reminders when steps are not executed, or to change the status of a record to indicate an escalation in priority. I would posit that users do not really know how frequently they will want to be reminded, because they do not necessarily have a sense of the scale or frequency of the events. Frequently, during an interdepartmental process, one department may perceive the severity of an issue as higher than the department they are working with. These are important considerations, because a user who is reminded too frequently (when in fact they are aware of a task but are waiting on other deliverables in the normal course of performing it) will begin to ignore the reminders. Being informed of a number of outstanding items on a too frequent basis causes us to tune them out, as anything that happens so frequently is typically thought to be not too severe.

It is likely that system users will request, some time soon after deployment, that these reminders be scaled back or, if the capability to do so has not been built into the project, that they be turned off altogether, thereby losing the value of that particular automation. So, where am I going with all this? While workflows can be redeployed without a major release, it is unlikely most Siebel project teams are actually prepared to do so on short notice. It is possible to account for this by explicitly adding requirements for it, but of course this adds complexity and scope to the project.

This is all why I built the RARE Engine to be extensively customizable in the GUI, including the turning on and off of email reminders, the setting of the text of the reminder/escalation message, and the delay interval between reminders and escalations both on a per person and per process basis. This means that after the process has been automated and deployed, an administrator can tweak these parameters to the individual needs of the user base.

Saturday, July 3, 2010

eScript Framework - GetRecords

Matt has launched YetAnotherSiebelFramework, a blog about... you get the idea. This is an important step forward in this community's attempt to create a true open source Siebel eScript framework. He adds flesh to the skeleton I have assembled here. He will shortly be adding his own posts to explain his functions in more detail, but I thought I would get a head start by starting a discussion about one of his most important pieces, the GetRecords function. I say one of the most important pieces, as the real driver behind this solution is to replace the many plumbing steps, as Matt calls them, that sit in so much of our script. So, for instance, to query an Account by Id (sId) to get the Location, you would write something like this:
var boAccount = TheApplication().GetBusObject("Account");
var bcAccount = boAccount.GetBusComp("Account");
with (bcAccount) {
    ActivateField("Location");
    ClearToQuery();
    SetViewMode(AllView);
    SetSearchSpec("Id", sId);
    ExecuteQuery(ForwardOnly);

    if (FirstRecord()) {
        var sLoc = GetFieldValue("Location");
    }
}
You get the idea. His function essentially replaces this with:
var sLoc = oFramework.BusComp.GetRecord("Account.Account", sId, ["Location"]).Location;
So that is pretty cool. What follows is mostly quibbling but I think attracting criticism from our peers is the best way to make this framework the most usable it can be. On a technical note, I am using 7.8 and the T engine for my personal sandbox so have not yet been able to get Matt's entire framework up and running. Nevertheless, I have gotten his individual functions running so I will limit my discussion to that scope. Here are my thoughts:

(1) My biggest point is to think about whether it makes more sense to return a handle to the BC rather than filling an array. I am thinking about this in terms of performance. There are times when having the array would be useful, like when I want to perform array operations on the data, such as doing a join. But often, I may just need to test a field value (or a few) and perform operations on other values conditionally. In this case, I would only be using a small percentage of the data I would have filled an array with. It may also be useful to have a handle in order to use other Siebel BC functions like GetAssocBusComp or GetMVGBusComp. I do not claim to be a JavaScript guru, but I am curious about the performance implications. What I have done with my own framework is to build three functions:
  • Bc_GetArray (this is basically the same as Matt's)
  • Bc_GetObject (stops before filling the array and just returns the handle to the BC; see the sketch after this list)
  • Bc_GetInvertedArray (same as Matt's but makes the fields the rows and the records the columns)
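Here is a minimal sketch of what the handle-returning Bc_GetObject variant might look like, assuming the same 'BO.BC' first-argument convention and leaving out Matt's pooling logic:

function Bc_GetObject (sBoBc, sId, aFields) {
    // Split "Account.Account" into business object and business component
    var aName = sBoBc.split(".");
    var oBo = TheApplication().GetBusObject(aName[0]);
    var oBc = oBo.GetBusComp(aName[1]);
    with (oBc) {
        ClearToQuery();
        SetViewMode(AllView);
        for (var i = 0; i < aFields.length; i++)
            ActivateField(aFields[i]);
        SetSearchSpec("Id", sId);
        ExecuteQuery(ForwardOnly);
    }
    return oBc; // caller calls FirstRecord() and works with the live handle
}

The caller then tests FirstRecord() itself and can invoke GetMVGBusComp or GetAssocBusComp directly on the handle.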
(2) I took out the following two lines:
aRow[aFields[i][0]] = vValue;
if (aFields[i][0].hasSpace()) aRow[aFields[i][0].spaceToUnderscore()]= vValue;
which check if the field name has a space and, if so, change it to an underscore, and replaced them with a single line:
aRow[aFields[i][0].spaceToUnderscore()]= vValue;
I think this should be more efficient: since a regular expression search is being done regardless, doing the replace in one step saves an operation.

(3) I like the first argument's "Account.Account" syntax for most situations. I think we can make this even more robust, though, by allowing us to pass in an already instantiated BC. This is probably infrequently necessary moving forward with the pool concept Matt has introduced, but there is a low cost way to handle either. What I have done is to add a test of the data type:
if (typeof(arguments[0])=="string") {
before starting the pool logic. I then added an else to allow us to pass a BC object in and add it to the pool:
else {
    oBc = arguments[0];
    this.aBc[oBc.Name()] = oBc;
}
(4) I think I understand where Matt is going with the pool as a mechanism to instantiate BCs less frequently. His bResetContext argument, the flag indicating that the pool be flushed, is, I think, unnecessarily drastic. If I understand it correctly, setting this flag to true would flush the entire pool. While this may sometimes be desired, it seems more useful to just flush the BO/BC in play. This would allow you to write code, for instance in nested loops, that jumps between BCs without clearing context when it is not necessary to. I may not be thinking of a situation where this would be necessary, though, so if anyone can think of one I am all ears. My recommendation would be to make the flush just clear the passed BO/BC, but if the "full flush" is necessary, then perhaps a code indicating one or the other can be used. This could be accomplished by just removing the reference to the FlushObjects function, as the following if/else condition effectively resets the BO/BC variables in the array after evaluating the bResetContext argument.

Economies of Scale - Data Edition

In the process of describing how a typical Siebel installation reaches maturity, I summarized it thus:
...for any client, the first release or three are about implementing a robust data model, rolling on as many business units as possible to take advantage of the enterprise nature of that data model and gaining economies of scale, and maybe implementing some integration to get legacy data into Siebel
It strikes me that embedded in that sentence is another big picture concept I want to go into further detail about. Putting a call center on Siebel is nice for the Call Center and the managers of that call center from an operational standpoint. Putting a Sales division on Siebel is nice for those sales people and their managers too. In both cases, whenever a customer calls, the business case of using Siebel as a data model applies when we find that this customer has called before and we leverage that information to assist us on the current call.

Perhaps it is obvious, but it is even better when multiple business units are on Siebel, such that any given business unit can leverage the touchpoint history of the other business units when transacting with a customer who has corresponded with both. In other words, if a customer calls the Call Center, and the operator records information about that call, the Sales person can also leverage that same information, and the marketing division can market to that customer from the same database. This is what we mean when we talk about the enterprise nature of the application. The underlying data is to some extent shared with whatever visibility rules are deemed appropriate.

This is useful in the following ways:
  • More likely to get a hit when looking up a master data record
  • Reduces the need to key in master data information that has been entered before
  • Increases the speed at which the user can transact the true nature of the call
  • Reassures the customer that they are known by the business
  • Allows a user (or analyst or system) to identify a trend in the customer's transactions

There will often be a tension between choosing the best application to perform a certain task and gaining the economies of data scale identified above. This tension can be mitigated somewhat through good integration, but it is unlikely to go away completely. That is, SAP may be a better inventory management application, so there is a tension between storing my inventory information in SAP, which has built-in and customizable algorithms, and storing it in Siebel, which, while not as robust, has the advantage of making that data available in Siebel views and linking it to Siebel objects easily. Like I said, we can integrate SAP to Siebel, but this adds cost and complexity (and probably lag time). That does not mean it is not the right decision; in the case of inventory management, depending on how important that functionality is to the customer's core business, it may very well be the right decision. I just want to point out the tension between these concepts.

Tuesday, June 29, 2010

eScript Framework - Logging Variables

Here is another entry into the logging framework previously discussed. The idea behind this function is to Log multiple variable values to separate lines of our log file using a single line of script. This keeps our script pristine and makes the log file well organized and easy to read. The script to call the function is as follows:

Log.stepVars("Record Found", bFound, " Account Id", sId);
The expected arguments are name/value pairs where the name is a descriptive string (could just be the name of the variable) and the value is the variable itself that we want to track. There is no limit to the number of pairs. There is an optional last parameter to indicate the enterprise logging level (stored in system parameters) above which this line should be written.
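So, for example, a call like this (purely illustrative; the trailing 3 is the optional level argument) would write its lines only when the enterprise logging level is above 3:

Log.stepVars("Record Found", bFound, "Account Id", sId, 3);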

The results will be written to the log file as:

06/29/2010 13:33:53 ................Record Found: Y
06/29/2010 13:33:53 ..................Account Id: 1-ADR45
The script to implement follows. This is meant to be added as a new method in the 'eScript Log Framework' business service.

function stepVars () {
    var Args = arguments.length;
    // An odd argument count means the last argument is the optional log level
    var iLvl = (Args % 2 == 0 ? 0 : arguments[Args - 1]);
    var iParams = (Args % 2 == 0 ? Args : Args - 1);
    var sProp, sValue;

    for (var i = 0; i < iParams; i++) {
        sProp = arguments[i++] + ": ";  // the name; i then points at the value
        sValue = arguments[i];
        Log.step(sProp.lPad(30, ".") + sValue, iLvl);
    }
}
Also, a new line will need to be added to the Init section to instantiate this function on Application start:

Log.prototype.stepVars = stepVars;
I want to draw particular attention to two JavaScript features which may be useful in other applications. The first is how to reference a variable number of arguments in a function call. Notice the special array variable, 'arguments'. This array is defined as all of the arguments passed to this function, with no special declarations. It can be referenced just like any other array. There are some exceptions in how this array can be manipulated, though, with push() and pop() not working as you might expect.

The second is how to assign a variable using an inline if: (condition ? value if true : value if false). The condition is any expression that will evaluate to either true or false. The first expression after the ? is the value returned if the condition evaluates to true, and the last expression is what is returned if the condition evaluates to false.
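As a trivial illustration of both features together (not part of the framework, just an example):

function maxOf () {
    // 'arguments' holds whatever was passed; no declaration needed
    var iMax = arguments[0];
    for (var i = 1; i < arguments.length; i++) {
        iMax = (arguments[i] > iMax ? arguments[i] : iMax); // inline if
    }
    return iMax;
}

var iBiggest = maxOf(3, 7, 5); // returns 7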

Thursday, June 24, 2010

Activity Plans vs SMART Templates

I wrote a post recently about a new service offering I have been working on to greatly speed up process automation, so this is another entry in my teaser series. Let me start by describing what is probably Siebel's first and most basic attempt at process automation: the Activity Plan. Activity Plans have been around in Siebel for a long time; I remember them in 2000, and they may have been there in 99.5, though to be honest I don't recall exactly when they made their appearance. Basically, an administrator creates an Activity Template consisting of a series of Activities. A user can then either automatically trigger the creation of an instance of this template (an Activity Plan) from an Opportunity Sales Stage transition or manually add one to any other object. Once the Plan is added, the Activities are automatically generated. This sounds great to a lot of business stakeholders as it sounds like something they can apply in many scenarios. Its strengths are:
  • Can set any/all fields on an activity
  • Creates many activities at once, saving manual effort.
Unfortunately, once you start gathering any sort of requirements for a business process, you will start to stumble across the weaknesses:
  • Fields can only be set to constant values (this really impacts dates when it comes to activities)
  • Activities are created all at once, so any type of sequencing is impossible
  • This functionality only exists for Activities (No Service Requests, or other custom objects)
Now with customization, there are ways to get around some of these limitations, but at some point, you will probably end up either building something completely different or bastardizing the Activity BC itself.

In my last post on this topic I introduced you to the RARE Engine (Rule-based Approvals Routing and Escalations). This is really two parts. I already touched on its features. There is actually a second component of my automation suite which I have branded SMART Templates. What a SMART Template does is to create a task record and set fields on that record, while addressing all of the deficiencies of the Activity Plan:
  • Can create/update records of any type (administrator specifies the BC)
  • Can evaluate fairly complex expressions including date math to set fields
And when used in combination with the RARE Engine:
  • Records can be created in batches at different points in time, dependent on the completion of prior tasks.
OK, now we are getting somewhere. So once this service offering is implemented, any process can be maintained through the Siebel UI. You need to change the threshold at which a VP needs to approve an Order? No problem. You need to notify an additional person at a point in the New Customer Onboarding process? Ok. Or you need to create three new Service Requests when Final Contract approval is given? You got it. You want to update the quote status when the customer approves it through your eSales portal? You betcha. All these things can be done by an administrator in Real Time.

Just remember, most processes are just a series of steps executed by people or systems. What the RARE/SMART Suite provides is a way to implement automation quickly, maintain those steps in the Siebel GUI, and enrich the processes themselves (Reporting, Reliability, Refinement).

Monday, June 21, 2010

SR Class Level Read Only

In case anyone has ever wondered about the intricacies of how a Service Request BC record becomes read only when the Status is set to Closed, I hope to add a bit of enlightenment. The premise is that in vanilla Siebel, if I set the Status of an SR to 'Closed', the entire record becomes read only. There are many tickets about this on My Oracle Support, so this may not be new territory. Instead of implementing this through standard configuration, which seems easily possible in modern versions, Siebel has done this at the class level, in CSSBCServiceRequest. The interesting thing for me in researching this is what the actual triggering point is that makes the record read only. Basically, Siebel compares the value of the Status field for the particular record against the Display Value of the LOV where Type = 'SR_STATUS' and Name (Language Independent Code) = 'Closed'.

So if, for instance, the Display Value has been changed in the LOV table to 'Done' for the 'Closed' LOV record of type 'SR_STATUS', then the record must have a Status of 'Done' in order to become read only. Maybe this result is not very interesting; so far, you are right. What is interesting is that if you remove the picklist altogether from the Status field, or change it to a different picklist using a different LOV_TYPE, the same evaluation still occurs. The long and short of it is that there is not a lot of opportunity to customize this functionality without doing some more complicated things behind the scenes, as the class does a lot of hard coded checks to implement this requirement.
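In eScript terms, the check behaves roughly like this sketch (illustrative only; the real comparison is compiled into the class):

// LookupValue returns the current-language display value for the given
// LOV type and Language Independent Code, mirroring the class's lookup.
var sClosedDisplay = TheApplication().InvokeMethod("LookupValue", "SR_STATUS", "Closed");
if (GetFieldValue("Status") == sClosedDisplay) {
    // CSSBCServiceRequest treats the record as read only
}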

Friday, June 18, 2010

A Process Automation offering

I have discussed before that I think business process automation is the ultimate end state of a CRM Architect. What I mean by that is that the ultimate way CRM pays off, and the way we as solution architects create value for our clients, is to take real business processes that occur in our client organizations and design mechanisms to perform them in the CRM solution such that they execute faster, more reliably, and measurably. In this case, the CRM solution is Siebel.

So my own work/life balance finds me pursuing three parallel threads on the work side (we won't get too much into the life side in this blog). First are my client responsibilities. I think all my readers have similar ones, so I won't go into them. Second is this blog. My goal for this blog has always been a technical repository. That is, the solutions I post about here are geared towards technical Siebel professionals and can be leveraged against any functional design. My thinking on why I take an open source approach here basically comes down to the fact that any technical expert who sees this logic can and will take it for themselves. I mean that in the best way possible. What makes a technical expert truly valuable is not only the ability to understand new concepts, but a repository of all the things they have seen and done before. While I think applying the algorithms I post about can return tremendous value to a client, I do not think these types of algorithms can be proprietary in the sense of being profitable. You would spend your life fending off those who copy and modify. I think the value to ourselves comes from showing our clients that we can understand these concepts and apply them. For me, writing a blog about them is a way to demonstrate that, and I encourage other long term Siebel experts to do the same.

The third aspect of my professional pursuits I have not posted about before, but it relates directly to what I believe should be our prime objective when it comes to CRM: process automation. Basically, I have built an alternative to Siebel Workflow/SmartScripts/Task UI with a different set of strengths, covering what I felt was missing in the Siebel base product. It is a task automation engine. The idea is that business processes are typically a series of steps or tasks assigned to different people. So I built a framework in which an administrator can set up and maintain these tasks, and who they are assigned to, in Siebel. The seed of my idea came from the out of the box Siebel Approvals functionality and the Universal Inbox. Vanilla approvals can assign inbox items to a series of named employees or positions, and off the top of my head it is implemented in the Campaigns and Quotes modules. The approval thread can be either linear or parallel. This is interesting, but it is immediately apparent that it is very limited. After all, a well implemented client may have spent a lot of time implementing complex assignment rules to assign objects to people based on all sorts of criteria. I may want to assign approvals to people dynamically based on these same assignments. Or perhaps I want to approve dynamically up a position hierarchy. Or maybe what I need to automate is not technically an approval at all, but a series of service requests. Or maybe I want to assign an item to a different person based on attributes of the item being evaluated, say a dollar amount field.

What I have built hopes to address all of these functional requirements and much more into a product offering that can be administered 100% from the GUI and specifically adds value by addressing the three pillars of process automation:
  1. faster execution - Escalation of items after designated time periods, scheduled reminders, Just in time notification to avoid "notification overload"
  2. with more reliability - System controls the next assignee, rule based, version controlled rule matrices
  3. and to be measurable - see current process status, steps stored in DB, can report on overall process metrics (avg length, bottlenecks, etc)
Getting back to my earlier point about what should be open source and what should not: I have decided that this engine, which includes many functional algorithms, will remain proprietary. I think the approach I have come up with involves some innovations which are not easily reproducible. I hope to use this blog as one mechanism to communicate my expertise in the area of process automation. I appreciate any good will and contacts I receive through this blog, so feel free to let me know if something like this may be of value to your clients/employers. I will continue to post information about this engine in teaser format.

Tuesday, June 15, 2010

Why can't I see that View???

There are times when troubleshooting an issue that you start searching Oracle support and find a ticket that has your problem, but the solution does not apply; then you keep looking and find another one that looked in a different direction, yet still the solution does not apply. And you think, I wish there was a single place that listed all the reasons this could happen. Siebel does do this for a handful of their generic error messages, listing ten different reasons you could be getting that message. Well, I am going to apply that format to some basic configuration items.

The first one is View Visibility. So without further ado, here is a list of reasons you may not see a particular view in the GUI (Please feel free to add additional reasons in comments):
  1. The basics: the View has been created in Tools, compiled into the SRF, and the GUI you are looking at matches the SRF you compiled the view into. I know this part should be obvious, but we are aiming at completeness.
  2. The View needs to be added to a Screen. Make sure the View attribute spelling matches the spelling of the View object. Check the Display In Page flag unless you want this to be hidden, and probably the Display In Site Map flag. These default to True.
  3. The View exists in the GUI metadata: Administration - Application -> Views. Again, make sure the spelling matches the repository object.
  4. The View has been added to a responsibility that your user login has access to.
  5. The responsibility cache has been cleared. Doh!
  6. Log out and log back in (views are not immediately visible in the session where they were added).

Ok those are the basics. Now for the advanced:

  1. Navigate to the Administration - Application -> Responsibilities -> Tab Layout view. Check the responsibility which is primary for your user login in the top applet. Query for the Application you are logged into in the middle applet. Find the Screen you have placed the view under in the third applet, and in the fourth applet, ensure the Hide flag is not checked. This is most applicable when trying to figure out why a vanilla view is not exposed.
  2. Navigate to the Administration - Personalization -> Views view. Query for the view you are having issues with. Review the Condition Expression. This expression should either be null or should evaluate to true for the logged in user. The date range should either be null or should include today's date.
  3. Last but not least, it may be a license key issue. Siebel implements license keys through views. One way to test whether this is the issue*** is to copy the view, add the copied view to the View administration and to a responsibility, clear the cache, and log out and back in. If you can see the view now, then you need to get the license key. Not all license keys indicate additional purchases. There are a couple of instances I have found where a view just dropped out of the standard set as a defect and the fix was to provide a license key to get it back (the Remote System Preferences view is an example I ran into in 8.0).

Some Additional Pointers:

  • Always copy and paste view names between objects or between Tools and the UI to avoid spelling errors, as they are one of the most common problems in this area.
  • Avoid the use of apostrophes to indicate possessives, as this will often cause issues down the road. (I know Siebel has some vanilla instances where they use it, with ...Manager's..., but trust me, avoid this.)
  • When copying views, ensure the Thread and Visibility applets are actually present as View Web Template Items for that view. No error is thrown to the UI if they do not match, but buried in the Siebel.log you will find them. They manifest by not executing the correct search when navigating across views. For instance, if you are on an All view and navigate to the correlated My view, but one that has an invalid applet, the view will appear to change in the UI but the data does not, so the My view will show All view data.
*** This should be done just to test that the license key is the issue. A copied view really should not go into production, as this is a slippery slope which is really not a good idea in the long term (and it's against the license agreement too).

UPDATE: Dos, in comments, points out a better way to see if licensing (or some other problem) is at issue. Just paste the following into your URL after start.swe?, replacing the view name with the one you are having problems with:

SWECmd=GotoView&SWEView=Quote+List+View

You need to replace each space with a +. Since 'Quote List View' is part of the Orders module, which requires a specific license key, you get the following message if you do not have the key:

View 'Quote List View' is not licensed for this site.(SBL-DAT-00327)
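For example, on a hypothetical Call Center web client, the full URL would look something like this (the server name and application path are placeholders):

http://myserver/callcenter_enu/start.swe?SWECmd=GotoView&SWEView=Quote+List+View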

Thursday, June 10, 2010

The Framework - Revised

After some fits and starts, I finally got around to a data dump of my Siebel eScript Framework with some rough instructions on how to implement it. After a very worthwhile back and forth on Jason Le's LinkedIn group, I have some structural modifications to make. The new framework will be implemented as a pair of business services. The main advantage of this is that the code is centrally located in a single repository object, so multiple applications can reference it there. A fairly good case has been made that the logic could all sit in a single business service underneath one object, TheApplication. I think there are decent reasons to do either, but preferences may vary.

Create a new BS called 'eScript Framework' and check the Cache flag.
Its PreInvoke should have the following:


try {
var bReturn = ContinueOperation;
switch (MethodName) {
case "Init":
bReturn = CancelOperation;
break;
}
return (bReturn);
}
catch(e) {
throw(e);
}


Then create a method for each function in the framework from the previous post. So far the Methods I have are:
AddToDate
DateToString
StringToDate
DiffDays
GetLocalTime
GetSysPref
SetSysPref
QueryExprArrayReturn
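As an example of what one of these looks like once moved over, here is GetSysPref declared as a standalone function in the service's general section (a sketch; the body is the same GetSysPref logic shown in the original framework post further below):

function GetSysPref(sPreference) {
//Returns the value in the System Preferences view for the passed preference name
var bcPref = TheApplication().GetBusObject("System Preferences").GetBusComp("System Preferences");
with (bcPref) {
ClearToQuery();
SetSearchSpec("Name", sPreference);
ActivateField("Value");
ExecuteQuery(ForwardOnly);
if (FirstRecord()) return(GetFieldValue("Value"));
else return("");
}
}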

Now, create the Logging BS. Create a new Business Service named, 'eScript Log Framework', and check the Cache flag. Its PreInvoke should have the following:


try {
var bReturn = ContinueOperation;
switch (MethodName) {
case "Init":
var sPath = Frame.GetSysPref("Framework Log Path");
sPath = sPath.replace(/\\$/, ""); //Remove trailing backslash if used
gsOutPutFileName = sPath+"\\Trace-"+
TheApp.LoginName()+"-"+
Frame.GetLocalTime("%02d%02d%d%02d%02d%02d")+".txt";

//Get the System Preference Log Level. Get the Log Level set for this user (if provided)
//and then set the log level for this session
var sLogLevel = Frame.GetSysPref("CurrentLogLevel");
if (TheApp.GetProfileAttr("User Log Level") != "")
TheApp.SetProfileAttr("CurrentLogLevel", TheApp.GetProfileAttr("User Log Level"));
else TheApp.SetProfileAttr("CurrentLogLevel", sLogLevel);
Log.step("Session Logging Level: "+TheApp.GetProfileAttr("CurrentLogLevel"), 1);
bReturn = CancelOperation;
break;
}
return (bReturn);
}
catch(e) {
throw(e);
}


Set the Declarations section to the following:


var gsOutPutFileName;
var giIndent = 2; //Indent child prop sets this many spaces for each level down.
var giPSDepth = 0; // How deep in the property set tree, what level
var CurrentLogLevel = 2;
var gaFunctionStack = new Array(); //used in debugStack function to store called functions
var giStackIndex = 0; //Where in the function stack the current function resides
var gsIndent = ''; //used in debug methods to identify stack indents
var giLogBuffer = Frame.GetSysPref("Log Buffer");
var giLogLines = 0;
var gsLogCache = "";


Then create a method for each function in the framework from the previous post. So far the Methods I have are:
step
StartStack
Stack
Unstack
RaiseError
PropSet
DumpBuffer

Now open up the Server script for the Application object you are using (this should be done in every Application being used where framework functions may be referenced). Add this to the Declarations section:


Object.prototype.TheApp = this;
Object.prototype.Frame = TheApp.GetService("eScript Framework");
Object.prototype.Log = TheApp.GetService("eScript Log Framework");

Frame.InvokeMethod("Init", NewPropertySet(), NewPropertySet());
Log.InvokeMethod("Init", NewPropertySet(), NewPropertySet());


You're done. Log and Frame functions can now be referenced from anywhere in your scripts.
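For instance (an illustrative sketch; 'Framework Log Path' is the system preference this framework already uses), any BC or applet script can now call the services directly:

//Anywhere in server script, once the Application declarations above are in place
var sPath = Frame.GetSysPref("Framework Log Path");
Log.step("Resolved log path: "+sPath, 2);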

Friday, May 28, 2010

Framework Logging Performance

I have been working with my eScript logging framework for a couple of months now and it has been extremely helpful with debugging complicated script procedures. What keeps it from being truly useful in a production environment, though, is that it is too damn slow. In an ideal world, I could leave logging on at all times, or at least for long periods of time for users or groups I want to keep track of. In order to do this, I need to make sure they do not notice a significant degradation in their work day.

The way the logging object functions work in the framework is basically to send one line at a time to the step function, which then opens a file, writes the content along with a timestamp, and closes the file. It is these file operations that take the great majority of the time. A given complicated script takes about 7 seconds with logging turned up to 5, which outputs a 63 KB log file on my thick client. While I could expect server performance to be a bit faster, just turning logging off reduces the execution time to under 2 seconds. What to do...

Buffer the output. It strikes me that I will do the exact same thing Siebel does with its own log files. Ever notice that the SQL spool file or server log files are not always completely up to date with what you are executing in the GUI? This is because the application keeps a buffer of output and only writes the buffer when it is full. So I will do the same thing. I will store a new system preference called 'Log Buffer' which will equal the number of lines to buffer. I will then create some new global variables: one to keep a running line count and one to buffer the output.


var giLogBuffer = Frame.GetSysPref("Log Buffer");
var giLogLines = 0;
var giLogCache = "";


All I have to do is modify my step function to leverage these values. Here is my step function again from the framework Log object with my mods:


step : function ( text, Lvl ) {
//Only process lines that pass the log level check; everything else is skipped entirely
if ((Lvl == null)||(TheApp.GetProfileAttr("CurrentLogLevel") >= Lvl)) {
if (giLogLines >= giLogBuffer) {
//Buffer is full: flush the cache plus the current line to disk
var fp = Clib.fopen(OutPutFileName, "a");
Clib.fputs(giLogCache, fp);
Clib.fputs(Frame.GetLocalTime()+" "+gsIndent + text + "\n", fp);
Clib.fclose(fp);
giLogLines = 0;
giLogCache = "";
}
else {
giLogLines++;
giLogCache += Frame.GetLocalTime()+" "+gsIndent + text + "\n";
}
}
}


After setting the new buffer preference to 20, performance improved drastically. My old 7 second run went down to 2 seconds. I could not even notice that logging was ramped up. My only concern is that things I really want to see right away, like errors, don't appear in the log immediately. So I need to modify the RaiseError function to artificially fill the buffer and force a log dump. Here is the new line I inserted (followed by the existing line):


giLogLines = giLogBuffer; //ensure errors get written to the log file
Log.step("---End Function "+sFunction+"\n");

I would probably need to do something similar on the Application Close event. I am not sure if users hitting the Windows X will trigger this though. Something to think about.
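For completeness, here is a rough sketch of what that flush might look like (my assumption of the approach, reusing the same globals as above; untested against the window-close scenario I mentioned):

function Application_Close() {
//Write out whatever is still sitting in the buffer, even if it is not full
if (giLogCache != "") {
var fp = Clib.fopen(OutPutFileName, "a");
Clib.fputs(giLogCache, fp);
Clib.fclose(fp);
giLogCache = "";
giLogLines = 0;
}
}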

Wednesday, May 26, 2010

eScript Framework - Query return Array

In my post introducing my eScript Framework, I glossed over the functional content of the functions I included, so let me go into some more detail before proceeding. The last function in the declaration was called QueryExprArrayReturn. This function executes a query against the business component whose string name is passed to the function, using a complete search expression which is also passed. The last parameter is an array of fields whose values will be returned via an associative array.

QueryExprArrayReturn : function( sBO, sBC, sExpr, aFields) {
// Use : Frame.QueryExprArrayReturn (sBO : string name of Business Object,
// sBC : string name of Business Component,
// sExpr : search expression to be applied,
// aFields : array of fields to be returned)
// Returns : associative array of field values indexed by field name and record number
var aFnd, bFound, iRecord, sField;
var aValues = new Array();
var rePattern = /(?<=\[)[^[]*(?=\])/g;
with (TheApp.GetBusObject(sBO).GetBusComp(sBC)) {
while ((aFnd = rePattern.exec(sExpr)) != null) ActivateField(aFnd[0]);
for (var c=0; c < aFields.length; c++) ActivateField(aFields[c]);
ClearToQuery();
SetViewMode(AllView);
SetSearchExpr(sExpr);
ExecuteQuery(ForwardOnly);
if (FirstRecord()) {
iRecord = 0;
for (var i=0; i < aFields.length; i++) {
aValues[aFields[i]] = new Array();
aValues[aFields[i]][iRecord] = GetFieldValue(aFields[i]);
}
while (NextRecord()) {
iRecord++;
for (var i=0; i < aFields.length; i++) {
aValues[aFields[i]][iRecord] = GetFieldValue(aFields[i]);
}
}
}
return(aValues);
}
}


What is occurring here is pretty straightforward:
  1. Instantiate a BC using the passed BO/BC strings
  2. Activate any fields used in the Search Expression
  3. Activate the fields needing to be returned
  4. Query using the passed search expression
  5. If at least one record is found, create an associative array of field values where the first index is the field name and the second index is the record number
  6. Continue populating this array for each record found

Probably the most interesting aspect of this function is the use of a regular expression search of the passed in expression to activate any BC fields present there, identifying them by their enclosing square brackets [].
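As a quick usage sketch (the Account BO/BC and the 'Acme' criteria are just hypothetical examples):

//Fetch Id and Location for all accounts named Acme
var aRes = Frame.QueryExprArrayReturn("Account", "Account", "[Name] = 'Acme'", new Array("Id", "Location"));
if (aRes["Id"] != null) //the array will be empty if no records matched
for (var i = 0; i < aRes["Id"].length; i++)
Log.step("Row "+i+": "+aRes["Id"][i]+" - "+aRes["Location"][i], 2);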

Monday, May 24, 2010

eScript Framework - Logging

The ABS Framework apparently has a logging module that Jason describes. This is interesting because I have been building my own logging technique over the past couple of years that largely parallels what I believe the framework does. Understanding object prototyping redirected my thoughts a bit and helped me centralize the effort. My previous post introduced the Frame and TheApp objects. This post will introduce a new object: Log. The following script will also be added to the Application declarations section.


Object.prototype.Log = function() {
return {
step : function ( text, Lvl ) {
if ((Lvl == null)||(TheApp.GetProfileAttr("CurrentLogLevel") >= Lvl)) {
var fp = Clib.fopen(OutPutFileName, "a");
Clib.fputs(Frame.GetLocalTime()+" "+gsIndent + text + "\n", fp);
Clib.fclose(fp);
}
},
RaiseError : function ( e ) {
if(!defined(e.errText)) e.errText = e.toString();
var sFunction = gaFunctionStack.pop();

Log.step("".rPad(100, "*"));
Log.step("*** - ERROR - "+sFunction+" - "+e.errCode);
Log.step("*** "+e.errText.replace(/\n/g, "\n"+gsIndent+"".rPad(20," ")+"*** "));
Log.step("".rPad(100, "*")+"\n");
Log.step("---End Function "+sFunction+"\n");

var sLength = gaFunctionStack.length;
gsIndent = "".rPad(giIndent*sLength, ' ');

if (sLength>0) Log.step("<<-Returning to Function "+gaFunctionStack[sLength-1]+"\n");
throw(e);
},
StartStack : function ( sType, sName, sMethod, Lvl ) {
gaFunctionStack.push(sName+"."+sMethod);
gsIndent = "".rPad(giIndent*gaFunctionStack.length, ' ');
if (TheApp.GetProfileAttr("CurrentLogLevel") >= Lvl) {
Log.step(" ");
Log.step("".rPad(100, "-"));
Log.step("".rPad(100, "-"));
Log.step(sType+": "+sName);
Log.step("Method: "+sMethod+"\n");
}
},
Stack : function ( sFunction, Lvl ) {
gaFunctionStack.push(sFunction);
gsIndent = "".rPad(giIndent*gaFunctionStack.length, ' ');

if (TheApp.GetProfileAttr("CurrentLogLevel") >= Lvl) {
Log.step(" ");
Log.step(">".rPad(100, "-"));
Log.step("Function: "+sFunction+"\n");
}
},
Unstack : function ( sReturn, Lvl ) {
var sFunction = gaFunctionStack.pop();
if (TheApp.GetProfileAttr("CurrentLogLevel") >= Lvl) {
var sString = "";
if (sReturn != "") sString = " - Return: "+sReturn;
Log.step("---End Function "+sFunction+sString+"\n");
}
var sLength = gaFunctionStack.length;
gsIndent = "".rPad(giIndent*sLength, ' ');

if ((TheApp.GetProfileAttr("CurrentLogLevel") >= Lvl)&&(sLength>0))
Log.step("<<-Returning to Function "+gaFunctionStack[sLength-1]+"\n"); }, PropSet : function (Inputs, Lvl) { // Print out the contents of a property set. if (TheApp.GetProfileAttr("CurrentLogLevel") >= Lvl) {
PSDepth++; // Dive down a level
var InpChildCount, inprop, inpropval, inpropcnt;
var BlankLine = ' ';
var Indent = "".lPad(giIndent*PSDepth, " ") + ToString(PSDepth).lPad(2, "0") + ' ';

Log.step(BlankLine);
Log.step(Indent + '---- Starting a new property set ----');
InpChildCount = Inputs.GetChildCount();
Log.step(Indent + 'Value is ........ : "' + Inputs.GetValue() + '"');
Log.step(Indent + 'Type is ........ : "' + Inputs.GetType() + '"');
Log.step(Indent + 'Child count ..... : ' + ToString(InpChildCount));

var PropCounter = 0;
inprop = Inputs.GetFirstProperty();
while (inprop != "") { // Dump the properties of this property set
PropCounter++;
inpropval = Inputs.GetProperty(inprop);
Log.step(BlankLine);

var PropCountStr = ToString(PropCounter).lPad(2, "0");
Log.step(Indent+'Property '+PropCountStr+' name : <'+inprop + '>');
Log.step(Indent+'Property '+PropCountStr+' value : <'+inpropval + '>');
inprop = Inputs.GetNextProperty();
}

// Dump the children of this PropertySet
if (InpChildCount != 0) {
for (var ChildNumber = 0; ChildNumber < InpChildCount; ChildNumber++) {
Log.step(BlankLine);
Log.step(Indent + 'Child Property Set ' + ToNumber(ChildNumber + 1) + ' of ' + ToNumber(InpChildCount) + ' follows below.');
Log.step(Indent + 'This child is on level ' + ToNumber(PSDepth));

// Recursive call for children, grandchildren, etc.
Log.PropSet(Inputs.GetChild(ChildNumber), Lvl);
}
}
PSDepth--; // Pop up a level
}
}
}
}();
var OutPutFileName;
//Indent child prop sets this many spaces to the right for each level down.
var giIndent = 2;
var PSDepth = 0; // How deep in the property set tree, what level
var CurrentLogLevel = 2;
//used in debugStack function to store called functions
var gaFunctionStack = new Array();
var giStackIndex = 0; //Where in the function stack the current function resides
var gsIndent = ''; //used in debug methods to identify stack indents


In addition, I added the following to the Application Start event. The prerequisite for this is the new profile attribute I created in this post, and the creation of a new system preference, 'Framework Log Path':


var sPath = Frame.GetSysPref("Framework Log Path");
sPath = sPath.replace(/\\$/, ""); //Remove trailing backslash if used
OutPutFileName = sPath+"\\Trace-"+
TheApp.LoginName()+"-"+
Frame.GetLocalTime("%02d%02d%d%02d%02d%02d")+".txt";
try {
Log.step("Log Application Start Event", 1);
}
catch(e) {
//default to OOTB Log File Location:
OutPutFileName = "Trace-"+TheApp.LoginName()+"-"+
Frame.GetLocalTime("%02d%02d%d%02d%02d%02d")+".txt";
Log.step("Invalid Preference - Framework Log Path: "+sPath, 0);
}
//Get the System Preference Log Level. Get the Log Level set for this user (if provided) and
//then set the log level for this session
var sLogLevel = Frame.GetSysPref("CurrentLogLevel");
if (TheApp.GetProfileAttr("User Log Level") != "")
TheApp.SetProfileAttr("CurrentLogLevel", TheApp.GetProfileAttr("User Log Level"));
else TheApp.SetProfileAttr("CurrentLogLevel", sLogLevel);
Log.step("Session Logging Level: "+TheApp.GetProfileAttr("CurrentLogLevel"), 1);


Here is an example of these functions in use in a PreInvokeMethod event of a BC:


function BusComp_PreInvokeMethod (MethodName) {
try {
Log.StartStack("Business Component", this.Name(), MethodName, 1);
var bReturn = ContinueOperation;
var sVar1 = "TEST";
switch(MethodName) {
case "TestMethod":
Log.step("Variable 1: "+sVar1, 1);
TestMethod(sVar1);
bReturn = CancelOperation;
break;
}

Log.Unstack(bReturn,0);
return(bReturn);
}
catch(e) {
Log.RaiseError(e);
}
}


And this is how it would be used in a method:


function TestMethod (sVar1) {
try {
Log.Stack("TestMethod", 1);
Log.step("sVar1: ".lPad(30,".")+sVar1+"\n", 2);
sVar1 += sVar1 + sVar1;
Log.Unstack("N", 2);
}
catch(e) {
Log.RaiseError(e);
}
}


The result of this is an individual log file placed in the directory specified by the system preference. Each nested method call is indented 2 spaces.
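To give a feel for the output, here is roughly what a trace of the calls above produces (an approximation I reconstructed from the functions themselves, not a pasted log; the timestamps are invented, the separator lines are trimmed, and I am assuming the BC is named Service Request):

05/24/2010 10:15:30   Business Component: Service Request
05/24/2010 10:15:30   Method: TestMethod
05/24/2010 10:15:30   Variable 1: TEST
05/24/2010 10:15:30     Function: TestMethod
05/24/2010 10:15:30     sVar1: ..................TEST
05/24/2010 10:15:30     ---End Function TestMethod - Return: N
05/24/2010 10:15:30   <<-Returning to Function Service Request.TestMethod
05/24/2010 10:15:30   ---End Function Service Request.TestMethod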

UPDATE: I am no HTML wizard, but I am learning. I updated tags to make the code easier to read.

The Framework

It has been a while but I want to return to the eScript Framework. I have already put up a couple of posts about some potential functions the Framework should include, so let's put a wrapper around this based on Jason's post. My thinking is that this code will all be placed in the Application declarations section. I am still working on understanding the Object prototypes for modifying BCs, which I hope will add significantly more functionality. Here is the script so far:


Object.prototype.TheApp = this;
Object.prototype.Frame = function() {
return {
AddToDate : function ( srcDate, iDays, iHrs, iMin, iSec, nSign ) {
//Use : Frame.AddToDate ( srcDate : Date Object
// iDays, iHrs, iMin, iSec : Integer Numbers
// nSign : 1 or -1 {1 to ADD to the srcDate
// -1 to SUBTRACT from the srcDate } )
//Returns : date object, after adding/subtracting iDays, iHrs, iMin and iSec to the srcDate
var retDate = new Date(srcDate.getTime()); //copy so the caller's date object is not modified
retDate.setDate(retDate.getDate()+nSign*iDays);
retDate.setHours(retDate.getHours()+nSign*iHrs);
retDate.setMinutes(retDate.getMinutes()+nSign*iMin);
retDate.setSeconds(retDate.getSeconds()+nSign*iSec);
return(retDate);
},
DateToString : function (dDate) {
//Use: Frame.DateToString ( dDate : Date Object )
//Returns: A string with the format "mm/dd/yyyy" or "mm/dd/yyyy hh:mi:ss"
var sMon = ToString(dDate.getMonth()+1);
if (sMon.length==1) sMon = "0" + sMon;
var sDay = ToString(dDate.getDate());
if (sDay.length==1) sDay = "0" + sDay;
var sHrs = ToString(dDate.getHours());
if (sHrs.length==1) sHrs = "0" + sHrs;
var sMin = ToString(dDate.getMinutes());
if (sMin.length==1) sMin = "0" + sMin;
var sSec = ToString(dDate.getSeconds());
if (sSec.length==1) sSec = "0" + sSec;
if (sHrs == "00" && sMin == "00" && sSec == "00")
return(sMon+"/"+sDay+"/"+dDate.getFullYear());
else return(sMon+"/"+sDay+"/"+dDate.getFullYear()+" "+sHrs+":"+sMin+":"+sSec);
},
StringToDate : function ( sDate ) {
//Use: Frame.StringToDate(sDate: A string with format "mm/dd/yyyy" or "mm/dd/yyyy hh:mi:ss"
//Returns: a Date Object
var aDateTime = sDate.split(" ");
var sDate = aDateTime[0];
var aDate = sDate.split("/");
if (aDateTime.length==1)
return (new Date(ToNumber(aDate[2]),
ToNumber(aDate[0])-1,
ToNumber(aDate[1])));
else {
var ArTime = aDateTime[1];
var aTime = ArTime.split(":");
if (aTime[0]=="00" && aTime[1]=="00" && aTime[2]=="00")
return (new Date(ToNumber(aDate[2]),
ToNumber(aDate[0])-1,
ToNumber(aDate[1])));
else {
return (new Date(ToNumber(aDate[2]),
ToNumber(aDate[0])-1,
ToNumber(aDate[1]),
ToNumber(aTime[0]),
ToNumber(aTime[1]),
ToNumber(aTime[2])));
}
}
},
GetSysPref : function ( sPreference ) {
//Use: Frame.GetSysPref( sPreference: the preference name in the system preference view )
//Returns: the value in the system preference view for this preference name
var boPref = TheApp.GetBusObject("System Preferences");
var bcPref = boPref.GetBusComp("System Preferences");
with (bcPref) {
ClearToQuery();
SetSearchSpec("Name", sPreference);
ActivateField("Value");
ExecuteQuery(ForwardOnly);
if (FirstRecord()) return(GetFieldValue("Value"));
else return("");
}
},
SetSysPref : function ( sPreference, sValue ) {
//Use: Frame.SetSysPref( sPreference: the preference name in the system preference view,
// sValue: the value of the preference )
var boPref = TheApp.GetBusObject("System Preferences");
var bcPref = boPref.GetBusComp("System PreferencesUpd");

with (bcPref) {
ClearToQuery();
ActivateField("Value");
SetSearchSpec("Name", sPreference);
ExecuteQuery(ForwardOnly);

if (FirstRecord()) {
SetFieldValue("Value", sValue);
WriteRecord(); //commit the update; the original only committed on insert
}
else {
NewRecord(NewBefore);
SetFieldValue("Name", sPreference);
SetFieldValue("Value", sValue);
WriteRecord();
}
}
},
DiffDays : function (date1, date2) {
// Use : Frame.DiffDays ( date1 : Starting Date object, date2 : Another Date object )
// Returns : Number of days between date1 and date2
return ((date2.getTime()-date1.getTime())/(1000*60*60*24));
},
GetLocalTime : function (sFormat) {
// Use : Frame.GetLocalTime ( sFormat : optional Clib.sprintf format string )
// Returns : string of the current timestamp
var dNow = new Date();
var sNow;
if (sFormat != null)
Clib.sprintf(sNow, sFormat, dNow.getMonth()+1, dNow.getDate(),
dNow.getFullYear(), dNow.getHours(), dNow.getMinutes(), dNow.getSeconds());
else Clib.sprintf(sNow, "%02d/%02d/%d %02d:%02d:%02d", dNow.getMonth()+1,
dNow.getDate(), dNow.getFullYear(), dNow.getHours(), dNow.getMinutes(),
dNow.getSeconds());
return (sNow);
},
QueryExprArrayReturn : function( sBO, sBC, sExpr, aFields) {
// Use : Frame.QueryExprArrayReturn (sBO : string name of Business Object,
// sBC : string name of Business Component,
// sExpr : search expression to be applied,
// aFields : array of fields to be returned)
// Returns : associative array of field values indexed by field name and record number
var aFnd, bFound, iRecord, sField;
var aValues = new Array();
var rePattern = /(?<=\[)[^[]*(?=\])/g;
with (TheApp.GetBusObject(sBO).GetBusComp(sBC)) {
while ((aFnd = rePattern.exec(sExpr)) != null) ActivateField(aFnd[0]);
for (var c=0; c < aFields.length; c++) ActivateField(aFields[c]);
ClearToQuery();
SetViewMode(AllView);
SetSearchExpr(sExpr);
ExecuteQuery(ForwardOnly);

if (FirstRecord()) {
iRecord = 0;
for (var i=0; i < aFields.length; i++) {
aValues[aFields[i]] = new Array();
aValues[aFields[i]][iRecord] = GetFieldValue(aFields[i]);
}
while (NextRecord()) {
iRecord++;
for (var i=0; i < aFields.length; i++)
aValues[aFields[i]][iRecord] = GetFieldValue(aFields[i]);
}
}
}
return(aValues)
}
}
}();


This is in addition to the string prototype modifications discussed in earlier posts.
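Since rPad and lPad are used throughout the code above, here is a minimal sketch of what those String prototype helpers look like (my reconstruction for reference; the originals are defined in the earlier posts):

String.prototype.rPad = function (iLen, sChar) {
//Pad the right side of the string with sChar until it is iLen characters long
var s = this.toString();
while (s.length < iLen) s += sChar;
return(s);
};
String.prototype.lPad = function (iLen, sChar) {
//Pad the left side of the string with sChar until it is iLen characters long
var s = this.toString();
while (s.length < iLen) s = sChar + s;
return(s);
};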