Friday, July 23, 2010

My Barcode Promised Land

The effort of trial and error, traversing dead ends, and determining what I could not do eventually led me to what I could. Let me start by saying that if I were a Siebel engineer (completely unaware of what constraints they had to work with), I would have provided an application-level method, called something like BarcodeScan, that could be trapped. I could then put a runtime event on it and trigger a workflow when I was done. But then again, I also would not have coded in the limitations I mentioned earlier.

Barring all that, I still needed a couple of basic things:
  • Hook to trigger additional functional logic
  • Do lookups on Serial Numbers
Additionally, it would be nice to:
  • Minimize the number of clicks
  • Do lookups on the child record of a BC
  • Parse the input so that I could do different stuff based on the type of data
Given those must-haves and nice-to-haves, I decided to hack the business service, trap the methods in question, and just do my own thing. I should mention that my initial approach was more from a wrapper perspective than a replace perspective. That is, I thought I could just trap the method, do my stuff, then continue with the vanilla method. Here is the problem though: since everything that happens in the vanilla method threads occurs outside of the GUI context, I cannot leverage any of the Active... methods. Therefore, to do something as simple as update the record returned by the vanilla lookup, I would have to requery for it in my own objects to get it in focus to update it. Well, if I am requerying for it, what is the point of doing the same query twice? I can just do my own query once in the Active object and then trigger my post events.

Let me start by walking through the most important must-have.

Hook to trigger additional functional logic
I have sort of hinted at how this was achieved in general. Once I realized that the 'HTML FS Barcoding Tool Bar' service was getting called, I modified the server script on this service to log when its methods are called. The important method here is 'ProcessData', which is the one method called regardless of the processing mode in use. At this point you have the barcode data and the entry mode. You can also determine what view you are on via ActiveViewName. I trapped the Find, New, and Update methods in the PreInvokeMethod event to store the current processing mode in a profile attribute:
switch (MethodName) {
    case "Find":
    case "New":
    case "Update":
        TheApp.SetProfileAttr("BarcodeProcessMode", MethodName);
        break;
}
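To show how this hangs together, here is a sketch of the surrounding PreInvokeMethod handler and of the "replace rather than wrap" decision for ProcessData. This is my own sketch under a couple of assumptions: TheApp is just shorthand for TheApplication(), and MyProcessData is a hypothetical local function wrapping the lookup logic shown further down.

function Service_PreInvokeMethod (MethodName, Inputs, Outputs)
{
    // ... the mode-tracking switch above goes here ...

    if (MethodName == "ProcessData") {
        // Replace, rather than wrap, the vanilla behavior: do the custom
        // lookup against the active objects (the queries shown below), then
        // cancel the out-of-context vanilla processing. MyProcessData is a
        // hypothetical wrapper around those queries.
        MyProcessData(Inputs.GetProperty("OPTION"), Inputs.GetProperty("BARCODE"));
        return (CancelOperation);
    }

    return (ContinueOperation);
}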
With these three fields, the View, Process Mode, and Entry Mode, I can query the FS Barcode Mappings BC for a unique record.

var boBCMappings = TheApp.GetBusObject("FS Barcode Mappings");
var bcBCMappings = boBCMappings.GetBusComp("FS Barcode Mappings");
with (bcBCMappings) {
    ClearToQuery();
    SetViewMode(AllView);
    ActivateField("Field");
    ActivateField("Applet BC");
    SetSearchSpec("View", sView);
    SetSearchSpec("Entry Mode", sEntryMode);
    SetSearchSpec("Process Mode", sProcessMode);
    ExecuteQuery(ForwardOnly);
    bFound = FirstRecord();

    if (bFound) {
        ...
What I want to get from that record for now is the lookup field. I also need to know the active BC to do the lookup in. Again, I cannot use ActiveBusComp or ActiveApplet, so I just added a join from the FS Barcode Mappings BC to the repository S_APPLET table, based on the applet name already stored in the admin BC, and added a joined field on S_APPLET.BUSCOMP_NAME. I still feel like there is a better way to do it, but that is where I am at right now. Anyway, from the admin record I have a BC to instantiate, a field to set a search spec on, and the text value of the search spec.
sField = GetFieldValue("Field");
sBusComp = GetFieldValue("Applet BC");

boObject = TheApp.ActiveBusObject();
bcObject = boObject.GetBusComp(sBusComp);
with (bcObject) {
ClearToQuery();
SetViewMode(AllView);
ActivateField(sField);
SetSearchSpec(sField, sLogicalKey);
ExecuteQuery(ForwardOnly);
bFound = FirstRecord();

if (bFound) {
...
My client has multiple barcode processes, so all this could be happening in different places. The last step, then, is to add some logic to branch out my hook. I am keying off the BC name for now, but we could make this more robust:
switch (sBusComp) {
    case "Service Request":
        ProcessSR();
        break;

    case "Asset Mgmt - Asset":
        ProcessAsset();
        break;
}
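Just to make the hook concrete, here is a minimal sketch of what one of those functions might do. ProcessSR is hypothetical: it assumes bcObject (the business component positioned by the lookup above) is declared at the service's declarations level so it is visible here, and the Status value is a placeholder for whatever the real process sets.

function ProcessSR()
{
    // bcObject is assumed to still be positioned on the Service Request
    // returned by the lookup above, with any fields we touch activated
    // before that query ran.
    with (bcObject) {
        SetFieldValue("Status", "In Progress");   // placeholder post-scan logic
        WriteRecord();
    }
}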

The Dead Ends of Barcode Hacking

Most technical blog posts are about solutions. Since this series on Barcodes is also about my journey, I thought it might be interesting to also talk about what I tried out but did not work. Who knows, maybe I can save someone the effort of trying these. Or perhaps the patterns I am finding through these dead ends will help someone head off into a totally new direction as it has helped me.

Auto Enabling
So the first thing I thought would be cool would be to auto enable the Barcode toolbar, and the natural place to do this seemed to be the Application Start event. After a lot of trial and error, my application kept crashing after trying to invoke the 'Active' method. The 'Active' method receives as inputs the active view name and active applet name. The startup page is not actually instantiated yet when the Application Start event executes, so even hard coding a startup page into the input property set results in an application crash. So Application Start is not the right place.

Applet Context
When trying to call various barcode service methods through script, many of them require the applet name as an input parameter. Trying to use ActiveApplet, though, results in the error you would typically receive when you are not in a GUI context, such as when using EAI. ActiveViewName does work, so it is only the applet. I think what is happening is that when you click a toolbar button, even though an applet appears to remain in focus (based on the color pattern of the applets), focus is actually on the toolbar, and hence ActiveApplet does not work. Well, that is my theory anyway.

Default to Find Mode
My client will mainly be using the Find process mode, so I thought that if I could not auto enable the toolbar, at least I could default it to Find mode once it is enabled. So I trapped the Active method on the business service and called the Find method from the InvokeMethod event, after the Active method runs. But this does not quite work. If I click the Enable button twice, though, it does. It appears to be a context issue: it is as if GUI context has been returned to the user before the Find script executes.
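For reference, the attempt looked roughly like the sketch below. Chaining the call through this.InvokeMethod with empty property sets is my reconstruction, not production code, and as described it does not take effect until Enable is clicked a second time.

function Service_InvokeMethod (MethodName, Inputs, Outputs)
{
    // After the toolbar has been enabled (the Active method has just run),
    // try to flip it straight into Find mode. The call executes, but GUI
    // context has already been handed back to the user, so the toolbar does
    // not reflect the change until Enable is clicked again.
    if (MethodName == "Active") {
        var psIn = TheApplication().NewPropertySet();
        var psOut = TheApplication().NewPropertySet();
        this.InvokeMethod("Find", psIn, psOut);
    }
}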

I noticed that a series of barcode events trigger anyway when the Application starts. I therefore tried triggering my auto enable scripts from the tail end of one of these events, again through the InvokeMethod event, but again ran into the context issue.

SWE From Script
The interesting thing to me is that the input parameters to all of these methods are a series of SWE commands, methods, and parameters. It seems as though another browser thread or frame is being used, where SWE commands are the language Siebel uses to initiate the logic. There is probably a way to call a SWE command directly through script, but I am not aware of it. What I am thinking is to use a SWE command to refresh the context of the GUI thread after a barcode method has been called, and then to explicitly call a follow-up method. I cannot do this directly, as the results of the second method call appear to get lost because context has been returned to the GUI before the second call.

Thursday, July 22, 2010

Hacking the 'HTML FS Barcoding Tool Bar' Business Service

In case you were curious what happens in the black box, once the Barcode toolbar is up and running, here is a dump of the Input and Output property sets from each Method that is called:

When the application starts up, the 'IsBarcodeEnabled' method is called about 15 times, is passed an empty property set and returns:
01  Prop 01: IsBarcodeEnabled           / 1

Also on startup, the 'ResetButton' method is called, which appears to set which buttons on the toolbar are turned on or off and which buttons are active. Resetting them makes the Enable button active and off, and the process mode buttons inactive and off, as you can see from the outputs. Here are the Inputs:
01  Prop 01: SWECmd                      / InvokeMethod
01 Prop 02: SWEMethod / ResetButton
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 1
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: NEW_ENABLED                 / N
01 Prop 02: ACTIVE_ENABLED / Y
01 Prop 03: ACTIVE_CHECKED / N
01 Prop 04: UPDATE_CHECKED / N
01 Prop 05: FIND_ENABLED / N
01 Prop 06: FIND_CHECKED / N
01 Prop 07: NEW_CHECKED / N
01 Prop 08: UPDATE_ENABLED / N

The control keys are then determined. First the 'GetStartKeyCode' method is called with these Inputs:
01  Prop 01: SWECmd                      / InvokeMethod
01 Prop 02: SWEMethod / GetStartKeyCode
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 2
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: KeyCode                     / 220

Lastly, the End key via the 'GetEndKeyCode' method with these Inputs:
01  Prop 01: SWECmd                      / InvokeMethod
01 Prop 02: SWEMethod / GetEndKeyCode
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 3
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: KeyCode                     / 220

Clicking the Enable button triggers the 'Active' method, which has these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Active
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 22
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

and these Outputs:
01  Prop 01: OPTION0                     / Service Request
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: OPTION2 / Repair
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: ACTIVE_CHECKED / Y
01 Prop 06: OPTION3 / Pick Ticket
01 Prop 07: UPDATE_CHECKED / N
01 Prop 08: OPTION6 / Serial #
01 Prop 09: FIND_ENABLED / Y
01 Prop 10: Check / 1
01 Prop 11: OPTIONS_LENGTH / 7
01 Prop 12: OPTION4 / Order
01 Prop 13: FIND_CHECKED / Y
01 Prop 14: OPTION5 / Product
01 Prop 15: NEW_CHECKED / N
01 Prop 16: OPTION1 / Asset #
01 Prop 17: UPDATE_ENABLED / Y

Clicking the Find button gives you these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Find
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 11
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: OPTION0                     / Service Request
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: OPTION2 / Repair
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: ACTIVE_CHECKED / Y
01 Prop 06: OPTION3 / Pick Ticket
01 Prop 07: UPDATE_CHECKED / N
01 Prop 08: OPTION6 / Serial #
01 Prop 09: FIND_ENABLED / Y
01 Prop 10: Check / 1
01 Prop 11: OPTIONS_LENGTH / 7
01 Prop 12: OPTION4 / Order
01 Prop 13: FIND_CHECKED / Y
01 Prop 14: OPTION5 / Product
01 Prop 15: NEW_CHECKED / N
01 Prop 16: OPTION1 / Asset #
01 Prop 17: UPDATE_ENABLED / Y

Clicking the New button (on the toolbar) gives you these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / New
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 23
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: OPTION0                     / Serial Number Entry
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: ACTIVE_ENABLED / Y
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: UPDATE_CHECKED / N
01 Prop 06: FIND_ENABLED / Y
01 Prop 07: Check / 1
01 Prop 08: OPTIONS_LENGTH / 1
01 Prop 09: FIND_CHECKED / N
01 Prop 10: NEW_CHECKED / Y
01 Prop 11: OPTIONS_LENGTH / 7

Clicking the Update button (on the toolbar) gives you these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Update
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 24
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: OPTION0                     / Asset
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: ACTIVE_ENABLED / Y
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: UPDATE_CHECKED / Y
01 Prop 06: FIND_ENABLED / Y
01 Prop 07: Check / 1
01 Prop 08: OPTIONS_LENGTH / 1
01 Prop 09: FIND_CHECKED / N
01 Prop 10: NEW_CHECKED / N
01 Prop 11: UPDATE_ENABLED / Y

And perhaps the most important one: scanning the data. This executes the 'ProcessData' method and occurs after the end control character (the second ctrl-\) is received from the scanner. The Inputs are:
01  Prop 01: OPTION                      / Service Request
01 Prop 02: BARCODE / 2-7144002

And these Outputs:
01  Prop 01: Applet Name                 / Service Request List Applet

Keep in mind that in many cases, the actual property values are based on data pulled from the 'FS Barcode Mappings' BC.
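For anyone who wants to reproduce these dumps: I just logged from the service's PreInvokeMethod and InvokeMethod events with a small helper along these lines. The Clib-based file I/O and the file name are my own choices here, not anything the service provides.

function DumpPropSet (sLabel, psProps)
{
    // Append every property in a property set to a flat file for inspection
    var fLog = Clib.fopen("barcode_trace.txt", "a");
    if (fLog != null) {
        Clib.fputs(sLabel + "\n", fLog);
        var sName = psProps.GetFirstProperty();
        while (sName != "") {
            Clib.fputs("  " + sName + " / " + psProps.GetProperty(sName) + "\n", fLog);
            sName = psProps.GetNextProperty();
        }
        Clib.fclose(fLog);
    }
}

Calling DumpPropSet(MethodName + " Inputs", Inputs) from PreInvokeMethod, and the equivalent for Outputs from InvokeMethod, produces listings like the ones above.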

Spelunking in the Barcode Cavern

My new client would like to use a Barcode scanner for a whole variety of Field Service applications:
  • Scan a Shipping Label to look up an RMA Order and update some fields
  • Scan an Asset Label to look up or create an RMA Order Line Item and update some fields
  • Scan an Asset Label to look up a Repair record and update some fields

Siebel Bookshelf and Supported Platforms provide some basic information. There are a couple of approaches to using a Barcode scanner:
  • Treat it like any data entry device. In other words, you prepare your record (click New, Clear to Query, etc.), click into a field, scan your barcode, the scanner copies the translated barcode value into the field, and then you do what you want (save the record, execute the query, etc.).
  • Use the Barcode Toolbar. This has some basic modes (New, Update, Find) and an administration area that ties a View to one or more modes and a field. When you navigate to a view, Siebel (with the barcode toolbar turned on through an object manager parameter) checks to see if any barcode admin records exist for that view and the currently selected mode. If so, these appear in a dropdown in the toolbar that a user can select a value from. If the user then scans something, the application "processes" the barcode depending on the mode: either doing a query based on a specified field, updating a field on the current record, or creating a new record and populating a specified field.
This sounds groovy until you hear about some of the limitations and start thinking about a more realistic process. So here are the limitations as I understand them:
  • Only some Barcode Types (think fonts) are supported.
  • The processing can only occur in the primary BC of the BO, or the Parent BC in a master detail view.
  • Serial Numbers cannot be looked up (I am still investigating why this is but I am guessing it has to do with them possibly not being unique).
  • Only barcode scanners that support sending customizable control characters before and after the scanned input will work
  • A single input value is taken (so no splitting of a concatenated value)
  • You basically have to tell the toolbar what value to expect (again, no intelligent parsing)
Prototyping:
  • Ensure you have the Field Service and Barcode license keys
  • In the Field Service cfg file (if using a thick client), set the ShowBarcodeToolbar parameter to TRUE. Intuitively enough, this will make the Barcode toolbar appear in your app upon restart.
  • Click the enable button (far right hand button) on the toolbar
  • As you navigate to a view, the application performs a query of the 'FS Barcode Mappings' BC (the S_BC_ENTRY_TRGT table) for admin records corresponding to the current view and the currently selected processing mode (the three buttons to the left of the dropdown in the toolbar each correspond to a different mode). If you think about it, this is sort of similar to how Actuate reports are tied to views, except you can actually administer this a bit in the GUI.
  • We can mimic a barcode scan by using <ctrl-\>, followed by the translated value we are trying to scan (SR number for instance), followed by another <ctrl-\>
  • If you want to use a different control character than <ctrl-\> (because maybe that one is already taken or something), these are set on the 'HTML FS Barcoding Tool Bar' business service as user properties. I will leave them be.
So in my real life example, I will:
  1. Navigate to All Service Requests
  2. Click Enable on toolbar
  3. Click the right-most of the left-side buttons on the toolbar, 'Find'
  4. Leave the dropdown as 'Serial Number'
  5. Hit <ctrl-\>
  6. Type in an SR # I can see in the list
  7. Hit <ctrl-\> again
  8. The Application should query for the SR # I entered
I am now going to dive into figuring out a better way to customize this behavior. I'll be back.

Wednesday, July 21, 2010

About defaults, picks, maps and SetField events

That is an eclectic list of things in the title, and no, I do not intend to talk about them all in detail, other than to discuss a bit about how they interact and some of the design implications they may have. So let me start with another list:
  • Pick Maps do not cascade
  • Fields set by a pick map cause a SetFieldValue event
  • Defaults do not cause a SetFieldValue event
  • On Field Update Set BC User Prop will trigger a SetFieldValue event
  • SetFieldValue event triggers a Pick
  • Setting a field to itself does not trigger a SetFieldValue event
So those are the important findings I had to deal with when implementing a seemingly simple requirement. My client had a contact type and sub type. The contact type should be denormalized from the related account's type. Finally, they want to set the contact sub type dynamically to a different value depending on the contact type. By dynamically, I mean not hard coded, so it can be changed without a release.

Let me put all that in functional terms by providing an example. The Account Type has a static LOV with values 'Bank' and 'Government'. The Contact can potentially be created as a child of an account, inheriting information from the account record, and triggering Parent expression default values, or can be created from the Contact screen without an account, but with the option to set the account later. When an account is specified for a contact, the contact type will be set to match the account type, otherwise the contact type should be set to 'Other'. If the Contact type is 'Bank', the contact sub type should get set to 'Retail', and if the contact type is 'Government', the sub type should be set to 'HUD'. So the basic configuration we started with was to put the desired 'dynamic' sub type value in the Low column on the LOV table. Then set up the pick map for contact type as such:

Field              Picklist Field
Contact Type       Value
Contact Sub Type   Low

It would be convenient to just set the pick map similarly on Account Type as:

Field              Picklist Field
Account Type       Value
Contact Type       Value

But the first rule above states this will not work because pick maps do not cascade. This makes some sense, as you could conceivably end up with some circular logic. Another option, in the case where the contact is created as a child of an account, would be to predefault the Contact Type to the Account Type. But again, according to the rules above, a predefault will not trigger a SetField event and hence no pick map.

So in order to trigger the pick map on Contact Type, we need to trigger a SetFieldValue event on this field. What to do. Oh, and I did not want to use script. My solution had a couple of dimensions.
  1. When a contact is created on the Contact screen and the account is picked, I am going to trigger a SetFieldValue on the Account Type by creating a joined field on the Contact BC called Account Type and adding this field to the Account pick map. This will trigger my SetFieldValue event. I will then add an 'On Field Update Set' BC user property to the Contact BC so that when the joined Account Type field is updated, the Contact Type is set to the Account Type. Using a user property will in turn trigger the SetFieldValue event on Contact Type, which will then trigger the pick map to set the Contact Sub Type. So far so good.
  2. My approach for the scenario where a Contact is created as a child of an Account is not as clean. The problem here is that predefaults do not trigger SetFieldValue events, and in this case all the account information will already have been set via predefault, so there is no field being explicitly set by a user to trigger the user property. So I had to get creative. What I did was similar to the above, but I placed identical user properties on the Contact First and Last Name fields. Since these are required fields that are typically entered first, they will trigger the user properties to set the contact type and sub type. In order to minimize the UI impact of this admittedly kludgy design, I wanted the visible Contact Type in the applet to default correctly to the Account Type from the parent record. This means that when the user sets the First Name (or the Last), the Contact Type will already have the correct value, so the user property would essentially set it to itself. The last rule above states this will not trigger the SetFieldValue event. To get around this I created two user properties in sequence, the first to set the Contact Type to null, and the second to set it back to the Account Type. Because I am putting the properties on both the First and Last Name (to accommodate different users' field population sequences), I also want to add a condition to the user properties so they do not execute if the Sub Type has already been set.
What does all this leave us with? In addition to the pick map on the Account field mentioned first, here are the On Field Update Set user properties on the Contact BC:
  1. "Account Type", "Contact Type", "[Account Type]"
  2. "First Name", "Contact Type", "", "[Contact Sub Type] IS NULL"
  3. "First Name", "Contact Type", "[Account Type]", "[Contact Sub Type] IS NULL"
  4. "Last Name", "Contact Type", "", "[Contact Sub Type] IS NULL"
  5. "Last Name", "Contact Type", "[Account Type]", "[Contact Sub Type] IS NULL"
I am going to leave it there, but this actually gets even more complicated. Because a contact can be created from a pick applet from a service request, I also had to account for predefaulting the account to the SR's account and the impact this would have on predefaulting Contact Type and Sub Type. If anyone would like to see how this is done, here is where to start.

Wednesday, July 7, 2010

Expectations and Changes

When doing a Siebel project, there will always be a balancing act between managing client expectations and delivering everything the customer wants. I am not even trying to finesse it when I say managing client expectations. From the way I put it in that sentence, you may have inferred I meant not delivering what the customer wants. But that is not really the case, as frequently the client does not necessarily know what they want, or their understanding of what they want evolves as they understand the capabilities and implications of a CRM strategy/product.

We see this unfold in different ways on different projects. In a green field implementation (new to Siebel), Phase I is typically a data model implementation where the majority of the development work revolves around building views. Now there is obviously a lot that goes on behind the scenes, but from a client's point of view, we are mostly showing them views, and using a view as a way to communicate the concepts of a data model. That is, the view becomes the way to communicate relationships and attributes. The presence or absence of a field on a view becomes a visual indicator of whether a logical attribute exists in our build-out. An attribute expressed as a single value field in an applet provides a visual cue that a user can only enter one value. Because the views provide extensive visual reinforcement, it is easy for stakeholders to identify gaps through the testing and acceptance process by saying: aha, I do not see this field, or I need to enter more than one of that value, or there needs to be a view linking these two objects.

Integration based projects tend not to have the same issues when integrating to a legacy system as there are typically a pair of technical architect types that are fairly knowledgeable about the preexisting data models of each application. The project is mainly a matter of synchronizing these efforts. Testing and user acceptance though can again identify visually when a field or record set is blank to recognize that a gap exists.

Where I am leading with all this is the nature of an automation oriented project. Automation is by its nature typically new. Perhaps the steps have existed, but the mechanisms we are using to automate, to add speed to the process, have never existed before. This adds some expectation management issues that are a bit different than in other types of projects. The types of changes necessary have an added dimension. Gaps in the specifications will likely be caught during the testing phase, such as a field not being populated or a decision branch executing on the wrong condition. The added dimension is time and frequency. For instance, a popular way to automate processes is to add reminders when steps are not executed, or to change the status of a record to indicate an escalation in priority. I would posit that users do not really know how frequently they will want to be reminded because they do not necessarily have a sense of the scale or frequency of the events. Frequently, during an interdepartmental process, one department may perceive the severity of an issue as higher than the department they are working with does. These are important considerations because a user who is reminded too frequently (when in fact they are aware of a task but are waiting on other deliverables in the normal course of performing it) will begin to ignore the reminders. Being informed of outstanding items too frequently causes us to tune them out, as anything that happens so often is typically assumed not to be too severe.

It is likely that system users will request, some time soon after deployment, that these reminders be scaled back or, if the capability to do so has not been built into the project, turned off altogether, thereby losing the value of that particular automation. So, where am I going with all this? While workflows can be redeployed without a major release, it is unlikely most Siebel project teams are actually prepared to do so on short notice. It is possible to account for this by explicitly adding requirements for it, but of course this adds complexity and scope to the project.

This is all why I built the RARE Engine to be extensively customizable in the GUI, including the turning on and off of email reminders, the setting of the text of the reminder/escalation message, and the delay interval between reminders and escalations both on a per person and per process basis. This means that after the process has been automated and deployed, an administrator can tweak these parameters to the individual needs of the user base.

Saturday, July 3, 2010

eScript Framework - GetRecords

Matt has launched YetAnotherSiebelFramework, a blog about... you get the idea. This is an important step forward in this community's attempt to create a true open source Siebel eScript framework. He adds flesh to the skeleton I have assembled here. He will shortly be adding his own posts to explain his functions in more detail, but I thought I would get a head start by opening a discussion about one of his most important pieces, the GetRecords function. I say one of the most important pieces because the real driver behind this solution is to replace the many plumbing steps, as Matt calls them, that sit in so much of our script. For instance, to query an Account by Id (sId) to get the Location, you would write something like this:
var boAccount = TheApplication().GetBusObject("Account");
var bcAccount = boAccount.GetBusComp("Account");
with (bcAccount) {
    ActivateField("Location");
    ClearToQuery();
    SetViewMode(AllView);
    SetSearchSpec("Id", sId);
    ExecuteQuery(ForwardOnly);

    if (FirstRecord()) {
        var sLoc = GetFieldValue("Location");
    }
}
You get the idea. His function essentially replaces this with:
var sLoc = oFramework.BusComp.GetRecord("Account.Account", sId, ["Location"]).Location;
So that is pretty cool. What follows is mostly quibbling, but I think attracting criticism from our peers is the best way to make this framework the most usable it can be. On a technical note, I am using 7.8 and the T engine for my personal sandbox, so I have not yet been able to get Matt's entire framework up and running. Nevertheless, I have gotten his individual functions running, so I will limit my discussion to that scope. Here are my thoughts:

(1) My biggest point is to think about whether it makes more sense to return a handle to the BC rather than filling an array. I am thinking about this in terms of performance. There are times when having the array would be useful, like when I want to perform array operations on the data, such as doing a join. But often, I may just need to test a field value or two and perform operations on other values conditionally. In that case, I would only be using a small percentage of the data I had filled the array with. It may also be useful to have a handle in order to use other Siebel BC functions like GetAssocBusComp or GetMVGBusComp. I do not claim to be a java guru, but I am curious about the performance implications. What I have done with my own framework is to build three functions:
  • Bc_GetArray (this is basically the same as Matt's)
  • Bc_GetObject (stops before filling the array and just returns the handle to the BC; sketched below)
  • Bc_GetInvertedArray (same as Matt's, but makes the fields the rows and the records the columns)
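Here is a trimmed sketch of Bc_GetObject, just to illustrate the idea. The pool variable, the function signature, and the hard-coded Id search spec are my own simplifications, not Matt's API; the point is simply that the positioned BC is handed back instead of being copied into an array, and that something has to keep the parent BO alive.

var oBoPool = new Object();   // keeps business objects alive between calls (simplified stand-in for the framework pool)

function Bc_GetObject (sBoBc, sId, aFields)
{
    // sBoBc is a "BusObject.BusComp" pair, e.g. "Account.Account"
    var aNames = sBoBc.split(".");

    // Cache the BO so the returned BC handle is not orphaned when this function exits
    if (oBoPool[aNames[0]] == null)
        oBoPool[aNames[0]] = TheApplication().GetBusObject(aNames[0]);

    var oBc = oBoPool[aNames[0]].GetBusComp(aNames[1]);

    // Activate only what the caller asked for, then query by row Id
    for (var i = 0; i < aFields.length; i++)
        oBc.ActivateField(aFields[i]);

    oBc.ClearToQuery();
    oBc.SetViewMode(AllView);
    oBc.SetSearchSpec("Id", sId);
    oBc.ExecuteQuery(ForwardOnly);

    // Hand back the positioned BC (or null) instead of copying fields into an array
    return (oBc.FirstRecord()) ? oBc : null;
}

With that in place, the Account example from the top collapses to something like:

var bcAccount = Bc_GetObject("Account.Account", sId, ["Location"]);
if (bcAccount != null)
    var sLoc = bcAccount.GetFieldValue("Location");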
(2) I took out the following two lines:
aRow[aFields[i][0]] = vValue;
if (aFields[i][0].hasSpace()) aRow[aFields[i][0].spaceToUnderscore()]= vValue;
which check whether the field name has a space and, if so, change it to an underscore, and replaced them with a single line:
aRow[aFields[i][0].spaceToUnderscore()]= vValue;
I think this should be more efficient: since a regular expression search is being done regardless, doing the replace in one step saves an operation.
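For context, my assumption is that spaceToUnderscore is a small String prototype extension along these lines (the actual framework implementation may differ), which is why calling it unconditionally is harmless when there is no space to replace:

// Assumed shape of the helper; a global replace is a no-op when there is no space
String.prototype.spaceToUnderscore = function () {
    return this.replace(/ /g, "_");
};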

(3) I like the "Account.Account" syntax of the first argument for most situations. I think we can make this even more robust, though, by allowing an already instantiated BC to be passed in. This is probably infrequently necessary with the pool concept Matt has introduced, but there is a low-cost way to handle either. What I have done is to add a test of the data type:
if (typeof(arguments[0])=="string") {
before starting the pool logic. I then added an else to allow us to pass a BC object in and add it to the pool:
else {
    oBc = arguments[0];
    this.aBc[oBc.Name()] = oBc;
}
(4) I think I understand where Matt is going with the pool as a mechanism to instantiate BCs less frequently. His bResetContext argument, the flag indicating that the pool be flushed, is, I think, unnecessarily drastic. If I understand it correctly, setting this flag to true flushes the entire pool. While this may sometimes be desired, it seems more useful to just flush the BO/BC in play. This would allow you, for instance, to write code in nested loops that jumps between BCs without clearing context when it is not necessary to. I may not be thinking of a situation where the full flush would be necessary, though, so if anyone can think of one I am all ears. My recommendation would be to make the flush just clear the passed BO/BC, and if the full flush is necessary, then perhaps a code indicating one or the other can be used. This could be accomplished by just removing the reference to the FlushObjects function, as the following if/else condition effectively resets the BO/BC variables in the array after evaluating the bResetContext argument.
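To make that concrete, a narrower flush could look something like this sketch. FlushObject, and the aBo member name, are my guesses at how it would slot in next to the aBc pool seen above; only aBc is confirmed by the snippet earlier.

// Hypothetical selective flush (sketched as a method on the framework object):
// drop only the named BO/BC from the pool instead of clearing the whole thing
// when bResetContext is true.
function FlushObject (sBoName, sBcName)
{
    if (sBcName != null && this.aBc[sBcName] != null)
        this.aBc[sBcName] = null;

    if (sBoName != null && this.aBo[sBoName] != null)
        this.aBo[sBoName] = null;
}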

Economies of Scale - Data Edition

In the process of describing how a typical siebel installation reaches maturity, I summarized it thus:
...for any client, the first release or three are about implementing a robust data model, rolling on as many business units as possible to take advantage of the enterprise nature of that data model and gaining economies of scale, and maybe implementing some integration to get legacy data into Siebel
It strikes me that embedded in that sentence is another big picture concept I want to go into further detail about. Putting a call center on Siebel is nice for the Call Center and the managers of that call center from an operational standpoint. Putting a Sales division on Siebel is nice for those sales people and their managers too. In both cases, whenever a customer calls, the business case of using Siebel as a data model applies when we find that this customer has called before and we leverage that information to assist us on the current call.

Perhaps it is obvious, but it is even better when multiple business units are on Siebel, such that any given business unit can leverage the touchpoint history of the other business units when transacting with a customer who has corresponded with both. In other words, if a customer calls the Call Center, and the operator records information about that call, the Sales person can also leverage that same information, and the marketing division can market to that customer from the same database. This is what we mean when we talk about the enterprise nature of the application. The underlying data is to some extent shared with whatever visibility rules are deemed appropriate.

This is useful in the following ways:
  • More likely to get a hit when looking up a master data record
  • Reduces the need to key in master data information that has been entered before
  • Increases the speed at which the user can transact the true nature of the call
  • Reassures the customer that they are known by the business
  • Allows the user (or analyst or system) to identify a trend in the customer's transactions

There will often be a tension between choosing the best application to perform a certain task and gaining the economies of data scale identified above. This tension can be mitigated somewhat through good integration, but it is unlikely to go away completely. That is, SAP may be a better inventory management application, so there is a tension between storing my inventory information in SAP, which has built-in and customizable algorithms, and storing it in Siebel, which, while not as robust, has the advantage of making that data available in Siebel views and linking it to Siebel objects easily. Like I said, we can integrate SAP with Siebel, but this adds cost and complexity (and probably lag time). That does not mean it is not the right decision. In the case of inventory management, depending on how important that functionality is to the customer's core business, it may very well be the right decision. I just want to point out the tension between these concepts.