Friday, April 30, 2010

Escript Framework - lTrim & rTrim

I want to make sure I am making regular contributions to our open source eScript framework function library, so today I will continue with some tools for the String object that are not included in the standard eScript string library. Last time I talked about lPad and rPad, so let's continue with their inverse, lTrim and rTrim. These are pretty basic concepts, present in VB and SQL; the idea is simply to remove all whitespace from the front or the back of a string. Metalink actually provides a couple of approaches to this, one based on regular expressions for Siebel 7.5 and earlier (before the replace function):
String.prototype.lTrim = function() {
  // Split on the leading whitespace, then rejoin what remains
  var aReturn = this.split(/^\s*/);
  return(aReturn.join(""));
}
String.prototype.rTrim = function() {
  // Split on the trailing whitespace, then rejoin what remains
  var aReturn = this.split(/\s*$/);
  return(aReturn.join(""));
}
And here is a looping approach:
String.prototype.rTrim = function() {
  // Count the trailing spaces, then return the substring before them
  var iCount = 0;
  var iLength = this.length;
  while ((iCount < iLength) && (this.substring(iLength-iCount-1, iLength-iCount) == " ")) iCount++;
  return(this.substring(0, iLength - iCount));
}
String.prototype.lTrim = function() {
  // Count the leading spaces, then return the substring after them
  var iCount = 0;
  var iLength = this.length;
  while ((iCount < iLength) && (this.substring(iCount, iCount+1) == " ")) iCount++;
  return(this.substring(iCount, iLength));
}
But if you are on a version later than Siebel 7.5, using the replace function with regular expressions anchored to the end and the beginning of the string, respectively, should be the best approach:
String.prototype.rTrim = function() {
  //Use: String.rTrim()
  //Returns string trimmed of any whitespace characters at the end of it
  return(this.replace(/\s*$/, ""));
}
String.prototype.lTrim = function() {
  //Use: String.lTrim()
  //Returns string trimmed of any whitespace characters at the beginning of it
  return(this.replace(/^\s*/, ""));
}
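Usage (chain the two when you need a full trim):
var s = "   Trim Me   ";
var l = s.lTrim();         // "Trim Me   "
var r = s.rTrim();         // "   Trim Me"
var t = s.lTrim().rTrim(); // "Trim Me"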

Wednesday, April 28, 2010

Modifying the Personalization Profile

You may have found Profile Attributes to be a useful addition in Siebel 7, a better alternative to global variables. The most common use of profile attributes is exactly that. You use the script expression:
TheApplication().SetProfileAttr("CurrentLogLevel", 5);
This variable can then be referenced elsewhere in script:
var iLogLevel = TheApplication().GetProfileAttr("CurrentLogLevel");
Alternatively, you can get this value in a calculated field using just:
GetProfileAttr("CurrentLogLevel")
OK, so that is pretty elementary, as it is documented pretty well in Bookshelf. This assumes a relatively dynamic global variable, one that is determined programmatically during a user session and whose value is lost when the session ends. But what if you want to reference a user attribute that persists? There are already a couple of specialized functions that reference some user attributes:
PositionName()
LoginName()
LoginId()
PositionId()
But what if you want to add your own? So here is my requirement. I want to store a custom logging level attribute that can be set on a user by user basis. I will go into more detail in future posts about what I might want to do with this value. So the first thing to do is to either find a place to put this value or to extend a table to make a place. I prefer the latter, so I am going to add a new number column, X_LOG_LEVEL, to the S_USER table:
This field should then be exposed in the Employee BC (it could alternatively be exposed in the User BC if an eService or eSales application were in play, but I am going to keep this simple for now). Create a new single value field to expose this column (I added some validation too):
Now, in order to actually use this value, I will need to expose it in the GUI. So let's put it in the Employee List applet so it will be visible in the Administration - User -> Employees view.
Once the list column exists, edit the web layout and add this control to the list in the Edit List mode. Next, I am going to expose this field in a very special BC, Personalization Profile. This BC is instantiated when the user logs in, and its fields basically represent all the potential attributes of the logged in user, including the User, Employee, Contact, Position, Division, and Organization. As I will show in a minute, its fields are referenceable as profile attributes. There is already a join in this BC to the S_USER table, so just create a new SVF to expose this column. Finally, let's add some script to the Application Start event for the application you are using. What I want to do here is set a global variable type profile attribute, the CurrentLogLevel attribute I described at the beginning of this post, to the value on the User record (which I get from the Personalization Profile) if one was set, otherwise to a constant value. Then, if the logging level is sufficiently high, start application tracing and push a line to that new file:
var iLogLevel = 3;
// Use the persistent user level setting if one exists, otherwise the default
if (TheApplication().GetProfileAttr("User Log Level") != "")
  TheApplication().SetProfileAttr("CurrentLogLevel",
    TheApplication().GetProfileAttr("User Log Level"));
else
  TheApplication().SetProfileAttr("CurrentLogLevel", iLogLevel);
// At the higher levels, turn on application tracing
if (TheApplication().GetProfileAttr("CurrentLogLevel") > 4) {
  TheApplication().TraceOn("Trace-" + TheApplication().LoginName() + ".txt", "Allocation", "All");
  TheApplication().Trace("Application Started");
}
Ok. Now compile everything. The base case is with no user log level set. When you open the application, the Application Start event will trigger, and since I have not yet done anything with my user log level, it will default to 3 and no trace file will be created. You'll have to trust me so far. Next, navigate to Administration - User -> Employees and query for your login. Right click to show the columns, and move User Log Level to the Selected Columns list. Now set this field's value to 5. Log out and log back in. You will now find a file in your \BIN directory called Trace-SADMIN.txt (I obviously logged in as SADMIN, but the filename is dynamic as well) with the line:
Application Started
Done. I have used this for a logging attribute I will talk about more in future posts, but you can use this feature to store any type of data on a table linked to the logged in user. This is sometimes useful for referencing functional information relating to certain business processes. For instance, you could use an attribute from this BC in a calculated field on a different BC to determine whether a record on that BC should be read only, as sketched below. Good luck!
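As a hypothetical sketch of that last idea (the field name and threshold are made up, and the comparison assumes the attribute holds a number): add a calculated field to the target BC, then point the BC Read Only Field user property at it.

Calculated field "User Record Locked", with calculated value:
IIf (GetProfileAttr("User Log Level") < 3, "Y", "N")
BC user property: BC Read Only Field = User Record Locked

Records in that BC then become read only for any user whose log level is below 3.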

Tuesday, April 27, 2010

The Big Picture - Let the Customer Do the Work

This is another entry in what I hope to be a series of Big Picture posts that are more about CRM technology strategy than step by step config or code snippets. My last post kicked it off, discussing how automation is where the value is. I stand by that statement, but I should at least add one very important addendum. I drew a picture of a CRM implementation as starting with a huge data model in which your client can begin collecting lots of detailed information about customers and their transactions. Transactions involve not just leads and orders, but also the service requests, activities, and other custom task records which together create a profile of a customer's interactions with the client. Automation generally targets the transaction side of that data model. For instance, when a customer calls in to report a problem, an automation project could result in a series of service requests automatically generated and assigned to the right people at the right time, with the right attributes set describing the work that needs to be done to address that problem.


Another great way to "automate" though is to actually eliminate the layers between the customer and the data representing them. In other words, let customers manage their own data wherever possible. This makes the most sense for updating profile information like name and address corrections, but it can get far more complex depending on the nature of the client's business. The tools to implement this can be housed within Siebel through eService, etc., or be external (a third party or custom built portal application that integrates to Siebel). Either way, customer updates directly update master data in the Siebel database (or wherever the customer master is stored).


I have been at clients that are afraid of this paradigm, afraid that users will enter garbage data, but mainly afraid of losing control. While it makes some sense to build a light validation framework around these interfaces, the goal of the framework should be to make things easier for the customer, not to lock down data. At the end of the day, the customer owns their relationship with the client, and it is not in their interest to purposefully corrupt this data. Plus, anyone interested in doing so can just as easily do so over the phone. Allowing customers to directly update as much of their profile as your client's business allows should reduce the service request volume handled by the client's support staff. At a minimum, the creation of service requests through the internet can smooth out call center operations, reducing peak call times by rerouting some of those calls to the web, and reducing downtime by providing emails to respond to in the interim between call peaks.


This concept applies to Sales as well as Service organizations. The ability to actually complete orders over the internet, though, will be largely constrained by the complexity of the product being sold. Clients will have to weigh the incremental value of making some additional sales over the internet (a universe of prospective customers versus just those the in house sales staff can explicitly reach) against the potentially reduced margins of a simplified product offering capable of being sold in this way.

The Big Picture - Automation

I said in my opening post that this blog would be mostly technical in nature, but I also want to wax eloquent on some best practice issues and some general CRM technology strategy. So let me start by talking about a proposition a friend of mine in the industry presented to me several years back. What he said was basically this (paraphrasing): "Siebel is damn expensive. Most clients spend millions on all the license fees and support costs, and never use more than the underlying data model. So instead of wasting all that money, let me put together a team of developers to install an open source CRM product and customize a data model for you on the cheap." The revenue model for him was in hosting that solution, but you could make some money off the customization too.

I find this to be a pretty compelling argument because, in general, he is right. Most Siebel customers I have seen basically do use Siebel as a data model. What I mean by that is they have deployed various numbers of views (depending on how widespread their user base is) with essentially the functionality to capture all sorts of data in elaborate ways. Now, when Siebel first came out, that in and of itself was a pretty powerful tool, both for end users, who could use it to improve their customer interactions by first reviewing their history, and from a management reporting (trends and forecasts) point of view. But let's face it. Lots of applications can do that now. I mean, the basic idea of a table representing an account, a contact, an opportunity, and a bunch of transaction data with some views sitting on top of them is not exactly revolutionary anymore. So buying Siebel and using it in this way is not exactly going to return a lot of value. And the incremental bells and whistles that have been added along the way in the form of various interface platforms are nice, but do not really separate Siebel from the pack.

Well, I am still working with Siebel many years later, and since I don't think it is enjoyable to work with a product you don't believe in, I must have resolved this value proposition. The key is really to drive process automation. Most initial implementations will probably still consist of implementing and customizing a data model, maybe some interfaces to fill that model with data from internal and external sources. But the important thing for clients, and for good system integrators, is to sell their sponsors on the value behind process automation. This does a couple of things:
  1. Increases the return on the investment
  2. Reduces the opportunity cost of not investing in a simpler alternative.
  3. Delivers a tangible benefit to the day to day end user which improves user adoption.
The first two are different sides of the same coin. By automating a process, you are increasing the client's bottom line in a number of potential ways:
  • Reducing the number of steps a user is executing, thereby either freeing up user time to do other things, reducing the need to hire additional users, or allowing for maintaining constant operations with fewer users (yeah, that is just a really complicated way of saying laying people off)
  • Decreasing the time it takes to do things. This can lead to earlier sales, earlier conversions, and reduced downtime; all equating to higher revenues.
As for improving user adoption, how does automation do this? Keep in mind that when deploying a data model type implementation, the day to day user may end up actually doing more perceived work. In other words, in order to actually capture all those robust attributes about every customer, deal, contact, order, etc., that data has to be entered by an end user. The presence of this data in a single system may save that user time in the long run by:
  • Not having to enter it multiple times themselves
  • Sharing the data with other users so they do not have to enter it again
  • Helping incentive-compensated users see a better conversion rate as they use the information better
But let's face it, many day to day end users are not the savviest user groups, and they see a Siebel implementation as more work for them to do. This is largely perception, but that is the hand we have been dealt. Automation is a way to counter this. First, it is usually quite visible. Users see the records that have been created "behind the scenes" and know that the system did that job for them (obviously there is a sales job here for us to make sure users know this). This has a certain "oohs and aahs" factor to it. Second, more specifically, it directly reduces the amount of data entry an end user actually has to do.

So the strategic lifecycle of a good Siebel implementation will first vastly increase the amount of data captured about the CRM universe, and then automate the way that data is captured and used. This should be a fairly iterative process so that end users are never overwhelmed by it. Siebel provides a whole tool kit of platforms to deliver this automation:
  • Assignment Manager
  • Data Validation Manager/Haley's Engine
  • Custom Workflow Processes
  • Task UI
  • Smart Scripts
  • Analytics (outside of Marketing, this can be a trigger to start processes to upsell or to solve problems before they happen)
And not to sell this community short, it takes a really good integrator to get the job done right. Knowing how to use one of the above tools from a technical standpoint does practically nothing if it is not implemented in a way that makes strategic sense for the client. I have done a lot with Assignment Manager, Workflow, and DVM, so I may touch on those in future posts.

Tuesday, April 20, 2010

How to call Workflow - Custom Remote

My previous posts discussed some of the mechanisms to call a workflow process, along with their pros and cons. I alluded to how the desired mode would determine which business service to call. Workflow Process Manager is the vanilla business service which allows for executing a workflow process in Local Synchronous mode. Siebel provides another business service, Server Requests, which is capable of starting a task in a different component, hence running the process remotely. Unfortunately, the necessary inputs to this business service must be passed in a child property set, so it cannot be called directly from the action of an action set using the business service context attribute. What needs to be done is to call a custom shell business service which fills a child property set with the correct attributes, then calls the vanilla Server Requests business service.

Let's see how this is done. First of all, a new custom business service needs to be created:

Now create a new custom method for this service:

Finally, create the following method arguments for this new method:

Ok. Now for the script. The script I am about to specify has several features:

  • Calls the Server Requests business service, which runs a workflow process in the Workflow Process Manager server component in its own thread
  • Input properties to the Server Requests business service cannot be passed through the business service context attribute of an Action, as Siebel expects these properties in a child property set
  • Can be called from a Run Time Event-Action by setting all properties as Profile Attributes
  • Can be called from a business service or workflow process by setting all properties directly as Input properties
  • Takes input properties, either from profile attributes or from the input property set, and passes them as custom input properties to a workflow process
  • The Profile Attribute name will be "PassThrough1", for instance, and the profile attribute value must be in the format "<Property Name>", "<Property Value>"
  • Can optionally specify a specific application server to run on; otherwise the load balancer will assign the server
  • By default, will run in Remote Synchronous mode if the user is connected; otherwise will run Remote Asynchronously
  • Can optionally be set to run explicitly Remote Asynchronously

This is written in eScript, and there are plenty of different ways to go about it, but this is a rough guide (actually, not so rough):
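Here is a minimal sketch of the core of such a service (the method and argument names are mine, it omits the PassThrough and server-selection options listed above, and the Server Requests property names follow the common Bookshelf examples, so verify them against your version):

function Service_InvokeMethod(MethodName, Inputs, Outputs)
{
  if (MethodName == "RunWFRemote")
  {
    // When called from a runtime event Action, these values could instead
    // be read with TheApplication().GetProfileAttr()
    var sProcess = Inputs.GetProperty("ProcessName");
    var sRowId = Inputs.GetProperty("RowId");
    var sMode = Inputs.GetProperty("Mode");
    if (sMode == "") sMode = "Sync";

    var oSvc = TheApplication().GetService("Server Requests");
    var psIn = TheApplication().NewPropertySet();
    var psOut = TheApplication().NewPropertySet();
    var psChild = TheApplication().NewPropertySet();

    // The top level properties tell Server Requests where and how to run
    psIn.SetProperty("Component", "WfProcMgr");
    psIn.SetProperty("Method", "RunProcess");
    psIn.SetProperty("Mode", sMode);

    // The workflow inputs must travel in a child property set, which is
    // why an Action's business service context cannot do this directly
    psChild.SetProperty("ProcessName", sProcess);
    psChild.SetProperty("Object Id", sRowId);
    psIn.AddChild(psChild);

    oSvc.InvokeMethod("SubmitRequest", psIn, psOut);
  }
}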

Monday, April 19, 2010

How to call Workflow - Custom

An alternative to "managed" runtime events, meaning those that the process of activating a workflow process creates and updates, is to create your own runtime events and action sets. This allows you to create complex sequences of workflow process calls, perhaps setting several profile attributes that are needed in the workflow, calling different versions of a workflow process based on mutually exclusive conditional expressions, or calling workflow in different calling modes. With great power comes great responsibility. Walking away from the simple management the activation process provides gives you a lot of flexibility, but it also makes your system more complicated and deployment a bit trickier. I think I have now said deployment is tricky either way, which, when you are talking about complicated processes, is probably true regardless. I tend to think the custom route is easier to understand because you control all the steps, and the names are more logical because they are your names.

So let me start by defining what I mean by custom events. You can achieve the same effect as the managed architecture by creating your own runtime event in the GUI (Administration - Runtime Events) with the same parameters used in the Tools workflow palette's start branch step attributes.

Then you create a custom Action Set with as many custom Actions as necessary. Conditional expressions can be used to prevent instantiating a Workflow Process Manager component unnecessarily. The step that actually calls the workflow will have an Action Type of 'BusService'. The Business Service Name will depend on what mode you want the workflow to run in. To simply replicate what the managed process does, just set the Business Service Name to 'Workflow Process Manager', the Method to 'RunProcess', and the Business Service Context to "ProcessName", "workflow name", where workflow name is. . . you know. This is a Name, Value pair format for properties accepted by the business service.
It is important to recognize that dynamic process properties cannot be passed to the workflow through the business service context. This is because process properties are expected in a child property set. Plus, the business service context cannot resolve expressions. If you need to pass dynamic process properties to the workflow, you have two options. You can either set a profile attribute that can be referenced in the workflow (sketched below), or you can create a shell service, which I will get into in my next post.
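To illustrate the profile attribute option (the attribute name here is hypothetical), an earlier Action or a bit of script sets the value:

// sAccountId was populated earlier in the script
TheApplication().SetProfileAttr("WFAccountId", sAccountId);

and a process property in the workflow picks it up with a default value expression of:

GetProfileAttr("WFAccountId")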

How to call Workflow - Managed

Workflow can be called a variety of ways, with a variety of advantages for each. This will be the first post in a series about the pros and cons of these various mechanisms, along with a tool to assist with some of these approaches.

The most basic, declarative, out of the box way to call workflow is through a runtime event managed by the start step of the workflow process. Let's take a basic workflow that is meant to call a custom business service when the Account is updated. If you select the branch from the start step, the WF Step Branch applet appears beneath the palette.

The Event attributes specify a Runtime Event and Action Set that will be created in the GUI when the workflow is deployed and activated. The Action Set has the naming convention Workflow_Row_ID, where Row_ID is that of the Action Set record. The Action Set has a single action named Workflow_Row_ID_Row_ID, where the first Row_ID is that of the deployed workflow process (it can be found in the Active Workflow Processes applet of the Workflow Deployment view). I am not really sure what the second one is.
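So with made-up Row IDs, a deployed process might generate an action set named Workflow_1-AB123 whose single action is named something like Workflow_1-XY987_1-CD456, with 1-XY987 being the Row ID of the deployed workflow process.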

A workflow process called in this way is considered to be running in Local Synchronous mode. The Local refers to the fact that it executes in the same thread it was called from. So if a user action in the GUI triggers the runtime event, the workflow is executed within the Sales Object Manager component (or whichever); if an EAI update triggers the runtime event, the workflow is executed in the EAI Object Manager. Therefore, turning up event logging to troubleshoot a workflow called in this manner requires doing so on the Object Manager. The Synchronous part refers to the fact that the user session will wait until the workflow has completed execution before control is returned to the user.

If instead of a business service you had Siebel Operation steps in the workflow, it is important to realize that any queries executed would take the most restrictive visibility of the object being queried. In the workflow above, which is based on an update to the Account BC, adding a Siebel Operation Query on the Account BC would automatically apply Sales Team visibility to the query. Where this sometimes has an impact is if Assignment Manager runs synchronously between the time the triggering record is committed and the time the workflow executes. In that case, the account may no longer be visible to the person who updated the record.

Some of the complexities in managing workflow this way appear when you need to do a series of things in a particular order. Finding the generated records under this naming convention can be confusing, but in theory, knowing the convention means you can identify the action set you need, and you could then add your own custom actions to that action set, say to set some profile attributes prior to calling the workflow. Or you can add a conditional expression to the action. The hard part is when you need to deploy a new version of that workflow. The deployment and activation process which manages the naming convention can corrupt your custom actions, requiring you to set them up all over again. Deployment in general can get tricky with this approach because the Row Ids are used in the names, and Row Ids between environments will differ for the same process. This means that you need to deploy and activate workflows in each environment to generate the runtime events and action sets, but if you need to migrate runtime events and action sets for other reasons, either for personalization or other custom development, you will need to be careful that the workflow managed runtime events from your source environment do not migrate and hence become duplicated in the target environment.

UPDATE: I recalled another serious limitation of this approach.  Workflow called in Local Synchronous mode will only execute if the active business object at the time the runtime event is triggered matches the business object of the workflow.  This implies that if you have a runtime event on the write of an Activity, you would need multiple workflows, one for each business object the Action BC appears in that needs to trigger the workflow.  If you have determined this is still the best approach to use, I suggest a series of shell workflow processes that call a single primary workflow as a subprocess where the primary is based on the main Activity BO, and the shells are based on the various other BOs, all having a single step.

Thursday, April 15, 2010

Migrating Meta Data - The Release field

Something I have now implemented at a couple of clients, and find quite useful, is an enhancement to ADM (Application Deployment Manager) that groups metadata items together for the purpose of a release. Out of the box, ADM has data types for migrating LOVs, Views, Assignment Rules, etc. from a source environment to a target environment. The mechanism provides for a search specification to determine which records of each data type should be migrated. This search specification can get quite complicated over time though.

What I propose is to extend the base record of each metadata object with a new column signifying the release. Simply add the column X_RELEASE to the parent table (S_ASGN_GRP for Assignment Rules, for instance). Expose this column in the business component based on that table, and in the applet based on the BC. I recommend a varchar data type, which provides a little flexibility in how you version your releases. Finally, modify the ADM integration objects. These are the ones prefixed with 'UDA'. You will need to add the same Release field (corresponding to the BC field name) to the Integration Component corresponding to the BC the field was added to.

Now, in the GUI, you can mark all the records of that metadata object with the release description, such as '1.0'. In the Deployment Filter of the Deployment Project/Session, you can then specify [Release] = '1.0'.

Tuesday, April 13, 2010

Escript Framework - lPad & rPad

So I have been following the discussion that started on Impossible Siebel 2.0: Introduction to the ABS Framework, which actually started even earlier on other Siebel blogs, about using the eScript prototype feature to create a more robust Siebel scripting framework. This got me pretty excited, as I think it has a lot of potential for the many clients that use extensive script. Jason at Impossible Siebel is in the process of elaborating on this, but I want to jump right in and start posting about some real world examples that could become part of an eventual library. For instance, since eScript does not have an lPad or rPad function, I think those are pretty good starts. These are relevant to strings, so we need to modify the String prototype. Here is the script I came up with:

String.prototype.lPad = function(PadLength, PadChar) {
  //Use: String.lPad(Number of Characters, Character to Pad)
  //Returns: string with the PadChar prepended until the string is PadLength in length
  if (this.length >= PadLength) return this.toString(); // already long enough
  return Array((PadLength + 1) - this.length).join(PadChar) + this;
}
String.prototype.rPad = function(PadLength, PadChar) {
  //Use: String.rPad(Number of Characters, Character to Pad)
  //Returns: string with the PadChar appended until the string is PadLength in length
  if (this.length >= PadLength) return this.toString(); // already long enough
  return this + Array((PadLength + 1) - this.length).join(PadChar);
}

Usage:
var s = "Short";
var r = s.rPad(10, "_"); // "Short_____"
var l = s.lPad(10, "_"); // "_____Short"

UPDATE: Thanks to commenter Jason who points out a more efficient algorithm. I have updated the original post.

Thursday, April 8, 2010

Predefaulting Joined Fields on Created By

My client had a requirement to expose the Created By user in a list applet, but instead of just showing the row id of the user record, or even the user login, they wanted a more intuitive expression, which I suggested to be [First Name] + ' ' + [Last Name] + ' (' + [Login] + ')'. This works great for them. The only problem is that the field would not predefault correctly when the record was first created; it would only look right after a refresh or requery of the BC. I did a search of Metalink and came up with SR 38-1137205351, which basically said what I wanted could not be done, because the join to S_CONTACT to get the first and last name would not happen until a requery, and there is no system function (like LoginId() or PositionName()) for the creator's full name.

But then I realized I could use a Profile Attribute from the 'Personalization Profile' BC. This BC, if you are not familiar with it, contains a collection of attributes about the logged in user and all the entities linked to that user (position, organization, contact, division, etc.). You can even customize this BC to add your own custom attributes that can be referenced from anywhere, though that was not necessary for what I needed. The attributes in this BC can be referenced using a simple GetProfileAttr statement, which can be used in an expression for calculated fields and pre/post default expressions. So here is what I ended up with:



A predefault expression of Expr: 'GetProfileAttr("First Name")', and so on for the other pieces (the full expression is sketched below). The Created By Display field is what is exposed in the GUI. In addition, a pick map will be necessary on this field in the BC:
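For reference, the full predefault expression for the display field would look something like this (assuming Last Name is exposed as a profile attribute the same way, and using LoginName() for the login piece):

Expr: 'GetProfileAttr("First Name") + " " + GetProfileAttr("Last Name") + " (" + LoginName() + ")"'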