Process Flow

Lawson AGS Caching Pt 2

I previously posted about what Lawson AGS caching was here. After some questions and reviewing the post, I realized that I didn’t fully explain how to use it within the context of Process Automation or ProcessFlow Integrator. Hopefully, this post will answer those questions. Since we’re on Process Automation, my example will be from PA. The only difference between Process Automation and ProcessFlow Integrator is that PA will return the XML from the Lawson Transaction node (AGS); in ProcessFlow Integrator you will have to use a WebRun instead. Neither product (to my knowledge) will return the _TRANSID as one of the fields from the AGS node.

In this example, I will inquire on GL10.1 with cache set to true and then update the Company name field. For those who are just joining us (and didn’t read part 1), by turning on caching, I can update just the company name field in the update node instead of having to pass in all of the fields again.
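As a sketch, the cached update call only needs the key field, the changed field, and the transaction id returned by the inquire. GLS-COMPANY and GLS-NAME are invented field names for illustration, and the standard AGS transaction parameters are elided:

```
_TKN=GL10.1&_CACHE=TRUE&_TRANSID=<from the inquire>&GLS-COMPANY=100&GLS-NAME=New Company Name&...
```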

Here is my sample flow:

  1. I have two Start node variables: strXML and strTRANSID.
  2. Inquire on GL10.1. If you’re on ProcessFlow Integrator, use a WebRun here.
  3. Fix the XML data. The flow’s XML parser will choke on a period (.) in a node name, and unfortunately the AGS response has a period in the node that contains the form name (GL10.1 in this case). We must remove it or the parse will fail.
  4. Parse the XML string.
  5. Get the _TRANSID. This step is not strictly necessary.
  6. Update the company name on GL10.1.

GL10.1 Inquire


Update XML string
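The Assign node’s expression can be sketched like this, assuming the inquire response was captured into strXML (the sample XML value here is invented for illustration):

```javascript
// strXML holds the AGS response from the inquire (sample value for illustration)
var strXML = '<GL10.1><GLS-NAME>Acme Co</GLS-NAME></GL10.1>';
// remove the period from the form-name node so the Parse XML node can handle it
strXML = strXML.replace(/GL10\.1/g, 'GL101');
```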


Parse XML
The action is “Parse XML String” and the input value is strXML, which was built in the Assign node.



GL10.1 Update




Predicting the future with Process Automation, or: What would Process Automation do?

I hate surprises. I hate surprises to the point that I actually don’t care if I know what happens in a movie before I see it. Consequently, I spend a lot of time trying to figure out what is going to happen next, just so I’m not surprised. This got me thinking about predicting the future with Process Automation (yes, I realize this is an odd thing to think about, but you have to have a hobby, right?). It is fairly easy to figure out what happened in the past in a flow, either through a variable you have set or by viewing the output variables from a node. But is it so much harder to figure out what is going to happen next? As it turns out, not really.

I’m not sure that I have a business case for this; it’s just an interesting exercise, which is why you got the preamble about me and surprises. Back story aside, Process Automation (and Process Flow – although slightly more complicated) can be self-aware. By being self-aware, and given that a flow is a static set of instructions, you CAN predict the future – as in, you can figure out what will happen next within a flow. Before I lose everyone, I guess I should start getting to the good stuff.

It all starts with the fact that a flow is an XML document. With Process Automation you can read the XML that comprises the flow and search for the information that you need using E4X syntax. All of the information about the flow is in the XML, so you can effectively read ahead to what the flow will do next and thus predict the future.

Here is a simple flow that demonstrates reading ahead. I will attempt to determine the value in the “To” field of the email node at the end.

In order:

  1. Run a Landmark query to retrieve the XML of the flow. If you’re using ProcessFlow Integrator, you would need to retrieve the XML from the file system.
  2. Parse the string information into an XML document.
  3. Use JavaScript to retrieve the To value of the email node.
  4. Email node – it just exists so we can read it.

Landmark Query
_dataArea="prod" & _module="pfi" & _objectName="PfiFlowDefinition" & _actionName="Find" & _actionOperator="NONE" & _actionType="SingleRecordQuery" & _pageSize="30" & PfiFlowDefinition="ReadFutureNode" & CurrentFlowXml
Note that this assumes that you know the name of the flow and that it has been published to the server.

The XML document has the following basic structure:

  • processUserNode – contains information set at the process level
  • activities – contains all of the activity nodes
  • activity – information about the node itself (type, location, etc.)
  • prop – the settings in the node; there may be many prop nodes
  • OnActivityError – contains information on what to do on error
  • edges – container node
  • edge – indicates the “next” node

Retrieving the To value
I recommend this page as a reference for using E4X.

//Find 'this' node in the edge list
var xmlThisNode = XMLParseFlow_output.edges[0].edge.(@from=='Assign8190'); 
//get the ID of the next node (email)
var strNextNodeID = xmlThisNode[0].@to;  
//Get the activity node for the email
var xmlNextNode = XMLParseFlow_output.activities[0].activity.(@id==strNextNodeID); 
//Pull the To property 
var xmlEmailToNode = xmlNextNode[0].prop.(@name=='to');  
//Get the value of To
strEmailTo = xmlEmailToNode.anyData; 

I suppose you could use this technique to retrieve the initial value of the Start node variables to see if they have changed during the flow. There might be other uses as well. I’m not sure if this is really something you would ever actually do in a production environment, but I feel better knowing I can predict the future.


Calling Process Automation from the command line

I discussed this briefly during my presentations at Inforum this year. The basic business requirements are that there are occasions when you need to trigger a workunit other than from the application (S3/M3/LMK) or from a schedule. In Process Automation you can use Channels to poll for new data (files, JMS, etc), but there are times when you still need more flexibility.

Need some examples? Okay, how about as part of a script or set of processes? For us, a good example is our ACH process for Accounts Payable. Our bank (PNC) will not accept the files produced by the Lawson AP160 job and as a result, we need to manipulate them. The reason we can’t use File Channels in Process Automation is because the bank requires us to send one file with all of our bank accounts in it (currently 85) that has separate batches for each account. The easiest way to accomplish this is to have a script that runs as the last step of a multistep job after all of the AP160 steps. That script simply calls a java program that triggers a process flow to process the files and send to PNC. There are several other examples, such as extending the ability to do scripting. Imagine being able to call a script that can update the RM attributes on a user. Pretty nice option, eh?

Hopefully, I don’t really need to convince you this is a good idea. By being able to call a process flow at will, you can eliminate customizations in your application environment, which is a good thing. Below is Java code that will trigger a workunit. The code is fairly verbosely commented regarding what it’s doing, so you should be able to modify it to suit your needs without any more commentary from me. You will need to update the appropriate variables (host, user, password, etc.), create a process flow called “TestJavaProcess”, and upload it to your server. After you run the Java program, you can review the workunit that was created to see where the values from the program appear, so you know what to update.

/* ==============================================================================
 * Description:
 *  Program is designed to call LPA flows from the command line
 * ============================================================================== */
import com.lawson.bpm.eprocessserver.interfaces.ProcessRequest;
import com.lawson.bpm.eprocessserver.interfaces.ProcessResponse;
import com.lawson.bpm.eprocessserver.interfaces.ProcessVariable;
import com.lawson.bpm.eprocessserver.interfaces.ProcessFolder;
import com.lawson.bpm.eprocessserver.interfaces.LPSSession;

public class TestCallLPA {
    public static LPSSession session;
    public static ProcessRequest request = new ProcessRequest();
    public static ProcessResponse response;

    public static void main(String[] args) {
        try {
            String HostName = "gridhost"; //grid host name
            Integer PortNumber = 50005; //port number that grid is listening on
            String UserName = "user"; //a valid admin LMK user name
            String UserPwd = "password";  //password for UserName
            String LMKProd = "prod"; //note this is the LMK PL
            String ProcessName = "TestJavaProcess";
            String ProcessTitle = "Java API Workunit";
            String KeyString = "123456789"; //This with key value needs to be a unique string
            String KeyValue = "KeyString"; //This with key string needs to be a unique string
            Boolean textOutput = false; //set to true to print return values to screen - infocode, return message, outputdata with | separators
            Boolean returnData = false; //set to true to output data -- will need a return node
            Boolean Async = false; //set to true to trigger WU without waiting for response

            //Identify the flow to trigger (setter names reconstructed from the
            //variables above; verify them against the bpm-interfaces javadoc)
            request.setFlowName(ProcessName);
            request.setWorkTitle(ProcessTitle);
            request.setKeyString(KeyString);
            request.setKeyValue(KeyValue);

            /* For demo purposes, this is commented out.  If you have a service, set here */
            //Criteria 1
            //Criteria 2
            //Criteria 3

            //Start adding variables -- each variable must be added to the request
            //Boolean variable
            ProcessVariable variable = new ProcessVariable("BOOLEAN","true",ProcessVariable.TYPE_BOOLEAN);
            request.addVariable(variable);
            //Integer variable
            variable = new ProcessVariable("INTEGER","1",ProcessVariable.TYPE_INT);
            request.addVariable(variable);
            //Decimal variable
            variable = new ProcessVariable("DOUBLE","1.00",ProcessVariable.TYPE_DBL);
            request.addVariable(variable);
            //Date variable
            variable = new ProcessVariable("DATE","01/01/2013",ProcessVariable.TYPE_DATE);
            request.addVariable(variable);
            //Long Integer variable
            variable = new ProcessVariable("LONG","123456789",ProcessVariable.TYPE_LONG);
            request.addVariable(variable);
            //Object Variable -- not sure how to pass
            variable = new ProcessVariable("OBJECT","",ProcessVariable.TYPE_OBJECT);
            request.addVariable(variable);
            //Array variable -- not sure how to pass
            variable = new ProcessVariable("ARRAY","",7); //Array process type is not documented and is not TYPE_ARRAY
            request.addVariable(variable);
            //Add input data
            request.setInputData("Some input data");

            //Connect to grid and create a session
            session = LPSSession.createGridSession(HostName,PortNumber,UserName,UserPwd,LMKProd);
            //Create workunit
            //Pass in the built request from above and set Async value
            response = createWU(request,Async);
            //If user selected Async then createWU releases the workunit without waiting
            int eRC = response.getErrorCode();

            //Deal with response
            if (textOutput) {
                //print the return information; accessors beyond getErrorCode vary
                //by version, so only the error code is printed here
                System.out.println("ReturnCode|" + eRC);
            }
            if (returnData) {
                //requires a Return node in the flow; accessor name reconstructed
                System.out.println(response.getOutputData());
            }

            //Cleanup and close out
            session.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    //Trigger workunits based on sync vs async
    public static ProcessResponse createWU(ProcessRequest request,Boolean Async) {
        try {
            if (Async) {
                response = session.createAndReleaseWorkunit(request);
            } else {
                response = session.runRequest(request,true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return response;
    }
}
In order to compile the code, you will need to make sure that the following files are in your classpath (and the directory you save the code file above to):

  • sec-client.jar
  • bpm-interfaces.jar
  • type.jar
  • security.jar
  • clientsecurity.jar
  • grid-client.jar
  • lawutil_logging.jar

A basic classpath, compile, and call command (on Unix) would be:

$ export CLASSPATH=.:/lawson/lmrkstage/env/java/thirdParty/sec-client.jar:/lawson/lmrkstage/env/java/jar/bpm-interfaces.jar:/lawson/lmrkstage/env/java/jar/type.jar:/lawson/lmrkstage/env/java/jar/security.jar:/lawson/lmrkstage/env/java/jar/clientsecurity.jar:/lawson/lmrkstage/env/java/thirdParty/grid/grid-client.jar:/lawson/lmrkstage/env/java/jar/lawutil_logging.jar
# Compile 
$ javac TestCallLPA.java
# Call program
$ java TestCallLPA



Lawson approvals – do it your way

Lawson approvals my way? What does that even mean and why would I want to do it?

Customizing approvals means changing how workunits are routed for approval in Lawson Process Flow and Lawson Process Automation. This post is specifically about S3, but it should apply to Landmark and M3 as well. My company is on Lawson Process Automation, so that’s what will be in the examples, but the same concepts should apply to Process Flow.

Let’s first talk about how Lawson Process Automation (LPA) seeks approvals in the User Action nodes. (The rest of this post will assume that you are familiar with the concept of approvals in Process Flow or LPA. If you’re not, check out the documentation for your product before continuing as it might be confusing otherwise.) User action nodes use what is called “Category Filtering”. First a task is assigned to a user. For that task, a filter category and value are assigned (you can assign multiple categories and values). Next, each User Action in a flow will also have a task (or multiple tasks) assigned to it. Finally, when the workunit is created, it will have a filter category and filter value on it based on data in the source application. The inbasket will use the filter category and filter value from the workunit to display only relevant workunits to the user who is logged in.

Easy, right?

Okay, maybe not.

Keep these key words in mind:

  • Task = Inbasket. There are several philosophies on how to set these up, but they will basically equate to either the person or the type of approval they are performing. Examples might be Manager or Invoice Approver.
  • Filter Category = What kind of information is relevant for this workunit.
  • Filter Value = The actual value that will be filtered on for the workunit.

Here is an example setup:
User: Jdoe
Task: Invoice Approver
Filter Category: ACCTUNIT
Filter Value: 123456

In this scenario, Jdoe would only be able to see workunits in his inbasket that have BOTH the Filter Category of ACCTUNIT and the Filter Value of 123456.

Okay, now that that’s out of the way, let’s get to the fun stuff. Ultimately, the problem comes down to the fact that workunits created by the source applications (S3, M3, Landmark) rarely have Filter Categories that are of any use. Case in point, the Invoice Approval service uses Authority code, which has to be keyed at the time of the invoice (or defaulted from some place like Vendor). This creates a bit of an issue for us. With 2000 Managers responsible for their own accounting units, it means that AP would need to actually know which of the 2000 managers needed to approve the invoice. Not gonna happen. In a much larger context, it also limits us to basically one level of approval if we were to actually set up a code for each manager because we wouldn’t be able to route to another level easily like a Vice President. Each VP would need to have the same setup as each of the managers that might escalate to them instead of something that makes sense like Company. I’m not saying it’s impossible, it would just be extremely messy. If I’m a VP and there are three managers that approve for Accounting Units in my company, I would need to have each of their approval codes assigned to me. If the manager happens to also approve for an accounting unit in another company, the VP responsible for that company would also need that Approver code assigned to them. Add in the fact that Approval code is only 3 characters and we’re going to wind up with codes like 0X0 and 1CF that AP would probably never get right.

Truth be told, we don’t really want to get approvals by approval code anyway. How our invoice approval works is: If the invoice is part of a Capital project, then route it to the Project Manager, if it’s for a Service, route it to the manager of the cost center receiving the service. So not only do we NOT want to use Approval Code, we actually want to use different approvals depending on the type of invoice.

The question is, how do we do that? The answer is we modify the Category Filter and value. There are people thinking right now, “Okay, so we need to modify the 4GL of the library that creates the workunit to have the correct Filter Category and Filter Value, right?”. You would be correct, you could do that. If you’re one of those people (or you’re a process flow person in an environment that thinks like that) I feel sorry for you. You’re doing it the hard way. Not only will you have a modified library that you have to be careful of when you apply patches, you have now created extra effort when you want to upgrade. Good for job security, but bad for your business.

So now you’re thinking, “Okay smarty, since you just insulted me, how do you propose that we do it?”. I’m going to advocate that you dynamically change the category filter and value IN THE FLOW ITSELF. Here’s an interesting bit of information – the Filter Category and Filter Value on the workunit record are NOT the values used to create the inbasket task. What actually happens (as near as I can tell) is that these values are used to populate two flow variables (in LPA called oCatKey and oCatValue – I believe this to be the same in Process Flow). It is these flow variables that are used in the creation of the inbasket task. All you have to do is add an Assign node to your flow before the User Action node and add a JavaScript expression to set these two values to whatever you want. Voila! The inbasket will now use the Filter Category and Filter Value that you set in your Assign node.

Here’s the code to change a workunit to match what I set up for Jdoe above:

oCatKey = "ACCTUNIT"; //Filter category
oCatValue = "123456"; //Filter Value

For practical purposes and debugging, we are in the habit of also making calls to update the workunit itself with the correct Filter Category and Filter Value. It makes them easier to find when doing research. The added bonus to dynamically changing the values is that you can change the Filter Category and Filter Value as many times as you need in a flow. Thinking outside the box (we don’t do this) – you could have an Invoice approved by a manager for an accounting unit. You could then change the filtering to company and route to a VP responsible for that company if it’s over a certain dollar amount. In this case, you would only need to set up the number of companies that you have for VPs instead of having to setup all of the accounting units for a company (which you would have to do if you went the 4GL route). You could change it again and send it to a treasury analyst based on the bank account that it would pay out of to make sure that funds were available.
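The VP escalation described above might look like this in a later Assign node. This is only a sketch: the threshold, the variable names InvoiceAmount and strCompany, and all the values are assumptions for illustration:

```javascript
// assumed workunit values for illustration
var InvoiceAmount = 25000;
var strCompany = '4321';

// initial routing: accounting-unit level (as set earlier in the flow)
var oCatKey = 'ACCTUNIT';
var oCatValue = '123456';

// hypothetical escalation: above $10,000, re-filter at the company level
// so the workunit lands in the VP's inbasket
if (InvoiceAmount > 10000) {
    oCatKey = 'COMPANY';
    oCatValue = strCompany;
}
```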

The possibilities are pretty much endless at that point.



Lawson Process Flow S3 Transaction Node

I definitely learned something new today. I’ve been using Lawson Process Flow for 5 years now. I’ve never had any formal training, but I have read every document that Lawson has put out on it. In addition (and the reason for my discovery), my company is currently participating in a beta project for Lawson Process Automation 10.0.1. It’s not generally available and we signed a Non-disclosure agreement so I can’t discuss it directly. I am however going to be presenting at Inforum on the results of the beta. As part of my testing, I discovered something that I’m pretty sure most people don’t know. I was going to include it in my Inforum presentation until I went back to 9.0.1 Lawson Process Flow and discovered it worked there as well.

Here’s the big secret: the Lawson S3 Transaction node returns ALL field values from the screen, not just the values that Process Flow shows (RETURN_CODE and MESSAGE_CODE). I had no idea, and I certainly have no idea why Lawson doesn’t publicize this or at least document it. Up until today, when I needed to know something about the result of an AGS call, I would make the AGS call with a WebRun node and then parse the output in an XML node (like I talk about here). This isn’t actually necessary, as you can get the output from the Transaction node itself.

Let’s say you want to perform an Add action on a form and you need to know something more than the result of the AGS call. Some common examples are adding Employees on HR11 (getting the Employee number), adding a vendor on AP10 (getting the vendor number), adding an asset on AM20.2 (getting the asset number), etc. Chances are you aren’t simply doing an add, but need to also do something else. For vendors it may be adding diversity codes, for assets it would probably involve adding items, and for employees it might be adding benefits.

In order to access the data from a Transaction node you need to append the field name to your transaction node name. If you’re using the _LFN option, then you’ll need to use an underscore (_) instead of a dash (-) in the field names.

Assuming an Inquire call on HR11 for my employee id:


I can get the result in process flow by using the following reference (assuming my transaction node has the id HR11):
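As a sketch of the naming rule described above (EMP-EMPLOYEE is an assumed HR11 field name for illustration):

```javascript
// Build the variable name you would reference after a Transaction node.
// With the _LFN option, dashes in the Lawson field name become underscores.
function transactionNodeVariable(nodeId, lawsonFieldName) {
    return nodeId + '_' + lawsonFieldName.replace(/-/g, '_');
}

// e.g. a Transaction node with id "HR11" and the employee-number field
var ref = transactionNodeVariable('HR11', 'EMP-EMPLOYEE');
```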

Talk about a revelation.



Updating Jobs with Lawson Process Flow

This post is specifically about using Process Flow to update jobs in Lawson, but the technique also applies to updating Lawson forms if you’re not using the _CACHE parameter that I posted about here. The concepts are the same for updating Lawson using a POST action (like in Design Studio).

The business case is that certain Lawson jobs require specific date parameters and either can’t be run for future dates or the impact of doing so is undesirable. A good example of this is GL146 (Batch Journal Control). We have Journal Entry approval turned on, so we use the GL146 program to auto-approve all non GL and RJ journal entries (like AP, Payroll, etc). The problem is that GL146 requires a Period and Year. Unlike GL190 that can be run for the variables “Current Year” and “Current Period”, GL146 requires a numeric value in both. In order to keep them updated, we run a process flow that changes the Period and Year on the GL146 every month.

Updating a job is relatively simple, however as I alluded to above, there is no _CACHE parameter that you can specify on an AGS call to update a Lawson job. In order to avoid wiping out parameters, you must pass all values back to the AGS call. Doing this via a GET is unrealistic for several reasons, not the least of which is that for jobs with many parameters, the GET URL may be too long for the server to process. If you are so inclined, this is a pretty nice technical write-up on the difference.

The basic process is this:

  • Inquire on job via Web Run node
  • Parse job via XML node
  • Update necessary parameters in Assign node
  • Update job via Web Run node
  • Run Job (if necessary)

Inquiring on a Job
To inquire on a job, it’s the same as a normal AGS call with a few minor changes. You cannot use the _CACHE parameter, and you must include the _JOB-NAME, _USER-NAME, and _STEPNUMBER parameters. The _STEPNUMBER parameter indicates which step of the job you are updating and is zero-based; a job with only one step will use _STEPNUMBER=0. The _TKN parameter must also match the actual token of the _STEPNUMBER. You can’t be lazy and pass in some default value, because the inquire won’t work. Something else to note: you probably should *not* use _LFN=ALL (or TRUE). Using this parameter will cause the Lawson field names to be returned, and the problem is that Lawson field names have dashes in them, which get in the way when you parse and reference the XML. If you do return the Lawson field names, you will have to cleanse them prior to parsing by converting the dashes to underscores, and then after you make the updates you will have to convert them back to the Lawson field names. Seems like too much work to me.

A basic inquire looks like this:
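In outline, the inquire URL combines the AGS endpoint (shown later in this post) with the parameters just described. The job name and user here are invented, and the remaining standard AGS transaction parameters are elided:

```
/servlet/Router/Transaction/Erp?_TKN=GL146&_JOB-NAME=GLAPPROVE&_USER-NAME=jdoe&_STEPNUMBER=0&...
```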

This will get you a result like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<GL146>
<_f2 /> 
<_f5>Job Sched</_f5> 
<_f6>Print Mgr</_f6> 
<_f8>Batch Journal Control</_f8> 
<_f10>Job Scheduler</_f10> 
<_f14>Dist Group</_f14> 
<_f21 /> 
<_f27 /> 
<_f33 /> 
<_f37 /> 
<_f38 /> 
<_f39 /> 
<_f41 /> 
<_f42 /> 
<_f43 /> 
<_f44 /> 
<_f45 /> 
<_f47 /> 
<_f48 /> 
<_f49 /> 
<_f50 /> 
<_f51 /> 
<_f53 /> 
<_f54 /> 
<_f55 /> 
<_f56 /> 
<_f57 /> 
<_f59 /> 
<_f60 /> 
<_f61 /> 
<_f62 /> 
<_f63 /> 
<_f65 /> 
<_f66 /> 
<_f67 /> 
<_f68 /> 
<_f69 /> 
<_f71 /> 
<_f72 /> 
<_f73 /> 
<_f74 /> 
<_f75 /> 
<_f76 /> 
<_f77 /> 
<_f78>ALL CO</_f78> 
<_f79>All Companies</_f79> 
<_f83>JE Approve</_f83> 
<_f85 /> 
<_f86 /> 
<_f87 /> 
<_f88 /> 
<_f89 /> 
<_f90 /> 
<_f91 /> 
<_f92 /> 
<_f100 /> 
<_f101 /> 
<_f102 /> 
<_f103 /> 
<Message>Inquiry Complete</Message> 
</GL146>

Updating the parameters
After we have parsed the XML using the XML node (like this), we need to update the XML to the appropriate values. Keep in mind that not only do you have to change the values that you want to update, you must change the “transaction” fields as well. This means changing the action value (_f1) from “I” to “C” and clearing out the response fields (Message, MsgNbr, StatusNbr).

XMLJob_output.GL146._f1 = 'C';   //change the action code from Inquire to Change
XMLJob_output.GL146.Message = '';
XMLJob_output.GL146.MsgNbr = '';
XMLJob_output.GL146.StatusNbr = '';
XMLJob_output.GL146.FldNbr = '';
//then set the parameter fields you are updating (for GL146, the Period and
//Year fields; the _f numbers vary by form, so check your parsed XML)

Updating the Job
All that’s left at this point is to pass the updated XML back into an AGS call with the Web Run node. This time however, we only send the base AGS call in the URL (/servlet/Router/Transaction/Erp) and we send the XML output in the Post String box. The node looks like this.

Run the job
If it’s necessary, the Job Run URL looks like this:
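One common shape for that URL, reconstructed from typical S3 usage (the program and parameter names are assumptions to verify in your own environment, and the user/job values are invented):

```
/cgi-lawson/jobrun.exe?FUNC=run&USER=jdoe&JOB=GLAPPROVE
```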

Here’s a look at what a simplified Process Flow would look like.



ImageNow external messaging agent

Is ImageNow External Messaging Agent the best free bundled product ever? The answer depends on whether you’re using it or not. If you are, then your answer is yes. If you’re not, WHY AREN’T YOU?

ImageNow External Messaging Agent is one of those products that begs for use-cases. Based on the delivered documentation, it’s hard to conceptualize what it can do for you or even how to use it. There is a nice technical paper on the Perceptive support site, but it’s not easy to find. I’m going to give you two actual use cases using Lawson and Accounts Payable.

What is External Messaging agent? Simply put, it is a way to pass information between ImageNow and any external system that can process the messages. External Messaging Agent is table-based unlike other messaging systems like Websphere MQ or JMS. The upside is that if your external system doesn’t support true messaging, you can still use External Messaging Agent (assuming you can read from and write to the tables). There are two tables (IN_EXTERN_MSG:header and IN_EXTERN_MSG_PROP:detail) that contain the messages, and a service on the ImageNow machine monitors the tables. Iscript methods are used to both read and write messages on the ImageNow side, and SQL is used from the external system. Using External Messaging Agent is pretty simple. Write a message from the ImageNow side with your information for the external system to pick up, or read a message written by the external system and do something with it in ImageNow. Once you’ve successfully processed a message, you mark it as complete (or one of the other “processed” statuses – see the documentation for details) and External Messaging Agent cleans out the completed messages based on your config.

Use Case #1 – Messages to ImageNow – Lawson Vendor Merge
The business case is that our keys for AP Invoices in ImageNow are: Vendor, Company, Invoice, PO, Invoice Amount. When we merge vendors in Lawson (because of acquisitions or duplicate vendors), Lawson gets updated and the “old” vendor number essentially goes away. However, unless we update ImageNow, we will have a hard time finding the invoice in the future, and any “Vendor” queries in ImageNow will be incorrect. How we handle this is to have a Design Studio screen in Lawson where the AP staff can submit a request to merge vendors in ImageNow. The Design Studio screen triggers a Lawson Process Flow that creates the appropriate message records, which are then processed in ImageNow by the External Messaging Agent.

Because of the header-detail structure, we have to create the header record first, then add the detail, then update the header to a “Ready to Process” status. As I mentioned, we do this with Lawson Process Flow, but a Perl script, PL/SQL, etc. would all accomplish the same thing.

First, we need to generate unique IDs for the messages. The best way to do this is to use the full date (including time). If you have sequencing turned on for your DB, you might be able to use that as well.

dtDate = new Date();
STRID = 'VendorMerge' + String(dtDate).replace(/ /g,"_");

Here are the SQL statements (my ImageNow is on SQL server, so I’m using their specific date/time functions). Each is executed individually and if any one of them fails, the next is not executed.

--Create Header Record
--Insert Old Vendor ID message property

--Insert New Vendor ID message property

--Update Status of Message Header
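Reconstructed sketches of those four statements, with STRID being the id generated above. Every column name except the two table names, MSG_NAME, PROP_NAME, and PROP_VALUE is an assumption, as are the status values and the sample vendor numbers; check the External Messaging Agent schema for the real ones:

```sql
--Create Header Record
INSERT INTO IN_EXTERN_MSG (MSG_ID, MSG_TYPE, MSG_NAME, MSG_DIRECTION, MSG_STATUS, START_TIME)
VALUES ('<STRID>', 'VENDOR', 'VendorMerge', 0, 0, GETDATE());

--Insert Old Vendor ID message property
INSERT INTO IN_EXTERN_MSG_PROP (MSG_ID, PROP_NAME, PROP_VALUE)
VALUES ('<STRID>', 'OLDVENDOR', '100123');

--Insert New Vendor ID message property
INSERT INTO IN_EXTERN_MSG_PROP (MSG_ID, PROP_NAME, PROP_VALUE)
VALUES ('<STRID>', 'NEWVENDOR', '100456');

--Update Status of Message Header to "Ready to Process"
UPDATE IN_EXTERN_MSG SET MSG_STATUS = 1 WHERE MSG_ID = '<STRID>';
```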

On the ImageNow side, an Iscript processes the messages it receives. In this case, we do a document move on any documents that have the “old” Vendor number to the “new” Vendor number. Apologies in advance, I have a handler function for VSL queries that I’m too lazy to “undo” for the demonstration. If you’re interested, drop me a note and I’ll post the code. VSL is a real bear to deal with and it seems to change from version to version, which is why I prefer my handler function.

//Global Variables
var hasMore;
var limit = 1000;
var externalMsgObj = getInputPairs();
var strOldVendorID = externalMsgObj["OLDVENDOR"];
var strNewVendorID = externalMsgObj["NEWVENDOR"];
var boolSuccess = true;

var strVSL = "([drawer] = 'Normal_Invoices' OR [drawer] = 'Vendor_Maintenance') AND [folder] = '" + strOldVendorID + "'";

//Run Query
var results = vslQuery(strVSL, limit, false, false);
if (!results)
{
    printf("No Vendor Records found.\n");
    results = [];  //nothing to loop over below
}

//Check for additional records and log
if (results.more)
{
    printf("There are more documents.  Not all were selected.\n");
}

//Loop through result array
for (var i = 0; i < results.length; i++)
{
    //Build keys for new document
    var docid = results[i].id;
    var tokey = new INKeys(results[i].drawer, strNewVendorID, results[i].tab,
        results[i].f3, results[i].f4, results[i].f5, results[i].docTypeName);

    //Move document to new key values and append if file with keys exists
    if (INDocManager.moveDocument(docid, tokey, "APPEND"))
    {
        printf("Old Doc: %s moved to Vendor: %s\n", docid, strNewVendorID);
    }
    else
    {
        printf("Could not move document: %s\n", getErrMsg());
        boolSuccess = false;
    }
}

//Set message status based on process (the call itself was omitted from the
//original post; mark the message complete if boolSuccess is true, failed otherwise)

Use Case #2 – Messages from ImageNow – Document Notifications
The business case is related to Employee Expenses. Because Employee Expenses are “self-service”, the receipts for the expense are not necessarily recorded when the expense is entered. We do not want to send expenses out for approval until the receipts have been stored in ImageNow. The first step in the Employee Expense approval flow is an Inbasket for receipts. Once the receipts are received by ImageNow, a message is written out. The message is picked up by a separate Process Flow that performs the approval process and allows the expenses to continue in the approval process. I will post separately on how the full Employee Expense process works, but for background, once a linked expense receipt document is routed to the “complete” queue, the following script is triggered.

#define MSG_TYPE "EEX"
#define MSG_NAME "ApproveReceipts"

function main ()
{
    //the argument was lost from the original post; it should be the id of the
    //workflow item that triggered this script
    var wfItem = INWfItem.get(wfItemId);
    if (wfItem == null)
    {
        printf("Could not retrieve workflow information.  Error: %s\n", getErrMsg());
        return;
    }

    //Set document based on wf item
    var doc = new INDocument();
    doc.id = wfItem.objectId;
    if (!doc.getInfo())
    {
        printf("Could not retrieve document information for workflow itemID: %s.  Error: %s\n", wfItem.id, getErrMsg());
        return;
    }

    //Company is assumed to be the tab key, matching the key order used in Use Case #1
    printf("Approving Doc: Vendor: %s Company: %s Invoice: %s DocID: %s\n", doc.folder, doc.tab, doc.f3, doc.id);

    //Build Message Object
    var msg = new INExternMsg();
    msg.type = MSG_TYPE;
    msg.name = MSG_NAME;
    msg.direction = ExternMsgDirection.Outbound;
    msg.status = ExternMsgStatus.New;
    msg.startTime = new Date();
    msg.addProperty("DocumentID", ExternMsgPropType.Undefined, doc.id);
    msg.addProperty("Vendor", ExternMsgPropType.Undefined, doc.folder);
    msg.addProperty("Company", ExternMsgPropType.Undefined, doc.tab);
    msg.addProperty("Invoice", ExternMsgPropType.Undefined, doc.f3);

    if (!msg.send())
    {
        printf("Could not send message for doc: Vendor: %s Company: %s Invoice: %s DocID: %s.  Error is: %s\n", doc.folder, doc.tab, doc.f3, doc.id, getErrMsg());
    }
}

On the Process Flow side, we get the messages intended for us, then update the work units. First, the SQL: get the message header, then get the details of the message.

--Get message
  AND MSG_NAME = 'ApproveReceipts'

--Get Message details

Because we get several records back for the details, we parse them like this. If anyone else has any better ideas (for Lawson Process Flow), I’m all ears.

switch (SQLQueryProps_PROP_NAME)
{
    case 'Company':
        strCompany = addLeadingZeros(SQLQueryProps_PROP_VALUE, 4);
        break;
    case 'Vendor':
        strVendor = addLeadingSpaces(SQLQueryProps_PROP_VALUE, 9, false);
        break;
    case 'Invoice':
        strInvoice = SQLQueryProps_PROP_VALUE;
        break;
    case 'DocumentID':
        strDocID = SQLQueryProps_PROP_VALUE;
        break;
}
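The `addLeadingZeros` and `addLeadingSpaces` helpers above are local utilities, not built-ins. A minimal sketch of what they might look like follows; the function names come from the flow above, but the bodies (and the meaning of the third parameter on `addLeadingSpaces`, which I'm assuming is a trim flag) are assumptions:

```javascript
// Hypothetical implementation: pad a value with zeros to a fixed width ("1" -> "0001")
function addLeadingZeros(value, len) {
    var s = String(value);
    while (s.length < len)
        s = "0" + s;
    return s;
}

// Hypothetical implementation: pad a value with spaces to a fixed width.
// The trim flag (assumed) controls whether surrounding whitespace is removed first.
function addLeadingSpaces(value, len, trim) {
    var s = String(value);
    if (trim)
        s = s.replace(/^\s+|\s+$/g, "");
    while (s.length < len)
        s = " " + s;
    return s;
}
```

This matters because Lawson key fields are fixed width: a company of `1` must be sent as `0001` and a vendor must be right-justified in a 9-character field, or the AGS/Inbasket calls will not match the record.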

We perform the actions on the workunits and then update the message as necessary. See this post for how to perform Inbasket actions with Process Flow.


The possibilities for using the ImageNow External Messaging Agent are practically endless, which is what makes it so powerful. I hope you’re convinced to start using it.



Everything you need to know about Lawson Comments

I have a love/hate relationship with Lawson comments. They are an awesome feature, but implemented in such a way that you’d think somebody made it hard on purpose. This post is going to be about how to deal with comments (mostly in large quantities). This is everything that you “need” to know, not everything you “want” to know. Probably once a week, I get asked “Can you query for comments?”. The answer is Yes*. The other question I get a lot is: “Can you upload comments?”. Again, the answer is Yes**.

*No, you cannot query them out through DME.
**No, you cannot use MS Add-ins.

Uploading Comments
Let’s first talk about adding comments to Lawson so that we have something to query out. There is only one way to add comments to Lawson, and that’s through the writeattach.exe cgi program. How you choose to implement it is up to you, but I’m going to explain how to do it through Lawson ProcessFlow. You can add comments via either GET or POST, but I prefer the POST because I have more control over how the comments get added and I don’t have to do a lot of encoding work.

The basic XML for the comments looks like:

    <_ANAM>Comment Title</_ANAM> 
       Comment Text is here
    <K4 /> 
    <_USCH /> 

The URL for my WebRun node is cgi-lawson/writeattach.exe, and I send the XML above as a POST string. To figure out what to put in the tags, I used dbdef. To add comments you must follow these steps: first, verify that the file allows attachments; second, find the index name and what fields go in the index value fields (you can add more as needed, but the values should be the actual stored value, like 0001 instead of 1). The only thing that I don’t have a good way to get is the value for the _AUDT tag. This is the comment type, so for forms that allow more than one comment type, this value will change. I’m sure there’s some method to figure it out; I just don’t know what it is. I generally use Fiddler: add a comment, then check the calls.

Note: If you do use process flow, you have to be careful with the CDATA nodes. Process flow will interpret the <! as the beginning of a process flow variable and try to replace it. As a result, if you need the CDATA nodes, you should build the post string either in the XML node or an Assign node and then put that variable name in the WebRun.
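One way around the CDATA problem is to build the POST string in an Assign node so the `<!` is produced by concatenation and ProcessFlow never sees it literally. A sketch (only `_ANAM`, `K4`, and `_USCH` are taken from the fragment above; any other tags your file needs come from dbdef):

```javascript
var strTitle = "Comment Title";
var strText  = "Comment Text is here";

// Build the CDATA opener by concatenation so ProcessFlow doesn't interpret "<!"
// as the start of a flow variable
var CDATA_OPEN = "<" + "![CDATA[";

var strPost = "<_ANAM>" + strTitle + "</_ANAM>"
            + CDATA_OPEN + strText + "]]>"
            + "<K4 />"
            + "<_USCH />";
```

The Assign-node variable (`strPost` here) is then referenced by name in the WebRun's post string.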

Querying comments
There are two ways to query comments. One is using SQL (my personal preference, but not always applicable) and the other is using cgi calls.

For the SQL version, it’s important to talk about the table structure used to actually store the data. For each table that allows attachments/comments, there are two tables to store the comments, a “Header” and a “Detail”. The “Header” contains the name of the comment as well as the start of the comment. The “Detail” table contains the rest of the comment.

For a table that allows comments, the comment tables are always named “L_<Table_Type><Table_Prefix>”. Consequently, comments for the APINVOICE table (prefix is API) would be named L_HAPI and L_DAPI. The H and the D indicate Header and detail.
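The naming rule is mechanical enough to express as a one-liner (a hypothetical helper, not anything Lawson provides):

```javascript
// Derive comment table names from a base table's prefix per the L_H<prefix>/L_D<prefix> rule.
function commentTables(prefix) {
    return {
        header: "L_H" + prefix,  // e.g. L_HAPI for APINVOICE (prefix API)
        detail: "L_D" + prefix   // e.g. L_DAPI
    };
}
```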

The relationship between the base table (APINVOICE) and the Header comment table (L_HAPI) is:


The relationship between the Header (L_HAPI) and detail comment table (L_DAPI) is:


The data relationships are:


This gets us data that looks something like this (I’m not displaying duplicated data as the query results would actually be; this is to aid understanding of the relationships):

Invoice  L_INDEX  ATCHNBR  Comment Name  Seq  Comment
122344   zzzz     1        Comment 1     1    some comment detail
                                         2    More comment detail
                  2        Comment 2     1    some comment detail
The OBJECT field of the Header and Detail tables contains the comments. For the Header table, the OBJECT field actually contains more than the comment. Technically, it’s comma-delimited and each of the first three “fields” has its name as part of it. The fourth field is the beginning of the comment itself. However, the data is clearly space padded, so I would strongly urge you to delimit based on position, not on commas, if for no other reason than that your comment could contain commas, and then you’re in trouble. If you’re wondering, the actual comment data begins at character 96. The “Detail” table has a record for any comment that is longer than 416 characters. Each Detail OBJECT field is 1024 characters, so for each comment that is longer than that, there will be an additional record. You must put all of these records back together to form the full comment.
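Under those rules (comment text starting at character 96 of the header OBJECT, ordered 1024-character detail chunks appended after it), reassembly looks roughly like this hypothetical helper; whether "character 96" is 0- or 1-based should be verified against your own data:

```javascript
// Reassemble a full comment from the header OBJECT plus its ordered detail OBJECT chunks.
// startPos is where the comment text begins in the header OBJECT (the post says
// character 96 -- verify 0- vs 1-based against a known comment).
function assembleComment(headerObject, detailObjects, startPos) {
    var text = headerObject.substring(startPos);
    for (var i = 0; i < detailObjects.length; i++)
        text += detailObjects[i];  // detail records must already be sorted by sequence
    return text;
}
```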

Here is what a query for invoice comments might look like (Oracle Version).


You should always make sure that you OUTER JOIN your comment tables to your base tables, as the comment tables are never required. Because you may return multiple detail records for each base record, you will have to deal with “putting them back together”. My personal preference is to use the Oracle hierarchy queries (which I posted on here). However, I have been known to use LEAD and LAG when I have a clearly defined recordset. If you are using Crystal, you can use the hierarchical functions there as well.

Non-SQL Version
Now that you’ve seen what it takes to create a comment and what the data looks like in the tables, it’s time to discuss the non-SQL ways to get the data. There are two options, ListAttachments and getattachrec.exe programs.

The shortcut method (version 9.0.1) is to use ListAttachments:


You follow the same basic rules to build this URL as you would to build the writeattach.exe. The primary difference is that you do not need to give explicit values (like 0001 for company).
A response from the APINVOICE comments might look like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
    <LIST numRecords="1" maxRecordCount="500" hasMoreRecords="false" status="pass">
        <MSG /> 
        <ATTACHMENT attachmentNbr="zz" indexName="APISET1" attachmentSize="680" createDate="20100112" modifiedDate="20100112" createUser="lawsonuser" modifiedUser="lawsonuser" dataArea="PROD" K5="9999" K4="0" K3="3694823" K2="20840" K1="120" createTime="074725" modifiedTime="074725" attachmentCategory="commentAtch" attachmentType="A" lIndex="yEXb" fileName="APINVOICE" status="pass">
            <MSG /> 
                <![CDATA[ Attachment name]]> 
                <![CDATA[ Attachment data ]]> 

As for getattachrec.exe, I don’t have a reliable method of building these URLs; it’s tedious no matter how you do it. Once you figure out how to get your comment (like the Invoice example above), you might be able to replicate it without having to go through the entire process every time, but you’ll have to use trial and error. The biggest issue you’ll face is that some tables allow for multiple comment types, and you won’t know which ones (if any) exist.

My preferred method to build the URLs the first time around is to either put Lawson in debug mode or use Fiddler. There are three main calls that you will need to focus on.
1) Drill from a Lawson screen on our record (like AP90.1)
2) Based on the data from #1, construct our next URL to get the comment header
3) Based on the data from #2, construct our next URL to get the comment detail
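Step 1's output feeds step 2 directly: each IDACALL node carries a ready-to-run getattachrec.exe URL in a CDATA section. A sketch of pulling those out of the raw response text (a real flow might use an XML parse node instead; the regex is an assumption that the URLs contain no `]` characters):

```javascript
// Extract the getattachrec.exe URLs from an IDARETURN response (step 1 -> step 2).
function extractCommentUrls(idaXml) {
    var urls = [];
    var re = /<!\[CDATA\[\s*(cgi-lawson\/getattachrec\.exe\?[^\]]+?)\s*\]\]>/g;
    var m;
    while ((m = re.exec(idaXml)) !== null)
        urls.push(m[1]);
    return urls;
}
```

Each returned URL is then run in turn to look for a comment header, since not every comment type will actually exist for a given record.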

1) Here’s an example of the IDA (Drill) URL. This is for the same invoice as above.


This gets us output like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<IDARETURN productline="PROD" title="">
<LINES count="4">
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=A&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Note%2FReport%2FCheck+Comments  ]]> 
      <![CDATA[ lawson-ios/action/ListAttachments?attachmentType=A&indexName=APISET1&K1=120&K2=20840&fileName=APINVOICE&K3=3694823&K4=0&attachmentCategory=C&K5=9999&drillType=CMT&outType=XML&dataArea=PROD&objName=Invoice+Note%2FReport%2FCheck+Comments  ]]> 
        <![CDATA[ Invoice Note/Report/Check Comments  ]]> 
    <KEYFLDS /> 
    <REQFLDS /> 
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=N&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Notes  ]]> 
        <![CDATA[ Invoice Notes  ]]> 
    <KEYFLDS /> 
    <REQFLDS /> 
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=D&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Report+Comments  ]]> 
        <![CDATA[ Invoice Report Comments  ]]> 
    <KEYFLDS /> 
    <REQFLDS /> 
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=C&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Check+Comments  ]]> 
        <![CDATA[ Invoice Check Comments  ]]> 
    <KEYFLDS /> 
    <REQFLDS /> 

2) Run each URL that you find in the IDACALL nodes to try to get a comment header. Running one of these URLs (the Invoice Note/Report/Check Comments one) gives us something like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<Report cgidir="/cgi-lawson/" executable="getattachrec.exe" productline="PROD" filename="APINVOICE" token="Token" keynbr="KeyNbr">
  <QueryBase exepath="/cgi-lawson/writeattach.exe">
    <![CDATA[ Invoice Note/Report/Check Comments  ]]> 
        <![CDATA[ Invoice Note/Report/Check Comments  ]]> 
        <![CDATA[ none  ]]> 
        <![CDATA[ Invoice Notes  ]]> 
        <![CDATA[ none  ]]> 
        <![CDATA[ Invoice Report Comments  ]]> 
        <![CDATA[ none  ]]> 
        <![CDATA[ Invoice Check Comments  ]]> 
        <![CDATA[ none  ]]> 
      <![CDATA[ _AK=yEXb  ]]> 
    <RecAtt Action="Add">
        <![CDATA[ Add Comment  ]]> 
        <![CDATA[ K1=0120&K2=++++20840&K3=3694823&K4=000&K5=9999&_ATYP=C&_AUDT=A&_USCH=none&_DATA=TRUE&_OPM=M&  ]]> 
    <RecAtt Action="">
        <![CDATA[ 01/12/2010  ]]> 
        <![CDATA[ 07:47:25  ]]> 
        <![CDATA[ 01/12/2010  ]]> 
        <![CDATA[ 07:47:25  ]]> 
        <![CDATA[ Comment Name  ]]> 
        <![CDATA[ K1=0120&K2=++++20840&K3=3694823&K4=000&K5=9999&_ATYP=C&_AUDT=A&_KS=zz&_OPM=A&_DATA=TRUE&  ]]> 
  <ErrMsg ErrNbr="0" ErrStat="MSG">
    <![CDATA[ Success  ]]> 

3) This time, we have to put the URL together ourselves. Take the value of the cgidir and executable attributes from the Report node. Add the CDATA value from the QueryBase node. Then add the CDATA value from the QueryVal node that is in the RecAtt node with an action attribute of “” (the one at the bottom). That gives us:
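The assembly can be sketched as a tiny helper; the `?` and `&` separators are assumptions on my part, so compare the result against a request captured in Fiddler before relying on it:

```javascript
// Compose the comment-detail URL (step 3) from pieces of the step-2 response:
// cgidir/executable from the Report node attributes, queryBase from the QueryBase
// CDATA, queryVal from the QueryVal CDATA of the RecAtt node with Action="".
function buildDetailUrl(cgidir, executable, queryBase, queryVal) {
    return cgidir + executable + "?" + queryBase + "&" + queryVal;
}
```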


The result of which is our comment detail, and it looks like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<Report cgidir="/cgi-lawson/" executable="getattachrec.exe" productline="PROD" filename="APINVOICE" token="Token" keynbr="KeyNbr">
  <QueryBase exepath="/cgi-lawson/getattachrec.exe">
    <![CDATA[ <NULL>  ]]> 
      <![CDATA[ _AK=yEXb  ]]> 
    <RecAtt Action="">
        <![CDATA[ Comment Name  ]]> 
        <![CDATA[ Comment Text%0A  ]]> 
        <![CDATA[ K1=0120&K2=++++20840&K3=3694823&K4=000&K5=9999&_ATYP=C&_AUDT=A&_KS=zz&_OPM=A&_DATA=TRUE&  ]]> 
  <ErrMsg ErrNbr="0" ErrStat="MSG">
    <![CDATA[ Success  ]]> 

The comment data is in the AttData node. Wasn’t that easy?
Note: Newlines in the AttData node will be represented by %0A.
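Those encoded newlines decode with a standard URL decode, so restoring the original line breaks is a one-liner:

```javascript
// %0A in the AttData CDATA is a URL-encoded newline; decodeURIComponent restores it.
var raw = "Comment Text%0ASecond line";
var decoded = decodeURIComponent(raw);
```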

As for which of the three methods you use, that’s up to you and your requirements. Chances are, you’ll wind up using a combination. I like running a SQL query to get the list of comments, but actually use the ListAttachments to retrieve the data. This saves on the heartache of trying to reconstruct the comment from the SQL and it limits how much work we have to do because we’ll only try to retrieve comments for those records that actually have them.

Comments and Portal
You can add a Comment popup to Design Studio and custom portal pages. This is especially useful when the users are not actually on the forms they might want to view comments for. An example of this might be in an inbasket view. You could give the users a link to view the comments on an invoice without having to leave the inbasket.
The function is:

top.portalObj.drill.doAttachment(window, "lawformRestoreCallback", idaCall, 'CMT');

The idaCall is a URL and is the same as we built in #1 above. You should leave everything else the same.

Anything else you “need” to know? Leave a comment and I’ll answer if I can.



Processflow JDBC

After a lot of frustration trying to get the SQL nodes to work on various versions of Lawson System Foundation, I’m posting what we’ve learned so far. There seems to be a lot of misinformation out there, so here’s what we have working.

On LSF9, for SQL Server, you can only use the SQL 2000 drivers (at least in our experience). In 9.0.1, the 2005 and 2008 drivers seem to work fine. Oracle works the same in both versions.

SQL Server 2000:
JDBC Driver:
URL: jdbc:microsoft:sqlserver://<server>;DatabaseName=<DBName>
Jar files: msbase.jar, mssqlserver.jar, msutil.jar

SQL Server JDBC Driver 1.2/2.0 (SQL Server 2005/2008):
JDBC Driver:
URL: jdbc:sqlserver://<server>;DatabaseName=<DBName>
Jar files: sqljdbc4.jar

Oracle:
JDBC Driver: oracle.jdbc.driver.OracleDriver
URL: jdbc:oracle:thin:@<server>:<port>:<SID>
Jar files: classes12.jar, ojdbc14.jar, or ojdbc6.jar*

* The correct ojdbc jar file depends on your Oracle version. ojdbc6.jar is for Oracle 11g; Oracle 10g (and possibly 9i, I'm not sure) uses ojdbc14.jar.

To run on the server, you must add the appropriate jar files to the $GENDIR/bpm/jar directory (or %GENDIR%\bpm\jar if you’re on Windows).

To run on your machine, it depends. Using the “old” version of process flow designer, you need to make sure that the jar files are on your local machine and you’ve added the jar files to the CLASSPATH in the designer.bat file.

It should look something like this (for SQL server 2005/2008):
SET CLASSPATH=.;.\lib\secLS.jar;.\lib\bpmsso.jar;.\lib\MvxAPI.jar;.\lib\bpm-clients.jar;.\lib\bpm-commons.jar;.\lib\activation.jar;.\lib\collections.jar;.\lib\jakarta-oro-2.0.jar;.\lib\jbcl.jar;.\lib\jcfield450K.jar;.\lib\js.jar;.\lib\xbean.jar;.\lib\mailapi.jar;.\lib\mercjava.jar;.\lib\pf-xmljava.jar;.\lib\pf-rel802.jar;.\lib\;.\lib\smtp.jar;.\lib\xercesImpl.jar;.\lib\xml-apis.jar;.\lib\lawson-httpclient.jar;C:\JDBC\sqljdbc4.jar

For the 9.0.1 Designer version, you need to add the appropriate jars to the “External Jar” section.
Window>Preferences…>”External Jars”
Click “New…” and select the appropriate jar file.



Lawson Processflow and IBM Transformation Extender

In LSF 9.0.x, Lawson supports calling Transformation Extender maps from Process Flow Integrator. This post is going to be about the two different (okay, two and a half) ways that I use the TX node. There are technically four ways to call a map from the TX node: RunMap, TransformRealTime, TransformBatch, and TransformBatchToRealTime.

Here’s what they mean:

  • RunMap: Run the map just like you were running it from TX Design Studio. You can override any/all outputs – so long as the card is either File or Echo type.
  • TransformRealTime: like doing RunMap, but with echo in and echo out. The output of the map is available as TXNodeName_outputData.
  • TransformBatch: like doing RunMap, but with File in and File out. I don’t use this one (see below), so you’ll have to rely on the documentation for more details.
  • TransformBatchToRealTime: like doing RunMap with File in and Echo Out

Personally, I think TransformBatch is a waste. With the overrides that you can give on the cards with RunMap, I see no purpose. The TransformRealTime has some value as it allows you to do transformations and then use the data in your flow via the outputData variable.

I’m going to talk about RunMap and TransformBatchToRealTime (if you can figure this out, then TransformRealTime should be easy). The “half” I referenced above (and a viable fifth option) is calling a map via RunMap and having that map call another map. From Lawson process flow, you can only override cards to either File or Echo. However, when using one map to call another map, you can override ANY of the data sources. This means you can make database calls (and pass in parameters), dynamic LDAP calls, etc. These are some fairly complicated concepts, so I’m going to try to use simple examples and keep it high level. If I start getting a lot of hits and/or questions, I’ll do a follow-up post.

Disclaimer: We’re testing 9.0.1, so all of the examples will be using the Eclipse version of designer. I think it’s easier to see the details this way.

Run Map
In this example, the map is a simple Input to Output. I use this flow/map whenever we have issues with TX on other flows to validate that the TX server is up and running. As you can see, I’m overriding the input variable with a value of my choosing (it could also be a PF variable) and I’ve also specifically mapped the output to a PF variable. By mapping to a String variable, this is the same as using an Echo override.

Flow is here.

The premise behind this flow is that we will be updating AP14 with data from a file using the Lawson Adapter node. For those of you not familiar with AP14, it is a Vendor contact form. The keys are Vendor Group, Vendor, Location. The form itself has three detail lines, each representing a contact. Because we’re using the Lawson adapter, I used the “maketypetree” command to create the type tree to be used as my output from the TX map. The input is a CSV file that has basic contact information. Because we’re dealing with details, I have a Functional map in the TX map. I have simplified certain aspects for demonstration purposes. There’s no error handling and I’m making a lot of assumptions with the data, so please ignore the obvious shortcomings.

The TX map reads the file and transforms it to a format that the Lawson Adapter can read. Because I’m using the transform to real time option, the data is available as the TXNode_outputData variable. I’m using no overrides in the map and the only data I’m setting is the input to the Lawson Adapter.

Here is the BCI transaction that I defined for the AP14.
Here is the Flow.
Here is the Map – and here is the Functional Map.

Run a map from a map a.k.a. “RunMap Fancy”
The business case for this is that we receive employee data from our affiliate companies that needs to be loaded into our HR system. The data that we get is a full employee listing. The purpose of the base map is to compare the data to what is currently in Lawson and determine whether the record being provided is an Add (it does not exist in Lawson), a Change (it exists in Lawson, but something about the employee has changed), or a Delete (there is a record in Lawson that is not in the file). We do this by running a query over all the employees for a given process level in Lawson to produce our population. The base map then compares this to the file that was provided and produces the A/C/D file as an output. The A/C/D file is then processed by Process Flow, but I’m not showing that here. Using TX to determine the A/C/D takes approximately 30 seconds for a 5,000 employee company. Compared with some of the other methods we could choose, it’s extremely efficient.

The flow itself calls what is called a Control Map using RunMap. The Control Map takes as its inputs all the necessary information to pass as overrides to the base map and calls the base map as part of its output card. The primary reason this is necessary is that we need to be able to override the SQL query that is run for the correct process level. There are some other options, but this is actually one of the more straightforward choices.

Here’s the flow.
Here’s the Control Map. In the screen shot, you can also see the other maps that make up the base map. I wouldn’t call myself a TX expert, so there may be better ways to do it, but this works pretty well.

To summarize, there are several ways to call TX maps from a process flow, and each has its own specific purpose, although there is some overlap. Decide what you’re going to be doing with the output data of your map to help you determine which option is right for you.