Automating LBI Reporting Services rights

Lawson Business Intelligence (LBI) Reporting Services (RS) offers a lot of great functionality through the concept of “rights”. Rights allow you to do bursting and variable prompts within Crystal Reports. We use this functionality quite a bit for all kinds of things, such as creating a right with an employee’s company and employee number so that they only see their own information, or loading activity codes for a project manager and setting that as a prompt so they can select from a list of their activities instead of having to type in a cryptic code. This is important to us because we tend to use ODBC rather than DME, so we can’t use the inherent Lawson security that DME offers.

The big downside to using rights is that they’re difficult to load. As of now (LBI 10.4), you can either create the right manually or you can create a file and load it manually. I dislike both of those options because they both involve the word “manual”. We have an enhancement request in to create web calls to allow us to update rights, but it doesn’t exist yet. Consequently, if I really want to automate the loading of rights to LBI, I’m going to have to break the rules.

PLEASE NOTE THAT WHAT I AM ABOUT TO DISCUSS IS NOT SUPPORTED BY LAWSON/INFOR (OR ME). IF YOU CHOOSE TO IMPLEMENT THIS, YOU ARE ON YOUR OWN. YOU HAVE BEEN WARNED.

We have chosen to update the RS rights tables in LBI directly using SQL. You can choose any manner of methodologies to do the updates, but my preference is using TX as part of a process flow to do the updates. We do a full compare between what is currently in Lawson and what is in LBI and then perform the updates accordingly.
In order to add rights to LBI, you need to perform four table updates in the following order (these tables should be in your LawsonRS database):

  1. ERS_RULEMASTER – build the right framework based on an existing structure
  2. ERS_RULEDETAIL – build the filters (i.e. how the elements match)
  3. ERS_RULEVALUES – set the filter values
  4. ERS_RULEMAPPINGS – associate the user to the right

If you are deleting rights, you should perform the updates in the reverse of the order above.

In the examples below, I will be creating a right called EMPLOYEE for my id. The structure has two elements, COMPANY and EMPLOYEE, which correlate to the primary key of the Lawson EMPLOYEE table and will help me limit HR records to just the person running the report. I’ve chosen to show an example with two elements so that you can see the harder version; it should be easy to scale it up or down depending on your needs. These examples are for SQL Server.

--ERS_RULEMASTER
INSERT INTO [LawsonRS].[dbo].[ERS_RULEMASTER]
           ([RULEID]
           ,[RULENAME]
           ,[RULEDESCRIPTION]
           ,[STRUCTUREID]
           ,[RULEOWNER]
           ,[STARTDATE]
           ,[ENDDATE]
           ,[STATUS]
           ,[ALLOWPARTIAL]
           ,[RULEORDER])
     VALUES
           (867
           ,'EMPLOYEE-12345'
           ,'Employee Filter'
           ,11
           ,'APPSUPPORT'
           ,CURRENT_TIMESTAMP
           ,'2080-08-14 04:00:00.000'
           ,'A'
           ,'Y'
           ,0)
GO

--ERS_RULEDETAIL
INSERT INTO [LawsonRS].[dbo].[ERS_RULEDETAIL]
           ([RULEID]
           ,[ELEMENTID]
           ,[RULEGROUPING]
           ,[STRUCTUREID]
           ,[OPERAND]
           ,[ELEMENTVALUE2]
           ,[EXCLUDEINCLUDE]
           ,[ELEMENTORDER])
     VALUES
           (867
           ,1
           ,1
           ,11
           ,'equal to'
           ,null
           ,'i'
           ,1),
           (867
           ,8
           ,1
           ,11
           ,'equal to'
           ,null
           ,'i'
           ,2)
GO

--ERS_RULEVALUES

INSERT INTO [LawsonRS].[dbo].[ERS_RULEVALUES]
           ([RULEID]
           ,[ELEMENTID]
           ,[RULEGROUPING]
           ,[STRUCTUREID]
           ,[SEQUENCEID]
           ,[ELEMENTVALUE1])
     VALUES
           (867
           ,1
           ,1
           ,11
           ,1
           ,'1'), --company number
           (867
           ,8
           ,1
           ,11
           ,1
           ,'12345') --employee id
GO


--ERS_RULEMAPPINGS
INSERT INTO [LawsonRS].[dbo].[ERS_RULEMAPPINGS]
           ([CONSUMERID]
           ,[RULEID]
           ,[CONSUMERTYPE])
     VALUES
           ('myid'
           ,867
           ,1)
GO

Obviously there are some hard-coded values in here that you’re going to have to figure out. First things first, let’s figure out the RULEID that we’re going to create. You will need to add 1 to the value returned by this query.

SELECT MAX(RULEID) AS LASTRULEID 
FROM [LawsonRS].[dbo].[ERS_RULEMASTER]
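Note that reading MAX and adding 1 in a separate step can collide if two processes load rights at the same time. A hedged SQL Server sketch that computes the next id in one statement (still not concurrency-safe unless you wrap the read and the insert in a transaction):

```sql
SELECT ISNULL(MAX(RULEID), 0) + 1 AS NEXTRULEID
FROM [LawsonRS].[dbo].[ERS_RULEMASTER]
```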

Next we need to get the Structure ID for the EMPLOYEE structure that we’re updating. You should be able to retrieve this by checking the ERS_STRUCTURE table.

SELECT STRUCTUREID 
FROM [LawsonRS].[dbo].[ERS_STRUCTURE]
WHERE STRUCTURENAME = 'EMPLOYEE'

Then we need to get the Element IDs for the elements of the structure we are updating/creating. These are in the ERS_ELEMENTS table.

SELECT ELEMENTID 
FROM [LawsonRS].[dbo].[ERS_ELEMENTS]
WHERE ELEMENTNAME = 'EMPLOYEE'

Finally, here is a sample query of existing rights that you can compare your Lawson data against to create the updates.

SELECT MAP.CONSUMERID, RM.RULENAME, S.STRUCTUREID, S.STRUCTURENAME,  E.ELEMENTID, 
    E.ELEMENTNAME, RD.OPERAND, RV.ELEMENTVALUE1, RD.RULEGROUPING, RD.ELEMENTORDER,
    RM.RULEOWNER, RM.STARTDATE, RM.ENDDATE
FROM ERS_RULEMASTER AS RM
    INNER JOIN ERS_RULEDETAIL AS RD
        ON RM.RULEID = RD.RULEID
    INNER JOIN ERS_RULEVALUES AS RV
        ON RM.RULEID = RV.RULEID
        AND RD.ELEMENTID = RV.ELEMENTID
        AND RD.RULEGROUPING = RV.RULEGROUPING
        AND RD.STRUCTUREID = RV.STRUCTUREID
    INNER JOIN ERS_STRUCTURE S
        ON RD.STRUCTUREID = S.STRUCTUREID
    INNER JOIN ERS_ELEMENTS E
        ON RD.ELEMENTID = E.ELEMENTID
    INNER JOIN ERS_RULEMAPPINGS MAP
        ON RD.RULEID = MAP.RULEID
WHERE S.STRUCTURENAME = 'EMPLOYEE'
ORDER BY MAP.CONSUMERID
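Since deletes have to run in the reverse order (mappings first, master last), here is an untested sketch of removing the example right created above (RULEID 867):

```sql
--Remove a right: reverse order of the inserts
DELETE FROM [LawsonRS].[dbo].[ERS_RULEMAPPINGS] WHERE RULEID = 867
DELETE FROM [LawsonRS].[dbo].[ERS_RULEVALUES] WHERE RULEID = 867
DELETE FROM [LawsonRS].[dbo].[ERS_RULEDETAIL] WHERE RULEID = 867
DELETE FROM [LawsonRS].[dbo].[ERS_RULEMASTER] WHERE RULEID = 867
GO
```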

So there you have it. The ability to automate the loading of rights into Lawson LBI Reporting Services.

HTH

Lawson AGS Caching Pt 2

I previously posted about what Lawson AGS caching was here. After some questions and reviewing the post, I realized that I didn’t fully explain how to use it within the context of Process Automation or ProcessFlow Integrator. Hopefully, this post will answer those questions. Since we’re on Process Automation, my example will be from PA. The only difference between Process Automation and ProcessFlow Integrator is that PA will return the XML from the Lawson Transaction Node (AGS); you will have to use a WebRun in ProcessFlow Integrator. Neither product (to my knowledge) will return the _TRANSID as one of the fields from the AGS node.

In this example, I will inquire on GL10.1 with cache set to true and then update the Company name field. For those who are just joining us (and didn’t read part 1), by turning on caching, I can update just the company name field in the update node instead of having to pass in all of the fields again.

Here is my sample flow:
[Image: example flow with TRANSID]

  1. I have two start node variables: strXML and strTRANSID
  2. Inquire on GL10.1. If you’re on ProcessFlow Integrator, use a WebRun here.
  3. Fix the XML data. A period (.) in an element name trips up the flow’s XML handling, because the parsed node can’t be referenced as GL10.1 in JavaScript/E4X. Unfortunately, the AGS response uses the form name (GL10.1 in this case) as a node name, so we must replace the period or the later steps will fail
  4. Parse the XML string
  5. Get the TRANSID. This step is not strictly necessary
  6. Update the company name on GL10

GL10.1 Inquire

_PDL=PROD&_TKN=GL10.1&_EVT=CHG&_CACHE=TRUE&_LFN=ALL&FC=I&GLS-COMPANY=10

Update XML string

strXML=AGSInquire_outputData.replace(/GL10\.1/g,"GL101");

Parse XML
Action is “Parse XML String” and the input value is strXML that was built in the Assign node

Get the TRANSID

strTRANSID=XMLParse_output.GL101[0]._TRANSID

GL10.1 Update

_PDL=PROD&_TKN=GL10.1&_EVT=CHG&_CACHE=TRUE&_LFN=ALL&FC=C&GLS-COMPANY=10&GLS-NAME=NewCompanyName&_TRANSID=<!strTRANSID>
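Putting steps 3 and 5 together outside of a flow, here is a runnable sketch. The sample response string is fabricated for illustration (a real AGS response carries every field on the form), and a plain regex stands in for the flow’s XML parse node and E4X access:

```javascript
// Hypothetical AGS inquire response fragment -- real responses contain
// many more fields; only the pieces used here are shown.
var AGSInquire_outputData =
    '<GL10.1 _TRANSID="abc123"><GLS-COMPANY>10</GLS-COMPANY></GL10.1>';

// Step 3: make the element name a legal identifier. The dot is escaped so
// the regex matches only the literal "GL10.1".
var strXML = AGSInquire_outputData.replace(/GL10\.1/g, "GL101");

// Step 5: pull the _TRANSID attribute. In the flow this is done with an
// XML parse node and E4X; a simple regex stands in for it here.
var strTRANSID = strXML.match(/_TRANSID="([^"]+)"/)[1];
```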

HTH

Predicting the future with Process Automation or What Would Process Automation do?

I hate surprises. I hate surprises to the point that I actually don’t care if I know what happens in a movie before I see it. Consequently, I spend a lot of time trying to figure out what is going to happen next, just so I’m not surprised. This got me thinking about predicting the future with Process Automation (yes, I realize this is an odd thing to think about, but you have to have a hobby, right?). It is fairly easy to figure out what happened in the past in a flow, either through a variable you have set or by viewing the output variables from a node. But is it that much harder to figure out what is going to happen next? As it turns out, not really.

I’m not sure that I have a business case for this, it’s just an interesting exercise, which is why you got the preamble about me and surprises. Back story aside, Process Automation (and Process Flow – although slightly more complicated) can be self-aware. By being self-aware, and given that a flow is a static set of instructions, you CAN predict the future – as in you can figure out what will happen next within a flow. Before I lose everyone, I guess I should start getting to the good stuff.

It all starts with the fact that a flow is an XML document. With Process Automation you can read the XML that comprises the flow and search for the information that you need using E4X syntax. All of the information about the flow is in the XML, so you can effectively read ahead to what the flow will do next and thus predict the future.

Here is a simple flow that demonstrates reading ahead. I will attempt to determine the value in the “To” field of the email node at the end.
[Image: ReadNextNodeFlow]

In order:

  1. Run a Landmark query to retrieve the XML of the flow. If you’re using ProcessFlow Integrator, you would need to retrieve the XML from the file system.
  2. Parse the string information into an XML document
  3. Use javascript to retrieve the To value of the email node
  4. Email node – it just exists so we can read it.

Landmark Query
_dataArea="prod" & _module="pfi" & _objectName="PfiFlowDefinition" & _actionName="Find" & _actionOperator="NONE" & _actionType="SingleRecordQuery" & _pageSize="30" & PfiFlowDefinition="ReadFutureNode" & CurrentFlowXml
Note that this assumes that you know the name of the flow and that it has been published to the server

The XML document has the following basic structure:

<process>
  <processUserNode></processUserNode>
  <activities>
    <activity>
      <prop><anyData></anyData></prop>
      ...
      <prop><anyData></anyData></prop>
      <OnActivityError />
    </activity>
  </activities>
  <edges>
    <edge />
    ...
    <edge />
  </edges>
</process>
  • processUserNode – contains information set at the process level
  • activities – contains all of the activity nodes
  • activity – information about the node itself (type, location, etc)
  • prop – the settings in the node. There may be many prop nodes
  • OnActivityError – contains information on what to do on error
  • edges – container node
  • edge – indicates the “next” node

Retrieving the To value
I recommend this page as a reference for using E4X

//Find 'this' node in the edge list
var xmlThisNode = XMLParseFlow_output.edges[0].edge.(@from=='Assign8190'); 
//get the ID of the next node (email)
var strNextNodeID = xmlThisNode[0].@to;  
//Get the activity node for the email
var xmlNextNode = XMLParseFlow_output.activities[0].activity.(@id==strNextNodeID); 
//Pull the To property 
var xmlEmailToNode = xmlNextNode[0].prop.(@name=='to');  
//Get the value of To
strEmailTo = xmlEmailToNode.anyData; 

I suppose you could use this technique to retrieve the initial value of the Start node variables to see if they have changed during the flow. There might be other uses as well. I’m not sure if this is really something you would ever actually do in a production environment, but I feel better knowing I can predict the future.

HTH

Calling Process Automation from the command line

I discussed this briefly during my presentations at Inforum this year. The basic business requirements are that there are occasions when you need to trigger a workunit other than from the application (S3/M3/LMK) or from a schedule. In Process Automation you can use Channels to poll for new data (files, JMS, etc), but there are times when you still need more flexibility.

Need some examples? Okay, how about as part of a script or set of processes? For us, a good example is our ACH process for Accounts Payable. Our bank (PNC) will not accept the files produced by the Lawson AP160 job and as a result, we need to manipulate them. The reason we can’t use File Channels in Process Automation is because the bank requires us to send one file with all of our bank accounts in it (currently 85) that has separate batches for each account. The easiest way to accomplish this is to have a script that runs as the last step of a multistep job after all of the AP160 steps. That script simply calls a java program that triggers a process flow to process the files and send to PNC. There are several other examples, such as extending the ability to do scripting. Imagine being able to call a script that can update the RM attributes on a user. Pretty nice option, eh?

Hopefully, I don’t really need to convince you this is a good idea. By being able to call a process flow at will, you can eliminate customizations in your application environment, which is a good thing. Below is Java code that will trigger a workunit. The code is fairly verbosely commented regarding what it’s doing, so you should be able to modify it to suit your needs without any more commentary from me. You will need to update the appropriate variables (host, user, password, etc.), create a process flow called “TestJavaProcess”, and upload it to your server. After you run the Java program, you can review the workunit that was created to see where the values from the program appear, so you know what to update.

/* ==============================================================================
 * TestCallLPA.java
 *
 * Description:
 *  Program is designed to call LPA flows from the command line
 *
 * ============================================================================== */
import com.lawson.bpm.eprocessserver.interfaces.ProcessRequest;
import com.lawson.bpm.eprocessserver.interfaces.ProcessResponse;
import com.lawson.bpm.eprocessserver.interfaces.ProcessVariable;
import com.lawson.bpm.eprocessserver.interfaces.ProcessFolder;
import com.lawson.bpm.eprocessserver.interfaces.LPSSession;


public class TestCallLPA
{
    public static LPSSession session;
    public static ProcessRequest request = new ProcessRequest();
    public static ProcessResponse response;
    //===================================================
    public static void main(String[] args) {
        try {
            String HostName = "gridhost"; //grid host name
            Integer PortNumber = 50005; //port number that grid is listening on
            String UserName = "user"; //a valid admin LMK user name
            String UserPwd = "password";  //password for UserName
            String LMKProd = "prod"; //note this is the LMK PL
            String ProcessName = "TestJavaProcess";
            String ProcessTitle = "Java API Workunit";
            String KeyString = "123456789"; //This with key value needs to be a unique string
            String KeyValue = "KeyString"; //This with key string needs to be a unique string
            Boolean textOutput = false; //set to true to print return values to screen - infocode, return message, outputdata with | separators
            Boolean returnData = false; //set to true to output data -- will need a return node
            Boolean Async = false; //set to true to trigger WU without waiting for response
            request.setSystem("Landmark");
            request.setSource("JavaAPI");
            request.setFlowName(ProcessName);
            request.setWorkTitle(ProcessTitle);
            request.setFilterKey("FILTERBY");
            request.setFilterValue("FILTERVALUE");
            request.setKeyString(KeyString,KeyValue);
            /* For demo purposes, this is commented out.  If you have a service, set here */
            //request.setService(setting[1]);
            //Criteria 1
            request.setBizCriteria(request.BIZ_CRITERIA_1,"CRITERION1");
            //Criteria 2
            request.setBizCriteria(request.BIZ_CRITERIA_2,"CRITERION2");
            //Criteria 3
            request.setBizCriteria(request.BIZ_CRITERIA_3,"CRITERION3");

            //Start adding variables
            //Boolean
            ProcessVariable variable = new ProcessVariable("BOOLEAN","true",ProcessVariable.TYPE_BOOLEAN);
            request.addVariable(variable);
            //Integer variable
            variable = new ProcessVariable("INTEGER","1",ProcessVariable.TYPE_INT);
            request.addVariable(variable);
            //Decimal variable
            variable = new ProcessVariable("DOUBLE","1.00",ProcessVariable.TYPE_DBL);
            request.addVariable(variable);
            //Date variable
            variable = new ProcessVariable("DATE","01/01/2013",ProcessVariable.TYPE_DATE);
            request.addVariable(variable);
            //Long Integer variable
            variable = new ProcessVariable("LONG","123456789",ProcessVariable.TYPE_LONG);
            request.addVariable(variable);
            //Object Variable -- not sure how to pass
            variable = new ProcessVariable("OBJECT","",ProcessVariable.TYPE_OBJECT);
            request.addVariable(variable);
            //Array variable -- not sure how to pass
            variable = new ProcessVariable("ARRAY","",7); //Array process type is not documented and is not TYPE_ARRAY
            request.addVariable(variable);
            //Add input data
            request.setInputData("Some input data");

            //Connect to grid and create a session
            session = LPSSession.createGridSession(HostName,PortNumber,UserName,UserPwd,LMKProd);
            //Create workunit
            //Pass in the built request from above and set Async value
            response = createWU(request,Async);
            //If user selected Async then createWU will exit and we won't get here
            int eRC = response.getErrorCode();

            //Deal with response
            if (textOutput) {
                System.out.println(response.getInformationCode()+"|"+response.getReturnMessage()+"|"+response.getOutputData());
            }
            if (returnData) {
                System.out.print(response.getOutputData());
            }

            //Cleanup and close out
            session.close();
            System.exit(eRC);
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
    }

    //===================================================
    //Trigger workunits based on sync vs async
    public static ProcessResponse createWU(ProcessRequest request,Boolean Async) {
        try {
            if (Async) {
                response = session.createAndReleaseWorkunit(request);
                session.close();
                System.exit(response.getErrorCode());
            } else {
                response = session.runRequest(request,true);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
        return response;
    }
}

In order to compile the code, you will need to make sure that the following files are in your classpath (and the directory you save the code file above to):

  • sec-client.jar
  • bpm-interfaces.jar
  • type.jar
  • security.jar
  • clientsecurity.jar
  • grid-client.jar
  • lawutil_logging.jar

A basic classpath, compile, and call command (on Unix) would be:

$ export CLASSPATH=/lawson/lmrkstage/env/java/thirdParty/sec-client.jar:/lawson/lmrkstage/env/java/jar/bpm-interfaces.jar:/lawson/lmrkstage/env/java/jar/type.jar:/lawson/lmrkstage/env/java/jar/security.jar:/lawson/lmrkstage/env/java/jar/clientsecurity.jar:/lawson/lmrkstage/env/java/thirdParty/grid/grid-client.jar:/lawson/lmrkstage/env/java/jar/lawutil_logging.jar
# Compile 
$ javac TestCallLPA.java
# Call program
$ java TestCallLPA

HTH

Building an LDAP Infoset in Lawson Smart Notes

There is quite a bit of value in bringing your Lawson S3 LDAP or other LDAP data into Smart Notes. Once it’s in an infoset, you can use the data in other Smart Notes for bursting, delivery, etc. We actually use it as an auditing tool for security changes. We build infosets on the Lawson security data and check for new keys each day. This assures the auditors that any changes we make are through the established change control process. Building an LDAP infoset is actually quite simple, but maddeningly hard if you don’t know some of the specifics. Gary Garner from Lawson was the one who originally showed me how to do it. The examples below are of a simple infoset that we use for bursting.

First create a new infoset with an LDAP source as the below image shows.
[Image: SNLDAPConnection]

  • The user id and password should be for a user that has the ability to query the LDAP directly; it is not necessarily an ID set up in the Lawson Security Admin tool (although it might be).
  • Provider URL is the URL to your LDAP with the port number. I believe the standard port is 389. Your LDAP administrator should be able to provide this.
  • Enter the Context Factory exactly as you see it here.
  • See below for Search Base.
  • Unless you are familiar with LDAP searches, you’ll probably want to leave the Query string as I have it here. There are several good web resources for LDAP search if you’re not getting the results you want.

Keep in mind that the Lawson LDAP was designed by someone who either never wanted anyone to be able to use it or didn’t actually know how LDAP works. I’m sure they had a reason for designing it the way they did, and I’m pretty sure it’s a bad reason. But I digress…

The Search base follows the LDAP search standards. The full path can be determined by using a tool like JXPlorer. Note that the path is in reverse order from the way you descend the tree.
[Image: JXplorerPath]

After you click next, you will need to add the field mappings. I would recommend that you do NOT allow SN to auto-create the fields. The source name (highlighted) should correspond to the attribute name you see in JXPlorer. Once upon a time, an LDAP source caused issues if the field name didn’t match the source name; we continue to keep them matching, and I can’t say whether that has been fixed. In this case, we’re bringing in the user’s RMID (cn), Full Name, and Email.
[Image: SNFieldDefinition]

Here’s a screen shot of the Attributes in JXPlorer.
(For those who are wondering – yes the first four entries here correspond to the Query string in the first screen shot.)
[Image: JXplorerAttributes]

If you want to be able to query the data set using a SQL tool, Crystal Reports, etc., you should check the box indicating that it’s a large infoset when you save it. This will create a table called INFOSET_NNNN in the LawsonSN database, where NNNN is the id of the infoset. The fields in the table for this example will be COLUMN0 (id), COLUMN1 (name), and COLUMN2 (email).
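For example, assuming the infoset was assigned id 1234 (yours will differ; check the LawsonSN database for the actual table name), a query against the large-infoset table would look something like:

```sql
SELECT COLUMN0 AS RMID, COLUMN1 AS FULLNAME, COLUMN2 AS EMAIL
FROM [LawsonSN].[dbo].[INFOSET_1234]
```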

HTH

Lawson approvals – do it your way

Lawson approvals my way? What does that even mean and why would I want to do it?

Customizing approvals means changing how workunits route for approvals in Lawson Process Flow and Lawson Process Automation. This post is specifically about S3, but it should apply to Landmark and M3 as well. My company is on Lawson Process Automation, so that’s what the examples will use, but the same concepts should apply to Process Flow.

Let’s first talk about how Lawson Process Automation (LPA) seeks approvals in the User Action nodes. (The rest of this post will assume that you are familiar with the concept of approvals in Process Flow or LPA. If you’re not, check out the documentation for your product before continuing as it might be confusing otherwise.) User action nodes use what is called “Category Filtering”. First a task is assigned to a user. For that task, a filter category and value are assigned (you can assign multiple categories and values). Next, each User Action in a flow will also have a task (or multiple tasks) assigned to it. Finally, when the workunit is created, it will have a filter category and filter value on it based on data in the source application. The inbasket will use the filter category and filter value from the workunit to display only relevant workunits to the user who is logged in.

Easy, right?

Okay, maybe not.

Keep these key words in mind:

  • Task = Inbasket. There are several philosophies on how to set these up, but they will basically equate to either the person or the type of approval they are performing. Examples might be Manager or Invoice Approver.
  • Filter Category = What kind of information is relevant for this workunit.
  • Filter Value = The actual value that will be filtered on for the workunit.

Here is an example setup:
User: Jdoe
Task: Invoice Approver
Filter Category: ACCTUNIT
Filter Value: 123456

In this scenario, Jdoe would only be able to see workunits in his inbasket that have BOTH the Filter Category of ACCTUNIT and the Filter Value of 123456.

Okay, now that that’s out of the way, let’s get to the fun stuff. Ultimately, the problem comes down to the fact that workunits created by the source applications (S3, M3, Landmark) rarely have Filter Categories that are of any use. Case in point, the Invoice Approval service uses Authority code, which has to be keyed at the time of the invoice (or defaulted from some place like Vendor). This creates a bit of an issue for us. With 2000 Managers responsible for their own accounting units, it means that AP would need to actually know which of the 2000 managers needed to approve the invoice. Not gonna happen. In a much larger context, it also limits us to basically one level of approval if we were to actually set up a code for each manager because we wouldn’t be able to route to another level easily like a Vice President. Each VP would need to have the same setup as each of the managers that might escalate to them instead of something that makes sense like Company. I’m not saying it’s impossible, it would just be extremely messy. If I’m a VP and there are three managers that approve for Accounting Units in my company, I would need to have each of their approval codes assigned to me. If the manager happens to also approve for an accounting unit in another company, the VP responsible for that company would also need that Approver code assigned to them. Add in the fact that Approval code is only 3 characters and we’re going to wind up with codes like 0X0 and 1CF that AP would probably never get right.

Truth be told, we don’t really want to get approvals by approval code anyway. How our invoice approval works is: If the invoice is part of a Capital project, then route it to the Project Manager, if it’s for a Service, route it to the manager of the cost center receiving the service. So not only do we NOT want to use Approval Code, we actually want to use different approvals depending on the type of invoice.

The question is, how do we do that? The answer is we modify the Category Filter and value. There are people thinking right now, “Okay, so we need to modify the 4GL of the library that creates the workunit to have the correct Filter Category and Filter Value, right?”. You would be correct, you could do that. If you’re one of those people (or you’re a process flow person in an environment that thinks like that) I feel sorry for you. You’re doing it the hard way. Not only will you have a modified library that you have to be careful of when you apply patches, you have now created extra effort when you want to upgrade. Good for job security, but bad for your business.

So now you’re thinking, “Okay smarty, since you just insulted me, how do you propose that we do it?”. I’m going to advocate that you dynamically change the category filter and value IN THE FLOW ITSELF. Here’s an interesting bit of information – the Filter Category and Filter Value on the workunit record are NOT the values used to create the inbasket task. What actually happens (as near as I can tell) is that these values are used to populate two flow variables (in LPA called oCatKey and oCatValue – I believe this to be the same in Process Flow). It is the flow variables that are used in the creation of the inbasket task. All you have to do is add an Assign node to your flow before the User Action node. Add a javascript expression to set these two values to whatever you want. Voila! The inbasket will now use the Filter Category and Filter Value that you set in your Assign node.

Here’s the code to change a workunit to match what I set up for Jdoe above:

oCatKey = "ACCTUNIT"; //Filter category
oCatValue = "123456"; //Filter Value

For practical purposes and debugging, we are in the habit of also making calls to update the workunit itself with the correct Filter Category and Filter Value. It makes them easier to find when doing research. The added bonus to dynamically changing the values is that you can change the Filter Category and Filter Value as many times as you need in a flow. Thinking outside the box (we don’t do this) – you could have an Invoice approved by a manager for an accounting unit. You could then change the filtering to company and route to a VP responsible for that company if it’s over a certain dollar amount. In this case, you would only need to set up the number of companies that you have for VPs instead of having to setup all of the accounting units for a company (which you would have to do if you went the 4GL route). You could change it again and send it to a treasury analyst based on the bank account that it would pay out of to make sure that funds were available.
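To make the routing logic concrete, here is a hedged sketch of an Assign-node expression that picks the filter pair based on invoice type. The variable names strInvoiceType, strActivity, and strAcctUnit are hypothetical flow variables you would have populated earlier (sample values are hard-coded here so the snippet stands alone):

```javascript
// Hypothetical flow variables with sample values -- in a real flow these
// would come from the workunit or an earlier query node.
var strInvoiceType = "CAPITAL";
var strActivity = "AC100";
var strAcctUnit = "123456";

var oCatKey, oCatValue;
if (strInvoiceType === "CAPITAL") {
    // capital project invoice: filter by activity so it lands in the
    // project manager's inbasket
    oCatKey = "ACTIVITY";
    oCatValue = strActivity;
} else {
    // service invoice: filter by accounting unit so it lands in the
    // cost-center manager's inbasket
    oCatKey = "ACCTUNIT";
    oCatValue = strAcctUnit;
}
```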

The possibilities are pretty much endless at that point.

HTH

Imagenow previous queue

We are in the process of doing a complete revamp of how we use ImageNow in AP. We’re moving to a new Workflow process and new Document Types, which is a big deal. We have been using the same setup/process that we designed with Perceptive five years ago. A lot has changed since then and we’ve been putting on bandaids for the last year or so, but we finally decided it was time to pull the trigger and start over.

As part of the redesign, we’re going from seven invoice document types (I’m not sure why we ever thought that was a good idea) to two: PO Invoice and Non-PO Invoice. The business requirements are that all Non-PO Invoices are audited for accuracy by a separate person. PO Invoices do not require auditing because we’re using Lawson, which utilizes a three-way match process between PO-Receiver-Invoice. We are going to have one “Processing” super queue, separated out into several sub-queues. When a clerk completes a document, they need to route PO Invoices to a Complete queue, but the Non-PO Invoices need to be routed to an Auditing queue. After the Auditing queue, the document is routed to Complete.

There are four ways to do this that I can think of:

  1. Assume the users will always route correctly
    Yeah right. Everyone makes mistakes. Even if we only had one mistake, Murphy’s Law says it will be a $1M+ payout that will get picked for audit.
  2. Have separate processing queues
    We’re shooting for fewer queues, so not really an option.
  3. Set up an outbound script on the queue to verify that they are routing the correct document type to the correct place
    I’m not sure if it’s possible, but I sure don’t like the idea of dynamically changing the route with an outbound script. It’s entirely possible that this is the intent of the outbound action; someone from Perceptive would need to weigh in on that – but the only thing I’ve ever seen them used for is updating values (like setting a value on a custom property).
  4. Set up an inbound script on the Auditing and Complete queues to route back Doc Types as necessary.
    The full requirement is: PO Invoices sent to Auditing should be routed back to where they came from, and Non-PO Invoices sent to Complete should be sent back to where they came from UNLESS they came from Auditing. (In case you’re wondering, we’re routing back instead of to the correct place for two reasons: 1) it’s a teaching tool and 2) metrics.)
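Boiled down, option 4 is a single predicate. Here is a sketch of that rule in plain JavaScript (the queue names match the script further down, but "AP-Processing" and the exact strings are illustrative):

```javascript
// Sketch of the routing rule as a pure predicate (names are illustrative).
// Returns true when a document landed in the wrong queue and should be
// routed back to wherever it came from.
function needsRouteBack(currentQueue, docType, previousQueue) {
  // Only PO Invoices may go straight to Complete...
  if (currentQueue === "AP-Complete") {
    // ...unless a Non-PO Invoice has already been through Audit.
    return docType === "Non-PO Invoice" && previousQueue !== "AP Audit";
  }
  // Only Non-PO Invoices belong in Audit.
  if (currentQueue === "AP Audit") {
    return docType !== "Non-PO Invoice";
  }
  return false;
}
```

The only input that isn’t sitting on the workflow item already is `previousQueue`, which is exactly the part ImageNow doesn’t hand you.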

So there’s the rub. In ImageNow 6.5, there is no iScript method to get the previous queue name. There is a method to route back to the previous queue, but I don’t really have the luxury of blindly routing back as it relates to the Complete queue since both the Processing queue and the Audit queue route there. However, there is an iScript method to get the routing history of a workflow item. The basic idea is to work backwards through the history (ignoring the current queue) and look for the last queue that the item was routed from. We can do this by looking for the last “Routed Out” action and examining the Queue Name.

Here is my getPreviousQueue function. It expects an INWfItem object and returns either a Queue Name as a string or false.

// -------------------------------------------------------------------
// Function: 	getPreviousQueue
// Purpose:	    Finds the most recent queue that the item was in
//              excluding current
// Args(type):	wfitem (INWfItem object)
// Returns:	    Queue Name - String / false - Boolean
// -------------------------------------------------------------------
function getPreviousQueue(wfitem)
{
    var ROUTED_OUT_STATE = 5; /* Finished */
    var ROUTED_OUT_STATE_DETAIL = 2; /* Routed out */
    var queueName = false;
    if (!wfitem)
    {
        printf("no item found\n");
        return queueName;
    }
    var history = wfitem.getHistory();
    //work backwards through history to find last routed out queue
    for (var i=(history.length-1);i>=0;i--)
    {
        if (ROUTED_OUT_STATE == history[i].state && ROUTED_OUT_STATE_DETAIL == history[i].stateDetail)
        {
            queueName = history[i].queueName;
            break;
        }
    }
    return queueName;
}

All in all, pretty simple. Here’s how it’s used as an inbound script (Note that I have one script that is used on both queues).

var wfitem = INWfItem.get(currentWfItem.id);
var wfCurrentQ = currentWfQueue.name;
var msgReason;
var Q_Complete = "AP-Complete";
var Q_Complete_DocType = "PO Invoice";
var Q_Audit = "AP Audit";
var Q_Audit_Doctype = "Non-PO Invoice";
var wfStatus = 1;  //Idle condition

//Verify that a wf item was picked up
if (wfitem == null)
{
    printf("Failed to get item. Error: %s\n", getErrMsg());
    return;
}

//Set document based on wf item
doc.id = wfitem.objectId;

//Get document
if (!doc.getInfo())
{
    printf("Failed to get document. Error: %s\n", getErrMsg());
    return;
}

// snipped bunch of other code

msgReason = "Document sent to wrong queue";

//Route back from Audit Queue
if (wfCurrentQ == Q_Audit && Q_Audit_Doctype != wfitem.docTypeName)
{
    return routeItemBack(wfitem,msgReason);
}

//Check if queue is AP Complete
//we only want to route back if it didn't come from Audit
var previousQueue = getPreviousQueue(wfitem);
if (wfCurrentQ == Q_Complete && Q_Audit_Doctype == wfitem.docTypeName && previousQueue != Q_Audit)
{
    return routeItemBack(wfitem,msgReason);
}

For completeness, here is the routeItemBack function referenced in the script.

// -------------------------------------------------------------------
// Function:      routeItemBack
// Purpose:       Routes an item to previous queue
// Args(type):    wfitem (INWfItem object), msgReason (string) - routing reason
//                Note: assumes Idle state of wfStatus set in calling script
// Returns:       Boolean
// -------------------------------------------------------------------
function routeItemBack(wfitem,msgReason)
{
    //Perform Routeback
    if (wfitem.routeBack(msgReason))
        return true;
    else
    {
        printf("Couldn't route item. %s\n", getErrMsg());
        wfitem.setState(wfStatus, "Release Hold");
    }
    return false;
}

HTH

Lawson Security Reporting

Reporting over LAUA was easy because everything was in the database. With later versions of Lawson and the implementation of Lawson Security, everything is stored in your LDAP. (Personally, I think that storing the data in the LDAP is a relative non-issue. However, the implementation that Lawson has done is epically stupid; specifically how it handles the services (Employee, Vendor, Requester and Customer) as strings that must be parsed and cannot be logically searched. Seriously?) Since all of the data is in the LDAP, you have two choices: use the reporting options in the Lawson Security tool or dump the data to a database table. Chances are that you’ll actually use both options, but for very different reasons.

It wasn’t long after we implemented RM (Resource Manager) that we started to understand the potential (and realized) issues. Consequently, we started dumping sections of the LDAP to database tables. Primarily we used the functionality in Lawson Smart Notes to read the LDAP and save the data to a database table. (There is a strong argument for this approach. If you’re interested in how to do it, leave a comment and I’ll make a follow-up post.) There are several potential problems with this though, namely that we have to have a separate table for each tree in the LDAP and there is no relation between them. A secondary problem is that our LBI is on SQL Server and our Lawson S3 environment is on Oracle. Combining the data is possible in Crystal Reports but cumbersome. Outside of Crystal Reports (and MS Access) it’s “impossible”. We started going down the path of using Lawson Process Flow and Transformation Extender to read the LDAP and write to tables in our Lawson schema, but that’s easier said than done (specifically when you have M:1 relationships, like an employee proxy situation or a contractor-to-vendor scenario). We were also doing an “implement as required” approach, which led to an incohesive amalgamation of tables with a fair amount of duplicated data. If you know me at all, you’ll know that this kind of thing makes me crazy. Enter AVAAP. For the record, AVAAP (and their solution) came to us on a very high recommendation from a very trustworthy source (Moffitt Health, with whom we have a personal connection and who routinely presents at CUE/Inforum).

If you have implemented Lawson Security, or you are planning to implement it, and you own LBI, you should contact AVAAP and buy their Lawson Security Dashboard. As part of the product, you get a pre-built Framework Services dashboard and 25+ reports with a fair amount of customization available. I haven’t run across any aspect that it doesn’t report over, and they also offer the ability to do Segregation of Duties reporting. Even more importantly, they deliver a COBOL program (that calls a Java program) that dumps the ENTIRE LDAP to a set of tables and can be run on demand (ours takes about 20 minutes for 36,000 users). I honestly can’t say whether the reports that were delivered are being used except by our Auditing department, but we use the tables for all kinds of things.

Primarily our uses are to add user names to reports. For example, we have a report that we send out for “Unapproved Invoices”. When you run the query over the LOGAN tables, it only has the user name. This is of little help to most people in the case of jbrown45. However, when we join our reports to the tables that AVAAP creates, we’re able to provide the user’s name (James Brown) on the report with the same level of performance that we’ve come to expect from LBI.

My single strongest argument (and how I believe you can justify ROI) is the “Copy User” functionality that is the ultimate point of this post. I’m sure that everyone in a security role has at some point been asked to “set up user x to be exactly like user y”.
This is how we do it:

  • An LBI report is run that accepts a user name
  • The LBI report returns the full security setup in a CSV output (which is formatted to be opened in Excel)
  • The Security Group modifies necessary fields (name, employee, etc)
  • The output is passed through a Perl script that reformats into XML to be loaded by the Lawson “loaduser” script

The report could be formatted to produce the XML required by “loaduser”, but we have several other automated processes built around the same CSV format that are subsequently passed into the Perl script. (If you’re interested in the Perl script, leave a comment and I may post it.)

Without further ado, let me give you the single best reason to buy the AVAAP solution (Note that I am not affiliated with AVAAP, nor am I receiving compensation for my recommendation):
(Note: this is Oracle 11g syntax. I believe much of it, like PIVOT, will work on SQL Server as well, but the hierarchy queries will not.)

SELECT Trim(RES.ZSUID) AS ID, SSOP.RMID AS RMID, OS.OSID AS OSID, OS.OSPASSWORD AS OSPASSWORD,
  Trim(RES.ZSNAME) AS NAME, Trim(RES.FNAME) AS FirstName, Trim(RES.LNAME) AS LastName, RES.CHECKLS AS CheckLS,
  ZROLES.ROLE_LIST AS "ROLE", ZGROUPS.GROUP_LIST AS "GROUP", Trim(RES.DEFPRODLINE) AS ProductLine, RES.ZSACCESS AS "ACCESS", Trim(RES.ACCTUNTCTRL) AS AccountingUnitControl, Trim(RES.ACTVYGRPCOLCT) AS AcgrpCollect,
  Trim(ATTR.CUST_ATTR3) AS ACTimeApprover, Trim(RES.ACTVYLIST) AS ActivityList, RES.ADDINS AS ADDINS, RES.JOBQUEACCESS AS AllowJobQueue, '' AS COMMENTS, Trim(RES.COMPANYCTRL) AS CompanyControl,
  Trim(EMAIL) AS Email, Trim(ATTR.CUST_ATTR1) AS ESSGroup, Trim(ATTR.CUST_ATTR5) AS GLNode1, Trim(ATTR.CUST_ATTR6) AS GLNode2, GLS.GLS AS GLStructure, Trim(ATTR.CUST_ATTR7) AS HRNode1,
  Trim(ATTR.CUST_ATTR8) AS HRNode2, HRS.HRS AS HRStructure, Trim(ATTR.CUST_ATTR2) AS MSSGroup, 
  Trim(RES.LWSNOLEDB) AS OLEDBC, Trim(RES.PORTALADMIN) AS PortalAdmin, Trim(RES.PORTALROLE) AS PortalRole, '' AS PrimaryGroup, '' AS PrimaryStructure, Trim(RES.PRCSLVLCTRL) AS ProcessLevelControl,
  Trim(ATTR.CUST_ATTR10) AS PROXYACCESS, Trim(RES.WORKFLOWUSR) AS WFUser,
  EMP.SERVICE AS "EMPLOYEE^SERVICE", EMP.COMPANY AS "EMPLOYEE^COMPANY", EMP.EMPLOYEE AS "EMPLOYEE^EMPLOYEE", REQ.SERVICE AS "REQUESTER^SERVICE", REQ.REQUESTER AS "REQUESTER^REQUESTER",
  VEN.SERVICE AS "VENDOR^SERVICE", VEN.VENDOR_GROUP AS "VENDOR^VENDOR_GROUP", VEN.VENDOR AS "VENDOR^VENDOR"
FROM LAWSON.ZSRESOURCE RES
LEFT OUTER JOIN LAWSON.ZSCUSTATTR ATTR
  ON RES.ZSUID = ATTR.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ATTR_VALUE) AS GLS
          FROM LAWSON.ZSCUSTATR2
          WHERE ATTR_NAME = 'GLStructure'
  ) GLS ON RES.ZSUID = GLS.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ATTR_VALUE) AS HRS
          FROM LAWSON.ZSCUSTATR2
          WHERE ATTR_NAME = 'HRStructure'
  ) HRS ON RES.ZSUID = HRS.ZSUID
INNER JOIN (
    SELECT ZSUID, LTrim(SYS_CONNECT_BY_PATH(ZSROLE, ','),',') AS ROLE_LIST
    FROM (
      SELECT ZSUID, Trim(ZSROLE) ZSROLE, Row_Number() OVER (PARTITION BY ZSUID ORDER BY ZSUID, ZSROLE) ROWNUMBER
      FROM LAWSON.ZSLDAPRES
    )
    WHERE CONNECT_BY_ISLEAF = 1
    START WITH ROWNUMBER = 1
      CONNECT BY ZSUID = PRIOR ZSUID
        AND ROWNUMBER = (PRIOR ROWNUMBER + 1)
      ORDER SIBLINGS BY ZSUID, ZSROLE
  ) ZROLES ON RES.ZSUID = ZROLES.ZSUID
INNER JOIN (
    SELECT ZSUID, LTrim(SYS_CONNECT_BY_PATH(ZSGROUP, ','),',') AS GROUP_LIST
    FROM (
      SELECT ZSUID, Trim(ATTR_VALUE) ZSGROUP, Row_Number() OVER (PARTITION BY ZSUID ORDER BY ZSUID, ATTR_VALUE) ROWNUMBER
      FROM LAWSON.ZSRESATTR
      WHERE ATTR_NAME = 'Group'
    )
    WHERE CONNECT_BY_ISLEAF = 1
    START WITH ROWNUMBER = 1
      CONNECT BY ZSUID = PRIOR ZSUID
        AND ROWNUMBER = (PRIOR ROWNUMBER + 1)
      ORDER SIBLINGS BY ZSUID, ZSGROUP
  ) ZGROUPS ON RES.ZSUID = ZGROUPS.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ZSIDENTITY) AS SERVICE, Trim(ZSVALUE) AS REQUESTER
          FROM LAWSON.ZSLDAPIDEN
          WHERE ZSIDENTITY = 'PROD_REQUESTER'
  ) REQ ON RES.ZSUID = REQ.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD_EMPLOYEE'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('COMPANY' AS COMPANY, 'EMPLOYEE' AS EMPLOYEE) )
  ) EMP ON RES.ZSUID = EMP.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD_VENDOR'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('VENDOR_GROUP' AS VENDOR_GROUP, 'VENDOR' AS VENDOR) )
  ) VEN ON RES.ZSUID = VEN.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD_CUSTOMER'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('CUSTOMER_GROUP' AS CUSTOMER_GROUP, 'CUSTOMER' AS CUSTOMER) )
  ) CUST ON RES.ZSUID = CUST.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('LOGIN' AS OSID, 'PASSWORD' AS OSPASSWORD) )
  ) OS ON RES.ZSUID = OS.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ZSVALUE) AS RMID
          FROM LAWSON.ZSLDAPIDEN
          WHERE ZSIDENTITY = 'SSOP'
  ) SSOP ON RES.ZSUID = SSOP.ZSUID
WHERE RES.ZSUID IN (
  SELECT ZSUID FROM LAWSON.ZSRESOURCE
  WHERE ZSUID = 'jdoe'
  )

Using this report, the subsequent Perl script, and the Lawson-provided loaduser, we can produce an EXACT copy of a user from production to test in about 2 minutes. A copy from Prod to Prod takes a little longer, as there is some time spent editing the file (username, employee, etc). Still, how does 2 minutes compare to YOUR user “copy” process?

HTH

Lawson Process Flow S3 Transaction Node

I definitely learned something new today. I’ve been using Lawson Process Flow for 5 years now. I’ve never had any formal training, but I have read every document that Lawson has put out on it. In addition (and the reason for my discovery), my company is currently participating in a beta project for Lawson Process Automation 10.0.1. It’s not generally available and we signed a Non-disclosure agreement so I can’t discuss it directly. I am however going to be presenting at Inforum on the results of the beta. As part of my testing, I discovered something that I’m pretty sure most people don’t know. I was going to include it in my Inforum presentation until I went back to 9.0.1 Lawson Process Flow and discovered it worked there as well.

Here’s the big secret: the Lawson S3 Transaction node returns ALL field values from the screen, not just the values that process flow shows (RETURN_CODE and MESSAGE_CODE). I had no idea and I certainly have no idea why Lawson doesn’t publicize this or at least document it. Up until today, when I needed to know something about the result of an AGS call, I would make the AGS call with a WebRun node and then parse the output in an XML node (like I talk about here: http://wp.me/pE8vz-a0). This isn’t actually necessary as you can get the output from the Transaction node itself.

Let’s say you want to perform an Add action on a form and you need to know something more than the result of the AGS call. Some common examples are adding Employees on HR11 (getting the Employee number), adding a vendor on AP10 (getting the vendor number), adding an asset on AM20.2 (getting the asset number), etc. Chances are you aren’t simply doing an add, but need to also do something else. For vendors it may be adding diversity codes, for assets it would probably involve adding items, and for employees it might be adding benefits.

In order to access the data from a Transaction node you need to append the field name to your transaction node name. If you’re using the _LFN option, then you’ll need to use an underscore (_) instead of a dash (-) in the field names.

Example:
Assuming an Inquire call on HR11 for my employee id:

https://[server]/servlet/Router/Transaction/Erp?_PDL=PROD&_TKN=HR11.1&_LFN=ALL&_EVT=CHG&FC=I&EMP-COMPANY=1&EMP-EMPLOYEE=12345

I can get the result in process flow by using the following reference (assuming my transaction node has the id HR11):
HR11_PEM_SEX

Talk about a revelation.

HTH

Updating Jobs with Lawson Process Flow

This post is specifically about using Process Flow to update jobs in Lawson, but the technique also applies to updating Lawson forms if you’re not using the _CACHE parameter that I posted about here. The concepts are the same for updating Lawson using a POST action (like in Design Studio).

The business case is that certain Lawson jobs require specific date parameters and either can’t be run for future dates or the impact of doing so is undesirable. A good example of this is GL146 (Batch Journal Control). We have Journal Entry approval turned on, so we use the GL146 program to auto-approve all journal entries other than GL and RJ (like AP, Payroll, etc). The problem is that GL146 requires a Period and Year. Unlike GL190, which can be run for the variables “Current Year” and “Current Period”, GL146 requires a numeric value in both. In order to keep them updated, we run a process flow that changes the Period and Year on the GL146 every month.
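Since the flow runs monthly, the new values can be derived from the run date. A minimal sketch, assuming your fiscal periods line up with calendar months (if your fiscal calendar differs, so will the mapping):

```javascript
// Derive the Year and Period values to plug into GL146.
// Assumes fiscal periods align with calendar months.
function currentPeriod(date) {
  return {
    year: date.getFullYear(),
    period: date.getMonth() + 1  // getMonth() is zero-based
  };
}
```

For example, `currentPeriod(new Date(2012, 0, 15))` yields year 2012, period 1.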

Updating a job is relatively simple, however as I alluded to above, there is no _CACHE parameter that you can specify on an AGS call to update a Lawson job. In order to avoid wiping out parameters, you must pass all values back to the AGS call. Doing this via a GET is unrealistic for several reasons, not the least of which is that for jobs with many parameters, the GET URL may be too long for the server to process. If you are so inclined, this is a pretty nice technical write-up on the difference.
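The practical difference amounts to where the parameter string goes. A sketch of assembling that string once, so it can be sent as a POST body instead of being appended to the URL (the parameter names here are illustrative):

```javascript
// Build an AGS parameter string from an object of name/value pairs.
// With GET this string is appended to the URL (which can exceed server
// URL-length limits for jobs with many parameters); with POST it is
// sent as the request body, avoiding that limit.
function buildPostString(params) {
  var pairs = [];
  for (var key in params) {
    pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
  }
  return pairs.join("&");
}
```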

The basic process is this:

  • Inquire on job via Web Run node
  • Parse job via XML node
  • Update necessary parameters in Assign node
  • Update job via Web Run node
  • Run Job (if necessary)

Inquiring on a Job
To inquire on a job, it’s the same as a normal AGS call with a few minor changes. You cannot use the _CACHE parameter, and you must include the JOB-NAME, USER-NAME, and _STEPNBR parameters. The _STEPNBR parameter indicates which step of the job you are updating and is zero-based, so a job with only one step will use _STEPNBR=0. The _TKN parameter must also match the actual token of the step you are updating; you can’t be lazy and pass in some default value because the inquire won’t work. Something else to note: you probably should *not* use _LFN=ALL (or TRUE). Using this parameter causes the Lawson field names to be returned, and Lawson field names have dashes in them, which don’t play well with the XML parsing. If you do return the Lawson field names, you will have to cleanse them prior to parsing by converting the dashes to underscores, then convert them back to the Lawson field names after you make your updates. Seems like too much work to me.
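If you do end up working with _LFN=ALL output, the cleansing step can be sketched like this (illustrative only; it handles simple, attribute-free tags and nothing more):

```javascript
// Convert dashed Lawson field names into underscored, parse-friendly
// names (e.g. <EMP-COMPANY> -> <EMP_COMPANY>) and back again before
// posting the update.
function toUnderscores(xml) {
  return xml.replace(/<(\/?)([\w-]+)>/g, function (m, slash, name) {
    return "<" + slash + name.replace(/-/g, "_") + ">";
  });
}

function toDashes(xml) {
  return xml.replace(/<(\/?)([\w-]+)>/g, function (m, slash, name) {
    // leave internal names that legitimately start with an underscore
    // (_PDL, _TKN, _f80, ...) alone
    var restored = name.charAt(0) === "_" ? name : name.replace(/_/g, "-");
    return "<" + slash + restored + ">";
  });
}
```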

A basic inquire looks like this:
servlet/Router/Transaction/Erp?_PDL=&_TKN=GL146&_EVT=CHG&_RTN=DATA&_TDS=IGNORE&FC=I&JOB-NAME=GL146&USER-NAME=&_STEPNBR=0

This will get you a result like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<XGL146>
<GL146>
<_PDL>PROD</_PDL> 
<_TKN>GL146</_TKN> 
<_PRE>TRUE</_PRE> 
<_JOBTYPE>MULTISTEP</_JOBTYPE> 
<_STEPNBR>0</_STEPNBR> 
<_f0>GL146</_f0> 
<_f1>I</_f1> 
<_f2 /> 
<_f3>Submit</_f3> 
<_f4>Reports</_f4> 
<_f5>Job Sched</_f5> 
<_f6>Print Mgr</_f6> 
<_f7>GL146</_f7> 
<_f8>Batch Journal Control</_f8> 
<_f9>jobsched</_f9> 
<_f10>Job Scheduler</_f10> 
<_f11>PROD</_f11> 
<_f12>4</_f12> 
<_f13>GL146.prt</_f13> 
<_f14>Dist Group</_f14> 
<_f15>CHPMGRS</_f15> 
<_f16>1</_f16> 
<_f17>Yes</_f17> 
<_f18>/lawson/apps/print/jobsched/gl190autop/1</_f18> 
<_f19>bud-err-ac</_f19> 
<_f20>None</_f20> 
<_f21 /> 
<_f22>1</_f22> 
<_f23>Yes</_f23> 
<_f24>/lawson/apps/print/jobsched/gl190autop/1</_f24> 
<_f25>bud-errors</_f25> 
<_f26>None</_f26> 
<_f27 /> 
<_f28>1</_f28> 
<_f29>Yes</_f29> 
<_f30>/lawson/apps/print/jobsched/gl190autop/1</_f30> 
<_f31>error-rpt</_f31> 
<_f32>None</_f32> 
<_f33 /> 
<_f34>1</_f34> 
<_f35>Yes</_f35> 
<_f36>/lawson/apps/print/jobsched/gl190autop/1</_f36> 
<_f37 /> 
<_f38 /> 
<_f39 /> 
<_f40>0</_f40> 
<_f41 /> 
<_f42 /> 
<_f43 /> 
<_f44 /> 
<_f45 /> 
<_f46>0</_f46> 
<_f47 /> 
<_f48 /> 
<_f49 /> 
<_f50 /> 
<_f51 /> 
<_f52>0</_f52> 
<_f53 /> 
<_f54 /> 
<_f55 /> 
<_f56 /> 
<_f57 /> 
<_f58>0</_f58> 
<_f59 /> 
<_f60 /> 
<_f61 /> 
<_f62 /> 
<_f63 /> 
<_f64>0</_f64> 
<_f65 /> 
<_f66 /> 
<_f67 /> 
<_f68 /> 
<_f69 /> 
<_f70>0</_f70> 
<_f71 /> 
<_f72 /> 
<_f73 /> 
<_f74 /> 
<_f75 /> 
<_f76 /> 
<_f77 /> 
<_f78>ALL CO</_f78> 
<_f79>All Companies</_f79> 
<_f80>2012</_f80> 
<_f81>1</_f81> 
<_f82>A</_f82> 
<_f83>JE Approve</_f83> 
<_f84>Main2</_f84> 
<_f85 /> 
<_f86 /> 
<_f87 /> 
<_f88 /> 
<_f89 /> 
<_f90 /> 
<_f91 /> 
<_f92 /> 
<_f93>N</_f93> 
<_f94>No</_f94> 
<_f95>Approve0</_f95> 
<_f96>Y</_f96> 
<_f97>Yes</_f97> 
<_f98>GL</_f98> 
<_f99>RJ</_f99> 
<_f100 /> 
<_f101 /> 
<_f102 /> 
<_f103 /> 
<Message>Inquiry Complete</Message> 
<MsgNbr>000</MsgNbr> 
<StatusNbr>001</StatusNbr> 
<FldNbr>_f0</FldNbr> 
</GL146>
</XGL146>

Updating the parameters
After we have parsed the XML using the XML node (like this), we need to update it with the appropriate values. Keep in mind that you not only have to change the values you want to update, you must change the “transaction” fields as well. This means changing the action value (_f1) from “I” to “C” and clearing out the response fields (Message, MsgNbr, StatusNbr, FldNbr).

XMLJob_output.GL146._f80 = 2012;   //Year
XMLJob_output.GL146._f81 = 1;      //Period
XMLJob_output.GL146._f1 = "C";     //action: Inquire -> Change
XMLJob_output.GL146.Message = "";  //clear the response fields
XMLJob_output.GL146.MsgNbr = "";
XMLJob_output.GL146.StatusNbr = "";
XMLJob_output.GL146.FldNbr = "";

Updating the Job
All that’s left at this point is to pass the updated XML back into an AGS call with the Web Run node. This time however, we only send the base AGS call in the URL (/servlet/Router/Transaction/Erp) and we send the XML output in the Post String box. The node looks like this.

Run the job
If it’s necessary, the Job Run URL looks like this:
cgi-lawson/jobrun.exe?FUNC=run&USER=&JOB=GL146&OUT=XML

Here’s a look at what a simplified Process Flow would look like.

HTH