Lawson

Predicting the future with Process Automation, or: What Would Process Automation Do?

I hate surprises. I hate surprises to the point that I actually don’t care if I know what happens in a movie before I see it. Consequently, I spend a lot of time trying to figure out what is going to happen next, just so I’m not surprised. This got me thinking about predicting the future with Process Automation (yes, I realize this is an odd thing to think about, but you have to have a hobby, right?). It is fairly easy to figure out what happened in the past in a flow, either through a variable you have set or by viewing the output variables from a node. But is it so much harder to figure out what is going to happen next? As it turns out, not really.

I’m not sure that I have a business case for this; it’s just an interesting exercise, which is why you got the preamble about me and surprises. Back story aside, Process Automation (and Process Flow, although slightly more complicated) can be self-aware. By being self-aware, and given that a flow is a static set of instructions, you CAN predict the future – as in, you can figure out what will happen next within a flow. Before I lose everyone, I guess I should start getting to the good stuff.

It all starts with the fact that a flow is an XML document. With Process Automation you can read the XML that comprises the flow and search for the information that you need using E4X syntax. All of the information about the flow is in the XML, so you can effectively read ahead to what the flow will do next and thus predict the future.

Here is a simple flow that demonstrates reading ahead. I will attempt to determine the value in the “To” field of the email node at the end.
ReadNextNodeFlow

In order:

  1. Run a Landmark query to retrieve the XML of the flow. If you’re using ProcessFlow Integrator, you would need to retrieve the XML from the file system.
  2. Parse the string information into an XML document.
  3. Use JavaScript to retrieve the To value of the email node.
  4. Email node – it just exists so we can read it.

Landmark Query
_dataArea="prod" & _module="pfi" & _objectName="PfiFlowDefinition" & _actionName="Find" & _actionOperator="NONE" & _actionType="SingleRecordQuery" & _pageSize="30" & PfiFlowDefinition="ReadFutureNode" & CurrentFlowXml
Note that this assumes that you know the name of the flow and that it has been published to the server.

The XML document has the following basic structure:

<process>
  <processUserNode></processUserNode>
  <activities>
    <activity>
      <prop><anyData></anyData></prop>
      ...
      <prop><anyData></anyData></prop>
      <OnActivityError />
    </activity>
  </activities>
  <edges>
    <edge />
    ...
    <edge />
  </edges>
</process>
  • processUserNode – contains information set at the process level
  • activities – contains all of the activity nodes
  • activity – information about the node itself (type, location, etc)
  • prop – the settings in the node. There may be many prop nodes
  • OnActivityError – contains information on what to do on error
  • edges – container node
  • edge – indicates the “next” node

Retrieving the To value
I recommend this page as a reference for using E4X

//Find 'this' node in the edge list
var xmlThisNode = XMLParseFlow_output.edges[0].edge.(@from=='Assign8190'); 
//get the ID of the next node (email)
var strNextNodeID = xmlThisNode[0].@to;  
//Get the activity node for the email
var xmlNextNode = XMLParseFlow_output.activities[0].activity.(@id==strNextNodeID); 
//Pull the To property 
var xmlEmailToNode = xmlNextNode[0].prop.(@name=='to');  
//Get the value of To
strEmailTo = xmlEmailToNode.anyData; 
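If the E4X syntax above is unfamiliar, the same edge-to-activity walk can be sketched in plain JavaScript over an already-parsed flow. The object shape and helper name below are hypothetical stand-ins for the real flow XML, just to show the lookup logic:

```javascript
// Minimal stand-in for a parsed flow: edges point from node to node,
// activities carry their props. These shapes are illustrative only,
// not the real Process Automation XML or API.
function findNextNodeProp(flow, currentNodeId, propName) {
  // Find the edge leaving the current node to get the next node's id
  var edge = flow.edges.find(function (e) { return e.from === currentNodeId; });
  if (!edge) return null;
  // Look up the activity for that id and pull the requested prop
  var activity = flow.activities.find(function (a) { return a.id === edge.to; });
  if (!activity) return null;
  var prop = activity.props.find(function (p) { return p.name === propName; });
  return prop ? prop.anyData : null;
}

// Example: an Assign node followed by an email node
var flow = {
  edges: [{ from: "Assign8190", to: "Email5150" }],
  activities: [{ id: "Email5150", props: [{ name: "to", anyData: "user@example.com" }] }]
};
console.log(findNextNodeProp(flow, "Assign8190", "to")); // user@example.com
```

The E4X version does the same three hops – edge by @from, activity by @id, prop by @name – just with XML filter predicates instead of array searches.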

I suppose you could use this technique to retrieve the initial value of the Start node variables to see if they have changed during the flow. There might be other uses as well. I’m not sure if this is really something you would ever actually do in a production environment, but I feel better knowing I can predict the future.

HTH


Calling Process Automation from the command line

I discussed this briefly during my presentations at Inforum this year. The basic business requirements are that there are occasions when you need to trigger a workunit other than from the application (S3/M3/LMK) or from a schedule. In Process Automation you can use Channels to poll for new data (files, JMS, etc), but there are times when you still need more flexibility.

Need some examples? Okay, how about as part of a script or set of processes? For us, a good example is our ACH process for Accounts Payable. Our bank (PNC) will not accept the files produced by the Lawson AP160 job, so we need to manipulate them. The reason we can’t use File Channels in Process Automation is that the bank requires us to send one file covering all of our bank accounts (currently 85), with a separate batch for each account. The easiest way to accomplish this is to have a script that runs as the last step of a multistep job, after all of the AP160 steps. That script simply calls a Java program that triggers a process flow to process the files and send them to PNC. There are several other examples, such as extending the ability to do scripting. Imagine being able to call a script that can update the RM attributes on a user. Pretty nice option, eh?

Hopefully, I don’t really need to convince you this is a good idea. By being able to call a process flow at will, you can eliminate customizations in your application environment, which is a good thing. Below is Java code that will trigger a workunit. The code is fairly verbosely commented, so you should be able to modify it to suit your needs without any more commentary from me. You will need to update the appropriate variables (host, user, password, etc.), create a process flow called “TestJavaProcess”, and upload it to your server. After you run the Java program, you can review the workunit that was created to see where the values from the program appear, so you know what to update.

/* ==============================================================================
 * TestCallLPA.java
 *
 * Description:
 *  Program is designed to call LPA flows from the command line
 *
 * ============================================================================== */
import com.lawson.bpm.eprocessserver.interfaces.ProcessRequest;
import com.lawson.bpm.eprocessserver.interfaces.ProcessResponse;
import com.lawson.bpm.eprocessserver.interfaces.ProcessVariable;
import com.lawson.bpm.eprocessserver.interfaces.ProcessFolder;
import com.lawson.bpm.eprocessserver.interfaces.LPSSession;


public class TestCallLPA
{
    public static LPSSession session;
    public static ProcessRequest request = new ProcessRequest();
    public static ProcessResponse response;
    //===================================================
    public static void main(String[] args) {
        try {
            String HostName = "gridhost"; //grid host name
            Integer PortNumber = 50005; //port number that grid is listening on
            String UserName = "user"; //a valid admin LMK user name
            String UserPwd = "password";  //password for UserName
            String LMKProd = "prod"; //note this is the LMK PL
            String ProcessName = "TestJavaProcess";
            String ProcessTitle = "Java API Workunit";
            String KeyString = "123456789"; //This with key value needs to be a unique string
            String KeyValue = "KeyString"; //This with key string needs to be a unique string
            Boolean textOutput = false; //set to true to print return values to screen - infocode, return message, outputdata with | separators
            Boolean returnData = false; //set to true to output data -- will need a return node
            Boolean Async = false; //set to true to trigger WU without waiting for response
            request.setSystem("Landmark");
            request.setSource("JavaAPI");
            request.setFlowName(ProcessName);
            request.setWorkTitle(ProcessTitle);
            request.setFilterKey("FILTERBY");
            request.setFilterValue("FILTERVALUE");
            request.setKeyString(KeyString,KeyValue);
            /* For demo purposes, this is commented out.  If you have a service, set here */
            //request.setService(setting[1]);
            //Criteria 1
            request.setBizCriteria(request.BIZ_CRITERIA_1,"CRITERION1");
            //Criteria 2
            request.setBizCriteria(request.BIZ_CRITERIA_2,"CRITERION2");
            //Criteria 3
            request.setBizCriteria(request.BIZ_CRITERIA_3,"CRITERION3");

            //Start adding variables
            //Boolean
            ProcessVariable variable = new ProcessVariable("BOOLEAN","true",ProcessVariable.TYPE_BOOLEAN);
            request.addVariable(variable);
            //Integer variable
            variable = new ProcessVariable("INTEGER","1",ProcessVariable.TYPE_INT);
            request.addVariable(variable);
            //Decimal variable
            variable = new ProcessVariable("DOUBLE","1.00",ProcessVariable.TYPE_DBL);
            request.addVariable(variable);
            //Date variable
            variable = new ProcessVariable("DATE","01/01/2013",ProcessVariable.TYPE_DATE);
            request.addVariable(variable);
            //Long Integer variable
            variable = new ProcessVariable("LONG","123456789",ProcessVariable.TYPE_LONG);
            request.addVariable(variable);
            //Object Variable -- not sure how to pass
            variable = new ProcessVariable("OBJECT","",ProcessVariable.TYPE_OBJECT);
            request.addVariable(variable);
            //Array variable -- not sure how to pass
            variable = new ProcessVariable("ARRAY","",7); //Array process type is not documented and is not TYPE_ARRAY
            request.addVariable(variable);
            //Add input data
            request.setInputData("Some input data");

            //Connect to grid and create a session
            session = LPSSession.createGridSession(HostName,PortNumber,UserName,UserPwd,LMKProd);
            //Create workunit
            //Pass in the built request from above and set Async value
            response = createWU(request,Async);
            //If user selected Async then createWU will exit and we won't get here
            int eRC = response.getErrorCode();

            //Deal with response
            if (textOutput) {
                System.out.println(response.getInformationCode()+"|"+response.getReturnMessage()+"|"+response.getOutputData());
            }
            if (returnData) {
                System.out.print(response.getOutputData());
            }

            //Cleanup and close out
            session.close();
            System.exit(eRC);
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
    }

    //===================================================
    //Trigger workunits based on sync vs async
    public static ProcessResponse createWU(ProcessRequest request,Boolean Async) {
        try {
            if (Async) {
                response = session.createAndReleaseWorkunit(request);
                session.close();
                System.exit(response.getErrorCode());
            } else {
                response = session.runRequest(request,true);
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
        return response;
    }
}

In order to compile the code, you will need to make sure that the following files are in your classpath (and the directory you save the code file above to):

  • sec-client.jar
  • bpm-interfaces.jar
  • type.jar
  • security.jar
  • clientsecurity.jar
  • grid-client.jar
  • lawutil_logging.jar

A basic classpath, compile, and call command (on Unix) would be:

$ export CLASSPATH=/lawson/lmrkstage/env/java/thirdParty/sec-client.jar:/lawson/lmrkstage/env/java/jar/bpm-interfaces.jar:/lawson/lmrkstage/env/java/jar/type.jar:/lawson/lmrkstage/env/java/jar/security.jar:/lawson/lmrkstage/env/java/jar/clientsecurity.jar:/lawson/lmrkstage/env/java/thirdParty/grid/grid-client.jar:/lawson/lmrkstage/env/java/jar/lawutil_logging.jar
# Compile 
$ javac TestCallLPA.java
# Call program
$ java TestCallLPA

HTH

Lawson Security Reporting

Reporting over LAUA was easy because everything was in the database. With later versions of Lawson and the implementation of Lawson Security, everything is stored in your LDAP. (Personally, I think that storing the data in the LDAP is a relative non-issue. However, the implementation that Lawson has done is epically stupid; specifically how it handles the services (Employee, Vendor, Requester and Customer) as strings that must be parsed and cannot be logically searched. Seriously?) Since all of the data is in the LDAP, you have two choices: use the reporting options in the Lawson Security tool or dump the data to a database table. Chances are that you’ll actually use both options, but for very different reasons.

It wasn’t long after we implemented RM (Resource Manager) that we started to understand the potential (and realized) issues. Consequently, we started dumping sections of the LDAP to database tables. Primarily we used the functionality in Lawson Smart Notes to read the LDAP and save the data to a database table. (There is a strong argument for this approach. If you’re interested in how to do it, leave a comment and I’ll make a follow-up post.) There are several potential problems with this, though, namely that we have to have a separate table for each tree in the LDAP and there is no relation between them. A secondary problem is that our LBI is on SQL Server and our Lawson S3 environment is on Oracle. Combining the data is possible in Crystal Reports but cumbersome. Outside of Crystal Reports (and MS Access) it’s “impossible”. We started going down the path of using Lawson Process Flow and Transformation Extender to read the LDAP and write to tables in our Lawson schema, but that’s easier said than done (specifically when you have many-to-one relationships, as in an employee proxy situation or a contractor-to-vendor scenario). We were also doing an “implement as required” approach, which led to a disjointed amalgamation of tables with a fair amount of duplicated data. If you know me at all, you’ll know that this kind of thing makes me crazy. Enter AVAAP. For the record, AVAAP (and their solution) came to us on a very high recommendation from a very trustworthy source (Moffitt Health, with whom we have a personal connection and who routinely presents at CUE/Inforum).

If you have implemented Lawson Security, or you are planning to implement it, and you own LBI, you should contact AVAAP and buy their Lawson Security Dashboard. As part of the product, you get a pre-built Framework Services dashboard and 25+ reports with a fair amount of customization available. I haven’t run across any aspect that it doesn’t report over, and they also offer the ability to do Segregation of Duties reporting. Even more importantly, they deliver a COBOL program (which calls a Java program) that dumps the ENTIRE LDAP to a set of tables and can be run on demand (ours takes about 20 minutes for 36,000 users). I honestly can’t say whether the reports that were delivered are being used except by our Auditing department, but we use the tables for all kinds of things.

Primarily, we use the tables to add user names to reports. For example, we have a report that we send out for “Unapproved Invoices”. When you run the query over the LOGAN tables, it only has the user ID, which is of little help to most people in the case of jbrown45. However, when we join our reports to the tables that AVAAP creates, we’re able to provide the user’s name (James Brown) on the report with the same level of performance that we’ve come to expect from LBI.

My single strongest argument (and how I believe you can justify ROI) is the “Copy User” functionality that is the ultimate point of this post. I’m sure that everyone in a security role has at some point been asked to “set up user x to be exactly like user y”.
This is how we do it:

  • An LBI report is run that accepts a user name
  • The LBI report returns the full security setup in a CSV output (which is formatted to be opened in Excel)
  • The Security Group modifies necessary fields (name, employee, etc)
  • The output is passed through a Perl script that reformats into XML to be loaded by the Lawson “loaduser” script

The report could be formatted to produce the XML required by “loaduser”, but we have several other automated processes built around the same CSV format that are subsequently passed into the Perl script. (If you’re interested in the Perl script, leave a comment and I may post it.)
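As a rough illustration of what that reformatting step does (sketched here in JavaScript rather than Perl, and with element names derived from the CSV headers rather than the actual loaduser schema, which isn’t shown here), a CSV row can be reshaped into an XML fragment like so:

```javascript
// Hedged sketch: turn one CSV header row plus one data row into an XML
// fragment. The real loaduser format will differ; treat this as shape only.
function csvRowToXml(headerLine, dataLine, rootTag) {
  var headers = headerLine.split(",");
  var values = dataLine.split(",");
  var body = headers.map(function (h, i) {
    // Derive an XML-safe tag name from the CSV header
    var tag = h.trim().replace(/[^A-Za-z0-9_]/g, "_");
    return "  <" + tag + ">" + (values[i] || "").trim() + "</" + tag + ">";
  }).join("\n");
  return "<" + rootTag + ">\n" + body + "\n</" + rootTag + ">";
}

var xml = csvRowToXml("ID,NAME,ROLE", "jdoe,John Doe,APClerk", "user");
console.log(xml);
```

A real version would also need to handle quoted commas in the CSV and the nested structures (roles, identities) that the security export contains.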

Without further ado, let me give you the single best reason to buy the AVAAP solution (Note that I am not affiliated with AVAAP, nor am I receiving compensation for my recommendation):
Note: this is Oracle 11g syntax. I believe that much of it (like PIVOT) will work on SQL Server as well, but the hierarchy queries will not.

SELECT Trim(RES.ZSUID) AS ID, SSOP.RMID AS RMID, OS.OSID AS OSID, OS.OSPASSWORD AS OSPASSWORD,
  Trim(RES.ZSNAME) AS NAME, Trim(RES.FNAME) AS FirstName, Trim(RES.LNAME) AS LastName, RES.CHECKLS AS CheckLS,
  ZROLES.ROLE_LIST AS "ROLE", ZGROUPS.GROUP_LIST AS "GROUP", Trim(RES.DEFPRODLINE) AS ProductLine, RES.ZSACCESS AS "ACCESS", Trim(RES.ACCTUNTCTRL) AS AccountingUnitControl, Trim(RES.ACTVYGRPCOLCT) AS AcgrpCollect,
  Trim(ATTR.CUST_ATTR3) AS ACTimeApprover, Trim(RES.ACTVYLIST) AS ActivityList, RES.ADDINS AS ADDINS, RES.JOBQUEACCESS AS AllowJobQueue, '' AS COMMENTS, Trim(RES.COMPANYCTRL) AS CompanyControl,
  Trim(EMAIL) AS Email, Trim(ATTR.CUST_ATTR1) AS ESSGroup, Trim(ATTR.CUST_ATTR5) AS GLNode1, Trim(ATTR.CUST_ATTR6) AS GLNode2, GLS.GLS AS GLStructure, Trim(ATTR.CUST_ATTR7) AS HRNode1,
  Trim(ATTR.CUST_ATTR8) AS HRNode2, HRS.HRS AS HRStructure, Trim(ATTR.CUST_ATTR2) AS MSSGroup, 
  Trim(RES.LWSNOLEDB) AS OLEDBC, Trim(RES.PORTALADMIN) AS PortalAdmin, Trim(RES.PORTALROLE) AS PortalRole, '' AS PrimaryGroup, '' AS PrimaryStructure, Trim(RES.PRCSLVLCTRL) AS ProcessLevelControl,
  Trim(ATTR.CUST_ATTR10) AS PROXYACCESS, Trim(RES.WORKFLOWUSR) AS WFUser,
  EMP.SERVICE AS "EMPLOYEE^SERVICE", EMP.COMPANY AS "EMPLOYEE^COMPANY", EMP.EMPLOYEE AS "EMPLOYEE^EMPLOYEE", REQ.SERVICE AS "REQUESTER^SERVICE", REQ.REQUESTER AS "REQUESTER^REQUESTER",
  VEN.SERVICE AS "VENDOR^SERVICE", VEN.VENDOR_GROUP AS "VENDOR^VENDOR_GROUP", VEN.VENDOR AS "VENDOR^VENDOR"
FROM LAWSON.ZSRESOURCE RES
LEFT OUTER JOIN LAWSON.ZSCUSTATTR ATTR
  ON RES.ZSUID = ATTR.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ATTR_VALUE) AS GLS
          FROM LAWSON.ZSCUSTATR2
          WHERE ATTR_NAME = 'GLStructure'
  ) GLS ON RES.ZSUID = GLS.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ATTR_VALUE) AS HRS
          FROM LAWSON.ZSCUSTATR2
          WHERE ATTR_NAME = 'HRStructure'
  ) HRS ON RES.ZSUID = HRS.ZSUID
INNER JOIN (
    SELECT ZSUID, LTrim(SYS_CONNECT_BY_PATH(ZSROLE, ','),',') AS ROLE_LIST
    FROM (
      SELECT ZSUID, Trim(ZSROLE) ZSROLE, Row_Number() OVER (PARTITION BY ZSUID ORDER BY ZSUID, ZSROLE) ROWNUMBER
      FROM LAWSON.ZSLDAPRES
    )
    WHERE CONNECT_BY_ISLEAF = 1
    START WITH ROWNUMBER = 1
      CONNECT BY ZSUID = PRIOR ZSUID
        AND ROWNUMBER = (PRIOR ROWNUMBER + 1)
      ORDER SIBLINGS BY ZSUID, ZSROLE
  ) ZROLES ON RES.ZSUID = ZROLES.ZSUID
INNER JOIN (
    SELECT ZSUID, LTrim(SYS_CONNECT_BY_PATH(ZSGROUP, ','),',') AS GROUP_LIST
    FROM (
      SELECT ZSUID, Trim(ATTR_VALUE) ZSGROUP, Row_Number() OVER (PARTITION BY ZSUID ORDER BY ZSUID, ATTR_VALUE) ROWNUMBER
      FROM LAWSON.ZSRESATTR
      WHERE ATTR_NAME = 'Group'
    )
    WHERE CONNECT_BY_ISLEAF = 1
    START WITH ROWNUMBER = 1
      CONNECT BY ZSUID = PRIOR ZSUID
        AND ROWNUMBER = (PRIOR ROWNUMBER + 1)
      ORDER SIBLINGS BY ZSUID, ZSGROUP
  ) ZGROUPS ON RES.ZSUID = ZGROUPS.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ZSIDENTITY) AS SERVICE, Trim(ZSVALUE) AS REQUESTER
          FROM LAWSON.ZSLDAPIDEN
          WHERE ZSIDENTITY = 'PROD_REQUESTER'
  ) REQ ON RES.ZSUID = REQ.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD_EMPLOYEE'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('COMPANY' AS COMPANY, 'EMPLOYEE' AS EMPLOYEE) )
  ) EMP ON RES.ZSUID = EMP.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD_VENDOR'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('VENDOR_GROUP' AS VENDOR_GROUP, 'VENDOR' AS VENDOR) )
  ) VEN ON RES.ZSUID = VEN.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD_CUSTOMER'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('CUSTOMER_GROUP' AS CUSTOMER_GROUP, 'CUSTOMER' AS CUSTOMER) )
  ) CUST ON RES.ZSUID = CUST.ZSUID
LEFT OUTER JOIN (
          SELECT * FROM (
            SELECT ZSUID, ZSIDENTITY SERVICE, ZSFIELD, Trim(ZSVALUE) AS ZSVALUE FROM LAWSON.ZSLDAPIDEN
            WHERE ZSIDENTITY = 'PROD'
          )
          PIVOT
          ( Max(ZSVALUE) FOR ZSFIELD IN ('LOGIN' AS OSID, 'PASSWORD' AS OSPASSWORD) )
  ) OS ON RES.ZSUID = OS.ZSUID
LEFT OUTER JOIN (
          SELECT ZSUID, Trim(ZSVALUE) AS RMID
          FROM LAWSON.ZSLDAPIDEN
          WHERE ZSIDENTITY = 'SSOP'
  ) SSOP ON RES.ZSUID = SSOP.ZSUID
WHERE RES.ZSUID IN (
  SELECT ZSUID FROM LAWSON.ZSRESOURCE
  WHERE ZSUID = 'jdoe'
  )

Using this report, the subsequent Perl script, and the Lawson-provided loaduser, we can produce an EXACT copy of a user from production to test in about 2 minutes. A copy from Prod to Prod takes a little longer, as there is some time spent editing the file (username, employee, etc). Still, how does 2 minutes compare to YOUR user “copy” process?

HTH

Lawson Process Flow S3 Transaction Node

I definitely learned something new today. I’ve been using Lawson Process Flow for 5 years now. I’ve never had any formal training, but I have read every document that Lawson has put out on it. In addition (and the reason for my discovery), my company is currently participating in a beta project for Lawson Process Automation 10.0.1. It’s not generally available and we signed a Non-disclosure agreement so I can’t discuss it directly. I am however going to be presenting at Inforum on the results of the beta. As part of my testing, I discovered something that I’m pretty sure most people don’t know. I was going to include it in my Inforum presentation until I went back to 9.0.1 Lawson Process Flow and discovered it worked there as well.

Here’s the big secret: the Lawson S3 Transaction node returns ALL field values from the screen, not just the values that process flow shows (RETURN_CODE and MESSAGE_CODE). I had no idea and I certainly have no idea why Lawson doesn’t publicize this or at least document it. Up until today, when I needed to know something about the result of an AGS call, I would make the AGS call with a WebRun node and then parse the output in an XML node (like I talk about here: http://wp.me/pE8vz-a0). This isn’t actually necessary as you can get the output from the Transaction node itself.

Let’s say you want to perform an Add action on a form and you need to know something more than the result of the AGS call. Some common examples are adding Employees on HR11 (getting the Employee number), adding a vendor on AP10 (getting the vendor number), adding an asset on AM20.2 (getting the asset number), etc. Chances are you aren’t simply doing an add, but need to also do something else. For vendors it may be adding diversity codes, for assets it would probably involve adding items, and for employees it might be adding benefits.

In order to access the data from a Transaction node you need to append the field name to your transaction node name. If you’re using the _LFN option, then you’ll need to use an underscore (_) instead of a dash (-) in the field names.

Example:
Assuming an Inquire call on HR11 for my employee id:

https://[server]/servlet/Router/Transaction/Erp?_PDL=PROD&_TKN=HR11.1&_LFN=ALL&_EVT=CHG&FC=I&EMP-COMPANY=1&EMP-EMPLOYEE=12345

I can get the result in process flow by using the following reference (assuming my transaction node has the id HR11):
HR11_PEM_SEX
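Since the _LFN option means every dash in a field name becomes an underscore in the reference, a tiny helper (the function name here is made up, purely for illustration) makes the naming convention explicit:

```javascript
// With _LFN=ALL, a Lawson field like PEM-SEX is referenced from the
// Transaction node as <nodeId>_<field-with-underscores>. Helper name is
// hypothetical; this just encodes the convention described above.
function transactionFieldRef(nodeId, lawsonFieldName) {
  return nodeId + "_" + lawsonFieldName.replace(/-/g, "_");
}

console.log(transactionFieldRef("HR11", "PEM-SEX")); // HR11_PEM_SEX
console.log(transactionFieldRef("HR11", "EMP-EMPLOYEE")); // HR11_EMP_EMPLOYEE
```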

Talk about a revelation.

HTH

Updating Jobs with Lawson Process Flow

This post is specifically about using Process Flow to update jobs in Lawson, but the technique also applies to updating Lawson forms if you’re not using the _CACHE parameter that I posted about here. The concepts are the same for updating Lawson using a POST action (like in Design Studio).

The business case is that certain Lawson jobs require specific date parameters and either can’t be run for future dates or the impact of doing so is undesirable. A good example of this is GL146 (Batch Journal Control). We have Journal Entry approval turned on, so we use the GL146 program to auto-approve all non-GL and non-RJ journal entries (like AP, Payroll, etc). The problem is that GL146 requires a Period and Year. Unlike GL190, which can be run for the variables “Current Year” and “Current Period”, GL146 requires a numeric value in both. In order to keep them updated, we run a process flow that changes the Period and Year on the GL146 every month.

Updating a job is relatively simple, however as I alluded to above, there is no _CACHE parameter that you can specify on an AGS call to update a Lawson job. In order to avoid wiping out parameters, you must pass all values back to the AGS call. Doing this via a GET is unrealistic for several reasons, not the least of which is that for jobs with many parameters, the GET URL may be too long for the server to process. If you are so inclined, this is a pretty nice technical write-up on the difference.

The basic process is this:

  • Inquire on job via Web Run node
  • Parse job via XML node
  • Update necessary parameters in Assign node
  • Update job via Web Run node
  • Run Job (if necessary)

Inquiring on a Job
To inquire on a job, it’s the same as a normal AGS call with a few minor changes. You cannot use the _CACHE parameter, and you must include the _JOB-NAME, _USER-NAME, and _STEPNBR parameters. The _STEPNBR parameter indicates which step of the job you are updating, and it is zero-based: a job with only one step will use _STEPNBR=0. The _TKN parameter must also match the actual token of the step at _STEPNBR. You can’t be lazy and pass in some default value, because the inquire won’t work. Something else to note: you probably should *not* use _LFN=ALL (or TRUE). Using this parameter causes the Lawson field names to be returned, and Lawson field names have dashes in them, which cause problems when parsing and referencing the XML in the flow. If you do return the Lawson field names, you will have to cleanse them prior to parsing by converting the dashes to underscores, then convert them back to the Lawson field names after you make your updates. Seems like too much work to me.
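If you do go the _LFN=ALL route anyway, the cleanse-and-restore round trip described above might look something like this sketch (it only rewrites tag names, not text content, and assumes real field tags start with a letter and originally contain dashes rather than underscores, so built-in tags like _PDL are left alone):

```javascript
// Sketch only: rewrite dashes to underscores inside XML tag names before
// parsing, and reverse the mapping before the update call. The letter-first
// pattern skips meta tags like <_PDL> in both directions.
function cleanseTags(xml) {
  return xml.replace(/<(\/?)([A-Za-z][A-Za-z0-9-]*)/g, function (m, slash, name) {
    return "<" + slash + name.replace(/-/g, "_");
  });
}
function restoreTags(xml) {
  return xml.replace(/<(\/?)([A-Za-z][A-Za-z0-9_]*)/g, function (m, slash, name) {
    return "<" + slash + name.replace(/_/g, "-");
  });
}

console.log(cleanseTags("<EMP-COMPANY>1</EMP-COMPANY>")); // <EMP_COMPANY>1</EMP_COMPANY>
console.log(restoreTags("<EMP_COMPANY>1</EMP_COMPANY>")); // <EMP-COMPANY>1</EMP-COMPANY>
```

Which is exactly the “too much work” I’d rather avoid by skipping _LFN=ALL here.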

A basic inquire looks like this:
servlet/Router/Transaction/Erp?_PDL=&_TKN=GL146&_EVT=CHG&_RTN=DATA&_TDS=IGNORE&FC=I&JOB-NAME=GL146&USER-NAME=&_STEPNBR=0

This will get you a result like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<XGL146>
<GL146>
<_PDL>PROD</_PDL> 
<_TKN>GL146</_TKN> 
<_PRE>TRUE</_PRE> 
<_JOBTYPE>MULTISTEP</_JOBTYPE> 
<_STEPNBR>0</_STEPNBR> 
<_f0>GL146</_f0> 
<_f1>I</_f1> 
<_f2 /> 
<_f3>Submit</_f3> 
<_f4>Reports</_f4> 
<_f5>Job Sched</_f5> 
<_f6>Print Mgr</_f6> 
<_f7>GL146</_f7> 
<_f8>Batch Journal Control</_f8> 
<_f9>jobsched</_f9> 
<_f10>Job Scheduler</_f10> 
<_f11>PROD</_f11> 
<_f12>4</_f12> 
<_f13>GL146.prt</_f13> 
<_f14>Dist Group</_f14> 
<_f15>CHPMGRS</_f15> 
<_f16>1</_f16> 
<_f17>Yes</_f17> 
<_f18>/lawson/apps/print/jobsched/gl190autop/1</_f18> 
<_f19>bud-err-ac</_f19> 
<_f20>None</_f20> 
<_f21 /> 
<_f22>1</_f22> 
<_f23>Yes</_f23> 
<_f24>/lawson/apps/print/jobsched/gl190autop/1</_f24> 
<_f25>bud-errors</_f25> 
<_f26>None</_f26> 
<_f27 /> 
<_f28>1</_f28> 
<_f29>Yes</_f29> 
<_f30>/lawson/apps/print/jobsched/gl190autop/1</_f30> 
<_f31>error-rpt</_f31> 
<_f32>None</_f32> 
<_f33 /> 
<_f34>1</_f34> 
<_f35>Yes</_f35> 
<_f36>/lawson/apps/print/jobsched/gl190autop/1</_f36> 
<_f37 /> 
<_f38 /> 
<_f39 /> 
<_f40>0</_f40> 
<_f41 /> 
<_f42 /> 
<_f43 /> 
<_f44 /> 
<_f45 /> 
<_f46>0</_f46> 
<_f47 /> 
<_f48 /> 
<_f49 /> 
<_f50 /> 
<_f51 /> 
<_f52>0</_f52> 
<_f53 /> 
<_f54 /> 
<_f55 /> 
<_f56 /> 
<_f57 /> 
<_f58>0</_f58> 
<_f59 /> 
<_f60 /> 
<_f61 /> 
<_f62 /> 
<_f63 /> 
<_f64>0</_f64> 
<_f65 /> 
<_f66 /> 
<_f67 /> 
<_f68 /> 
<_f69 /> 
<_f70>0</_f70> 
<_f71 /> 
<_f72 /> 
<_f73 /> 
<_f74 /> 
<_f75 /> 
<_f76 /> 
<_f77 /> 
<_f78>ALL CO</_f78> 
<_f79>All Companies</_f79> 
<_f80>2012</_f80> 
<_f81>1</_f81> 
<_f82>A</_f82> 
<_f83>JE Approve</_f83> 
<_f84>Main2</_f84> 
<_f85 /> 
<_f86 /> 
<_f87 /> 
<_f88 /> 
<_f89 /> 
<_f90 /> 
<_f91 /> 
<_f92 /> 
<_f93>N</_f93> 
<_f94>No</_f94> 
<_f95>Approve0</_f95> 
<_f96>Y</_f96> 
<_f97>Yes</_f97> 
<_f98>GL</_f98> 
<_f99>RJ</_f99> 
<_f100 /> 
<_f101 /> 
<_f102 /> 
<_f103 /> 
<Message>Inquiry Complete</Message> 
<MsgNbr>000</MsgNbr> 
<StatusNbr>001</StatusNbr> 
<FldNbr>_f0</FldNbr> 
</GL146>
</XGL146>

Updating the parameters
After we have parsed the XML using the XML node (like this), we need to update it to the appropriate values. Keep in mind that not only do you have to change the values that you want to update, you must also change the “transaction” fields. This means changing the action value (_f1) from “I” to “C” and clearing out the response fields (Message, MsgNbr, StatusNbr, FldNbr).

XMLJob_output.GL146._f80=2012;
XMLJob_output.GL146._f81=1;
XMLJob_output.GL146._f1="C";
XMLJob_output.GL146.Message = '';
XMLJob_output.GL146.MsgNbr = '';
XMLJob_output.GL146.StatusNbr = '';
XMLJob_output.GL146.FldNbr = '';

Updating the Job
All that’s left at this point is to pass the updated XML back into an AGS call with the Web Run node. This time however, we only send the base AGS call in the URL (/servlet/Router/Transaction/Erp) and we send the XML output in the Post String box. The node looks like this.

Run the job
If it’s necessary, the Job Run URL looks like this:
cgi-lawson/jobrun.exe?FUNC=run&USER=&JOB=GL146&OUT=XML

Here’s what a simplified Process Flow would look like.

HTH

Modifying Lawson screens for ImageNow LearnMode

A question was posed to me recently regarding how to effectively and efficiently use ImageNow LearnMode on a Lawson screen. Specifically, the problem revolves around fields that exist on Lawson tabs that are not displayed when the page is loaded. This means that no DOM objects are built for them, and ImageNow cannot read them. Take HR11, for example, where my company links the Process Level to a Custom Property. The problem is, the Process Level for an employee is on the ‘Assignment’ tab, which is not the default tab. Because of this, Lawson does not build the DOM for the tab until you click on the ‘Assignment’ tab, even though the data exists for it (the AGS transaction returns all data, even if the form hasn’t been populated).

From an ImageNow perspective, the tradition is to use the getAppData VB function in LearnMode to parse through the entire hierarchy to find the data you want. This presents several problems, as each user in Lawson may have different bookmarks, search bars, etc., and thus a different hierarchy. The exact location of the element that you want to get data from (like the process level field) will move its relative position on a per-user basis. I have used this method myself, and I know that a lot of Perceptive implementation consultants do it too, but that doesn’t make it right.

Note: I’m not discussing getAppData here because it could be a whole post itself. If you’re interested, leave a comment and I may do a post on it. After all, I’m doing this post because of a comment that was left.

There are only three reasons to use getAppData in ImageNow LearnMode for Lawson.

  1. You don’t own Design Studio
  2. You’re trying to link on a Detail screen with many lines and you don’t know which line to link (PO64 comes to mind)
  3. You don’t know what you’re doing

My opinion is that #1 is the only valid reason, and even that is tenuous. As far as I know, you can customize any Lawson form and Portal will render it for you; Design Studio just provides a development environment. You can create customized form XML without Design Studio in the same way that you can write Java code without an IDE; it’s just a lot easier when you have a development environment.

Let me restate the problem: there is data on a Lawson form that ImageNow IE LearnMode cannot read. This data is on the form, but it is on a tab that is not the default. Therefore, this data is not available to ImageNow until the tab is selected. Once it is selected, it remains available until the user leaves the screen. Make sense?

There are three relatively easy ways to do this that I am aware of, two of which use Lawson Design Studio. Each has its pros and cons.

  1. Have the user click on the tab in question after they load the form (and any subsequent time they access it if they go to another form). This is probably the least desirable as you rely on the users to remember to do this. If you’re linking into Custom Properties in ImageNow, chances are good that you’ll wind up with a lot of “default” values.
  2. Create a new field (or fields) that is hidden from the user. On any form event or field change, update the field(s). If the field(s) are created in the ‘header’ section of the form, they are always visible to ImageNow and can be linked on. The downside is that you have to handle updates on both form events and changes to the source field itself. This can be especially tricky if the field you need is auto-populated by Lawson as a default (like a name or description).
  3. Activate the tab and then activate the original tab. This is the automated version of #1. I have tested this, but never used it in a production setting, so I can’t guarantee it works. Aside from that, it’s the easiest to implement. However, if you need data from four or five different tabs, you may find it easier to just create the fields as in #2.

Keep in mind, if you need data that is not on the form you are linking, then you MUST use option #2.

Enough of the jibber-jabber, how about some code? All of these examples will use AP20. For our pretend requirements, I need ImageNow to link the “Cash Code” that is located on the “Payment” tab, which is the third tab on AP20 (i.e., not the default).

Option 3.

function FORM_OnAfterFrameworkInit()
{
		lawForm.setActiveTab("TF0-2");
		lawForm.setActiveTab("TF0-0");
}

Here we activate tab 3 (using the tab name) and then re-activate the default tab.
Add this as early in the event hierarchy on form load as you can. I believe the earliest you can add it is FORM_OnAfterFrameworkInit, as demonstrated here. At that point the XML for the form has been loaded, the initial DOM has been built, and any script actions CAN affect DOM elements. Keep in mind that data is not available at this point, even if you have an initial form action set.

Option 2.
First, create an empty text field. In this case, I created “text80”.

var FIELD_NAME = "text80";  // Global field declaration - note this is the Name on the field's Properties
//Hide field
function FORM_OnInit()
{
	var fElem=lawForm.getFormElement(FIELD_NAME);
	fElem.style.visibility = "hidden";
}

//Populate our field every time an action is performed
function FORM_OnAfterTransaction(data)
{
	var strCashCode = data.getElementsByTagName("_f96")[0].firstChild.nodeValue;  //Pull our data out of AGS return
	lawForm.setFormValue(FIELD_NAME,strCashCode,0); //Update field to be cashcode
}

//Update the field if the user changes it.  This means we can point ImageNow at a single field and not miss any updates
function TEXT_OnBlur(id, row)
{
	if (id == "text38") //ID of the cash code field
		lawForm.setFormValue(FIELD_NAME,lawForm.getFormValue(id,row),row);
	return true;
}

The OnInit event hides our element as the form loads. This could also be done by directly manipulating the XML that constructs the form, but I don’t like to manipulate the XML if I can help it. Not that I’m above it, but bad things can happen; I’ve been there, don’t let it happen to you.
The FORM_OnAfterTransaction event is triggered after any action on the form (Inquire, Add, etc.). It may not be appropriate to evaluate this on every action, but when in doubt, do it always.
The TEXT_OnBlur event executes as the text field loses focus; basically, this is after a user changes the field directly. THIS IS IMPORTANT. If the user changes the data in the field, we need to capture it, because they may not execute a Lawson action prior to performing the ImageNow link, which means the change won’t be trapped by the FORM_OnAfterTransaction event.

All this event talk makes me think I might need to do a primer on the Lawson Portal Event hierarchy. But I digress.

Regarding my earlier statement about why you would use getAppData, I’ve now addressed reasons #1 and #3. Reason #2 (detail screens) is a little trickier, but not much. I would propose that you use option #2 above (the hidden header field) and have the users put an X (Select) in the FC detail field. You would then use TEXT_OnBlur on the FC field as a trigger. When the event fires, have it write that row’s data to fields on the header record that ImageNow can read. This solves problems like linking from AP90 or GL90. Yes, if the user enters X in multiple rows, only the last row where an X was entered would be linked. This is a training issue; you can program around it, but it has to stop somewhere.
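To make that concrete, here’s a minimal sketch of the trigger. Everything in it is hypothetical: “fc0”, “text41”, and “text80” stand in for your form’s real field IDs, and the small lawForm stub exists only so the logic can be exercised outside Portal. In Design Studio you would delete the stub and keep just the event handler.

```javascript
// Illustrative sketch only. "fc0" (the detail FC column), "text41" (the
// detail value we want), and "text80" (the hidden header field) are all
// hypothetical names; use the real IDs from your form in Design Studio.
// The lawForm stub below stands in for the object Portal provides.
var lawForm = {
  _values: {},
  getFormValue: function (id, row) { return this._values[id + ":" + row] || ""; },
  setFormValue: function (id, value, row) { this._values[id + ":" + row] = value; }
};

// When the user tabs out of an FC field with an X in it, copy that detail
// row's value up to the header field that ImageNow links on (row 0).
function TEXT_OnBlur(id, row) {
  if (id === "fc0" && lawForm.getFormValue(id, row).toUpperCase() === "X") {
    lawForm.setFormValue("text80", lawForm.getFormValue("text41", row), 0);
  }
  return true;
}
```

The same pattern extends to copying several detail values at once; each setFormValue call targets row 0 so the data always lands on the header where ImageNow can see it.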

Thanks to Matt from Perceptive for the question.

HTH

Processflow JDBC

After a lot of frustration trying to get the SQL nodes to work on various versions of Lawson System Foundation, I’m posting what we’ve learned so far. There seems to be a lot of misinformation out there, so here’s what we have working.

On LSF9, for SQL Server, you can only use the SQL 2000 drivers (at least in our experience). In 9.0.1, the 2005 and 2008 drivers seem to work fine. Oracle works the same in both versions.

SQL Server 2000:
JDBC Driver: com.microsoft.jdbc.sqlserver.SQLServerDriver
URL: jdbc:microsoft:sqlserver://<server>;DatabaseName=<DBName>
Jar files: msbase.jar, mssqlserver.jar, msutil.jar

SQL Server 2005/2008 (Microsoft JDBC driver 1.2/2.0)
JDBC Driver: com.microsoft.sqlserver.jdbc.SQLServerDriver
URL: jdbc:sqlserver://<server>;DatabaseName=<DBName>
Jar files: sqljdbc4.jar

Oracle
JDBC Driver: oracle.jdbc.driver.OracleDriver
URL: jdbc:oracle:thin:@<server>:<port>:<SID>
Jar files: classes12.jar, classes12.zip, ojdbc6.jar*

* The correct ojdbc jar file depends on your Oracle version; ojdbc6.jar is the jar for Oracle 11g. Oracle 10g (and, I believe, 9i) uses ojdbc14.jar.
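Since most connection failures in the SQL node come down to a mistyped URL, here’s a throwaway sketch that assembles the three formats above. The helper names are mine, purely for illustration; nothing here is part of Lawson or the JDBC drivers.

```javascript
// Hypothetical helpers that assemble the JDBC URL strings shown above.
function sqlServer2000Url(server, dbName) {
  // Old Microsoft SQL 2000 driver format (note the "microsoft" segment)
  return "jdbc:microsoft:sqlserver://" + server + ";DatabaseName=" + dbName;
}
function sqlServerUrl(server, dbName) {
  // 1.2/2.0 driver format for SQL Server 2005/2008
  return "jdbc:sqlserver://" + server + ";DatabaseName=" + dbName;
}
function oracleThinUrl(server, port, sid) {
  // Oracle thin driver format
  return "jdbc:oracle:thin:@" + server + ":" + port + ":" + sid;
}
```

Paste the result straight into the node’s URL field.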

To run on the server, you must add the appropriate jar files to the $GENDIR/bpm/jar directory (or %GENDIR%\bpm\jar if you’re on Windows).

To run on your machine, it depends. Using the “old” version of process flow designer, you need to make sure that the jar files are on your local machine and you’ve added the jar files to the CLASSPATH in the designer.bat file.

It should look something like this (for SQL server 2005/2008):
SET CLASSPATH=.;.\lib\secLS.jar;.\lib\bpmsso.jar;.\lib\MvxAPI.jar;.\lib\bpm-clients.jar;.\lib\bpm-commons.jar;.\lib\activation.jar;.\lib\collections.jar;.\lib\jakarta-oro-2.0.jar;.\lib\jbcl.jar;.\lib\jcfield450K.jar;.\lib\js.jar;.\lib\xbean.jar;.\lib\mailapi.jar;.\lib\mercjava.jar;.\lib\pf-xmljava.jar;.\lib\pf-rel802.jar;.\lib\SheridanActvList.zip;.\lib\smtp.jar;.\lib\xercesImpl.jar;.\lib\xml-apis.jar;.\lib\lawson-httpclient.jar;C:\JDBC\sqljdbc4.jar

For the 9.0.1 Designer version, you need to add the appropriate jars to the “External Jar” section.
Window>Preferences…>”External Jars”
Click “New…” and select the appropriate jar file.

HTH

Lawson Processflow and IBM Transformation Extender

In LSF 9.0.x, Lawson supports calling Transformation Extender maps from Process Flow Integrator. This post is going to be about the two different (okay, two and a half) ways that I use the TX node. There are technically four ways to call a map from the TX node: RunMap, TransformRealTime, TransformBatch, and TransformBatchToRealTime.

Here’s what they mean:

  • RunMap: Run the map just like you were running it from TX Design Studio. You can override any/all outputs – so long as the card is either File or Echo type.
  • TransformRealTime: like doing RunMap, but with echo in and echo out. The output of the map is available as TXNodeName_outputData.
  • TransformBatch: like doing RunMap, but with File in and File out. I don’t use this one (see below), so you’ll have to rely on the documentation for more details.
  • TransformBatchToRealTime: like doing RunMap, but with File in and Echo out.

Personally, I think TransformBatch is a waste. With the overrides that you can give on the cards with RunMap, I see no purpose. The TransformRealTime has some value as it allows you to do transformations and then use the data in your flow via the outputData variable.

I’m going to talk about RunMap and TransformBatchToRealTime (if you can figure these out, then TransformRealTime should be easy). The “half” I referenced above (and a viable fifth option) is calling a map via RunMap and having that map call another map. From Lawson Process Flow, you can only override cards to either File or Echo. However, when using one map to call another map, you can override ANY of the data sources, which means you can make database calls (and pass in parameters), dynamic LDAP calls, etc. These are some fairly complicated concepts, so I’m going to try to use simple examples and keep it high level. If I start getting a lot of hits and/or questions, I’ll do a follow-up post.

Disclaimer: We’re testing 9.0.1, so all of the examples will be using the Eclipse version of designer. I think it’s easier to see the details this way.

Run Map
In this example, the map is a simple Input to Output. I use this flow/map whenever we have issues with TX on other flows to validate that the TX server is up and running. As you can see, I’m overriding the input variable with a value of my choosing (it could also be a PF variable) and I’ve also specifically mapped the output to a PF variable. By mapping to a String variable, this is the same as using an Echo override.

Flow is here.

TransformBatchToRealTime
The premise behind this flow is that we will be updating AP14 with data from a file using the Lawson Adapter node. For those of you not familiar with AP14, it is a Vendor contact form. The keys are Vendor Group, Vendor, Location. The form itself has three detail lines, each representing a contact. Because we’re using the Lawson adapter, I used the “maketypetree” command to create the type tree to be used as my output from the TX map. The input is a CSV file that has basic contact information. Because we’re dealing with details, I have a Functional map in the TX map. I have simplified certain aspects for demonstration purposes. There’s no error handling and I’m making a lot of assumptions with the data, so please ignore the obvious shortcomings.

The TX map reads the file and transforms it to a format that the Lawson Adapter can read. Because I’m using the transform to real time option, the data is available as the TXNode_outputData variable. I’m using no overrides in the map and the only data I’m setting is the input to the Lawson Adapter.

Here is the BCI transaction that I defined for the AP14.
Here is the Flow.
Here is the Map – and here is the Functional Map.

Run a map from a map a.k.a. “RunMap Fancy”
The business case for this is that we receive employee data from our affiliate companies that needs to be loaded into our HR system. The data that we get is a full employee listing. The purpose of the base map is to compare the data to what is currently in Lawson and determine if the record being provided is an Add (it does not exist in Lawson), a Change (it exists in Lawson, but something about the employee has changed), or a Delete (there is a record in Lawson that is not in the file). We do this by running a query over all the employees for a given process level in Lawson to produce our population. The base map then compares this to the file that was provided and produces the A/C/D file as an output. The A/C/D file is then processed by Process Flow, but I’m not showing that here. Using TX to determine the A/C/D takes approximately 30 seconds for a 5,000 employee company. Compared with some of the other methods we could choose, it’s extremely efficient.

The flow itself calls what is called a Control Map using RunMap. The Control Map takes as its inputs all the necessary information to pass as overrides to the base map, and calls the base map as part of its output card. The primary reason this is necessary is that we need to be able to override the SQL query that is run for the correct process level. There are some other options, but this is actually one of the more straightforward choices.

Here’s the flow.
Here’s the Control Map. In the screen shot, you can also see the other maps that make up the base map. I wouldn’t call myself a TX expert, so there may be better ways to do it, but this works pretty well.

To summarize, there are several ways to call TX maps from a process flow, and each has its own specific purpose although there is some overlap. Decide what you’re going to do with the output data of your map to help determine which option is right for you.

HTH

Lawson: Mass Actions on Process Flow workunits

It finally happened. Thankfully, it was in test, but we let loose a process flow that generated over 1700 workunits that went to a user’s inbasket. Needless to say, we didn’t really expect (or want) all the workunits and we needed a way to “clean them out” quickly so it didn’t impede other testing that was going on.

I did some searching on Lawson support and some quick Google searches looking for ways to mass-cancel the workunits, but I didn’t find any Lawson-delivered process to do this. If you know of one, feel free to share; in lieu of that, this is what I came up with.

The basic idea is to have process flow execute one of the inbasket actions on each of the workunits via the WebRun node. I greatly prefer this approach to a direct database update or simply cancelling the workunits. Even though it’s in test, I don’t really want to run the chance of corrupting the data. If we can use the app to gracefully make the updates, why not use it? Thankfully, there is a specially crafted URL that we can build based on data in the LOGAN Workflow tables. Since we have Process Flow Integrator (PFI), I used the SQL node to gather the data, but you should also be able to use the DME nodes.

Here’s the SQL:

SELECT Q.WORKUNIT, Trim(TASK.TASK) AS TASK, Trim(WU.DATA_AREA) AS DATA_AREA, 
  Q.PROCESS_ID, Trim(WU.PROCESS_NAME) AS PROCESS_NAME, Trim(Q.ACTIVITY_ID) AS ACTIVITY_ID, 
  Trim(ACT.ACTIVITY_NAME) AS ACTIVITY_NAME
FROM LSF9_LOGAN.WFWUQUEUE Q
INNER JOIN LSF9_LOGAN.WFQUETASK TASK
  ON Q.WORKUNIT = TASK.WORKUNIT
    AND Q.PROCESS_ID = TASK.PROCESS_ID
    AND Q.ACTIVITY_ID = TASK.ACTIVITY_ID
INNER JOIN LSF9_LOGAN.WFWORKUNIT WU
  ON Q.WORKUNIT = WU.WORKUNIT
INNER JOIN LSF9_LOGAN.WFACTIVITY ACT
  ON Q.WORKUNIT = ACT.WORKUNIT
    AND Q.PROCESS_ID = ACT.PROCESS_ID
    AND Q.ACTIVITY_ID = ACT.ACTIVITY_ID
WHERE TASK.R_STATUS = 1
  AND Q.WORKUNIT >= 144342
  AND WU.SERVICE = 'MyProcess'
ORDER BY Q.WORKUNIT

This returns almost all the data we need to execute the actions. (Note: R_STATUS of 1 is open tasks.)

The data looks something like this:

WORKUNIT TASK DATA_AREA PROCESS_ID PROCESS_NAME ACTIVITY_ID ACTIVITY_NAME
146086 MyProcess BMH 146086 MyProcessError_V02 UA2 UA4620
146165 MyProcess BMH 146165 MyProcessError_V02 UA2 UA4620
146166 MyProcess BMH 146166 MyProcessError_V02 UA2 UA4620
146167 MyProcess BMH 146167 MyProcessError_V02 UA2 UA4620

Now we just need to take that data and build our URL, which looks like this:

bpm/inbasket?FUNCTION=dispatch&WORKUNIT=<!SQLWorkUnits_WORKUNIT>&TASK=<!SQLWorkUnits_TASK>&APPRODLINE=<!SQLWorkUnits_DATA_AREA>&RDUSER=username&PROCID=<!SQLWorkUnits_PROCESS_ID>&PROCNAME=<!SQLWorkUnits_PROCESS_NAME>&ACTID=<!SQLWorkUnits_ACTIVITY_ID>&ACTNAME=<!SQLWorkUnits_ACTIVITY_NAME>&ACTION=Complete

Note that earlier I said the SQL query returns almost all the data. The data that it does not return is related to the User and Action. The RDUSER value is the name that will be stored on the action. It must be a user that has this task assigned. The other value that is hard-coded here is Action. You can get a list of valid actions from the WFWUQUEUE table (ACTION_LIST field) or from the flow where you defined the list of valid actions. I’ve hard-coded it here because there’s no reason to look it up when I know what I want to set it to.
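In the flow itself the WebRun node performs the <! > substitution, but written out as plain code the URL assembly looks like this. The buildDispatchUrl helper and the row object are mine, purely for illustration; the URL format is the one shown above.

```javascript
// Build the inbasket dispatch URL from one row of the SQL results.
// encodeURIComponent guards against unexpected characters; the flow's
// simple substitution assumes the values are already clean.
function buildDispatchUrl(row, user, action) {
  return "bpm/inbasket?FUNCTION=dispatch" +
    "&WORKUNIT=" + row.WORKUNIT +
    "&TASK=" + encodeURIComponent(row.TASK) +
    "&APPRODLINE=" + encodeURIComponent(row.DATA_AREA) +
    "&RDUSER=" + encodeURIComponent(user) +
    "&PROCID=" + row.PROCESS_ID +
    "&PROCNAME=" + encodeURIComponent(row.PROCESS_NAME) +
    "&ACTID=" + encodeURIComponent(row.ACTIVITY_ID) +
    "&ACTNAME=" + encodeURIComponent(row.ACTIVITY_NAME) +
    "&ACTION=" + action;
}
```

Feeding it the first row of the sample data above, with "username" and "Complete" hard-coded as in the flow, reproduces the URL the WebRun node would fire.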

The basic Process flow I built is:

  1. Run SQL Node
  2. Execute WebRun
  3. Trap/Log any errors
  4. Loop on SQL node

In our system, it completed roughly 1700 workunits in about 5 minutes.

HTH

Lawson: Elapsed Job Time

I work in support. Quite frankly, sometimes the users are their own worst enemies. It’s hard to convince them of this, but every so often it’s possible. If you’re proactive about things and you play your cards right, they might not even mind you telling them this.

One of the best indicators of “something can be better” is long-running jobs. There are some jobs that are just going to take forever (I’m looking at you, PO135), but there are other jobs where the run time will vary widely depending on the parameters. If your users tend to run jobs wide open, it can impact your system performance and their satisfaction, because they have to wade through pages of reports that they don’t want.

My solution to this is to review elapsed job times. I like to do this in the context of specific programs by user, looking for a specific program from a user that runs longer than the others. Usually this indicates an issue. Sometimes it’s legit, but if you repeatedly see the same username (especially with the same jobname) in the same elapsed-time range, you’ve got a problem.

The question is, how do you identify these situations?

For those of you familiar with LID, the “jobschd” command runs off the GEN QUEUEDJOB table. The different screens are just different selections over the same table. In the example below, we look only at “Completed” jobs. To do our comparison we do something like this:

(Apologies to SQL Server and DB2 readers; this query is about as Oracle-specific as they get, but hopefully the context makes sense. As a decoding reference: Lawson stores the time as a numeric, which means no leading zeroes. You will have to parse the time into a time format, inserting leading 0’s where appropriate.)

SELECT TOKEN, USERNAME, JOBNAME, JOBQUEUE, ACTSTARTDATE, 
 SUBSTR(REPLACE(LPAD(' ',6-LENGTH(ACTSTARTTIME))||ACTSTARTTIME,' ','0'),1,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(ACTSTARTTIME))||ACTSTARTTIME,' ','0'),3,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(ACTSTARTTIME))||ACTSTARTTIME,' ','0'),5,2) STARTTIME, 
    STOPDATE, 
  SUBSTR(REPLACE(LPAD(' ',6-LENGTH(STOPTIME))||STOPTIME,' ','0'),1,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(STOPTIME))||STOPTIME,' ','0'),3,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(STOPTIME))||STOPTIME,' ','0'),5,2) AS ENDTIME, 
  TO_TIMESTAMP(STOPDATE||SUBSTR(REPLACE(LPAD(' ',6-LENGTH(STOPTIME))||STOPTIME,' ','0'),1,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(STOPTIME))||STOPTIME,' ','0'),3,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(STOPTIME))||STOPTIME,' ','0'),5,2),'DD-MON-YYHH24:MI:SS') - 
  TO_TIMESTAMP(ACTSTARTDATE||SUBSTR(REPLACE(LPAD(' ',6-LENGTH(ACTSTARTTIME))||ACTSTARTTIME,' ','0'),1,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(ACTSTARTTIME))||ACTSTARTTIME,' ','0'),3,2)||':'|| 
    SUBSTR(REPLACE(LPAD(' ',6-LENGTH(ACTSTARTTIME))||ACTSTARTTIME,' ','0'),5,2),'DD-MON-YYHH24:MI:SS') ELAPSED 
FROM QUEUEDJOB 
WHERE ACTSTARTDATE BETWEEN '01-OCT-10' AND '31-OCT-10'
  AND QJBSET6_SS_SW = 'N'

The result is something like this:

TOKEN USERNAME JOBNAME JOBQUEUE ACTSTARTDATE STARTTIME STOPDATE ENDTIME ELAPSED
RQ500 lawson TEST RQ 7-Oct-10 16:34:08 7-Oct-10 16:34:16 +000:00:08.00
PR197 batch PR197CHP PR 7-Oct-10 16:39:57 7-Oct-10 17:42:20 +001:02:23.00
GL292 user1 BALSHEET ********** 7-Oct-10 17:05:25 7-Oct-10 17:05:27 +000:00:02.00
GL292 user1 BALSHEET ********** 7-Oct-10 17:06:17 7-Oct-10 17:06:18 +000:00:01.00
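If you want to sanity-check the time handling without running the query, the decoding step can be sketched in JavaScript (my helper, not anything Lawson provides): left-pad the numeric to six digits, then split into HH:MM:SS, which is exactly what the LPAD/REPLACE/SUBSTR chain in the query does.

```javascript
// Lawson stores times as plain numerics, so 16:34:08 is stored as 163408
// and 00:09:57 is stored as 957 (no leading zeroes). Pad back to six
// digits before slicing out hours, minutes, and seconds.
function lawsonTimeToHms(t) {
  var s = String(t);
  while (s.length < 6) s = "0" + s;
  return s.substr(0, 2) + ":" + s.substr(2, 2) + ":" + s.substr(4, 2);
}
```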

I would love to tell you that there’s an easy sort to this, but frankly you need to dump to Excel or Lotus so it’s easier to see where your issues are. I usually start by sorting on ELAPSED descending. If there’s nothing that really jumps out, then I sort by TOKEN and by ELAPSED descending. Then if I hit a big jump between #1 and #2 for a token, we may have issues.

Because I’m including the job queue, it’s also easy to see when users are “jumping the queue”. We’ve recently completed the switch from LAUA security to LS9 security. Sometimes it’s not as dialed in as we need it to be, and this is a great identifier. For us, each system has its own job queue and no one should ever use ****. I can tell right away that we have an issue with “user1”.

The other big gotcha about this technique is that if the job went into ‘Needs Recovery’ the elapsed time will reflect that, which isn’t necessarily correct.

HTH