Lawson Process Flow

Lawson approvals – do it your way

Lawson approvals my way? What does that even mean and why would I want to do it?

Customizing approvals means changing how workunits route for approval in Lawson Process Flow and Lawson Process Automation. This post is specifically about S3, but it should apply to Landmark and M3 as well. My company is on Lawson Process Automation, so that's what will be in the examples, but the same concepts should apply to Process Flow.

Let's first talk about how Lawson Process Automation (LPA) seeks approvals in the User Action nodes. (The rest of this post will assume that you are familiar with the concept of approvals in Process Flow or LPA. If you're not, check out the documentation for your product before continuing, as it might be confusing otherwise.) User Action nodes use what is called "Category Filtering". First, a task is assigned to a user. For that task, a filter category and value are assigned (you can assign multiple categories and values). Next, each User Action in a flow will also have a task (or multiple tasks) assigned to it. Finally, when the workunit is created, it will have a filter category and filter value on it based on data in the source application. The inbasket will use the filter category and filter value from the workunit to display only relevant workunits to the user who is logged in.

Easy, right?

Okay, maybe not.

Keep these key words in mind:

  • Task = Inbasket. There are several philosophies on how to set these up, but they will basically equate to either the person approving or the type of approval being performed. Examples might be Manager or Invoice Approver.
  • Filter Category = What kind of information is relevant for this workunit.
  • Filter Value = The actual value that will be filtered on for the workunit.

Here is an example setup:
User: Jdoe
Task: Invoice Approver
Filter Category: ACCTUNIT
Filter Value: 123456

In this scenario, Jdoe would only be able to see workunits in his inbasket that have BOTH the Filter Category of ACCTUNIT and the Filter Value of 123456.

Okay, now that that's out of the way, let's get to the fun stuff. Ultimately, the problem comes down to the fact that workunits created by the source applications (S3, M3, Landmark) rarely have Filter Categories that are of any use. Case in point: the Invoice Approval service uses the authority code, which has to be keyed at the time the invoice is entered (or defaulted from somewhere like the vendor). This creates a bit of an issue for us. With 2,000 managers responsible for their own accounting units, AP would need to actually know which of those 2,000 managers should approve the invoice. Not gonna happen. In a much larger context, setting up a code for each manager also limits us to basically one level of approval, because we couldn't easily route to another level like a Vice President. Each VP would need the same setup as each of the managers that might escalate to them, instead of something that makes sense like Company. I'm not saying it's impossible, it would just be extremely messy. If I'm a VP and three managers approve for accounting units in my company, I would need each of their authority codes assigned to me. If one of those managers also approves for an accounting unit in another company, the VP responsible for that company would need that authority code assigned to them as well. Add in the fact that the authority code is only 3 characters, and we're going to wind up with codes like 0X0 and 1CF that AP would probably never get right.

Truth be told, we don't really want to get approvals by authority code anyway. How our invoice approval works is: if the invoice is part of a capital project, route it to the Project Manager; if it's for a service, route it to the manager of the cost center receiving the service. So not only do we NOT want to use the authority code, we actually want different approvals depending on the type of invoice.

The question is, how do we do that? The answer is that we modify the filter category and value. There are people thinking right now, "Okay, so we need to modify the 4GL of the library that creates the workunit to have the correct Filter Category and Filter Value, right?" You would be correct; you could do that. If you're one of those people (or you're a process flow person in an environment that thinks like that), I feel sorry for you. You're doing it the hard way. Not only will you have a modified library that you have to be careful of when you apply patches, you have also created extra effort when you want to upgrade. Good for job security, but bad for your business.

So now you're thinking, "Okay smarty, since you just insulted me, how do you propose that we do it?" I'm going to advocate that you dynamically change the filter category and value IN THE FLOW ITSELF. Here's an interesting bit of information: the Filter Category and Filter Value on the workunit record are NOT the values used to create the inbasket task. What actually happens (as near as I can tell) is that these values are used to populate two flow variables (in LPA they're called oCatKey and oCatValue; I believe the same is true in Process Flow). It is these flow variables that are used in the creation of the inbasket task. All you have to do is add an Assign node to your flow before the User Action node and add a JavaScript expression to set these two values to whatever you want. Voila! The inbasket will now use the Filter Category and Filter Value that you set in your Assign node.

Here’s the code to change a workunit to match what I set up for Jdoe above:

oCatKey = "ACCTUNIT"; //Filter category
oCatValue = "123456"; //Filter Value

For practical purposes and debugging, we are in the habit of also making calls to update the workunit itself with the correct Filter Category and Filter Value; it makes the workunits easier to find when doing research. The added bonus to dynamically changing the values is that you can change the Filter Category and Filter Value as many times as you need in a flow. Thinking outside the box (we don't do this), you could have an invoice approved by a manager for an accounting unit, then change the filtering to company and route it to a VP responsible for that company if it's over a certain dollar amount. In this case, you would only need to set up the companies for the VPs instead of having to set up all of the accounting units for a company (which you would have to do if you went the 4GL route). You could change it again and send it to a treasury analyst, based on the bank account it would pay out of, to make sure that funds were available.
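To make that a little more concrete, here's a rough sketch of what the Assign node logic might look like. The invoiceType variable and the PROJECT and COMPANY filter categories are made up for illustration; your categories, values, and lookups will be whatever fits your approval rules.

// Hypothetical values pulled earlier in the flow (names are made up for illustration)
var invoiceType = "SERVICE";    // e.g. "CAPITAL" or "SERVICE"

if (invoiceType == "CAPITAL") {
   // Capital invoices route by project so the Project Manager's inbasket picks them up
   oCatKey = "PROJECT";
   oCatValue = "CAP00123";
} else {
   // Service invoices route by accounting unit so the cost center manager picks them up
   oCatKey = "ACCTUNIT";
   oCatValue = "123456";
}
// A second Assign node before the next User Action could re-set these to
// something like COMPANY / "0100" to send the same workunit on to a VP.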

The possibilities are pretty much endless at that point.

HTH

Lawson Process Flow S3 Transaction Node

I definitely learned something new today. I've been using Lawson Process Flow for 5 years now. I've never had any formal training, but I have read every document that Lawson has put out on it. In addition (and the reason for my discovery), my company is currently participating in a beta project for Lawson Process Automation 10.0.1. It's not generally available and we signed a non-disclosure agreement, so I can't discuss it directly. I am, however, going to be presenting at Inforum on the results of the beta. As part of my testing, I discovered something that I'm pretty sure most people don't know. I was going to include it in my Inforum presentation until I went back to 9.0.1 Lawson Process Flow and discovered it worked there as well.

Here’s the big secret: the Lawson S3 Transaction node returns ALL field values from the screen, not just the values that process flow shows (RETURN_CODE and MESSAGE_CODE). I had no idea and I certainly have no idea why Lawson doesn’t publicize this or at least document it. Up until today, when I needed to know something about the result of an AGS call, I would make the AGS call with a WebRun node and then parse the output in an XML node (like I talk about here: http://wp.me/pE8vz-a0). This isn’t actually necessary as you can get the output from the Transaction node itself.

Let’s say you want to perform an Add action on a form and you need to know something more than the result of the AGS call. Some common examples are adding Employees on HR11 (getting the Employee number), adding a vendor on AP10 (getting the vendor number), adding an asset on AM20.2 (getting the asset number), etc. Chances are you aren’t simply doing an add, but need to also do something else. For vendors it may be adding diversity codes, for assets it would probably involve adding items, and for employees it might be adding benefits.

In order to access the data from a Transaction node, you append the field name to your Transaction node's ID. If you're using the _LFN option, then you'll need to use an underscore (_) instead of a dash (-) in the field names.

Example:
Assuming an Inquire call on HR11 for my employee id:

https://[server]/servlet/Router/Transaction/Erp?_PDL=PROD&_TKN=HR11.1&_LFN=ALL&_EVT=CHG&FC=I&EMP-COMPANY=1&EMP-EMPLOYEE=12345

I can get the result in process flow by using the following reference (assuming my transaction node has the id HR11):
HR11_PEM_SEX
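If you want to use one of those returned values later in the flow, you can reference it in an Assign node. A quick sketch (the Add example is an assumption about which field your form returns; verify the field names for your own form):

// HR11 is the id of the Transaction node; with _LFN, dashes become underscores.
var empSex = HR11_PEM_SEX;

// The same idea applies after an Add action - for example, capturing the new
// employee number (assuming the form returns it in EMP-EMPLOYEE, which would
// be referenced as HR11_EMP_EMPLOYEE).
var newEmployee = HR11_EMP_EMPLOYEE;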

Talk about a revelation.

HTH

Everything you need to know about Lawson Comments

I have a love/hate relationship with Lawson comments. They are an awesome feature, but implemented in such a way that you’d think somebody made it hard on purpose. This post is going to be about how to deal with comments (mostly in large quantities). This is everything that you “need” to know, not everything you “want” to know. Probably once a week, I get asked “Can you query for comments?”. The answer is Yes*. The other question I get a lot is: “Can you upload comments?”. Again, the answer is Yes**.

*No, you cannot query them out through DME.
**No, you cannot use MS Add-ins.

Uploading Comments
Let’s first talk about adding comments to Lawson so that we have something to query out. There is only one way to add comments to Lawson, and that’s through the writeattach.exe cgi program. How you choose to implement it is up to you, but I’m going to explain how to do it through Lawson ProcessFlow. You can add comments via either GET or POST, but I prefer the POST because I have more control over how the comments get added and I don’t have to do a lot of encoding work.

The basic XML for the comments looks like:

  <ATTACHXML>
    <_AESC>IE</_AESC> 
    <_ANAM>Comment Title</_ANAM> 
    <_ATXT>
      <![CDATA[ 
       Comment Text is here
      ]]> 
    </_ATXT>
    <_OUT>XML</_OUT> 
    <_PDL>PRODUCTLINE</_PDL> 
    <_FN>FILENAME</_FN> 
    <_IN>INDEXNAME</_IN> 
    <K1>KEY1VALUE</K1> 
    <K2>KEY2VALUE</K2> 
    <K3>KEY3VALUE</K3> 
    <K4 /> 
    <_ATYP>C</_ATYP> 
    <_AUDT>COMMENTTYPE</_AUDT> 
    <_USCH /> 
    <_DATA>TRUE</_DATA> 
    <_ATTR>TRUE</_ATTR> 
    <_AOBJ>TRUE</_AOBJ> 
    <_OPM>M</_OPM> 
    <_ECODE>FALSE</_ECODE> 
  </ATTACHXML>

The URL for my WebRun node is cgi-lawson/writeattach.exe, and I send the XML above as a POST string. In order to figure out what to put in the tags, I used dbdef. To add comments, you must follow these steps. First, verify that the file allows attachments. Second, find the index name and which fields go in the key value tags (K1, K2, and so on - you can add more as needed, but the values should be the fully padded value, like 0001 instead of 1). The only thing that I don't have a good way to get is the value for the _AUDT tag. This is the comment type, so for forms that allow more than one comment type, this value will change. I'm sure there's some method to figure this out, I just don't know what it is. I generally use Fiddler, add a comment, then check the calls.

Note: If you do use Process Flow, you have to be careful with the CDATA sections. Process Flow will interpret the <! as the beginning of a Process Flow variable and try to replace it. As a result, if you need the CDATA sections, you should build the POST string in either the XML node or an Assign node and then put that variable name in the WebRun.
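Here's a rough sketch of building that POST string in an Assign node. The key values and product line are the same placeholders as in the template above, and commentText is a hypothetical flow variable holding the comment body:

// Build the writeattach.exe POST body in one variable, then reference that
// variable in the WebRun node.
var postXml = '<ATTACHXML>'
   + '<_AESC>IE</_AESC>'
   + '<_ANAM>Comment Title</_ANAM>'
   // split '<' and '![CDATA[' so the expression never contains a literal <!
   + '<_ATXT><' + '![CDATA[' + commentText + ']]></_ATXT>'
   + '<_OUT>XML</_OUT>'
   + '<_PDL>PRODUCTLINE</_PDL>'
   + '<_FN>FILENAME</_FN>'
   + '<_IN>INDEXNAME</_IN>'
   + '<K1>KEY1VALUE</K1><K2>KEY2VALUE</K2><K3>KEY3VALUE</K3><K4 />'
   + '<_ATYP>C</_ATYP>'
   + '<_AUDT>COMMENTTYPE</_AUDT>'
   + '<_USCH /><_DATA>TRUE</_DATA><_ATTR>TRUE</_ATTR><_AOBJ>TRUE</_AOBJ>'
   + '<_OPM>M</_OPM><_ECODE>FALSE</_ECODE>'
   + '</ATTACHXML>';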

Querying comments
There are two ways to query comments. One is using SQL (my personal preference, but not always applicable) and the other is using cgi calls.

For the SQL version, it’s important to talk about the table structure used to actually store the data. For each table that allows attachments/comments, there are two tables to store the comments, a “Header” and a “Detail”. The “Header” contains the name of the comment as well as the start of the comment. The “Detail” table contains the rest of the comment.

For a table that allows comments, the comment tables are always named "L_<Table_Type><Table_Prefix>". For example, the comments for the APINVOICE table (prefix API) would be stored in L_HAPI and L_DAPI; the H and the D indicate Header and Detail.

The relationship between the base table (APINVOICE) and the Header comment table (L_HAPI) is:

APINVOICE.L_INDEX = L_HAPI.L_INDEX
'APINVOICE' = L_HAPI.TABLE_NAME

The relationship between the Header (L_HAPI) and detail comment table (L_DAPI) is:

L_HAPI.L_INDEX = L_DAPI.L_INDEX
L_HAPI.ATCHNBR = L_DAPI.ATCHNBR

The data relationships are:

APINVOICE 1:M L_HAPI 1:M L_DAPI

This gets us data that looks something like this (I'm not repeating the duplicated data the way the actual query results would; this is just to aid understanding of the relationships):

Invoice   L_INDEX   ATCHNBR   Comment Name   Seq   Comment
122344    zzzz      1         Comment 1      1     some comment detail
                                             2     More comment detail
                    2         Comment 2      1     some comment detail
The OBJECT field of the Header and Detail tables contains the comments. For the Header table, the OBJECT field actually contains more than the comment. Technically, it's comma-delimited and each of the first three "fields" has its name as part of it. The fourth field is the beginning of the comment itself. However, the data is clearly space-padded, so I would strongly urge you to split based on character positions, not on commas, if for no other reason than your comment could contain commas and then you're in trouble. If you're wondering, the actual comment data begins at character 96. The "Detail" table has a record for any comment that is longer than 416 characters. Each Detail OBJECT field is 1024 characters, so a comment longer than that will span additional Detail records. You must put all of these records back together to form the full comment.

Here is what a query for invoice comments might look like (Oracle Version).

SELECT API.COMPANY, API.VENDOR, API.INVOICE, H.L_INDEX, H.ATCHNBR, H.R_NAME, H.OBJECT, D.SEQNBR, D.OBJECT
FROM LAWSON.APINVOICE API
LEFT OUTER JOIN LAWSON.L_HAPI H
ON API.L_INDEX = H.L_INDEX
  AND H.FILENAME = 'APINVOICE'
LEFT OUTER JOIN LAWSON.L_DAPI D
ON H.L_INDEX = D.L_INDEX
  AND H.ATCHNBR = D.ATCHNBR
WHERE API.INVOICE = '123456' 
ORDER BY D.SEQNBR

You should always make sure that you OUTER JOIN your comment tables to your base tables, as the comment tables are never required. Because you may return multiple detail records for each base record, you will have to deal with "putting them back together". My personal preference is to use the Oracle hierarchy queries (which I posted about here). However, I have been known to use LEAD and LAG when I have a clearly defined recordset. If you are using Crystal, you can use the hierarchical functions there as well.
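If you end up stitching the comment back together in a flow (or any other JavaScript) rather than in SQL, the logic is straightforward. A minimal sketch, assuming hypothetical variables headerObject (the header OBJECT value) and detailObjects (the detail OBJECT values, already sorted by SEQNBR):

// Start with the portion of the comment that lives in the header OBJECT;
// per the note above, the comment text begins at character 96 (index 95).
var fullComment = String(headerObject).substring(95);

// Append each detail OBJECT in SEQNBR order to rebuild the full comment.
for (var i = 0; i < detailObjects.length; i++) {
   fullComment = fullComment + detailObjects[i];
}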

Non-SQL Version
Now that you've seen what it takes to create a comment and what the data looks like in the tables, it's time to discuss the non-SQL ways to get the data. There are two options: the ListAttachments action and the getattachrec.exe program.

The shortcut method (version 9.0.1) is to use ListAttachments:

/lawson-ios/action/ListAttachments?&dataArea=<PRODUCTLINE>&fileName=<TABLENAME>&indexName=<INDEX>&K1=<Key1Value>&K2=<Key2Value>&K3=<Key3Value>&K4=<Key4Value>&K5=<Key5Value>&outType=XML

You follow the same basic rules to build this URL as you would to build the writeattach.exe call. The primary difference is that you do not need to give the fully padded values (like 0001 for company).
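If you're building that URL in a flow, a quick sketch of an Assign node expression might look like this (company, vendor, and invoice are hypothetical flow variables, and which key maps to which field depends on the index you found in dbdef):

// Build the ListAttachments URL in a variable, then reference it in a WebRun node.
var listUrl = 'lawson-ios/action/ListAttachments'
   + '?dataArea=PROD'
   + '&fileName=APINVOICE'
   + '&indexName=APISET1'
   + '&K1=' + company
   + '&K2=' + vendor
   + '&K3=' + invoice
   + '&outType=XML';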
A response from the APINVOICE comments might look like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<ACTION actionName="LISTATTACHMENTS">
    <LIST numRecords="1" maxRecordCount="500" hasMoreRecords="false" status="pass">
        <MSG /> 
        <ATTACHMENT attachmentNbr="zz" indexName="APISET1" attachmentSize="680" createDate="20100112" modifiedDate="20100112" createUser="lawsonuser" modifiedUser="lawsonuser" dataArea="PROD" K5="9999" K4="0" K3="3694823" K2="20840" K1="120" createTime="074725" modifiedTime="074725" attachmentCategory="commentAtch" attachmentType="A" lIndex="yEXb" fileName="APINVOICE" status="pass">
            <MSG /> 
            <ATTACHMENT-NAME>
                <![CDATA[ Attachment name]]> 
            </ATTACHMENT-NAME>
            <ATTACHMENT-TEXT>
                <![CDATA[ Attachment data ]]> 
            </ATTACHMENT-TEXT>
        </ATTACHMENT>
    </LIST>
</ACTION>

As for getattachrec.exe, I don't have a reliable method of building these; it's tedious no matter how you do it. Once you figure out how to get your comment (like the invoice example above), you might be able to replicate it without having to go through the entire process every time, but you'll have to use trial and error. The biggest issue you'll face is that some tables allow for multiple comment types, and you won't know which ones (if any) exist.

My preferred method to build the URLs the first time around is to either put Lawson in debug mode or use Fiddler. There are three main calls that you will need to focus on.
1) Drill from a Lawson screen on our record (like AP90.1)
2) Based on the data from #1, construct our next URL to get the comment header
3) Based on the data from #2, construct our next URL to get the comment detail

1) Here’s an example of the IDA (Drill) URL. This is for the same invoice as above.

/servlet/Router/Drill/Erp?_OUT=XML&keyUsage=PARAM&_TYP=CMT&_PDL=PROD&_SYS=AP&_TKN=AP90.1&_KNB=50&01=120&08=CHPV&AVi=%20%20%20%2020840&50=3694823&AQ=9999&03=%20%20%20%2020840&_LKN=50&_RECSTOGET=0

This gets us output like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<IDARETURN productline="PROD" title="">
<COLUMNS /> 
<FINDNEXT /> 
<PREVPAGE /> 
<NEXTPAGE /> 
<BASEURL /> 
<LASTUSED /> 
<LINES count="4">
  <LINE>
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=A&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Note%2FReport%2FCheck+Comments  ]]> 
    </IDACALL>
    <ATTACHMENTCALL type="CMT">
      <![CDATA[ lawson-ios/action/ListAttachments?attachmentType=A&indexName=APISET1&K1=120&K2=20840&fileName=APINVOICE&K3=3694823&K4=0&attachmentCategory=C&K5=9999&drillType=CMT&outType=XML&dataArea=PROD&objName=Invoice+Note%2FReport%2FCheck+Comments  ]]> 
    </ATTACHMENTCALL>
    <COLS>
      <COL>
        <![CDATA[ Invoice Note/Report/Check Comments  ]]> 
      </COL>
    </COLS>
    <KEYFLDS /> 
    <REQFLDS /> 
  </LINE>
  <LINE>
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=N&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Notes  ]]> 
    </IDACALL>
    <COLS>
      <COL>
        <![CDATA[ Invoice Notes  ]]> 
      </COL>
    </COLS>
    <KEYFLDS /> 
    <REQFLDS /> 
  </LINE>
  <LINE>
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=D&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Report+Comments  ]]> 
    </IDACALL>
    <COLS>
      <COL>
        <![CDATA[ Invoice Report Comments  ]]> 
      </COL>
    </COLS>
    <KEYFLDS /> 
    <REQFLDS /> 
  </LINE>
  <LINE>
    <IDACALL type="CMT">
      <![CDATA[ cgi-lawson/getattachrec.exe?_AUDT=C&_IN=APISET1&K1=120&K2=20840&_FN=APINVOICE&K3=3694823&K4=0&_ATYP=C&K5=9999&_TYP=CMT&_OPM=C&_OUT=XML&_ATTR=TRUE&_DRIL=TRUE&_AOBJ=TRUE&_PDL=PROD&_ON=Invoice+Check+Comments  ]]> 
    </IDACALL>
    <COLS>
      <COL>
        <![CDATA[ Invoice Check Comments  ]]> 
      </COL>
    </COLS>
    <KEYFLDS /> 
    <REQFLDS /> 
  </LINE>
</LINES>
</IDARETURN>

2) Run each URL that you find in the IDACALL nodes to try to get a comment header. Running one of these URLs (the Invoice Note/Report/Check Comments one) will give us something like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<Report cgidir="/cgi-lawson/" executable="getattachrec.exe" productline="PROD" filename="APINVOICE" token="Token" keynbr="KeyNbr">
  <QueryBase exepath="/cgi-lawson/writeattach.exe">
    <![CDATA[ _OUT=XML&_PDL=PROD&_FN=APINVOICE&_IN=APISET1&  ]]> 
  </QueryBase>
  <WinTitle>
    <![CDATA[ Invoice Note/Report/Check Comments  ]]> 
  </WinTitle>
  <DrillAround>
    <DrillData>
      <DrillType>Comment</DrillType> 
      <DrillName>
        <![CDATA[ Invoice Note/Report/Check Comments  ]]> 
      </DrillName>
      <DrillUserType>A</DrillUserType> 
      <DrillScheme>
        <![CDATA[ none  ]]> 
      </DrillScheme>
      <OneOnly>F</OneOnly> 
      <NoDetail>F</NoDetail> 
      <NoDrill>F</NoDrill> 
      <DisplayOrder>A</DisplayOrder> 
    </DrillData>
    <DrillData>
      <DrillType>Comment</DrillType> 
      <DrillName>
        <![CDATA[ Invoice Notes  ]]> 
      </DrillName>
      <DrillUserType>N</DrillUserType> 
      <DrillScheme>
        <![CDATA[ none  ]]> 
      </DrillScheme>
      <OneOnly>F</OneOnly> 
      <NoDetail>F</NoDetail> 
      <NoDrill>F</NoDrill> 
      <DisplayOrder>A</DisplayOrder> 
    </DrillData>
    <DrillData>
      <DrillType>Comment</DrillType> 
      <DrillName>
        <![CDATA[ Invoice Report Comments  ]]> 
      </DrillName>
      <DrillUserType>D</DrillUserType> 
      <DrillScheme>
        <![CDATA[ none  ]]> 
      </DrillScheme>
      <OneOnly>F</OneOnly> 
      <NoDetail>F</NoDetail> 
      <NoDrill>F</NoDrill> 
      <DisplayOrder>A</DisplayOrder> 
    </DrillData>
    <DrillData>
      <DrillType>Comment</DrillType> 
      <DrillName>
        <![CDATA[ Invoice Check Comments  ]]> 
      </DrillName>
      <DrillUserType>C</DrillUserType> 
      <DrillScheme>
        <![CDATA[ none  ]]> 
      </DrillScheme>
      <OneOnly>F</OneOnly> 
      <NoDetail>F</NoDetail> 
      <NoDrill>F</NoDrill> 
      <DisplayOrder>A</DisplayOrder> 
    </DrillData>
  </DrillAround>
  <ParentRec>
    <QueryVal>
      <![CDATA[ _AK=yEXb  ]]> 
    </QueryVal>
    <RecAtt Action="Add">
      <AttType>C</AttType> 
      <UsrType>A</UsrType> 
      <AttName>
        <![CDATA[ Add Comment  ]]> 
      </AttName>
      <QueryVal>
        <![CDATA[ K1=0120&K2=++++20840&K3=3694823&K4=000&K5=9999&_ATYP=C&_AUDT=A&_USCH=none&_DATA=TRUE&_OPM=M&  ]]> 
      </QueryVal>
    </RecAtt>
    <RecAtt Action="">
      <CrtDate>
        <![CDATA[ 01/12/2010  ]]> 
      </CrtDate>
      <CrtTime>
        <![CDATA[ 07:47:25  ]]> 
      </CrtTime>
      <ModDate>
        <![CDATA[ 01/12/2010  ]]> 
      </ModDate>
      <ModTime>
        <![CDATA[ 07:47:25  ]]> 
      </ModTime>
      <AttSize>619</AttSize> 
      <HdrSize>95</HdrSize> 
      <AttName>
        <![CDATA[ Comment Name  ]]> 
      </AttName>
      <AttType>C</AttType> 
      <UsrType>A</UsrType> 
      <AttCreator>lawsonuser</AttCreator> 
      <AttModifier>lawsonuser</AttModifier> 
      <QueryVal>
        <![CDATA[ K1=0120&K2=++++20840&K3=3694823&K4=000&K5=9999&_ATYP=C&_AUDT=A&_KS=zz&_OPM=A&_DATA=TRUE&  ]]> 
      </QueryVal>
    </RecAtt>
  </ParentRec>
  <ErrMsg ErrNbr="0" ErrStat="MSG">
    <![CDATA[ Success  ]]> 
  </ErrMsg>
</Report>

3) This time, we have to put the URL together ourselves. Take the values of the cgidir and executable attributes from the Report node. Add the CDATA value from the QueryBase node. Then add the CDATA value from the QueryVal node that is in the RecAtt node with an Action attribute of "" (the one at the bottom). That gives us:

/cgi-lawson/getattachrec.exe?_OUT=XML&_PDL=PROD&_FN=APINVOICE&_IN=APISET1&K1=0120&K2=++++20840&K3=3694823&K4=000&K5=9999&_ATYP=C&_AUDT=A&_KS=zz&_OPM=A&_DATA=TRUE&

The result of which is our comment detail, and it looks like this:

<?xml version="1.0" encoding="ISO-8859-1" ?> 
<Report cgidir="/cgi-lawson/" executable="getattachrec.exe" productline="PROD" filename="APINVOICE" token="Token" keynbr="KeyNbr">
  <QueryBase exepath="/cgi-lawson/getattachrec.exe">
    <![CDATA[ _OUT=XML&_PDL=PROD&_FN=APINVOICE&_IN=APISET1&  ]]> 
  </QueryBase>
  <WinTitle>
    <![CDATA[ <NULL>  ]]> 
  </WinTitle>
  <ParentRec>
    <QueryVal>
      <![CDATA[ _AK=yEXb  ]]> 
    </QueryVal>
    <RecAtt Action="">
      <AttName>
        <![CDATA[ Comment Name  ]]> 
      </AttName>
      <AttData>
        <![CDATA[ Comment Text%0A  ]]> 
      </AttData>
      <QueryVal>
        <![CDATA[ K1=0120&K2=++++20840&K3=3694823&K4=000&K5=9999&_ATYP=C&_AUDT=A&_KS=zz&_OPM=A&_DATA=TRUE&  ]]> 
      </QueryVal>
      </RecAtt>
  </ParentRec>
  <ErrMsg ErrNbr="0" ErrStat="MSG">
    <![CDATA[ Success  ]]> 
  </ErrMsg>
</Report>

The comment data is in the AttData node. Wasn’t that easy?
Note: Newlines in the AttData node will be represented by %0A.
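If you're doing step 3 inside a flow, a hedged sketch of gluing the pieces together in an Assign node might look like this. XMLHeader is a hypothetical XML node that parsed the step 2 response, and the exact E4X references depend on how your schema is set up (you may also need to trim whitespace from the CDATA values):

// cgidir + executable come from the Report node's attributes;
// QueryBase and the Action="" QueryVal supply the query string.
var rpt = XMLHeader_output;
var detailUrl = String(rpt.@cgidir) + String(rpt.@executable) + '?'
   + String(rpt.QueryBase)
   + String(rpt.ParentRec.RecAtt.(@Action == '').QueryVal);

// And once the detail response comes back, %0A in AttData is a newline.
// attDataValue is a hypothetical variable holding the AttData CDATA text.
var commentText = String(attDataValue).replace(/%0A/g, '\n');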

As for which of the three methods you use, that’s up to you and your requirements. Chances are, you’ll wind up using a combination. I like running a SQL query to get the list of comments, but actually use the ListAttachments to retrieve the data. This saves on the heartache of trying to reconstruct the comment from the SQL and it limits how much work we have to do because we’ll only try to retrieve comments for those records that actually have them.

Comments and Portal
You can add a Comment popup to Design Studio and custom portal pages. This is especially useful when the users are not actually on the forms they might want to view comments for. An example of this might be in an inbasket view. You could give the users a link to view the comments on an invoice without having to leave the inbasket.
The function is:

top.portalObj.drill.doAttachment(window, "lawformRestoreCallback", idaCall, 'CMT');

The idaCall is a URL and is the same as we built in #1 above. You should leave everything else the same.
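As a sketch of how that might be wired up in a Design Studio script (the function name is hypothetical, and the idaCall shown is just the example Drill URL from step 1):

// Hypothetical button handler: open the comment popup for an invoice shown in
// an inbasket view. The idaCall is the same style of Drill URL built in step 1.
function showInvoiceComments()
{
   var idaCall = "/servlet/Router/Drill/Erp?_OUT=XML&keyUsage=PARAM&_TYP=CMT"
      + "&_PDL=PROD&_SYS=AP&_TKN=AP90.1&_KNB=50"
      + "&01=120&08=CHPV&AVi=%20%20%20%2020840&50=3694823&AQ=9999"
      + "&03=%20%20%20%2020840&_LKN=50&_RECSTOGET=0";

   top.portalObj.drill.doAttachment(window, "lawformRestoreCallback", idaCall, 'CMT');
}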

Anything else you “need” to know? Leave a comment and I’ll answer if I can.

HTH

Lawson: Mass Actions on Process Flow workunits

It finally happened. Thankfully, it was in test, but we let loose a process flow that generated over 1700 workunits that went to a user's inbasket. Needless to say, we didn't really expect (or want) all the workunits, and we needed a way to "clean them out" quickly so they didn't impede other testing that was going on.

I did some searching on Lawson support and some quick Google searches looking for ways to mass cancel the workunits, but I didn't find any Lawson-delivered process to do this. If you know of one, feel free to share, but in lieu of that, this is what I came up with.

The basic idea is to have Process Flow execute one of the inbasket actions on each of the workunits via the WebRun node. I greatly prefer this approach to a direct database update or to simply cancelling the workunits. Even though it's in test, I don't really want to run the risk of corrupting the data. If we can use the app to gracefully make the updates, why not use it? Thankfully, there is a specially crafted URL that we can build based on data in the LOGAN workflow tables. Since we have Process Flow Integrator (PFI), I used the SQL node to gather the data, but you should also be able to use the DME nodes.

Here’s the SQL:

SELECT Q.WORKUNIT, Trim(TASK.TASK) AS TASK, Trim(WU.DATA_AREA) AS DATA_AREA, 
  Q.PROCESS_ID, Trim(WU.PROCESS_NAME) AS PROCESS_NAME, Trim(Q.ACTIVITY_ID) AS ACTIVITY_ID, 
  Trim(ACT.ACTIVITY_NAME) AS ACTIVITY_NAME
FROM LSF9_LOGAN.WFWUQUEUE Q
INNER JOIN LSF9_LOGAN.WFQUETASK TASK
  ON Q.WORKUNIT = TASK.WORKUNIT
    AND Q.PROCESS_ID = TASK.PROCESS_ID
    AND Q.ACTIVITY_ID = TASK.ACTIVITY_ID
INNER JOIN LSF9_LOGAN.WFWORKUNIT WU
  ON Q.WORKUNIT = WU.WORKUNIT
INNER JOIN LSF9_LOGAN.WFACTIVITY ACT
  ON Q.WORKUNIT = ACT.WORKUNIT
    AND Q.PROCESS_ID = ACT.PROCESS_ID
    AND Q.ACTIVITY_ID = ACT.ACTIVITY_ID
WHERE TASK.R_STATUS = 1
  AND Q.WORKUNIT >= 144342
  AND WU.SERVICE = 'MyProcess'
ORDER BY Q.WORKUNIT

This returns almost all the data we need to execute the actions. (Note: R_STATUS of 1 is open tasks.)

The data looks something like this:

WORKUNIT   TASK        DATA_AREA   PROCESS_ID   PROCESS_NAME         ACTIVITY_ID   ACTIVITY_NAME
146086     MyProcess   BMH         146086       MyProcessError_V02   UA2           UA4620
146165     MyProcess   BMH         146165       MyProcessError_V02   UA2           UA4620
146166     MyProcess   BMH         146166       MyProcessError_V02   UA2           UA4620
146167     MyProcess   BMH         146167       MyProcessError_V02   UA2           UA4620

Now we just need to take that data and build our URL, which looks like this:

bpm/inbasket?FUNCTION=dispatch&WORKUNIT=<!SQLWorkUnits_WORKUNIT>&TASK=<!SQLWorkUnits_TASK>&APPRODLINE=<!SQLWorkUnits_DATA_AREA>&RDUSER=username&PROCID=<!SQLWorkUnits_PROCESS_ID>&PROCNAME=<!SQLWorkUnits_PROCESS_NAME>&ACTID=<!SQLWorkUnits_ACTIVITY_ID>&ACTNAME=<!SQLWorkUnits_ACTIVITY_NAME>&ACTION=Complete

Note that earlier I said the SQL query returns almost all the data. The data that it does not return is related to the User and Action. The RDUSER value is the name that will be stored on the action. It must be a user that has this task assigned. The other value that is hard-coded here is Action. You can get a list of valid actions from the WFWUQUEUE table (ACTION_LIST field) or from the flow where you defined the list of valid actions. I’ve hard-coded it here because there’s no reason to look it up when I know what I want to set it to.
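If you prefer to build the URL in an Assign node instead of using inline substitution (which also makes it easy to log the exact call), a sketch might look like this, assuming the SQL node id is SQLWorkUnits:

// Same URL as above, built as a string in an Assign node.
var dispatchUrl = 'bpm/inbasket?FUNCTION=dispatch'
   + '&WORKUNIT='   + SQLWorkUnits_WORKUNIT
   + '&TASK='       + SQLWorkUnits_TASK
   + '&APPRODLINE=' + SQLWorkUnits_DATA_AREA
   + '&RDUSER=username'                       // a user who has the task assigned
   + '&PROCID='     + SQLWorkUnits_PROCESS_ID
   + '&PROCNAME='   + SQLWorkUnits_PROCESS_NAME
   + '&ACTID='      + SQLWorkUnits_ACTIVITY_ID
   + '&ACTNAME='    + SQLWorkUnits_ACTIVITY_NAME
   + '&ACTION=Complete';                      // any action valid for the task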

The basic Process flow I built is:

  1. Run SQL Node
  2. Execute WebRun
  3. Trap/Log any errors
  4. Loop on SQL node

In our system, it completed roughly 1700 workunits in about 5 minutes.

HTH

Parsing XML in Lawson Process Flow

This post is about consuming XML in Process Flow Integrator (PFI). In order to do this you must read or receive XML, pass it into the XML node (which will transform it), and then you can use it in the rest of your process.

As an example, let’s say that I have a fairly simple XML document that contains contract information like below (This happens to come from Mediclick):

<Contracts>
 <Contract>
   <Group>PROC</Group>
   <FC>A</FC>
   <Type>C</Type>
   <UseAgmtUom>Y</UseAgmtUom>
   <Agreement>BP50100</Agreement>
   <AgmtDesc1>DURACELL GENERAL BATTERIES</AgmtDesc1>
   <AgmtDesc2></AgmtDesc2>
   <Vendor>10499</Vendor>
   <VendorLoc></VendorLoc>
   <EffectiveDate>2006-01-01</EffectiveDate>
   <ExpireDate>2010-12-31</ExpireDate>
   <Admin>mediclick.</Admin>
   <Lines>
     <Line>
      <FC>A</FC>
      <LawsonItem>16281</LawsonItem>
      <LawsonItemDescription>BATT D</LawsonItemDescription>
      <VenCat>PC1300</VenCat>
      <NewBuyUom>EA</NewBuyUom>
      <NewBuyUomCost>0.5400</NewBuyUomCost>
     </Line>
     <Line>
       <FC>A</FC>
       <LawsonItem>16282</LawsonItem>
       <LawsonItemDescription>BATT F</LawsonItemDescription>
       <VenCat>PC1301</VenCat>
       <NewBuyUom>EA</NewBuyUom>
       <NewBuyUomCost>0.5500</NewBuyUomCost>
      </Line>
   </Lines>
 </Contract>
</Contracts>

I won't post the schema file, but the basics of the XML are this: <Contracts> is the root element. The <Contract> element can appear any number of times and is the only element allowed in <Contracts>. All other elements are inside the <Contract> element. The only other thing to note is that the <Line> element can appear any number of times inside the <Lines> element.

The basic process is to receive XML through one of the provided nodes (WebRun, FileAccess, InputData are the most common), pass that output into the XML node to be parsed, and then use the XML variables in your flow.

Parsing XML

In the XML node, set your schema file (click Load after you select it) and set the Action to Parse. This tells Process Flow that we will be consuming the XML rather than trying to build XML. The Load button will load the schema into Process Flow so you can see the variables; you must do this each time you open the flow. The "Global Element" tells Process Flow where to start its references into your XML file. In the example XML, if I set the Global Element to Line, then I would only be able to access the elements that are below Line (FC, LawsonItem, LawsonItemDescription, VenCat, NewBuyUom, NewBuyUomCost). I would not be able to access elements like <Type>. I would, however, be able to access all of the child elements of <Line>, regardless of which <Line> or <Contract> it was a part of. For those familiar with JavaScript, it's roughly equivalent to setting a reference using x = document.getElementsByName("Line").ChildNodes.

If you provide a schema file to the XML node, Process Flow will display the XML variables in the other nodes in your flow. If you don't have a schema, you can still access the information; you just won't have the variables in the list in the other nodes. Mostly, references will use the E4X (JavaScript) notation, like this:

XMLContract_output.Contract[0].Lines.Line[0].LawsonItem

This will return the value “16281”.  Note that the references are all zero-based, so the above returns the <LawsonItem> of the first <Line> in <Lines> of the first <Contract>.  To return the second line, we use this: XMLContract_output.Contract[0].Lines.Line[1].LawsonItem

The alternate method of referencing XML in Process Flow is:

XMLContract_output.*::Contract[0].*::Lines.*::Line[0].*::LawsonItem[0]

At this point, you're probably wondering: do I have to set explicit references to all of the elements? What if I don't know how many times an element will appear? This is where looping comes in.

Looping XML

First off, credit is due to the Process Flow gurus at Lawson for showing me this trick.  I was on the right path, but not putting 2 and 2 together until they showed me the light.  Thanks guys.

Looping through the XML is pretty simple.  All we need are three things:

  1. A variable defined in the start node to hold the value of which element we are on
  2. An assign node to increase the value after we complete a loop
  3. A branch node to decide whether to continue reading the XML or continue on

For this example, I’m going to show how to loop through all the <Line> elements, but the same would hold true for the <Contract> elements.

I have a tendency to use Hungarian notation, so my line number variable is "intLine", defined as an integer with a start value of 0. Here's how the flow works:

In the Assign node, I add 1 to the intLine variable. The "Write File" is a FileAccess node set to write to a file; I have it here for demonstration purposes only. After the "Write File", it goes to the branch "Check for more lines". The "More Lines" branch has this condition:

intLine < XMLContract_output.Contract[0].Lines.Line.length()

What I'm doing is getting a count of the number of <Line> elements and comparing it to the intLine variable. As long as intLine is less than the total number of <Line> elements, the branch sends it back to the Assign node. Once it's no longer less, it sends it to the End node.

Note that I have the Contract index hard-coded; to do this properly, you would need to iterate through the <Contract> elements as well. If you do add a loop for <Contract>, you must remember to reset intLine at the beginning of each <Contract> loop, otherwise the line counter will pick up where the previous contract left off instead of starting over at the first line.

Please also note that I have opted to increment the intLine variable at the beginning. This is for simplicity, but also because I know that the <Line> element is required in the schema, so I know I will always have at least one. If you don't know whether it will exist or not, the flow should be: XML node, Branch node, (if success) Assign, Write File, then back to the Branch.
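To make the nested version a little more concrete, here's a rough sketch of the expressions involved if you add an intContract counter alongside intLine (references follow the same E4X style as above):

// Branch "More Contracts": keep looping while there are contracts left
intContract < XMLContract_output.Contract.length()

// Branch "More Lines": count the lines of the *current* contract
intLine < XMLContract_output.Contract[intContract].Lines.Line.length()

// Inside the line loop, references use both counters, e.g. in an Assign node:
var item = XMLContract_output.Contract[intContract].Lines.Line[intLine].LawsonItem;

// Assign node at the top of each contract loop: reset the line counter
intLine = 0;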

HTH

Calling Process Flow from BCI

Lawson offers three different ways to trigger Process Flows from BCI (Business Component Integrator): FileClient, ScanFileClient, and FourGLClient. BCI is delivered with PFI (Process Flow Integrator) and DataStage. The great thing about these is that they can be deployed on ANY system and used to call Process Flows on your Lawson system. As an example, we have our ImageNow box call a process flow after DataCapture has completed the output for the day. The Process Flow then looks up and adds data to the file generated by DataCapture and puts it on the Lawson server to be processed by MA540.

In this instance we use FileClient installed on the ImageNow server. The command to execute is:
FileClient -DataFile %1 -ProcessName INDC_MA540_Xform -ParseByLine false
This is in a .bat script, so the %1 is a variable name that represents the file to be processed today.

The command to initiate is “FileClient”.
-DataFile is the file to be processed
-ProcessName is the name of the flow
-ParseByLine indicates whether each line should be sent one at a time (usually as separate workunits) or, when set to false, the whole file should be sent at once.

Typing FileClient at the command line will print a list of commands. Of course you should consult the Lawson PF documentation for specifics regarding each one. Some are well documented, while others are confusing. Usually, trial and error works best since there are several ways to do things.

The first node in my Process Flow parses the data into lines. I could have skipped this loop and set the BCI -ParseByLine to true (the default), but I like to have all of my errors in one place. That is, if BCI can't send at all, that's one thing. However, if there is a problem with a specific line, I would rather check the PF logs than have to check the logs on the BCI side, which is a different server.

In my case, I have a detail invoice file generated from DataCapture. However, I do not have the Lawson item numbers that are required for MA540 to process, because they're not on the invoice. That's where the Process Flow comes in. By the way, this is Process Flow Integrator.

At a high level, the process is:

  1. Parse the file into lines
  2. Parse the lines into data fields (it’s comma delimited)
  3. Determine the Line Type (Header, Detail, Other)
  4. Branch
    • Header Records have some fields reformatted and the record is written out
    • Miscellaneous Records are written out as is
    • Detail Records are processed – Branch
      • If a PO line is present in the record, then look up the item number using the PO line
      • If no PO line, then look up the item using the Vendor #
      • Write out record

I can post the processflow for anyone who wants it. The nodes used are Assign, Branch, Data Iterator, DME and Sys Command.

Update 4/7/2011:
Here is the Flow: INDC_MA540_Xform
It is the actual XML file for the flow I run. Save the file and rename it to .xml; do not try to open it in Word.

HTH

Lawson Process Flow searches

I seem to be getting a lot of hits for Lawson Process Flow and examples. If you’re here from a search engine and you don’t see what you’re looking for, how about leaving a comment with a question? I’d be more than happy to answer and it would benefit everyone else as well.

Just a note:

You can click on one of the Category links on the side to take you to the Process Flow posts.

XML in Lawson Process Flow

If you’re using the XML node in Lawson Process Flow, you may need to do type conversion.  The Assign nodes in PF will respect the type defined in the XML schema file.  Dates are a good example of this.

Let’s say that you’re reading an element from an XML file that is a date type.  When you read it, it is in the format of YYYY-MM-DD.  To use this in an AGS or DME call, you will need to convert the format to something like YYYYMMDD or DD/MM/YYYY (depending on locality).

In this case, you must first convert the date to a string so you can perform ECMA (JavaScript) string manipulation. To make the conversion, use the String() function.

Example:

Sample XML –

<root><sample><updateDate>2009-01-01</updateDate></sample></root>

Convert to AGS format (LSF90)

var strDate = String(XMLNode_output.sample[0].updateDate);

strDate = strDate.replace(/-/g, "");

This will replace all of the “-” characters in the date string, resulting in a properly formatted AGS date string of “20090101”.   For those not familiar with ECMA, the /g is for “global” replace.

HTH

Addendum – 20091009
Working with the output of the XML node today using numbers (the decimal type specifically), I noticed that it was not doing proper comparisons.

I had an element declared as decimal type with a value of 0.01. In an Assign node, I was doing:
if (XMLNode_output.sample[0].amount > 0)
  foo();

When it ran, it never called foo(). The only way I could get it to work was to explicitly convert the XML value to a number:
var dblAmount = new Number(XMLNode_output.sample[0].amount);

Based on this, it would appear that when using Lawson’s XML node, you should be prepared to do type conversions as necessary.

HTH