
Trace File Best Practices

Posted: Thu Jul 11, 2019 3:58 am
by jyoung
When using a trace handler that implements #PRIM_APP.iTraceHandler, the example in the docs uses an incrementing file counter.

Something like this

Code:

Mthroutine Name(GetNextFile) Access(*Private)
 Define_Map For(*Result) Class(#Prim_alph) Name(#Result)
 Define_Com Class(#prim_nmbr) Name(#Extension)

 Begin_Loop Using(#Extension)
  #Result := *Sys_dir + "UserTrace" + "." + #Extension.asstring.rightAdjust( 3 "0" )
  Leave If(*Not #Com_owner.FileExists( #Result ))
 End_Loop
Endroutine
I've modified that code to write to the temp directory and changed the file name, like so:

Code:

mthroutine name(GetNextFile) access(*PRIVATE)
 define_map for(*RESULT) class(#PRIM_ALPH) name(#Result)
 define_com class(#PRIM_NMBR) name(#Extension)

 begin_loop using(#Extension)
  #Result := *TEMP_DIR + "API-Trace." + *YYYYMMDD.AsString + "." + #Extension.asstring.RightAdjust( 3 "0" ) + ".txt"
  leave if(*Not #Com_owner.FileExists( #Result ))
 end_loop
endroutine
This creates a WHOLE lot of incremented files with very little content in each file.
[Attachment: Capture.PNG (66.73 KiB) — screenshot of the many small incremented trace files]
My question is about condensing those files into a single file by date.
For example, I can do this

Code:

mthroutine name(GetNextFile) access(*PRIVATE)
    define_map for(*RESULT) class(#PRIM_ALPH) name(#Result)

    #Result := *TEMP_DIR + "API-Trace" + "." + *YYYYMMDD.AsString + ".txt"
endroutine
And I can easily view all the logs for a specific date.

The question is: should I do this?

Is there any benefit from having a file increment?

It's much easier to have a single file to look through than having to go to each file; however, are there unseen drawbacks to this approach?
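One hypothetical middle ground (not from the docs; the naming scheme here is illustrative) is one file per date *per job*, so no two jobs ever contend for the same file, while all files for a date still share a common prefix. A conceptual sketch in Python, using the process id as a stand-in for a LANSA job identifier:

```python
import datetime
import os

def trace_file_name(temp_dir: str) -> str:
    """Build a per-date, per-process trace file name.

    Including the process id means each job writes to its own file,
    so no cross-job locking is needed; all files for one date can
    still be found via the shared "API-Trace.YYYYMMDD." prefix.
    """
    today = datetime.date.today().strftime("%Y%m%d")
    return os.path.join(temp_dir, f"API-Trace.{today}.{os.getpid()}.txt")
```

Viewing everything for a date is then a wildcard match on `API-Trace.YYYYMMDD.*` rather than opening one file, which is the trade-off.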

Thanks,
Joe

Edit

For completeness, here is the entire trace handler. In this case I can turn tracing on and off via a System Variable (*APITRACE).

Code:

* =================================================================================================================
*
* API SERVER SIDE TRACE HANDLER
*
* Enables trace output on server side objects.
* Tracing is toggled via the LANSACFG/TRACE data area
* Trace files are written to *TEMP_DIR, typically
*     IBM i - /lansa_devpgmlib/x_lansa/tmp
*     Windows - C:\Windows\Temp
* Trace files are named API-Trace.YYYYMMDD.999.txt

* -----------------------------------------------------------------------------------------------------------------
* TO USE:
*
* Define this component in your object with application scope
* define_com class(#APITraceHandler) scope(*APPLICATION)
*
* Enable tracing by setting the TRACE/LANSACFG data area to 'YES'

function options(*DIRECT)
begin_com role(*EXTENDS #PRIM_OBJT *implements #PRIM_APP.iTraceHandler)

define field(#FileHandle) type(*DEC) length(3) decimals(0)
define_com class(#PRIM_ALPH) name(#Tab)

evtroutine handling(#COM_OWNER.CreateInstance)
#Tab := (09).asChar
#COM_OWNER.InstallTracing
endroutine

mthroutine name(InstallTracing) help('Plug in the trace handler to the Application') access(*PRIVATE)
if (*APITRACE = "YES")
#SYS_APPLN.TraceHandler <= #COM_OWNER
endif

endroutine

mthroutine name(Initialize) options(*REDEFINE) access(*PRIVATE)
#COM_OWNER.OpenTraceFile
endroutine

mthroutine name(Terminate) options(*REDEFINE) access(*PRIVATE)
#COM_OWNER.CloseTraceFile
endroutine

mthroutine name(TraceMessage) help('Executed whenever #SYS_APPLN.TraceMessageData or #SYS_APPLN.TraceMessageText is used') options(*REDEFINE) access(*PRIVATE)
#COM_OWNER.WriteToFile( #ComponentName #Description #LineNumber #MessageText )
endroutine

mthroutine name(OpenTraceFile) help('Create a new trace outputfile') access(*PRIVATE)
use builtin(STM_FILE_OPEN) with_args(#COM_OWNER.GetNextFile "APPEND" "N" "Y") to_get(#FileHandle #IO$STS)
endroutine

mthroutine name(WriteToFile) help('Write an entry in the trace output file') access(*PRIVATE)
define_map for(*INPUT) class(#PRIM_ALPH) name(#ComponentName)
define_map for(*INPUT) class(#PRIM_ALPH) name(#Description)
define_map for(*INPUT) class(#PRIM_NMBR) name(#LineNumber)
define_map for(*INPUT) class(#PRIM_ALPH) name(#MessageText)

define_com class(#PRIM_DAT) name(#Now)

#MessageText := #Now.now.AsLocalizedDateTime.AsString + #Tab + #ComponentName + #Tab + #LineNumber.asstring + #Tab + #MessageText

use builtin(STM_FILE_WRITE) with_args(#FileHandle #MessageText) to_get(#IO$STS)
endroutine

mthroutine name(CloseTraceFile) access(*PRIVATE)
use builtin(STM_FILE_CLOSE) with_args(#FileHandle) to_get(#IO$STS)
endroutine

mthroutine name(TracingState) options(*REDEFINE)
#MessageTracingActive := True
endroutine

mthroutine name(GetNextFile) access(*PRIVATE)
define_map for(*RESULT) class(#PRIM_ALPH) name(#Result)

#Result := *TEMP_DIR + "API-Trace" + "." + *YYYYMMDD.AsString + ".txt"
endroutine

* mthroutine name(GetNextFile) access(*PRIVATE)
* define_map for(*RESULT) class(#PRIM_ALPH) name(#Result)
* define_com class(#PRIM_NMBR) name(#Extension)
*
* begin_loop using(#Extension)
* #Result := *TEMP_DIR + "API-Trace." + *YYYYMMDD.AsString + "." + #Extension.asstring.RightAdjust( 3 "0" ) + ".txt"
* leave if(*Not #Com_owner.FileExists( #Result ))
* end_loop
* endroutine

mthroutine name(FileExists) access(*PRIVATE)
define_map for(*INPUT) class(#PRIM_ALPH) name(#Path)
define_map for(*RESULT) class(#PRIM_BOLN) name(#Result)

use builtin(OV_FILE_SERVICE) with_args("CHECK_FILE" #Path) to_get(#io$sts)
#Result := (#io$sts = "OK")
endroutine
end_com


Re: Trace File Best Practices

Posted: Sat Jul 20, 2019 1:21 am
by jyoung
I contacted support who in turn contacted the developers about this.
Here is their response
Joe is correct in his concerns about multiple processes. The tracing object will be instantiated by the first object to run on the server, and it will open and lock the file being written to. So when another user in another job is executing the application, their trace messages will not be recorded. You would need to implement your own locking/waiting functionality and, at the very least, change the logic in your trace handler to open and close the file every time it writes (not just in Initialize and Terminate).
 
There is one quick and easy solution to the locking problem: use the database. DBMSs have built-in lock/wait functionality, so if you write trace messages to a database file there would be no locking issues. However, there is a downside: you cannot easily open the trace table to read it like a text file.
 
As for which we recommend, it's an individual choice; there's no right or wrong, it all depends on your objectives.
 
File:
Pros:
No setup required.
The file can be viewed at any time.
Can log anything in whatever format.
Cons:
Users have to take care of synchronisation themselves.
 
Database Table:
Pros:
No need to worry about the synchronisation.
Cons:
Setup is required to ensure the database table exists; more of a headache if different database types must be supported.
A utility is needed to read the records.
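The database option the developers describe can be sketched with SQLite as a lightweight stand-in (the real application would use LANSA's DBMS; the table and column names here are hypothetical):

```python
import sqlite3

def open_trace_db(db_path: str) -> sqlite3.Connection:
    """Create the trace table if needed and return a connection.

    The database engine serialises concurrent inserts, so multiple
    jobs can log at once without any file-locking code of our own.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS api_trace (
               logged_at  TEXT DEFAULT CURRENT_TIMESTAMP,
               component  TEXT,
               line_no    INTEGER,
               message    TEXT)"""
    )
    return conn

def write_trace_row(conn: sqlite3.Connection, component: str,
                    line_no: int, message: str) -> None:
    """Insert one trace row; the DBMS handles locking and waiting."""
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "INSERT INTO api_trace (component, line_no, message) "
            "VALUES (?, ?, ?)",
            (component, line_no, message),
        )
```

Reading the output then needs a query tool rather than a text editor, which is exactly the downside listed in the cons above.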
My interpretation of this is that using a SINGLE file is not ideal and will result in missing trace statements. Using multiple files is safe (as shown in the examples) but requires collection and aggregation to find information. I am curious whether a tool like Splunk could help here. The database is a decent option if you don't mind throwing together a quick UI and some cleanup.
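A minimal sketch of the open-append-close-per-write pattern the developers suggest, in Python as a conceptual stand-in for wrapping STM_FILE_OPEN / STM_FILE_WRITE / STM_FILE_CLOSE around each message:

```python
def write_trace(path: str, message: str) -> None:
    """Open the file in append mode, write one line, close immediately.

    An append-mode open positions every write at the current end of
    file, and holding the file open only for the duration of one
    write keeps the lock window tiny, so other jobs appending to the
    same file are far less likely to be blocked than with a file
    held open for the life of the application.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(message + "\n")
```

Even so, whether two simultaneous appends can interleave depends on the OS and file system, so this narrows the problem rather than guaranteeing correctness; true multi-job safety still needs explicit lock/wait handling or the database approach.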

I'm also wondering whether something like AWS Kinesis or Azure Event Grid would be viable. The big concern there is that, since there is no native SDK for LANSA, you would have to send the log data via the REST API or use a custom Integrator Service to do it for you, either of which I think would cause performance issues in the app.
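The collection-and-aggregation step for the multi-file approach can be sketched as a timestamp merge. This assumes each line begins with a lexicographically sortable timestamp; the localized format from AsLocalizedDateTime in the handler above may not sort correctly, so an ISO-style stamp would be safer. The file pattern is illustrative:

```python
import glob
import os

def merge_trace_files(temp_dir: str, date_str: str) -> list[str]:
    """Collect every trace file for one date and merge their lines.

    Assumes each line starts with a sortable timestamp (as the
    WriteToFile routine prepends), so a plain string sort yields
    chronological order across all jobs' files.
    """
    lines = []
    pattern = os.path.join(temp_dir, f"API-Trace.{date_str}.*.txt")
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as f:
            lines.extend(line.rstrip("\n") for line in f)
    return sorted(lines)
```

This is essentially what a tool like Splunk automates, plus indexing and search on top of the merged stream.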

Joe

Re: Trace File Best Practices

Posted: Sat Jul 20, 2019 5:35 am
by atostaine
We use trigger programs to write "audit" records when database records are written, deleted, updated, etc. We can see who changed a customer name, etc.

We also have RPs that are loaded into the application at startup and let us write a record to the same audit file with any description. For example, when doing a long A/R posting, we have this sort of code in the RDMLX function:

#yComm.yAudit.mWrite Key1(ARPosting) key2(*component_name) text('Started')
#yComm.yAudit.mWrite pKey1(ARPosting) pkey2(*component_name) ptext('Error writing to AR file. Job ended abnormally')

The file has timestamp, user Id, job names, etc all populated automatically.

Recently we added a parameter, pDebug, and it writes to a different file that we use for tracing. We just use SQL or other methods to clean up the file when we need to. Mostly I just CLRPFM the file and then run my process, since I'm the only developer at some shops.

Art