JodyK

NLog logging engine for LabVIEW


I recently spent some time describing a logging tool we use here at DMC that has significantly reduced our debug times and helped a lot with onsite support, and I thought it was worth bringing it to the LabVIEW community.

Essentially, it's a logging utility based on NLog that allows the user to add the equivalent of print statements to their code - any string that can be built is fair game. It also allows a logging level to be associated with each print statement, and this is the concept that makes it extremely powerful. Some statements can be low-level "trace" statements, while others are "warnings" or "errors". At run time (even in an executable) the level of the logger can be changed, so you can easily do trace-level debugging to chase a glitch, and then set it back to an informational level afterwards.

Multiple targets are supported, including RT console, log files, TCP/UDP streams, and email. All the calls are made asynchronously, so the debug statements have a minimal impact on the code execution.
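The level idea isn't LabVIEW-specific, so here is a minimal sketch of it in Python's standard logging module (purely illustrative - this is not the NLog-for-LabVIEW code, and the logger name is made up):

```python
import logging

# Create a named logger, analogous to an NLog logger instance.
log = logging.getLogger("machine.axis1")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
log.addHandler(handler)

# Ship with an informational level...
log.setLevel(logging.INFO)
log.debug("raw encoder counts: 12345")   # suppressed at INFO level
log.info("homing complete")              # emitted

# ...then drop the level at run time to chase a glitch,
# without touching the statements scattered through the code.
log.setLevel(logging.DEBUG)
log.debug("raw encoder counts: 12346")   # now emitted
```

The statements stay in the code permanently; only the threshold moves, which is what makes "turn on trace, reproduce, turn it back off" possible in a deployed executable.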

At this point we are finishing and polishing the implementation, but more information and details can be found in a blog post I recently wrote:

NLog for LabVIEW: Logging done properly

-Jody Koplo

Edited by JodyK


Hi Jody,

I must say that looks very interesting, thank you for the link and the blog post.

I am just about to look into logging at my new company for a LabVIEW project I have inherited that has none at present, and as I used to have logging I miss it a lot.

We used to have a log system that logged at different levels - not as clearly defined as your levels - but there were a couple of issues with it. I should say this was not a real-time system but a PC-based test executive.

One problem is that of functional areas within the code: if you turn on Trace you get a lot of logging messages, as we had trace logging all over the place. But what happens if I want a Trace-type level but I am only interested in, say, GPIB writes & reads, or maybe only telnet commands sent to or read from our UUT?

The other problem I saw was the "if only we had logging turned on" :P problem. There were situations where rare unexpected behaviour or errors occurred; you would know something had happened after a UUT had been tested, but on the retest after you had "turned logging on" all went well, so you had no extra information.

I had thought to solve this by basically defaulting all logging to Trace level and flagging logged messages in a functional way, i.e. GPIB:, TELNET:, SERIAL:, and running a separate logging process to collect and deal with this data.

Options to deal with the data could be: write all data to file when the PC is not busy, or maybe just keep all logging for an individual test stage - if that stage runs OK and a PASS is generated, throw it away, but if a fail or error occurs, write it to file.
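A rough sketch of that buffer-per-stage idea (Python, purely illustrative - the class and names here are invented, not from any real tool): everything is logged at trace level with a functional prefix, buffered per test stage, and only kept when the stage fails:

```python
class StageLogger:
    """Collect trace messages per test stage; keep them only on failure."""

    def __init__(self):
        self._buffer = []   # messages for the stage currently running
        self.saved = []     # stands in for the on-disk log file

    def trace(self, category, message):
        # Always log at trace level, tagged with a functional category
        # such as "GPIB:", "TELNET:" or "SERIAL:" so it can be filtered later.
        self._buffer.append(f"{category}: {message}")

    def end_stage(self, passed):
        # PASS: throw the trace data away. FAIL: flush it to the log.
        if not passed:
            self.saved.extend(self._buffer)
        self._buffer.clear()

log = StageLogger()
log.trace("GPIB", "write *IDN?")
log.trace("GPIB", "read ACME,4000,...")
log.end_stage(passed=True)     # stage passed, buffer discarded

log.trace("TELNET", "sent: reboot")
log.end_stage(passed=False)    # stage failed, buffer kept
```

This sidesteps the "if only we had logging turned on" problem: the trace data always exists during the stage, and the PASS/FAIL decision governs whether it survives.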

If you do decide to release this to the community, I for one will be very pleased to have a play with it.

Cheers

Dannyt


I have looked at and played with jgcode's error logger and it did work for me, but error logging and debug logging are different in scope, in my view.

I must admit you have reminded me I was going to look at extending that example code into a full logger tool.


I must admit you have reminded me I was going to look at extending that example code into a full logger tool.

That's what I was thinking could be done, after rereading the OP and then the title of JG's example, but before clicking "Post".


I think we all have something similar in our toolkit (although probably not with as many interfaces). However, a while ago mine got a face-lift to use a SQLite database instead of text files. The fact that you cannot open it in a text editor is far outweighed by the extra features, like being able to filter entries to show only errors, info, and/or entries containing certain text. It also means you can have much larger log files, since after a program has been in the field a while, text editors struggle to open them. It also makes long-term statistical analysis of the files much more agreeable.
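A minimal sketch of what such a SQLite-backed log might look like, using Python's built-in sqlite3 module (the schema and rows are a made-up illustration, not the actual toolkit):

```python
import sqlite3

# In-memory database for illustration; a real logger would use a file.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE log (
        ts      TEXT DEFAULT CURRENT_TIMESTAMP,
        level   TEXT,            -- 'ERROR', 'WARN', 'INFO', 'DEBUG'
        message TEXT
    )""")

rows = [("ERROR", "GPIB timeout on instrument 4"),
        ("INFO",  "test sequence started"),
        ("ERROR", "telnet connection refused"),
        ("DEBUG", "raw reply: +1.2345E-03")]
db.executemany("INSERT INTO log (level, message) VALUES (?, ?)", rows)

# The filtering a flat text file can't give you:
# only errors whose message mentions GPIB.
hits = db.execute(
    "SELECT message FROM log WHERE level = 'ERROR' AND message LIKE '%GPIB%'"
).fetchall()
```

The same table supports the long-term statistics mentioned above - counts per level per day are a single GROUP BY away, with no log-file parsing.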

Edited by ShaunR


I think the database idea is great for errors; we did something similar in adding error messages to our test results report, and they were imported into our DB with all the other results.

However, that does not work for general logging, of which error logging is only a small subset.


I think the database idea is great for errors; we did something similar in adding error messages to our test results report, and they were imported into our DB with all the other results.

However, that does not work for general logging, of which error logging is only a small subset.

Not sure I quite follow you here.

If you are already using a DB for results, then just adding an error table is a no-brainer; the only difference is the DB name that you log the error to. You also get the advantage that you can link a specific row (or test, if you like) with one or more errors, info, warnings, etc., giving you greater granularity.
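A sketch of that linkage (again illustrative Python/sqlite3 with an invented schema): a log table sitting in the same database, with each entry keyed back to the result row it belongs to:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE result (
        id     INTEGER PRIMARY KEY,
        uut    TEXT,
        status TEXT
    );
    -- One extra table in the same DB; each log entry points
    -- back at the test row it was recorded during.
    CREATE TABLE log (
        id        INTEGER PRIMARY KEY,
        result_id INTEGER REFERENCES result(id),
        level     TEXT,
        message   TEXT
    );
""")
db.execute("INSERT INTO result (id, uut, status) VALUES (1, 'SN-001', 'FAIL')")
db.execute("INSERT INTO log (result_id, level, message) "
           "VALUES (1, 'ERROR', 'overcurrent on rail 3V3')")

# Every error/warning/info attached to a particular failed test:
errs = db.execute(
    "SELECT level, message FROM log WHERE result_id = 1").fetchall()
```

That per-test join is the "greater granularity": a failed result row carries its own errors and warnings with it, rather than leaving you to correlate timestamps across separate files.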


I totally agree with your comments regarding errors, but to me there is far more to logging than that. There can be problems in code that do not result in errors but in incorrect results, or when you create a new feature or application you want to actively debug it. I think the blog covers that aspect quite well.

On my previous system, if we turned on full logging and ran a test, the result was a log file of several MB with lots of useful information, but not stuff that could suitably be placed into a DB - for example, we could see all telnet conversations, both commands to the UUT and its replies, or all GPIB conversations.

We ran our test software in a foreign manufacturing plant, and sometimes when there were problems we would ask them to turn logging on (a simple menu option) and send back the log file, as we could not debug on the remote executable.


We're utilizing TDMS for results, but I really like the idea of SQLite for error/warning/whatever logging. Has anybody tried to tie the two together? I think you can stuff a blob in a TDMS file, so you could include your database in the TDMS if you wanted, but that seems a little hacky.


I totally agree with your comments regarding errors, but to me there is far more to logging than that. There can be problems in code that do not result in errors but in incorrect results, or when you create a new feature or application you want to actively debug it. I think the blog covers that aspect quite well.

On my previous system, if we turned on full logging and ran a test, the result was a log file of several MB with lots of useful information, but not stuff that could suitably be placed into a DB - for example, we could see all telnet conversations, both commands to the UUT and its replies, or all GPIB conversations.

We ran our test software in a foreign manufacturing plant, and sometimes when there were problems we would ask them to turn logging on (a simple menu option) and send back the log file, as we could not debug on the remote executable.

I do the same and insist on result data as well. I think you've just picked up on the error bit because of my last comment (my bad), but previously I did say a log file with info, warnings and debug, so I think we are on the same page. If the log table is in the same DB as the results, then you get them by default when they send the file. A few MB is nothing really in the scheme of things, and it makes no difference in performance for a database of a couple of GB. Of course, with text files you would really be struggling even with tens of MB.

As to what you save in the log table, well, that's just down to your category partitioning. The sort of info (comms etc.) that you describe would, for me, be "debug" and only logged as and when required. Maybe you would just have an extra category "Comms", since categories are not mutually exclusive. But I would still want errors, warnings and info logged during normal operation and over extremely long periods.

Because you can handle such large data files, you can leave error, warning and info logging enabled permanently and just switch in the "debug" for all the low-level stuff as and when required. You then get useful things like how often they restarted the machine, what operators were logged in when the errors happened, whether there were any warnings before the errors occurred, any alarms emitted, etc. And all filterable :) Of course, errors should be minimal if the software is working as intended, so it's really info and usage I would primarily be interested in, and I ask customers to send me the DB file every month for the first 6 months so I can see how it is being used/abused and what can be done to improve it. Quality departments love the info too, since you are logging calibration and tool-change info over time and they can run the data through their six-sigma software ;)

We're utilizing TDMS for results, but I really like the idea of SQLite for error/warning/whatever logging. Has anybody tried to tie the two together? I think you can stuff a blob in a TDMS file, so you could include your database in the TDMS if you wanted, but that seems a little hacky.

I'm not sure I like the idea of including a database in a database. I don't really see the point, since it wouldn't be searchable from the TDMS. As with most things, I prefer to stick with one technology rather than mix. If I were to consider it, I think I would just keep the SQLite file separate or include the errors/info in the TDMS (SQLite cannot beat TDMS for streaming).



Wow, you've gone all the way :worshippy: with it. You have given me something else to think about.

cheers


Wow, you've gone all the way :worshippy: with it. You have given me something else to think about.

cheers

Well, I've been around a bit. It's my 3rd most re-used piece of code. Maybe I should include it as an example in the API library ;)


A few MB is nothing really in the scheme of things, and it makes no difference in performance for a database of a couple of GB. Of course, with text files you would really be struggling even with tens of MB.

So long as you have the storage space for it. My typical database is terabytes in size, mainly because the customer (lately) has wanted to see 1-2 years of production online (7-50 years offline, depending on the customer). Space becomes a premium when the customer wants to marry production data in with the test data. A single part's record can get to 3-5 MB as a flat file.

