Mike Ashe Posted April 5, 2007

Hi all,

Has anyone used the G Math Toolkit or another math expression parser in LabVIEW Real-Time? Most of the parsers I've seen are stack based, with Build Array and Array Subset nodes everywhere, and they evaluate the original text string each time through. Obviously we try to stay away from those functions in LabVIEW RT for memory management reasons, to keep determinism.

I am currently working on a modified version of the "Parse Arithmetic Expression" VI to try to get something that is suitable for RT. I am substituting a preallocated "stack" of 50 variables plus an index into that stack, rather than just looking at the "top". Instead of building a new value on top with Build Array, I substitute into the array and increment or decrement the index. I'm doing the same for the operators, which I am converting to enums.

Has anyone done this for RT? Ideally I'd like to parse all the formulas once at initialization of the RT target, then run with no strings at all and no array-resizing functions. Thoughts and wisdom, oh wise ones?

Thanks,

PS: I am doing this for a client, under NDA, or I would post the code I already have. I am going to try to convince the client to let me post just the parser snippets. I've just recently got them to start using OpenG toolkit code, so there is hope of going in the other direction.
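Since I can't post the actual VIs yet, here is a rough sketch in C of the kind of thing I mean (the names and the 50-slot size are just illustrative placeholders; the real thing is all G, with the array and index carried in shift registers):

```c
#include <stdio.h>

#define STACK_SIZE 50   /* preallocated once; never resized at run time */

typedef struct {
    double data[STACK_SIZE];  /* fixed storage, like a preallocated LabVIEW array */
    int    top;               /* index of the next free slot; replaces Build Array */
} EvalStack;

/* Push by writing into an existing slot and bumping the index:
   no allocation, so the timing stays deterministic. */
static int push(EvalStack *s, double v)
{
    if (s->top >= STACK_SIZE) return -1;   /* stack full: reject at parse time */
    s->data[s->top++] = v;
    return 0;
}

/* Pop by decrementing the index; the old value simply stays in place. */
static int pop(EvalStack *s, double *v)
{
    if (s->top <= 0) return -1;            /* underflow: malformed expression */
    *v = s->data[--s->top];
    return 0;
}

int main(void)
{
    EvalStack s = { .top = 0 };
    double a, b;

    /* evaluate 3 * 4 in postfix form: push 3, push 4, pop twice, push product */
    push(&s, 3.0);
    push(&s, 4.0);
    pop(&s, &b);
    pop(&s, &a);
    push(&s, a * b);

    pop(&s, &a);
    printf("%g\n", a);   /* prints 12 */
    return 0;
}
```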
Aristos Queue Posted April 5, 2007

> Thoughts and wisdom, oh wise ones?

Wisdom? No. Thoughts? Yes. I rarely program for RT, and have never done more than superficial testing with it. All I can offer is theory.

My thought is this: you're attempting to do something that is inherently not deterministic... the time needed to parse any given math expression depends entirely on the particular math expression. It has no relation to the length of the expression, or the number of terms in the expression, or anything else that you might count on for determinism. Given that, you can't possibly be wanting to do this inside the RT loop.

Given that, why not use the stack-based approaches? Yes, you try to avoid such things on RT, but if you really need them, they are there (to varying extents on the different targets).
Neville D Posted April 5, 2007

QUOTE(Mike Ashe @ Apr 4 2007, 10:41 AM) Hi all, Has anyone used the G Math Toolkit or another math expression parser in LabVIEW Real-Time? Most of the parsers I've seen are stack based, with Build Array and Array Subset nodes everywhere, and they evaluate the original text string each time through. Obviously we try to stay away from those functions in LabVIEW RT for memory management reasons, to keep determinism.

Hi Mike,

Have you taken a look at CalcExpress by Konstantin Shifershteyn (http://www.kshif.com/calcexpress/)? It's not free, but when I tried the demo version a while back it beat the pants off the NI parser. It also has a lot of additional functionality that you may not need, but what the heck.

Neville.
Louis Manfredi Posted April 6, 2007

I did some work with a general parser long ago -- I think I'd rather start from scratch than try to clean up the old code. But I do recall one aspect that worked pretty well...

Rather than parsing the text string each and every run, I would "compile" object code from the text string on the rare occasions that the string changed. I substituted each string, such as "*" or "^" or "sin(", with a U8 in an array. These U8s were then used to select cases in a case structure. Getting rid of the text scanning made a HUGE improvement in execution speed. If I were doing it again, I'd use an enumerated type rather than a U8.
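The old code was all G, but the shape of the "compile once, then just run the opcodes" idea, sketched loosely in C, was something like this (the token set and names here are just for illustration, not the original code):

```c
#include <math.h>
#include <stdio.h>
#include <string.h>

/* One opcode per token; the enum plays the role of the U8 (or, better,
   the enumerated type) that selects a frame of the case structure. */
typedef enum { OP_PUSH_CONST, OP_MUL, OP_POW, OP_SIN } Opcode;

typedef struct {
    Opcode op;
    double operand;   /* only used by OP_PUSH_CONST */
} Instr;

/* "Compile" step: run only when the expression text changes.
   A real compiler would handle precedence; this just maps one token. */
static Opcode token_to_opcode(const char *tok)
{
    if (strcmp(tok, "*") == 0)    return OP_MUL;
    if (strcmp(tok, "^") == 0)    return OP_POW;
    if (strcmp(tok, "sin(") == 0) return OP_SIN;
    return OP_PUSH_CONST;
}

/* Run step: no text scanning at all, just a switch on each opcode. */
static double eval(const Instr *prog, int n)
{
    double stack[16];
    int top = 0;
    for (int i = 0; i < n; i++) {
        switch (prog[i].op) {
        case OP_PUSH_CONST: stack[top++] = prog[i].operand; break;
        case OP_MUL: top--; stack[top - 1] *= stack[top]; break;
        case OP_POW: top--; stack[top - 1] = pow(stack[top - 1], stack[top]); break;
        case OP_SIN: stack[top - 1] = sin(stack[top - 1]); break;
        }
    }
    return stack[0];
}

int main(void)
{
    /* sin(2 * 3) compiled once into opcodes, then evaluated many times */
    Instr prog[] = {
        { OP_PUSH_CONST, 2.0 },
        { OP_PUSH_CONST, 3.0 },
        { token_to_opcode("*"),    0.0 },
        { token_to_opcode("sin("), 0.0 },
    };
    printf("%g\n", eval(prog, 4));
    return 0;
}
```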
Mike Ashe (Author) Posted April 7, 2007

Thanks to everyone for the replies; I'll respond to each in turn.

QUOTE(Aristos Queue @ Apr 4 2007, 03:18 PM) My thought is this: you're attempting to do something that is inherently not deterministic... the time needed to parse any given math expression depends entirely on the particular math expression. It has no relation to the length of the expression, or the number of terms in the expression, or anything else that you might count on for determinism. Given that, you can't possibly be wanting to do this inside the RT loop.

Correct. This is being used to create memory-only Tags that are derived by an expression containing other Real Tags, which come from DAQmx readings taken inside the critical RT loop. I am moving the data out of the RT loop using RT FIFOs, just like you are supposed to. For each system, however, the client will want to be able to define Memory Tags through a configuration file that can be edited. The math is not a full parser, just the four functions, AND, OR, >, <, etc. No trig or anything fancy: basically just slope and offset, plus simple boolean math for discretes. After the real data is sent to the Tag Manager Module, it is available to other modules. The first thing we want to do is run the critical safety checks, then evaluate the Memory Tags, then do all the non-critical warning checks, and so on.

QUOTE Given that, why not use the stack-based approaches? Yes, you try to avoid such things on RT, but if you really need them, they are there (to varying extents on the different targets).

I have been so far, and it works in the non-critical loops, but I have been hearing that there are issues with memory deallocation over the long haul with RT. These test stands will need to run 24x7 for months on some tests, and any memory leaks could cause problems. I have had some people say that any strings at all are bad things, so I was just trying to get rid of as many as possible. I don't actually want to parse the expression every time. I only want to do it once, at system startup, for each Memory Tag, then store the operators and Real Tag indexes in two arrays for each Memory Tag. At run time I will always run the same operators/indexes for each Memory Tag. I'm just trying to follow Andy Grove's (longtime Intel chairman and CEO) maxim that "Only the Paranoid Survive."

QUOTE(Neville D @ Apr 4 2007, 04:39 PM) Have you taken a look at CalcExpress by Konstantin Shifershteyn?

Yes, I have in the past. His is a good product, but, not to boast, I already have one that is better. It is a basic semi-compiler, all written in G, with more flexibility and functionality, somewhat like the LabBASIC tool that someone just posted here on LAVA, but not an express node. I've been thinking of releasing it as open source for a long time, but there are issues. In any event, all of these types of tools use a lot of strings and are built on stacks internally, and therefore on Build Array, etc. They are also overkill for what I am trying to deliver. We're trying to keep it lean and mean: four functions, simple booleans, no more, and make it robust and memory efficient.

QUOTE(Louis Manfredi @ Apr 4 2007, 06:42 PM) I did some work with a general parser long ago -- I think I'd rather start from scratch than try to clean up the old code. But I do recall one aspect that worked pretty well... Rather than parsing the text string each and every run, I would "compile" object code from the text string on the rare occasions that the string changed. I substituted each string, such as "*" or "^" or "sin(", with a U8 in an array. These U8s were then used to select cases in a case structure. Getting rid of the text scanning made a HUGE improvement in execution speed. If I were doing it again, I'd use an enumerated type rather than a U8.

That is exactly what I am trying to do now. I am already using enums, and my stacks are preallocated arrays with floating indexes, almost like a circular buffer but without the wrap. If you have any of that old code, I'd love to take a look at it.

By the way, I got permission from the client to post code here. I will do that tomorrow after I get it working a bit more. Thanks everyone!
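Until I can post the real VIs tomorrow, here is a loose C sketch of what each Memory Tag boils down to at run time. The names, the step limit, and the strictly left-to-right evaluation are placeholders and simplifications (the real code deals with precedence at compile time, in G), but it shows the two-array idea: operators in one preallocated array, Real Tag indexes in the other, same fixed sequence every cycle, no strings, no allocation.

```c
#include <stdio.h>

#define MAX_STEPS 32   /* fixed at startup when the config file is parsed */

/* Only the simple operations the config file allows: no trig, nothing fancy. */
typedef enum { STEP_ADD, STEP_SUB, STEP_MUL, STEP_DIV,
               STEP_AND, STEP_OR, STEP_GT, STEP_LT } StepOp;

/* A Memory Tag compiled once at initialization: two parallel arrays,
   one of operators and one of Real Tag indexes, plus a step count. */
typedef struct {
    StepOp op[MAX_STEPS];
    int    tag_index[MAX_STEPS];  /* which Real Tag feeds each step */
    int    n_steps;
} MemoryTag;

/* Run-time evaluation: the same fixed loop every cycle, so execution
   time is essentially constant. op[0] is unused; the first tag_index
   just seeds the accumulator. */
static double eval_memory_tag(const MemoryTag *mt, const double *real_tags)
{
    double acc = real_tags[mt->tag_index[0]];
    for (int i = 1; i < mt->n_steps; i++) {
        double v = real_tags[mt->tag_index[i]];
        switch (mt->op[i]) {
        case STEP_ADD: acc += v; break;
        case STEP_SUB: acc -= v; break;
        case STEP_MUL: acc *= v; break;
        case STEP_DIV: acc /= v; break;
        case STEP_AND: acc = (acc != 0.0) && (v != 0.0); break;
        case STEP_OR:  acc = (acc != 0.0) || (v != 0.0); break;
        case STEP_GT:  acc = acc > v; break;
        case STEP_LT:  acc = acc < v; break;
        }
    }
    return acc;
}

int main(void)
{
    /* e.g. scaled reading = raw * slope + offset, built from the config file */
    double real_tags[] = { 2.5 /* raw */, 4.0 /* slope */, 1.0 /* offset */ };
    MemoryTag scaled = { { STEP_ADD, STEP_MUL, STEP_ADD },
                         { 0, 1, 2 }, 3 };
    printf("%g\n", eval_memory_tag(&scaled, real_tags));   /* 2.5*4 + 1 = 11 */
    return 0;
}
```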