
Gold Parser Engine



I would like to present one of my latest projects: a port of the GOLD Parser Engine to LabVIEW. It was not easy. I needed more advanced evaluation than the built-in LabVIEW functions could offer, so I found the GOLD parser engines for Java and VB. Yes, I could have used the .NET version, but I couldn't get it to work well with reentrant calls to parse multiple things in parallel. So I put this together. I would like to polish it with the LV community, and make it work even faster than it already does.

 
I would like to thank Devin Cook and Ralph Iden for their excellent work on the previous engines.
 

In case you're not familiar with Gold: GOLD is a free parsing system that you can use to develop your own programming languages, scripting languages and interpreters. It strives to be a development tool that can be used with numerous programming languages and on multiple platforms.

 

I use it mostly to build ASTs out of strings of text that represent short snippets of code. The source comes with a top-level example of how a recursive evaluator can be implemented, but it can be used for many LabVIEW things, including a 100% native XML parser or a 100% native scripting language. The idea is that it's native, which is nice.

 

Performance:

I haven't tested long source strings for performance, but it is currently working quite quickly: sub-millisecond for the numerical evaluation example. LabVIEW can be really sneaky about making extra copies when you don't need them, so I have done a few things to increase performance further. A lot of the VIs in the parsing path are subroutines, and I have added Swap Values as a common performance tool. It is really useful, especially when working with class private data that doesn't deserve an in-class function. For example, tokens contain other tokens that can be evaluated recursively, but it doesn't make sense to put the eval function in the token class, as AristosQueue posted here: http://forums.ni.com/t5/LabVIEW-Idea-Exchange/New-in-place-element-structure-option-class-property-node-read/idc-p/2515866#M24026

 

This example uses Devin Cook's operator-precedence grammar. As you can see, a string goes in and a number comes out. GOLD has an application to create, test, and verify grammars; this engine accepts only the .egt table format.


The above grammar has 15 symbols. I think of content tokens as leaves on an AST (abstract syntax tree) and nonterminals as branches that carry more leaves.
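The leaves-and-branches picture can be sketched as a plain data structure. This is a hypothetical Python analogue for illustration, not the engine's actual token class:

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    """One AST node: a terminal (leaf) or a nonterminal (branch)."""
    symbol: str                  # grammar symbol, e.g. "Integer" or "<Multi Exp>"
    text: str = ""               # lexeme text, meaningful for terminals
    children: list = field(default_factory=list)  # sub-tokens for nonterminals

    @property
    def is_terminal(self) -> bool:
        # A token with no children is a leaf (content token).
        return not self.children

# "11/2" parsed into a <Multi Exp> branch holding three leaves:
tree = Token("<Multi Exp>", children=[
    Token("Integer", "11"),
    Token("/", "/"),
    Token("Integer", "2"),
])
```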


 

Productions are a list of the ways each nonterminal can be composed; I like to think of them as a list of cases to handle.


It turns out that this makes a pretty simple recursive evaluation scheme  :thumbup1:

If the token is content, I check its type by getting the table index and using a typedef with all of the symbols listed. In the case of an integer, I get the value from the string and return it.

However, if the token is a NonTerminal, it can be handled by sending the tokens it contains into the recursive function and using the resulting values. You can see that the production:

<Multi Exp> ::= <Multi Exp> '/' <Negate Exp>

Contains 3 tokens.

I want to evaluate the value associated with the <Multi Exp>, so I get the sub-token at index 0, and <Negate Exp> at index 2.
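The recursive scheme described above can be sketched in a few lines. This is a hypothetical Python analogue of the idea, handling only binary infix productions, not the LabVIEW implementation:

```python
import operator

# Operator lexeme -> function, for binary productions like <X> ::= <L> op <R>.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(token):
    """A token is either a leaf ("Integer", "11")
    or a branch ("<Multi Exp>", [sub-tokens])."""
    symbol, payload = token
    if isinstance(payload, str):        # content token: convert the lexeme
        return float(payload)
    left = evaluate(payload[0])         # operand at index 0
    op = payload[1][1]                  # operator lexeme sits at index 1
    right = evaluate(payload[2])        # operand at index 2
    return OPS[op](left, right)

# <Multi Exp> ::= <Multi Exp> '/' <Negate Exp>, e.g. "8/2"
tree = ("<Multi Exp>", [("Integer", "8"), ("Op", "/"), ("Integer", "2")])
print(evaluate(tree))  # 4.0
```

The point mirrors the text: leaves convert their lexeme directly, while branches recurse into the sub-tokens at fixed production indices.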

 

You can find the Code here: https://decibel.ni.com/content/docs/DOC-41064

 

  • 3 months later...

Hi,

 

The general GOLD parser is cool, but performance-wise it is very slow for specific use cases.

 

One such case is math expressions like your example. For that you could use a specialized toolkit like Expression Parser, which is thousands of times faster. Example with your expression "11+3/23+(9*8)*x":

 

Loop with Parse + Evaluate: 124 us/iteration.

 

Typically you'd parse once and then evaluate multiple times with differing x-values:

 

Loop with Evaluate only: 305 ns/iteration.

 


 

Add to that the vastly larger feature list of Expression Parser (I don't know how that would affect the GOLD parser):

 

- More than 260 math functions and constants supported.
- Supports any number of variables of any name.
- Supports VI Registers.
- Reports overflow if that occurs during evaluation.
- Supports all 14 numeric data types that LabVIEW offers, including complex evaluation.
- Offers special expression control like conditionals, piecewise defined functions, pulse trains, and defining your own custom periodic functions.
- Supported on desktop and real-time.
 
Not to steal any thunder, but parsers aren't easy to make performant.
 
Cheers,
Steen
  • 3 months later...

In what way is it not fair?

UI elements are updated and polled asynchronously. Having control values read or written during the timing measurement will change the results, especially if you are reading or writing elements continually in a loop. Proper timing should read the control before taking the start time, and write to the indicator after taking the stop time. For bonus points, turn off automatic error handling and debugging.
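The same discipline applies outside LabVIEW: fetch the inputs before the start timestamp and emit the outputs after the stop timestamp, so I/O cost stays out of the measured interval. A minimal Python sketch of that pattern (a hypothetical helper, not part of either toolkit):

```python
import time

def timed(func, *args):
    """Measure func alone: the arguments are already materialized
    before the start stamp, and the result is only returned (for
    display elsewhere) after the stop stamp."""
    start = time.perf_counter()   # start AFTER inputs are ready
    result = func(*args)
    stop = time.perf_counter()    # stop BEFORE any display/output work
    return result, stop - start

result, elapsed = timed(sum, range(1000))
print(result, f"{elapsed * 1e6:.1f} us")
```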



 

Oh, that little thing :-) Yes, yes, I know all that. That is on purpose, because that is how I compare it to other solutions as well. Saying it isn't a fair measurement makes assumptions about what I want to measure; that was the reason for my question. In this case the only thing I want to illustrate is that Expression Parser is several orders of magnitude faster than the GOLD Parser for numeric parsing.

 

There is no way in plain LV to reliably measure the execution time of a single EP Eval, as it executes in a few nanoseconds. Thus I need a loop with many iterations of Eval, and then I divide by N. If I put the data I/O outside the loop, the LV compiler will optimize the loop away, since every iteration performs the same operation. The least intrusive way to stop this dead-code elimination is with asynchronous controls and indicators inside the loop. Yes, we spend a bit of time writing to and from their buffers, and several times per second a thread switch and a transfer to the front-panel value fields might happen, maybe even a stray redraw. But I can't come up with a variable data I/O solution that skews the timing less, and it suffices to demonstrate the performance characteristics of EP.
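Python's compiler won't do this kind of dead-code elimination, but the parse-once / evaluate-many split and the divide-by-N technique translate directly. A hypothetical sketch using compile/eval as a stand-in for the expression parser:

```python
import time

def parse(expr):
    # Stand-in for the expensive one-time parse step.
    return compile(expr, "<expr>", "eval")

def evaluate(code, x):
    # Stand-in for the cheap repeated evaluation step.
    return eval(code, {"x": x})

expr = "11 + 3/23 + (9*8)*x"
code = parse(expr)                 # parse once...

N = 100_000
start = time.perf_counter()
for i in range(N):                 # ...evaluate many times
    y = evaluate(code, i)          # varying x keeps every iteration "live"
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e9:.0f} ns/iteration")
```

The per-iteration figure is the total loop time divided by N, which averages away the timer's own granularity.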

 

/Steen


If I put data IO outside the loop the LV compiler will optimize the loop away since every iteration performs the same operation. 

Yup, that's why you use randomized data, and use that same randomized data in the other functions you are trying to compare against. At some point we are splitting hairs, but if we are seeing orders of magnitude of difference between the execution times of two methods, then I agree: updating the UI and thread swapping probably isn't adding that much uncertainty to our time measurement.


Off-topic:

You should use randomized data for a fair representation. Maybe the algorithm performs a lot better or a lot worse for certain values. Maybe the functionality posted in the OP performs a lot better for very large values.

 

I would benchmark with random data that represents the full input range that could be expected. Furthermore, I would put the for-loop around one instance of the function, then store the timings in an array and compute the mean, median, variance, and standard deviation, and maybe show a nice histogram :-)

 

What is also nice to do is change the input size for each iteration (2^0, 2^1, 2^2, 2^3, ... elements) and plot the results to determine how the execution time scales.
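A sketch of that methodology in Python, with a hypothetical stand-in workload and only the standard library:

```python
import random
import statistics
import time

def workload(data):
    # Hypothetical function under test; replace with the real one.
    return sum(data)

for k in range(8):
    n = 2 ** k                                       # sizes 1, 2, 4, ..., 128
    timings = []
    for _ in range(1000):
        data = [random.random() for _ in range(n)]   # fresh random input
        start = time.perf_counter()
        workload(data)
        timings.append(time.perf_counter() - start)
    print(f"n={n:4d}  mean={statistics.mean(timings):.3e}s  "
          f"median={statistics.median(timings):.3e}s  "
          f"stdev={statistics.stdev(timings):.3e}s")
```

Plotting the mean (or better, the median) against n then shows whether the scaling is linear, quadratic, or something else.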

 

 

You should use randomized data for a fair representation. Maybe the algorithm performs a lot better or a lot worse for certain values. Maybe the functionality posted in the OP performs a lot better for very large values.

 

Yes, if I were benchmarking the numeric evaluation. I'm not, I'm benchmarking all that comes before that final step, because that is what differs between the two compared solutions. LabVIEW prims handle the numeric evaluation in both cases - for multiplication for instance we're both handing the numeric arguments over to the LV Multiply primitive. That part of the evaluation should perform equally in both our implementations. I'm interested in minimizing the impact of the identical parts, and benchmark where our implementations differ.

 

Both random and exhaustive full-range data would be necessary if this were a scientific benchmark, not only for the numeric input but definitely also for the expression content itself. It could turn out that parsing and traversing the RPN tree account for very little of the formula evaluation time for most real-world formulas, but in my experience that is not the case. If you spend most of your time evaluating formulas that take seconds or minutes to evaluate, it might turn out that the parsing algorithm has no real impact on performance; you might spend almost all your time inside one LV prim anyway. That is rarely the case, but for special cases it might be.

 

 

Furthermore, I would put the for-loop around one instance of the function, then store the timings in an array and compute the mean, median, variance, and standard deviation, and maybe show a nice histogram :-)

 

1) Tell me how to accurately time slices of maybe 10 ns on a desktop machine?

2) You don't think that building and filling an array with 20 million timing values will impact the benchmark in any way? I also suspect it would take a while to do statistics on and display that data afterwards.

 

But sure, if the world depended on this being very accurate, we should do some of that. I can send you the Expression Parser code, though, and let's make a dollar bet on the outcome of your exhaustive investigation. My money is on your results being within 2% of what I've posted here :-)

 

/Steen

Just for clarity, I feel it's important to note that I agree with both of you, Brian and Wouter: my benchmark is somewhat skewed.

 

Doing an accurate benchmark is very, very hard at these small time slices; so many factors play a part from run to run. In my experience, the quick-and-dirty approach I used here comes within touching distance of a much more deterministic approach (for string-to-numeric parsing, at least). I've benchmarked exactly these operations for more than 20 years now (making calculator software), so I have a good feeling for when I'm seeing a trustworthy trend and when I need to do a more thorough benchmark. That wouldn't necessarily hold if I were to benchmark, say, HDMI data transfers; I have no experience with that.

 

You probably know this, but NI, for instance, has "the grid" that they deploy VIs and apps onto for benchmarking: a network of identical computers and software images that should run identical benchmarks, but which do not entirely. Enough runtime on the grid will illustrate the general direction of performance: did the investigated code change make performance go up or down, generally speaking?

 

/Steen

Edited by Steen Schmidt
