
Terrible Bug - While Loop Inside a Case Stmt


wwbrown


I may have seen this somewhere before, but the attached demonstrates a terrible bug. The indicator should update to "Ignore Notification". Remove the outer case structure or enable debugging and the VI suddenly works! I will submit this to NI shortly.

The VI also works correctly if you popup on the 2 that is wired to the Index Array and select "Change to Control".

The bug seems to be something in the constant folding. I'll let JeffK know... did you already file the bug with NI?

The VI also works correctly if you popup on the 2 that is wired to the Index Array and select "Change to Control".

The bug seems to be something in the constant folding. I'll let JeffK know... did you already file the bug with NI?

Best feature of LV9: no more constant folding! More reliable programs without any practical loss of speed, and statically linking in VIs becomes more consistent!

:thumbup:

Joris

Best feature of LV9: no more constant folding! More reliable programs without any practical loss of speed, and statically linking in VIs becomes more consistent!

Are you being obnoxious on purpose or by accident? Your statement seems like deliberate flamebait.

[LATER EDIT] Sorry, Joris. My original reply was uncalled-for and has now been edited. But your comment really doesn't make sense. What's the difference between constant folding and any other compiler optimization?

Are you being obnoxious on purpose or by accident? Your statement is either deliberate flamebait or a really stupid thing to say, for a long list of reasons.

I am hoping it is a cultural difference, and that Joris's English interpretation skills are a little different from mine. I think it's a case of text-based messaging not conveying the emotion in which the text was intended.

Are you being obnoxious on purpose or by accident? Your statement seems like deliberate flamebait.

[LATER EDIT] Sorry, Joris. My original reply was uncalled-for and has now been edited. But your comment really doesn't make sense. What's the difference between constant folding and any other compiler optimization?

Don't worry, I did not see it ;)

I meant it as an ironic marketing line. Marketing departments usually introduce new features as better, faster, more reliable, and usually that goes for something that was "added"; you never hear them mention the negative consequences of the addition. So I turned everything around and made an improvement out of a removal, just to make a point. It seems it hit harder than intended.

If you think about the folding optimizations, they hardly give any improvement, because a reasonable programmer will always think about the program he's writing. If the programmer places a loop that does nothing, he just didn't think enough; the same goes when he could better have placed a calculation outside a loop. Many inefficiencies cannot be detected by an optimizing compiler, because they are caused by an inefficient structure of the program. There's always a bottleneck in any program, and no matter how well the compiler optimizes, it cannot improve the design. The problem is usually with the programmer, not with the compiler, so improving the user may give much larger speed gains. This may sound like a stupid statement, but it is possible to teach a programmer better ways of programming.

More optimizations always require more code, and more code will introduce more bugs. I really cannot understand why NI decided to do this. It is dangerous and there's no real gain. It only undermines stability, reliability and clarity, and those are important factors in the environments where LV is typically used. LV is not an office application.

To come back to your question: I think these folding optimizations are different from a loop-speed optimization, because loop speed is a hard limit. The program cannot loop any faster than the LV-generated code allows, so generating better looping code is a real gain. But all the gains of folding optimizations can also be achieved by the programmer writing his program slightly more efficiently. That's why I consider them different.
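[Editor's note] Joris's point that the programmer can do the folding himself can be sketched outside LabVIEW. A minimal Python analogy of hand-hoisting a loop-invariant expression (function names and values are illustrative, not from the thread):

```python
def scale_naive(samples, gain):
    # Recomputes the invariant expression gain / 100.0 on every
    # iteration; this is the kind of waste an optimizer might remove.
    return [s * (gain / 100.0) for s in samples]

def scale_hoisted(samples, gain):
    # The programmer folds the invariant once, up front; no compiler
    # optimization is needed to reach the efficient form.
    factor = gain / 100.0
    return [s * factor for s in samples]

data = [1.0, 2.0, 3.0]
assert scale_naive(data, 50.0) == scale_hoisted(data, 50.0) == [0.5, 1.0, 1.5]
```

Both forms compute the same result; the difference is only who does the hoisting, the programmer or the compiler.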

Joris

If you think about the folding optimizations, they hardly give any improvement because a reasonable programmer will always think about the program he's writing.

I don't agree with you robijn. Twenty years ago this was the case, as compilers were so inefficient that programmers were forced to do all the optimization themselves. Today most mainstream compilers are pretty good at producing efficient code, so it's better to write code that is easier to read; this easy-to-read code will compile to efficient code anyhow. Computing constant expressions at compile time is one example of this modern way of programming; instead of using the constant value 12.56 you should write pi*(2^2), as it makes more sense when reading the code. The following two constant expressions both represent an array of the integers x^2 with x ranging from 0 to 100. The diagram above is much easier to read than the constant below.

post-4014-1168802435.png?width=400
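[Editor's note] Text-based languages make the same trade. As a hedged illustration (CPython 3.x behavior, used only as an analogy to LabVIEW's constant folding), CPython folds constant expressions at compile time, so the readable form costs nothing at run time:

```python
# Compile the readable constant expression; CPython's optimizer folds
# 3.14159 * (2 ** 2) into a single constant at compile time.
code = compile("3.14159 * (2 ** 2)", "<example>", "eval")

# On CPython 3.8+ the folded product appears directly in the code
# object's constants tuple, rather than being computed at run time.
print(code.co_consts)
```

Either way, evaluating the code object yields 12.56636, the same value the programmer would otherwise have to hard-code.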

I don't agree with you robijn. Twenty years ago this was the case, as compilers were so inefficient that programmers were forced to do all the optimization themselves. Today most mainstream compilers are pretty good at producing efficient code, so it's better to write code that is easier to read; this easy-to-read code will compile to efficient code anyhow. Computing constant expressions at compile time is one example of this modern way of programming; instead of using the constant value 12.56 you should write pi*(2^2), as it makes more sense when reading the code. The following two constant expressions both represent an array of the integers x^2 with x ranging from 0 to 100. The diagram above is much easier to read than the constant below.

post-4014-1168802435.png?width=400

Yes, for these examples you are certainly right. But then, how far should the optimization go? Because optimization creates a risk, as we've all seen...

Joris

The CAR is 45E8091Y. I have the original before and after code leading to this problem. NI should let me know if they need the original code to resolve this issue.

Can't find that CAR number... but I did find one with a very similar CAR number that includes a link to this post. That CAR is 45E85U1Y.

I think we've got enough to identify what's up, so no need to post any further code.

I don't agree with you robijn. Twenty years ago this was the case, as compilers were so inefficient that programmers were forced to do all the optimization themselves. Today most mainstream compilers are pretty good at producing efficient code, so it's better to write code that is easier to read; this easy-to-read code will compile to efficient code anyhow. Computing constant expressions at compile time is one example of this modern way of programming; instead of using the constant value 12.56 you should write pi*(2^2), as it makes more sense when reading the code. The following two constant expressions both represent an array of the integers x^2 with x ranging from 0 to 100. The diagram above is much easier to read than the constant below.

post-4014-1168802435.png?width=400

There is a little problem with this optimization. As long as it sometimes helps and NEVER creates wrong results, I don't care. But if I create a VI that does something specific and logical and the result that comes out is simply completely off track, I get very pissed. This happened with certain shift-register optimizations in the obnoxious 6.0.1 version and in other versions before and after, and this whole constant folding has again caused quite a bit of trouble.

The difficulty simply is: you do not expect LabVIEW to calculate 1 + 1 = 3, and when you get such a result you sometimes search for hours, questioning your sanity, before you throw in the towel and decide that it really is a stupid LabVIEW bug. I can live with LabVIEW editor bugs, or with new features that don't always work correctly, but I certainly don't accept LabVIEW creating completely wrong code from diagrams that have worked for several versions before. As such I do not want constant folding unless I can rely on it not to cause the compiler to create wrong results. If I need optimization I can think about the algorithm myself and find a variant that is quite likely just as fast as, or even better than, what LabVIEW could possibly come up with from a different, suboptimal algorithm.

My stance here has been and always will be: I'd rather have suboptimal and possibly even slow generated code that produces correct calculations than hyper-fast code that calculates into the mist.

The only exception to this might be if the miscalculation would be to my advantage on my bank account :rolleyes:

Rolf Kalbermatter

Bad programmers and bad software architecture create the risk, not optimization... :rolleyes:

But in this case the bad programmer is not the one USING LabVIEW.

I know how hard optimization is, but I would still rather have a choice in this than have to start doubting LabVIEW itself every time a result does not match my expectations. And to be honest, I have that choice, by still using LabVIEW 7.1.1 for basically all of my real work.

Rolf Kalbermatter


Well put Rolf :thumbup:

I agree completely.

/BeginRant

An example: I use Read Line from File with nothing wired to the output to skip a line in the file. I've used the technique for years.

LabVIEW 8.0 optimized a previously working program for me by skipping the read (and not updating the file pointer) since nothing was hooked to its output.
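[Editor's note] The failure mode described here generalizes: a read whose result is discarded still has the side effect of advancing the file position, so it is never safe to eliminate. A minimal Python illustration of the same principle (not LabVIEW code; the file contents are made up):

```python
import io

f = io.StringIO("header line\ndata line\n")

# The result is deliberately discarded; we only want the side effect
# of moving the file pointer past the header.
f.readline()

# If a compiler "optimized away" the unused read above, this next read
# would wrongly return the header instead of the data.
assert f.readline() == "data line\n"
```

This is why optimizers must treat operations with side effects as live even when their outputs are unused.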

That, and other problems led me to a painful process of reverting to 7.1.

So far, this thread, and other traffic in this forum, hasn't motivated me to try version 8.2. My maintenance contract is up for renewal soon, and I'm not sure what I'm going to do. I haven't profited from any of the upgrades that I paid for in this year's contract; it seems silly to pay again if I'm going to stick with 7.1 for the indefinite future.

A client has a level of work that might justify them buying their own copy of LabVIEW so they can shift some of the work from me to their in-house staff. But I can't recommend they buy 8.2 if I don't use it myself, and I'm not sure there's any reasonable way to get them a copy of 7.1... Again, I'm not sure what I'm going to do.

I'd like to use some of the neat new features of 8.2, but none are essential to my clients' needs, which are adequately served by 7.1. Whatever high-minded rhetoric we can come up with about real programmers always working with the latest version of software, I make my living by serving my clients' needs adequately, not by making them unwitting beta sites for NI.

/EndRant

At any rate thanks Rolf, for stating the issue clearly. I'd be happy to read as compelling an argument for the other side, but I doubt there is one.

Best Regards, Louis

I know how hard optimization is, but I would still rather have a choice in this than have to start doubting LabVIEW itself every time a result does not match my expectations. And to be honest, I have that choice, by still using LabVIEW 7.1.1 for basically all of my real work.

I agree. And I would also rather see no optimizations whatsoever than miscomputing optimizations.

I agree. And I would also rather see no optimizations whatsoever than miscomputing optimizations.

This is both true and untrue. Let's look at inplaceness. If we didn't optimize memory usage, LV would become unusable. Quite literally: the "ideal" form of a dataflow language is that every wire is its own independent allocation. By analyzing the flow, we can identify when memory can be reused.
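[Editor's note] A rough sketch of what inplaceness buys, under the "every wire is its own allocation" model just described (Python used only as an analogy; function names are illustrative):

```python
def increment_copy(arr):
    # Ideal dataflow semantics: every output is a fresh allocation.
    return [x + 1 for x in arr]

def increment_inplace(arr):
    # What inplaceness analysis permits, when the source value is no
    # longer needed downstream: reuse the existing buffer.
    for i in range(len(arr)):
        arr[i] += 1
    return arr

a = [1, 2, 3]
b = increment_copy(a)       # a is untouched; b is a new list
c = increment_inplace(a)    # a itself is modified; no new allocation
assert b == [2, 3, 4] and c is a and a == [2, 3, 4]
```

Both produce the same values; the in-place form avoids the allocation, which is exactly the saving that makes large LabVIEW arrays tractable.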

LV 6.0.1 was released. LV 6.0.2 was released about two weeks later because we had a bug in inplaceness for bundle/unbundle nodes. But even if the bugs were dire, we wouldn't turn off inplaceness. We'd fix the bugs.

That's the part I don't understand about this thread. LabVIEW is a compiler. Every node you drop generates some amount of assembly code, just like every line of C code generates assembly code. An optimization bug is no different from a functionality bug. We redid the queues/notifiers in LV6.1 to be language prims instead of CIN nodes. There were a couple of deadlocks in the queues/notifiers in that first revision (fixed in LV7.0). But finding such a bug doesn't make everyone question the functionality of LabVIEW, just the intelligence of the fool who wrote the queue/notifier code. Finding the constant folding bug makes everyone panic. I find that odd.

I guess my point is that any bug in LV is a functionality failure, and I'm not quite sure why the constant folding bug raises more concerns than any other bug. It needs to be fixed, sure, but obviously a whole lot of VIs work just fine in LV8.2, despite this bug, even VIs that have constants on their diagrams. LV8.2 would've been hard pressed to ship out the door otherwise.

Optimization of code is becoming a major issue for LV. We've coasted for a long time by being a highly parallel language, thereby staying ahead of C in performance in a lot of routines. But parallelism is less of an advantage as the processors become more parallel themselves and other compilers optimize out entire chunks of code. There are many optimization features behind the scenes in the last couple of releases of LV. For example, everyone praises the 50x speed improvement in the LVVariants. Would you rather we hadn't attempted that? It was entirely possible that we would get it wrong and variants wouldn't work correctly; it seems we got it right. But the push against constant folding smacks of "this is something that LV has done that was so risky I can't believe you exposed users to this!" That's overreacting, to me.

I guess my point is that any bug in LV is a functionality failure, and I'm not quite sure why the constant folding bug raises more concerns than any other bug.

The reason is that we developers could avoid using LVClasses if we knew they were buggy. We could avoid using notifiers if we knew they were buggy. But we cannot avoid optimization bugs if a specific buggy optimization cannot be turned off.

In many other programming languages, compilers are highly configurable, so developers can define which optimizations to use. In LabVIEW we only have debugging on/off, and even that is not a project-specific option but a VI-specific option.

Many mainstream compilers have separate development and stable versions. A development version may be less stable but have more features; a stable version is very stable. NI doesn't market any version of LabVIEW as unstable and another as stable; rather, NI gives a kind of implicit promise that LabVIEW is a stable compiler regardless of the version. This, however, is not true. Because LabVIEW doesn't distinguish between stable and unstable versions, some features may start getting stable, but then new features arrive that make LabVIEW unstable again. An optimization bug is one such thing that can make an otherwise very stable compiler be regarded as unstable. So even if LabVIEW 8.20 would otherwise be considered a stable compiler, forcing everybody to use the new optimization feature makes it unstable again.

What I suggest is that every new feature that cannot be avoided by simply not using it should be possible to switch off in the LabVIEW options. This would allow developers to choose between a stable LabVIEW and a feature-rich LabVIEW. By new features I mean features that carry a risk of being unstable, or with which there is no experience yet.

What I suggest is that every new feature that cannot be avoided by simply not using it should be possible to switch off in the LabVIEW options. This would allow developers to choose between a stable LabVIEW and a feature-rich LabVIEW.

I've got to agree with Stephen on this one - including different switches for optimisations is waaaay too much work, and for little gain across the board. Where does it end? Do you want to be able to toggle optimisation at the top level? The VI level? What about individual optimisations for each VI?

IMHO, if you don't want the "instability" of a new version of LabVIEW, then stick with one that you're happy with. Optimisation between versions is at more than just compiler-level - if 8 or above doesn't do it for you, then stick with 7...

IMHO, if you don't want the "instability" of a new version of LabVIEW, then stick with one that you're happy with. Optimisation between versions is at more than just compiler-level - if 8 or above doesn't do it for you, then stick with 7...

What I was trying to say is that if new general features affecting every VI are introduced in every release, then there will never be a stable version of LabVIEW unless these general features can be turned off. And Aristos was saying between the lines that optimization issues will be addressed in future versions of LabVIEW.


Addressing Rolf's concerns...

It would need to be a setting on a per-VI basis, since you'd want the setting saved with the VI, so that if it was distributed to another machine, that machine wouldn't turn the optimization back on and break the VI. We could handle it a lot like the alignment grid settings: the Tools>>Options setting applies to new VIs, and then each individual VI records the grid size it was constructed with, so that on different machines the VI keeps its grid.

It's not a bad suggestion. I'll pass it through.


Stephen,

I hope you now understand my "obnoxious" remark. I think many, many programmers got irritated by the optimization problems. As a professional I can hardly explain to my customers that for most applications we still prefer 7.1.1 (and not only for stability reasons). All I would suggest is to take less risk and keep basic things as they are. What you need as a professional programmer is a good basis; all the rest you can add yourself with helper VIs, tools, etc. With a good basis we can sell LV programs that give neither the customer nor us any headaches.

Joris


Hello Constant folding bug hunters,

There is an old post saying that turning off the constant folding options doesn't solve the problem... At the company I work for (CIT Engineering) we are preparing an introduction for all our developers/students to move to 8.20. I would like to hear what is best to do before there is a bugfix (8.2.1/8.3): should we turn the options off or not? I'm a bit lost here.

FYI: Speed/memory optimisation is not an issue on the king of all desktop processors, an Intel Core Duo... they know how to design a processor in Israel... impressive to see LV8.20 start in less than a second. :wub: OK, embedded targets are probably better off with the optimisations, but only 10% of our projects run on such a platform, and there the application size is limited.

Donald

