
Terrible Bug - While Loop Inside a Case Stmt


wwbrown


This is both true and untrue. Let's look at inplaceness. If we didn't optimize memory usage, LV would become unusable. Quite literally -- the "ideal" form of a data flow language is that every wire is its own independent allocation. By analyzing the flow, we can identify when memory can be reused.
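A rough Python sketch of the trade-off described above (LabVIEW's actual analysis is internal to the compiler; these function names are purely illustrative). The "ideal" dataflow form allocates a fresh buffer for every wire; inplaceness analysis proves no other node still reads the input and reuses its buffer for the output:

```python
# Out-of-place: every wire gets its own allocation, the "ideal" dataflow form.
def increment_copy(arr):
    return [x + 1 for x in arr]  # a brand-new list is allocated for the output wire

# In-place: the compiler has proven no downstream node reads `arr` anymore,
# so the output can reuse the input's allocation instead of copying it.
def increment_inplace(arr):
    for i in range(len(arr)):
        arr[i] += 1
    return arr  # same buffer, no copy
```

Both produce the same values; the second avoids the allocation, which is exactly why turning the optimization off wholesale is not an option.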

LV 6.0.1 was released, and LV 6.0.2 followed about two weeks later because we had a bug in inplaceness for bundle/unbundle nodes. But even if the bugs were dire, we wouldn't turn off inplaceness. We'd fix the bugs.

That's the part I don't understand about this thread. LabVIEW is a compiler. Every node you drop generates some amount of assembly code, just like every line of C code generates assembly code. An optimization bug is no different from a functionality bug. We redid the queues/notifiers in LV6.1 to be language prims instead of CIN nodes. There were a couple of deadlocks in the queues/notifiers in that first revision (fixed in LV7.0). But finding such a bug doesn't make everyone question the functionality of LabVIEW, just the intelligence of the fool who wrote the queue/notifier code. Finding the constant folding bug makes everyone panic. I find that odd.
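To make the constant folding discussion concrete: folding just means evaluating constant subexpressions at compile time instead of at run time. A toy folder over a tiny expression tree (the tuple encoding here is invented for illustration, not anything from LabVIEW's compiler):

```python
# Toy constant folder: expressions are ("op", left, right) tuples,
# leaves are ints (constants) or strings (variables).
def fold(expr):
    if isinstance(expr, tuple):
        op, left, right = expr
        left, right = fold(left), fold(right)       # fold children first
        if isinstance(left, int) and isinstance(right, int):
            return {"+": left + right, "*": left * right}[op]  # evaluate now
        return (op, left, right)                    # can't fold: variable involved
    return expr

# (2 * 3) + x  folds to  6 + x
print(fold(("+", ("*", 2, 3), "x")))  # -> ('+', 6, 'x')
```

A bug in `fold` corrupts the program just as surely as a bug in any ordinary node, which is the point being made: an optimization bug is a functionality bug.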

I guess my point is that any bug in LV is a functionality failure, and I'm not quite sure why the constant folding bug raises more concerns than any other bug. It needs to be fixed, sure, but obviously a whole lot of VIs work just fine in LV8.2, despite this bug, even VIs that have constants on their diagrams. LV8.2 would've been hard pressed to ship out the door otherwise.

Optimization of code is becoming a major issue for LV. We've coasted for a long time by being a highly parallel language and thereby staying ahead of C in performance in a lot of routines. But parallelism is less of an advantage as the processors become more parallel themselves and other compilers optimize out entire chunks of code. There are many optimization features working behind the scenes in the last couple of releases of LV. For example, everyone praises the 50x speed improvement in the LVVariants. Would you rather we didn't attempt that? It was entirely possible that we would get it wrong and variants wouldn't work correctly. It seems we got it right. But the push against constant folding smacks of "this is something that LV has done that was so risky I can't believe you exposed users to this!" That's overreacting, to me.

Well, I didn't say to turn off all optimizations, certainly not the ones that are already working fine, and the particular case with 6.0.1 was not about inplaceness as such. It was about a more aggressive inplaceness optimization that would completely optimize away bundle/unbundle constructs when combined with certain shift register constructions. The same code had worked fine for several years in previous LabVIEW versions without so much as a hint of performance problems, and suddenly blew up in my face.

The Queue port was also not such a nice thing, but I got off easy there since I didn't use queues much; I had gotten used to creating my own intelligent USR global buffer VIs for virtually anything that needed queue-like functionality.
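For readers unfamiliar with the pattern: an LV2-style (USR) global is a subVI whose while loop has an uninitialized shift register that retains state between calls, with a case structure selecting the action. A minimal Python analogue, using a closure in place of the shift register (the names and action strings are illustrative, not LabVIEW API):

```python
# Python sketch of an LV2-style global: the closed-over `state` dict plays
# the role of the uninitialized shift register, and the if/elif chain plays
# the role of the case structure selecting "init" / "write" / "read".
def make_lv2_global():
    state = {"data": None}          # survives between calls, like the shift register
    def vi(action, value=None):
        if action == "init":
            state["data"] = value   # "init" case
        elif action == "write":
            state["data"] = value   # "write" case
        elif action == "read":
            return state["data"]    # "read" case wires the output
    return vi
```

Every call site shares the same state, which is what makes the pattern usable as a lightweight queue or global buffer.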

But I think there is a big difference between bugs introduced through things like constant folding and bugs introduced in new functionality. I can avoid using queues or whatever quite easily, but I can hardly avoid using shift registers, loops, and basic data structures such as arrays or clusters, since they are the fundamental building blocks of working in LabVIEW. So if something in that basic functionality suddenly breaks, that LabVIEW version is simply not usable for me. The same goes for fundamental editor functionality. Just imagine that dropping any function node on the diagram suddenly crashed on every fourth installed computer somehow.

Other bugs can be very annoying, but you can still keep working in that LabVIEW version and write impressive applications with it if you need to. While we would all like bug-free software, I think almost everyone has accepted that this will never really happen before LabVIEW 77, with its 5th generation AI and environment interfaces with causality influencer. But the basic functionality of LabVIEW 2 should not suddenly break.

Rolf Kalbermatter


QUOTE(Aristos Queue @ Jan 15 2007, 10:36 AM)

Can't find that CAR number... but I did find one with a very similar CAR number that includes a link to this post. That CAR is 45E85U1Y.

I think we've got enough to identify what's up, so no need to post any further code.

I cannot find 45E85U1Y in the LabVIEW 8.2.1 release notes. Was it fixed?

W. Brown


QUOTE(Louis Manfredi @ Jan 17 2007, 11:23 AM)

Sorry for the late reply.

My LV2-style globals do not typically generate an output for their "init" cases or (sometimes) their "put data" cases. Does this mean that they would not be executed under LabVIEW 8.x because those particular subVI instances have no outputs wired?

QUOTE(Louis Manfredi @ Jan 17 2007, 11:23 AM)

But I can't recommend they buy 8.2 if I don't use it myself, and I'm not sure there's any reasonable way to get them a copy of 7.1.

Are you still trying to do this? We were told by NI that it was OK to purchase a copy of LV8.20 but then install LV7.1 from our existing CDs. I think we may have used the LV8.20 serial number for installing LV7.1, but as I wasn't the one who did the install, I'm not certain of that.

Gary
