
Another Event Structure Bug?



I may have found another bug in the way the Event Structure handles new vs. old data inside the structure.

I configured an Event case to fire on both the "Mouse Up" and "Mouse Down" events of a boolean with the mechanical action set to "Switch Until Released". I did this to toggle the state of a digital line as long as the user holds the button down.

The problem is that sometimes the value of the button on the block diagram is not what it is on the front panel. The attached VI shows this behavior in a very simple manner.

The button is wired directly to the LED indicator inside the Mouse Up/Mouse Down event case, so you would think that the LED would follow what the button is doing. Most of the time it does, but not always. If you turn on execution highlighting, you can see a False being generated from the button terminal when it should be a True. How fast or slow you click makes no difference. I've tried every combination of locking the front panel between the two events, and it makes no difference. I've tried putting the LED outside the Event Structure, and that makes no difference.

I get the same problem in 7.0, and I do have the 7.1.1 update installed.

Using "Value Change" seems to work correctly, so that's the way I'm going for now.

Am I seeing things or is this just wrong?

Ed

Download File: post-47-1109701834.vi


I've submitted this to NI along with Michael's property node workaround. Interestingly, reading the value from a local variable shows the same problem as reading it from the button.

I'll post any replies from NI.

Ed

UPDATE:

A quick response from NI on this one.

Ed,

This is definitely not expected behavior.  I have filed a report to R&D about this issue.  I'm sure they will look into changing the functionality for later versions of LabVIEW.


I recently encountered this, and while it can be annoying, it is not a bug. Let me explain.

When you get a mouse down or mouse up event on an object, you are truly getting the same event that LabVIEW itself gets.

For example, in the case of a switch-when-pressed boolean, the mouse down causes the button to change value. That is, when LabVIEW internally gets the mouse down event over this boolean, it internally says, "hey, I need to change the value of that boolean in memory, change its front panel appearance, etc."

At the same time this is happening, your event structure looking for mouse down events fires. The first thing you do in the event frame is read from the terminal / local / property node associated with the mouse down control.

There is a genuine race condition between the terminal on the block diagram being read, and LabVIEW internally updating the value of the control. As with any race condition, the frequency of this could vary from system to system, but in my experience LabVIEW is usually able to process the actual change in value before the terminal is read in your event structure, but this is not guaranteed.
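
Since a LabVIEW diagram can't be pasted as text, here is a rough Python analogy of that race (hypothetical names; a sketch of the concept, not how LabVIEW is actually implemented). One thread plays the UI processing the click, the other plays the event frame reading the terminal:

```python
import random
import threading
import time

button_value = False             # the control's value in memory
mouse_down = threading.Event()   # stands in for the event structure's queue

def ui_thread():
    """Plays LabVIEW's UI thread handling a click."""
    mouse_down.set()                       # notify the event structure first...
    time.sleep(random.uniform(0, 0.0005))  # ...scheduling jitter...
    global button_value
    button_value = True                    # ...then update the control's value

def event_frame():
    """Plays the Mouse Down event case reading the terminal."""
    mouse_down.wait()
    time.sleep(random.uniform(0, 0.0005))  # more scheduling jitter
    print("terminal read:", button_value)  # sometimes False, sometimes True

for _ in range(10):
    button_value = False
    mouse_down.clear()
    t = threading.Thread(target=ui_thread)
    t.start()
    event_frame()
    t.join()
```

Run it a few times: whichever thread the scheduler favors "wins", which is exactly the coin flip described above.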

With the value change event you are guaranteed to get the most recent value the control changed to.

Further, if you use the Mouse Down? filter event, you are guaranteed to always get the old value, because your event frame is processed completely before LabVIEW internally gets a chance to respond to the mouse down.

I do not know the internals of exactly how events are handled in LabVIEW, but I suspect "fixing" this behavior could cause other undesirable behavior. This may not help solve the problem you were trying to accomplish, but I hope it at least provides some clarification.


Hi All:

Jeff's description of how the effect occurs seems very plausible to me.

I'm afraid I'm not at all comfortable with classifying this behavior as a non-bug, however.

My mental image of LabVIEW is that anything (like a button) that can be visualized as a direct analog to a physical piece of electrical hardware should behave more or less identically to an idealized version of that hardware. So, when I push a hardware button, it clicks, and for all intents and purposes the circuit closes at that same instant.

No matter what electronic circuits I use to observe the state of the button, or its transition from one state to another, the state of the contacts on the button should be simultaneous with the pushing of the button. And if a transition from not-pushed to pushed occurs, any observation of the switch contacts triggered by that transition should correctly indicate that the transition has, in fact, occurred -- that the button is now in the pushed state.

To my thinking, once the LabVIEW kernel process (or whatever) starts to process the mouse-down event, it should inhibit all interrupts until both the event firing and the new state of the button are set to the correct values.

I'm getting perhaps a little too philosophical, and I don't want to get into the angels-on-the-head-of-a-pin thing, but it seems to me that once you accept unintuitive and undocumented behavior of LabVIEW objects, the language quickly begins to lose value. Isn't one of the wonderful things about a virtual instrument that you usually don't have to build debounce circuits for every switch on the panel?

Hope I'm not being too nitpicky. :blink: Very curious to hear others' thoughts on this. :question:

Best Regards, Louis

  • 1 month later...

Sorry I'm a little late to the party on this one - just saw the link on info-labview and thought I'd respond to clear some things up here.

1) Just because NI support says something isn't intended behavior (aka "a bug") doesn't mean it's so. For some of the more advanced features, they may lack a full understanding of the intricacies of the feature's behavior. This is the case here. I don't mean to criticize or demean our support folks; it's just that no one at NI other than the developer who wrote the code *REALLY* knows what the intended behavior is.

2) Jeff's description is correct. Hopefully I can clear this up a bit here, and explain why this is something that we can't eliminate in LabVIEW...

When LabVIEW notifies an event structure of a "mouse up", "mouse down", or "mouse move" event, it is doing so before it has completely processed the event. The code to generate the events lies somewhere between where we get the message from the operating system and where we hand it to the specific control that was hit for it to process and respond appropriately. In this case, that means that the "Mouse Down" and "Mouse Up" events are given to the diagram before the control actually updates its value as a result of the user action. This is what introduces the race condition.

Because you are watching for "notify" events, the LabVIEW UI thread (which is where all user interaction is processed) quickly notifies the event structures that they have an event in their queue, and then continues processing the action as normal (in this case, messaging the boolean that was hit and having the boolean update its value). Depending on what thread the block diagram is running in, it is a crapshoot as to whether the read from the boolean's terminal will happen before or after the boolean responds to the mouse message and updates its value.

The reason Michael's fix with the property node works is that all property nodes switch to the UI thread, thus synchronizing the execution of the block diagram with whatever's going on in the UI thread (oftentimes forcing the diagram to run entirely in the UI thread). In this particular example, that means the current message (mouse down or up) will be fully processed before the value is read via the property node. Reads from both locals and FP terminals happen in whatever thread the diagram is executing in.
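
To make that concrete, here's another rough Python analogy (hypothetical names and structure; a sketch of the serialization idea, not LabVIEW's actual implementation). Everything submitted to the "UI thread" runs one job at a time, so a read marshaled onto that thread can never interleave with a half-processed mouse message:

```python
import queue
import threading

ui_jobs = queue.Queue()          # everything on this queue runs in the UI thread
button_value = False             # the control's value in memory
mouse_down = threading.Event()   # stands in for the event structure's queue

def ui_thread():
    """Runs one job at a time: a mouse message is fully processed
    before any later job (like a marshaled read) can run."""
    while True:
        job = ui_jobs.get()
        if job is None:
            break
        job()

def handle_mouse_down():
    """The UI thread's handling of a click: notify first, update after."""
    mouse_down.set()             # notify the event structure...
    global button_value
    button_value = True          # ...then update the control's value

def property_node_read():
    """A 'property node' style read: marshal a job onto the UI
    thread and wait for it to complete."""
    result, done = [], threading.Event()
    def read():
        result.append(button_value)
        done.set()
    ui_jobs.put(read)
    done.wait()
    return result[0]

ui = threading.Thread(target=ui_thread)
ui.start()
ui_jobs.put(handle_mouse_down)   # the click arrives in the UI thread
mouse_down.wait()                # the event case wakes up...
print(property_node_read())      # ...and always reads True
ui_jobs.put(None)
ui.join()
```

Reading button_value directly here (the terminal/local case) would race exactly as in the earlier sketch; routing the read through the UI job queue is what removes the race.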

So... race conditions are inherently bad. Why don't the LabVIEW developers eliminate the possibility in this case? Well, for one, users like multi-threading. We need to allow the UI and block diagram execution to happen in separate threads; otherwise you won't get the performance you want from your block diagram (as execution would have to pause every time the user moves the mouse or hits a key). Event-driven programming introduced some interesting problems, as one of the fundamental reasons for adding it was to be able to synchronize the front panel (UI) and block diagram. While allowing for this, we still want the UI to be responsive while the block diagram is executing (and vice versa). We could halt the processing of UI messages while the diagram is handling events, or halt diagram execution while LabVIEW processes interactions with the UI, but both would have a negative impact on performance. This makes eliminating race conditions quite difficult, as it takes away all of our "easy" solutions.

In response to Louis' point:

My mental image of LabVIEW is that anything (like a button) that can be visualized as a direct analog to a physical piece of electrical hardware should behave more or less identically to an idealized version of that hardware. So, when I push a hardware button, it clicks, and for all intents and purposes the circuit closes at that same instant.

No matter what electronic circuits I use to observe the state of the button, or its transition from one state to another, the state of the contacts on the button should be simultaneous with the pushing of the button. And if a transition from not-pushed to pushed occurs, any observation of the switch contacts triggered by that transition should correctly indicate that the transition has, in fact, occurred -- that the button is now in the pushed state.

This is a great analogy, except that there is no resource contention or multi-threading going on in the real world. Unfortunately, electrons travel much faster than computer processors execute lines of code. Imagine a physical system where the button push itself doesn't connect the circuit, but rather initiates some sequence that takes a measurable amount of time to finish and that ends up connecting the circuit. In this scenario, it is completely possible (well, theoretically) to push the button, and after pushing it, read the state of the circuit and see that it is still "open". In software, the click on the button does not simultaneously change its value like it does in the real world. And even in the real world, there is a delay between the closure of the circuit and the flow of current through the entire wire. While very small, there is a window where a current or voltage reading would show the circuit as being "open" rather than closed, after the button has been pressed. (I'm not a physicist, I only took 12 hours of physics in college, so I could be wrong here, and if so, please don't berate me too badly.)

To my thinking, once the LabVIEW kernel process (or whatever) starts to process the mouse-down event, it should inhibit all interrupts until both the event firing and the new state of the button are set to the correct values.

This would mean that we halt block diagram execution while the UI thread is processing ANY message from the operating system. Clearly, many users would be up in arms about this. Even if we could restrict this behavior to cases where the event structure is involved and the OS message is specifically related to a user interaction, it would introduce a non-determinism that, in my opinion, would be unacceptable.

We *could* delay the generation of events until the specific OS message is completely handled by all LV objects, but that would mean that you would get "Mouse Down" events after the value has already been updated. And what of filter events, where the diagram must take part in the event handling before the default LV actions are taken? Those events must be handed off to the diagram before the default LV handling occurs, so we clearly can't defer them until later to get the behavior you describe. If we generate filter events in their current locations, but notify events (which don't synchronize the front panel and block diagram) after the default LV handling has occurred, then the diagram will be handed events out of order. For example, if the diagram were watching for Mouse Down?, Mouse Down, Value Change? (<- a filter event for this does not exist, but it has been discussed since we first started considering adding event-driven programming), and Value Change, and the user clicked on the boolean, the order of event generation would be: Mouse Down? -> Value Change? -> Mouse Down -> Value Change. This does not make sense in the context of how we've defined the order of events in LabVIEW. I suppose we could change that, but I don't think this ordering makes sense in any context.
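
A little Python sketch of the two dispatch strategies (purely illustrative; Value Change? is the hypothetical filter event mentioned above):

```python
def current_dispatch(log):
    """Today's scheme: each event fires at its natural point in the pipeline."""
    log.append("Mouse Down?")    # filter event, before any default handling
    log.append("Mouse Down")     # notify event, also generated before the update
    log.append("Value Change?")  # hypothetical filter event, before the value changes
    # ... default handling: the boolean's value is updated here ...
    log.append("Value Change")   # notify event, after the update

def deferred_notify_dispatch(log):
    """Proposed scheme: notify events deferred until default handling is done."""
    log.append("Mouse Down?")    # filter events still must precede default handling
    log.append("Value Change?")
    # ... default handling: the boolean's value is updated here ...
    log.append("Mouse Down")     # deferred notify events now arrive out of order
    log.append("Value Change")

for dispatch in (current_dispatch, deferred_notify_dispatch):
    log = []
    dispatch(log)
    print(" -> ".join(log))
```

The second line of output is the Mouse Down? -> Value Change? -> Mouse Down -> Value Change ordering described above, with the two mouse events split apart by the value change.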

I think a good point is made here, however: users of LabVIEW should not have to worry about race conditions, and things should simply work as expected. Users should not have to accept "unexpected" behavior simply as a result of some weird implementation detail in the architecture of the LabVIEW code base. I am a big proponent of this philosophy, and I have made my job more difficult by being unwilling to simply say "customers will learn to work around it" solely because it's difficult to implement the "natural" solution. However, we do need to recognize that the LabVIEW code base is almost 20 years old, and it most definitely was not architected to allow users' block diagrams to participate in event handling. While it may be possible to rewrite LabVIEW and architect it in such a way that the event-driven programming interface we expose would work "as expected" in all cases, it most certainly would break other things in the language. We're rewriting components of LabVIEW as they become brittle, but we most definitely don't want to do the whole thing at once.

I think that the event-driven programming interface we've provided does behave as documented, and the behavior we've defined is not unreasonable. It *is* an advanced feature, and as with any language, if you are participating in the event handling, you have to be very careful about what you do in response to those events. There are rules you must follow, and you should be aware of what exactly the event you're handling *means*. We've gone to great lengths to document the event-driven programming features thoroughly, as well as give many presentations on the subject, as we realize it is a difficult feature to grasp and to use effectively. However, it's very possible that we haven't been clear enough, or that some behaviors aren't completely documented. If you find that's the case, please file a bug report.

Hopefully this post is coherent, and possibly clears up why this is "expected" behavior. I "wrote" a reply in my head this morning, but got interrupted before I was able to type it out, so I lost a bit of my train of thought. I hope I did not come across as being defensive - I do not mean to be, and I want you all to continue to raise questions on this and challenge us to do things "right" and make programming as easy as possible for all of you. I just want to peel back the covers a bit and help y'all understand the mechanism going on and the design decisions that have been made so that more of these behaviors make sense, and hopefully you can help "share the wealth", so to speak.

J

  • 2 months later...

I'm even later to the party, but I want to thank Jason for taking the time to not just write about the "bug" but to provide some very good information about the background and philosophy behind it. :thumbup:

I can think of a software company or two that could learn from this kind of responsiveness.

Thanks!

Barrie

