Everything posted by JasonKing

  1. A general note about insane object errors: To avoid saving corrupt VIs, we have sanity-checking code that makes sure all fields of an object are within a valid range (obviously this is a bit of a simplification, but being more specific is unnecessary). We run the sanity-checking code before we save a VI (and perhaps when we re-compile, but I am not certain). Ideally the sanity-checking code will report the error and also correct the field or reset it to a safe value so the VI is not corrupt, but it is not always possible to safely "guess" a valid value for each field. This code both helps us detect errors in our own code and protects users (read: you) from unknowingly saving off VIs with corrupt objects. As suggested above, the best course of action is to delete the insane object and start over. It's unfortunate, but it's a lot better than having your whole VI hosed. While I won't necessarily recommend or condone using internal tools (obviously they ship with and can be turned on in the release version of LabVIEW), what George suggested will help in identifying which object is the offender. J
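A generic sketch of that check-and-repair idea, in Python (the fields, ranges, and fallback values here are invented for illustration; this is not LabVIEW's actual sanity-checking code):

```python
# Each field gets a valid range and, where one can safely be guessed,
# a fallback value used to repair an insane object instead of saving it.
RULES = {
    "font_size": (6, 96, 12),           # (min, max, safe fallback)
    "rotation":  (0, 359, 0),
    "pane_id":   (0, 2**31 - 1, None),  # no safe guess: report only
}

def sanity_check(obj: dict) -> list[str]:
    errors = []
    for field, (lo, hi, fallback) in RULES.items():
        v = obj.get(field)
        if v is None or not (lo <= v <= hi):
            errors.append(f"insane value for {field}: {v!r}")
            if fallback is not None:
                obj[field] = fallback   # repair when a safe value exists
    return errors

print(sanity_check({"font_size": -4, "rotation": 10, "pane_id": 7}))
# -> ["insane value for font_size: -4"]; font_size is reset to 12
```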
  2. I can see the desire to nest event structures here to sort of represent the state machine, but the above posts are correct - it generally is a bad idea. I'd suggest one of two alternatives: 1) Have one event structure registered for events on the important controls on all tabs. The visibility of the controls on each tab should ensure that event-handling cases don't execute at an inappropriate time. 2) Have one event structure that at any given time is only registered for events on controls on the visible page of the tab control. On value change of the tab control, dynamically unregister for controls on the old page and register for controls on the new page. Make sure to turn on panel-locking for the tab control's Value Change event case or you will have a race condition. Personally, I'm a fan of #2, but it's a little harder to maintain and a little less self-documenting. J P.S. Regarding the "timeout"... I think many people forget that this timer resets each time the event structure gets an event, so it's possible for the timeout case to starve if, say, the ES is watching for mouse move events on the top-level VI and the user is vigorously moving the mouse. It may be better to have a parallel loop with a WAIT in it if you want to perform some action at a regular interval.
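The timeout-starvation point translates to any event loop. A rough Python analogy (the queue, timings, and event names are invented; this is not how LabVIEW is implemented): the first loop's timeout clock restarts on every event it receives, so a busy UI can starve it, while the parallel loop with a plain wait fires on schedule no matter what.

```python
import queue
import threading
import time

events = queue.Queue()

def event_loop():
    while True:
        try:
            ev = events.get(timeout=1.0)  # the "timeout" restarts per event
            print("handled", ev)
        except queue.Empty:
            print("timeout case")         # starves while events keep coming

def timer_loop():
    while True:
        time.sleep(1.0)                   # plain wait: fires regardless
        print("periodic action")

threading.Thread(target=event_loop, daemon=True).start()
threading.Thread(target=timer_loop, daemon=True).start()
for i in range(50):                       # simulate vigorous mouse moving
    events.put(("mouse move", i))
    time.sleep(0.1)
```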
  3. Hi, Ben. I misunderstood you - I thought you were negatively associating him with the picture control!! I agree, the flexibility it provides, specifically creating custom controls (I actually used it to create a custom tab control implementation for the Mindstorms NXT software), is great. I will certainly pass on your praise. I just wanted to make sure he wasn't getting pigeon-holed due to a single feature he worked on a long time ago. J
  4. Wow, that article is painfully inaccurate when it comes to details. Still, it is nice to see NI and LabVIEW get press coverage due to the new Mindstorms release, especially since the product carries LEGO's name, not ours. For the record, though, LEGO doesn't make processors - it's an ARM7 inside the NXT. Also, none of the work that went into this software product has been integrated into standard LabVIEW as of yet; LV8 was released while we were only about halfway through development on Mindstorms. Albert, I'm actually curious whether you think the sensors (particularly sound and light) live up to the marketing behind them. It seems to me that LEGO makes some bold claims about their capabilities. J
  5. Yes. In LV 8.0, I believe, three properties were added to the graph which allow you to insert a picture 1) behind all display elements (grids and plotted data), 2) between the grid lines and plotted data, and 3) in front of both grids and plotted data. I don't have LV 8 installed on any of my machines, but I'm pretty certain this made it into the release. The properties (or maybe they are invoke node methods) take a picture string as their data, just like you would wire to the picture control. Hopefully this saves you from having to use Gary's solution. J
  6. Paul has contributed many great things to LabVIEW and other NI products. I hope that his legacy is not his work on the Picture Control (I'm not even sure if he worked on it - that was well before my time, but I'll assume your recollection is correct). He most recently played a key role in the development of the compiler used for the new LEGO Mindstorms product, so hopefully he carries more credibility than "that picture control guy". J
  7. I wouldn't take that at face value. I'm sure there will be an official response to this, but what's being reported today isn't exactly accurate. J
  8. None of the built-in LabVIEW controls (either the "new" 3D ones or the classic controls) use OpenGL or DirectX acceleration. As for the 3D graph (which is really an ActiveX widget), I'm not sure what it was written with, but I'd guess DirectX. I'm not sure what the "Use HW Acceleration" option does, but I would guess it tells you whether HW acceleration is enabled; I'm guessing this is a function of your driver and video card, but honestly I do not know anything about the implementation of the 3D graph. Again, I'm not sure what you are referring to with "software mode", but all OpenGL calls are being done in software (for rendering the models for 3D controls), and that's the only OpenGL going on other than the 3D stuff you can do in the picture control. Everything else is up to the operating system, but as I said before, we are using GDI calls on Windows and QuickDraw calls on Mac, neither of which is likely to be accelerated. I think the best direction we could move in with regard to HW acceleration is to move to OpenGL, but again, drawing text is a pain, and there's quite a bit of text on LV diagrams and front panels, even though it is a graphical language. This would, however, significantly simplify our internal draw manager. J
  9. You don't have to sit next to him every day! Is there some long-standing joke I'm unaware of that I should be giving him crap about?
  10. I'm not sure what you mean by "hardware acceleration" - whether you are simply referring to OpenGL acceleration, or all possible acceleration (some cards provide support for Windows GDI, etc.). We make system calls for most of our drawing commands, so if your card/driver optimizes for GDI (on Windows) you will be getting HW acceleration. On the Mac, we are still using the QuickDraw (2D) APIs, so you won't see any acceleration there. Once we move to the new APIs, though, all cards with OpenGL acceleration will be utilized, as the new MacOS drawing API is built on top of OpenGL. To really get HW acceleration on all platforms, we'd have to move all of our drawing to OpenGL, and, well, anyone who has messed with it knows that drawing text is not simple. Also, we'd need to rely on Microsoft using a standard OpenGL implementation. Right now that's just not feasible, but I imagine with the arrival of Vista more system calls will take advantage of HW acceleration, though I imagine most of their rendering will be built on top of DirectX rather than OpenGL. I'm not sure if card manufacturers are optimizing for this or not. We certainly do look at things like this, but to be honest I haven't heard that drawing speed really is an issue in LabVIEW. Most people are more concerned with crunching numbers faster. Of course, if more of the drawing is pushed off onto the GPU, we'll have more of the CPU available for number crunching. J
  11. Yes, VIs saved in LLBs were compressed (at least somewhat) with a home-brew compression algorithm. Stand-alone VIs were not. The new format essentially only saves the differences from the default state/instance of each object. This has given us the savings you see mentioned in the release notes. J
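A loose Python sketch of the save-only-differences idea (the object fields and defaults are made up; this shows the general technique, not the actual VI file format):

```python
# The default instance of a (made-up) front-panel object.
DEFAULTS = {"color": "gray", "width": 100, "height": 30, "label": ""}

def save(obj: dict) -> dict:
    # Persist only the fields that differ from the default instance.
    return {k: v for k, v in obj.items() if DEFAULTS.get(k) != v}

def load(saved: dict) -> dict:
    # Rebuild by starting from the defaults and applying the differences.
    return {**DEFAULTS, **saved}

button = {"color": "gray", "width": 120, "height": 30, "label": "Run"}
assert load(save(button)) == button  # round-trips, storing only 2 fields
```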
  12. Damn that's some good-looking software
  13. Yes, it is. The hope is to be able to add it to waveform (and possibly XY) graphs as well, but we didn't have time to figure out how to enforce a policy of only one X-scale on the graph when the cursor is in use, or to figure out the behavior of the cursor when there are multiple X-axes in play. J
  14. I know this thread is long-past dead, but I had to chime in. I don't understand the problem here. The LabVIEW graphs DO decimate the datasets before (or while) drawing. Drawing used to be the most expensive part of a graph/chart update, rather than the computation to change coordinate systems from the diagram data to pixels. There is code that specifically decimates co-linear and co-incident data points before drawing. That's pretty much the best we can do, as that's the soonest we can know if any data points are superfluous. LabVIEW must keep all data points around because at any time the user can zoom the graph. If we didn't allow this functionality, sure, we could throw out data points that we knew wouldn't affect the drawing of the graph, and we wouldn't have to hang on to them. Unfortunately, we must balance the needs of two groups of users: those who want speed and those who want precision. So... the only option I see is to give the user an option to tell LabVIEW that all they care about is an approximation of the actual data, and we can use a threshold to determine whether to hold on to each data point or not. However, this would mean that the graph may be misleading when zoomed in, and some time would be spent in computations to figure out whether each data point handed to the graph is "necessary" or not. All in all, the graph and chart mapping/drawing code hasn't changed much in the six years I've been at NI. Some macros were converted to templates for readability/debuggability, which had a slight negative effect on performance, but that was offset by a refactoring of some mapping code that gave a positive gain. As your computer has been getting faster over the years, the graph/chart code has largely stayed the same, so it should be speeding up right alongside your CPU. J
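For the curious, here is a rough Python sketch of the kind of pre-draw decimation described above. It illustrates the general technique only (real graph code also has to handle rounding, vertical runs, and multi-plot bookkeeping):

```python
def decimate(pixels):
    """Drop co-incident and co-linear points from a list of (x, y) pixels."""
    out = []
    for p in pixels:
        if out and p == out[-1]:
            continue                      # co-incident: same pixel twice
        if len(out) >= 2:
            (x0, y0), (x1, y1) = out[-2], out[-1]
            # Co-linear test via cross product: the middle point adds
            # nothing to the drawing, so extend the segment instead.
            if (x1 - x0) * (p[1] - y0) == (y1 - y0) * (p[0] - x0):
                out[-1] = p
                continue
        out.append(p)
    return out

print(decimate([(0, 0), (1, 1), (2, 2), (2, 2), (3, 1)]))
# -> [(0, 0), (2, 2), (3, 1)]
```

Because the test runs on pixel coordinates, it can only happen after the diagram data has been mapped, which is why that is "the soonest we can know" a point is superfluous.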
  15. Hi, Jim. I'm pretty certain there isn't a property to do this, as there is no VI Server class that coincides with the plot groupings. The only thing I can think of is to see if you can get a reference to a plot area, and if so there might be a property under there somewhere (sorry, I don't have 8.0 installed on my machine to verify). I haven't seen many comments regarding the MSG, and haven't seen the bug reports since I'm working on the MINDSTORMS project right now (I did about 80% of the work on the MSG - long enough to write a bunch of buggy code, but not long enough to stick around and fix the bugs). Are you finding it useful? J
  16. I don't know all the details, but I don't believe any of the Robolab VIs are signed in any special way. Truly, the subVIs/primitives that are used simply build up a string of ASM for the brick, which the "end program" primitive just outputs. These would do nothing special in "real" LabVIEW except spit out this string for you. There is no LabVIEW runtime engine running on the brick, so there is no way to get LabVIEW code to compile to the brick - even a driver to communicate with it wouldn't help your programs run down there. Quite a bit of effort goes into targeting LabVIEW to new platforms such as PDAs, DSPs, RT, etc. I doubt anyone in NI Marketing would find it cost-effective to work on a project simply to target a toy with a $200 USD price tag. Now, regarding this new product - we've been a lot more involved in the product development of this version of Mindstorms. Of course we love the idea of being able to target this toy with full-blown LabVIEW (it'd be a great tool for teaching robotics and data acquisition in high school and college). The product itself is proof-of-concept of this, but obviously it's not LEGO's goal. It's up to marketing to decide if it's cost-effective for NI to make that happen, and I really don't know all the contractual details involved. I can't wait to be able to show off the software and its implementation at NI Week. It will be very interesting to see what regular LabVIEW users think, and what you crazy power users think about how we built it. I'm going to stay away from the license issues J
  17. At the moment, no LEGO products work with LabVIEW proper. The educational version of Mindstorms, Robolab, is essentially LabVIEW (the same wiring, etc), but you must work from a pre-defined set of palette items that are essentially macros for the assembly on the brick/embedded device. LabVIEW cannot target the current Mindstorms products that are out there. As far as NXT, as NI's press release says, the software is based on LabVIEW. I'm not sure how forthcoming I can be about the details, but the software that ships with Mindstorms NXT will not be LabVIEW exactly, but it will be a data-flow language with similar rules that is built on top of the LabVIEW engine. This should hopefully mean that LabVIEW proper can be used to target the LEGO hardware, but that is not what will be on shelves in August of this year. I do think a lot of the people on this board will get a kick out of the product as the language essentially will be LabVIEW, but with enough training wheels that it can be used by 10-14 year olds. J
  18. That's a shame. Should I even ask about backups? J
  19. There was another reply here (that I never got to, but wanted to). What happened??
  20. Sorry I'm a little late to the party on this one - just saw the link on info-labview and thought I'd respond to clear some things up here. 1) Just because NI support says something isn't intended behavior (aka "a bug") doesn't mean it's so. For some of the more advanced features, they may lack a full understanding of the intricacies of the feature's behavior. This is the case here. I don't mean to criticize or demean our support folks; it's just that no one at NI other than the developer who wrote the code *REALLY* knows what the intended behavior is. 2) Jeff's description is correct. Hopefully I can clear this up a bit here, and explain why this is something that we can't eliminate in LabVIEW... When LabVIEW notifies an event structure of a "mouse up", "mouse down", or "mouse move" event, it does so before it has completely processed the event. The code that generates the events lies somewhere between where we get the message from the operating system and where we hand it to the specific control that was hit for it to process and respond appropriately. In this case, that means that the "Mouse Down" and "Mouse Up" events are given to the diagram before the control actually updates its value as a result of the user action. This is what introduces the race condition (a rough sketch of it, in a text language, appears after this post). Because you are watching for "notify" events, the LabVIEW UI thread (which is where all user interaction is processed) quickly notifies the event structures that they have an event in their queue, and then continues processing the action as normal (in this case, messaging the boolean that was hit and having the boolean update its value). Depending on what thread the block diagram is running in, it is a crapshoot whether the read from the boolean's terminal will happen before or after the boolean responds to the mouse message and updates its value. The reason Michael's fix with the property node works is that all property nodes switch to the UI thread, thus synchronizing the execution of the block diagram with whatever's going on in the UI thread (oftentimes forcing the diagram to run entirely in the UI thread). In this particular example, that means the current message (mouse down or up) will be fully processed before the value is read via the property node. Reads from both locals and FP terminals happen in whatever thread the diagram is executing in. So... race conditions are bad, inherently. Why don't the LabVIEW developers eliminate the possibility in this case? Well, for one, users like multi-threading. We need to allow the UI and block diagram execution to happen in separate threads, otherwise you won't get the performance you want from your block diagram (as execution must pause every time the user moves the mouse or hits a key). The introduction of event-driven programming introduced some interesting problems, as one of the fundamental reasons for adding it was to be able to synchronize the front panel (UI) and block diagram. While allowing for this, we still want the UI to be responsive while the block diagram is executing (and vice versa). While we could halt the processing of UI messages while the diagram is handling events, or halt diagram execution while LabVIEW processes interactions with the UI, both of these would have a negative impact on performance. This makes eliminating race conditions quite difficult, as it takes away all of our "easy" solutions.
Take the second of those "easy" solutions: it would mean that we halt block diagram execution while the UI thread is processing ANY message from the operating system. Clearly, many users would be up in arms about this. Even if we could restrict this behavior to cases where the event structure is involved and the OS message is specifically related to a user interaction, it would introduce a non-determinism that, in my opinion, would be unacceptable. We *could* delay the generation of events until the specific OS message is completely handled by all LV objects, but that would mean that you would get "Mouse Down" events after the value has already been updated. And what of filter events, where the diagram must take part in the event-handling before the default LV actions are taken? Those events must get handed off to the diagram before the default LV handling occurs, so we clearly can't defer them until later to get the behavior you describe. If we generate filter events in their current locations, but notify events (which don't synchronize the front panel and block diagram) after the default LV handling has occurred, then the diagram will be handed events out of order. For example, if the diagram was watching for Mouse Down?, Mouse Down, Value Change? (<- a filter event for this does not exist, but it has been discussed since we first started considering adding event-driven programming), and Value Change, and the user clicks on the boolean, the order of event generation would be: Mouse Down? -> Value Change? -> Mouse Down -> Value Change. This does not make sense in the context of how we've defined the order of events in LabVIEW. I suppose we could change that, but I don't think this ordering makes sense in any context.
In response to Louis' point: This is a great analogy, except that there is no resource contention or multi-threading going on in the real world. Unfortunately, electrons travel much faster than computer processors execute lines of code. Imagine a physical system where the button push itself doesn't connect the circuit, but rather initiates some sequence that takes a measurable amount of time to finish and ends up connecting the circuit. In this scenario, it is completely possible (well, theoretically) to push the button and, after pushing it, read the state of the circuit and see that it is still "open". In software, the click on the button does not simultaneously change its value like it does in the real world. And even in the real world, there is a delay between the closure of the circuit and the flow of current through the entire wire. While very small, there is a window where a current or voltage reading would show the circuit as "open" rather than closed, after the button has been pressed. (I'm not a physicist, I only took 12 hours of physics in college, so I could be wrong here, and if so, please don't berate me too badly.)
I think a good point is made here, however, that users of LabVIEW should not have to worry about race conditions and that things should simply work as expected. Users should not have to accept "unexpected" behavior simply as a result of some weird implementation detail of the architecture of the LabVIEW code base. I am a big proponent of this philosophy, and have made my job more difficult by being unwilling to simply say "customers will learn to work around it" solely because it's difficult to implement the "natural" solution.
However, we do need to recognize that the LabVIEW code base is almost 20 years old, and it most definitely was not architected to allow users' block diagrams to participate in event-handling. While it may be possible to re-write LabVIEW and architect it in such a way that the event-driven programming interface we expose works "as expected" in all cases, doing so would most certainly break other things in the language. We're re-writing components of LabVIEW as they become brittle, but we most definitely don't want to do the whole thing at once. I think that the event-driven programming interface we've provided does behave as documented, and the behavior we've defined is not unreasonable. It *is* an advanced feature, and with any language, if you are participating in the event-handling, you have to be very careful about what you do in response to those events. There are rules you must follow, and you should be aware of what exactly the event you're handling *means*. We've gone to great efforts to document the event-driven programming features thoroughly, as well as give many presentations on the subject, as we realize it is a difficult feature to grasp and to use effectively. However, it's very possible that we haven't been clear enough, or some behaviors aren't completely documented. If you find that's the case, please file a bug report. Hopefully this post is coherent, and possibly clears up why this is "expected" behavior. I "wrote" a reply in my head this morning, but got interrupted before I was able to type it out, so I lost a bit of my train of thought. I hope I did not come across as defensive - I do not mean to be, and I want you all to continue to raise questions on this and challenge us to do things "right" and make programming as easy as possible for all of you. I just want to peel back the covers a bit and help y'all understand the mechanisms at work and the design decisions that have been made, so that more of these behaviors make sense and hopefully you can help "share the wealth", so to speak. J
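A crude sketch of the race described in the post above, for anyone who wants to poke at it. This is Python, not LabVIEW internals; the thread functions, the artificial delay, and the "terminal" variable are all invented for illustration:

```python
import queue
import threading
import time

value = False                       # stand-in for the boolean's terminal value
events = queue.Queue()

def ui_thread():
    # Analogy: the UI thread generates the "Mouse Up" notify event BEFORE
    # it finishes processing the click and updating the control's value.
    global value
    events.put("Mouse Up")          # notify the event structure first...
    time.sleep(0.001)               # ...keep processing the OS message...
    value = True                    # ...then the boolean updates its value

def diagram_thread():
    events.get()                    # the event case wakes up on "Mouse Up"
    print("terminal read:", value)  # may print False or True: a race

threading.Thread(target=diagram_thread).start()
ui_thread()
```

With the artificial delay, the diagram almost always reads the stale False; the property-node fix described above is the analog of waiting for the UI thread to finish the message before reading.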
  21. So I had to come join the fray. I guess this could be called kind of a "hidden" feature. We did intend to make it possible to use this functionality with non-ActiveX events. The idea was that, if you are already using this method for handling ActiveX events, you probably don't want to have to use the Event Structure to handle all other events - you'd want to be able to have a common architecture. We waffled on this a bit, and made the decision late in the game to make this functionality available. That explains why the documentation does not mention that it can be used for all events. I'll pretend not to be offended that no one read it, but the article I wrote for DevZone about the new events features does mention that the callback interface is available for all events. But it was a minor point, so I won't hold it against y'all. Nope, not platform-dependent. Yup, meant for it to be there. However, most users should stick to the Event Structure. It is much more straightforward, and generally easier to use. J
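A loose Python analogy of the two styles (this has nothing to do with LabVIEW's actual nodes; the names are invented). The Event Structure style pulls events from a queue inside a loop you own; a callback is registered up front and invoked directly when the event fires:

```python
import queue

# Event-structure style: one loop you write dispatches events from a queue.
events = queue.Queue()

# Callback style: handlers are registered once; no loop of your own.
handlers = {}

def register_callback(event_name, fn):
    handlers.setdefault(event_name, []).append(fn)

def fire(event_name, data):
    for fn in handlers.get(event_name, []):
        fn(data)                   # runs in whatever context fired the event

register_callback("value changed", lambda v: print("callback saw:", v))
fire("value changed", 42)          # prints: callback saw: 42
events.put(("value changed", 42))  # the loop-based style would .get() this
```

The callback runs outside any loop you control, which is part of why the Event Structure is the more straightforward choice for most users.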