Posts posted by Aristos Queue

  1. QUOTE(Tomi Maila @ Feb 15 2007, 11:17 AM)

    It seems that the actual runtime type is only in the data string and not in the type string. The type string only tells the type of the wire, which may or may not coincide with the actual runtime type. So does anybody know (Aristos) how the runtime type is encoded in the data string? I could try to reverse engineer it, but that's always a little unsafe, as one may make mistakes. All the solutions provided here have been related to the compile-time type and not the runtime type.

    When an LVClass is flattened to a string, the first 4 bytes form an unsigned 32-bit integer: the "number of levels of hierarchy flattened." So if I have a hierarchy "LV Object->Parent->Child" and I flatten an instance of Child, this will be a 2. The first thing to check is whether this number is zero; if it is, you have an instance of LabVIEW Object in your hands, and there is no further data in the string.

    After the numLevels value comes the fully qualified name of the class. It uses the same encoding as the name in the int16-array version of the type descriptor.
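
    For anyone who wants to poke at this programmatically, here's a minimal sketch in Python (G doesn't paste into a text post). The numLevels field is exactly as described above; the loop over Pascal-style (length byte + text) name strings is a guess at the name encoding based on the description, not a published spec -- treat it as illustrative:

        import struct

        def parse_lvclass_header(data: bytes):
            # First 4 bytes: unsigned 32-bit "number of levels of hierarchy
            # flattened" (LabVIEW flattens data big-endian).
            (num_levels,) = struct.unpack_from(">I", data, 0)
            if num_levels == 0:
                return "LabVIEW Object", []   # no further data in the string
            # Assumed encoding: a run of Pascal strings forming the name.
            names, offset = [], 4
            for _ in range(num_levels):
                n = data[offset]
                names.append(data[offset + 1 : offset + 1 + n].decode("ascii"))
                offset += 1 + n
            return names[-1], names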

  2. QUOTE(John Rouse @ Feb 15 2007, 12:27 PM)

    I have actually changed the scope of some of my private elements to public to get around it (rather dirty I agree)

    Why is it dirty? If you want them to be public, then make them public. If you don't want them to be public, why are you putting them on the FP of VIs outside of your library? If you want a datatype that keeps its insides private, that's what LabVIEW classes are for. But if you use a typedef, you darn well better have the rights to use *all* of that typedef, because all of it is hanging out in the open.

    File the bug report if you want, but it'll get closed "Not A Bug." :-)

  3. I had an interesting argument with a customer at NI Week a few years back. I'm a computer scientist by degree and software engineer by trade. I was giving a presentation on good software engineering practices in LabVIEW. Early in the presentation we had discussed not having diagrams larger than a single screen and the judicious use of "Create SubVI from Selection". One customer in the crowd had mentioned his complete test harness for his hardware was in a single VI which took up nine screens (arranged 3x3, not nine in a line scrolling left to right) and he argued that this was better because he had everything right there in front of him with no jumping around to different files every time he needed to look at a different part of the structure. It was during this exchange that he casually mentioned having 90+ controls on the front panel that displayed the status of his test harness as it ran.

    Later in the presentation, I was discussing software testing and how cool it was that every VI of LabVIEW was independently runnable. This would allow a programmer to run each function independently and test that each subVI worked correctly in isolation, a convenience not found in most programming languages. This made it easier to create a test harness to prove the correctness of the code. Well, that toasted this guy. "Are you telling me that I need a test harness for my test harness?!" I could only shrug and reply, "Well, if I had nine screens of VI code, and then I made a change somewhere in the middle of it, I'm not sure I'd trust myself to get it right without some way to test all the functionality." That got him to pause. See, for him, the code was, as you say, a means to an end. It was not itself an engineering project. The very concept of the test harness itself as something that could be engineered wrong -- where wrong in this case means "in such a way that errors are not detectable and modifications are not easily makable" -- hadn't really occurred to him.

    You mentioned that the software engineers seemed uninterested in how the system worked or the overall complexity. I say "seemed" because if they really were uninterested then they are lousy software engineers and you should fire them. However, those unfamiliar terms that you mentioned (core system architecture, class diagram, use case model) are the terms that a software engineer would use to understand a complex system. When you want to understand the structure of a building, you pull out the blueprints. When you want to understand software, you pull out the class diagram. When you want to know if a piece of hardware is ready to ship, you check off its test plan. When you want to do the same for software, you review the use case model. If you find a piece of software for which these items do not exist, then you have to generate them from the existing software before you can hope to make a significant refactoring of it and have any hope of maintaining the existing functionality. "Can we do this optimization? No, because it precludes this use case. Do we need this variable? Yes, because it provides this service. Can we simplify this data? Yes, by using this class hierarchy."

    Ad hoc software works. One of the major goals of LabVIEW is to provide an environment where non-programmers can do incredible work without the CS/SoftEngr degree, where you can pick up the dos and don'ts through on-the-job training. But there comes a point -- generally when the software reaches a particular size and/or needs to be handed off to a new programmer or becomes a team project -- where ad hoc just isn't enough. In much the same way, a hobbyist can solder wires to breadboards and make circuits and even entire computers, but eventually an electrical engineer has to step in to clean up. Software engineering is very much in its infancy, but there are "best practices" that have developed over the decades to help bring order to the chaos and manage the complexity.

    I have never met the software engineers that you're working with. But my guess is that they're no more or less "practical solution oriented" than hardware engineers. But they may well be judging "practical" on a whole different scale -- is it practical to have an ad hoc system that keeps growing over time and has no master architecture plan?

    I wish you luck. Major overhauls of software are always painful -- huge amounts of work all so that the system can do exactly what it currently does. If a refactoring is done well, then there is no perceivable benefit to the end user at all (maybe a performance boost, but that's it). Refactoring is generally done so that after the refactoring is finished, the system is in a state where it can once again be managed, extended and improved. I'd suggest giving these guys the benefit of the doubt, and let them analyze the system and rephrase it in their own terms. You may discover that they grasp more than you think about the software and its relation to the real-world task it is performing.

    But, as I said, if you discover that they really are ignoring the software's end purpose then fire 'em. Or, if you can't do that, you might go online to your favorite bookseller and order a copy of Close to the Machine: Technophilia and Its Discontents by Ellen Ullman. It is the story of one programmer's education as she learns to understand the connection between the software she writes and the users it affects. I recommend it as reading to all programmers, and it might provide a subtle hint to your folks.

  4. QUOTE(geirove @ Feb 14 2007, 06:23 PM)

    Thanks everyone. BUT I am trying to track the Source of an Event:

    That's a very different thing than the caller of a breakpoint. When an event executes, there is no caller per se -- the event fires off into space and is caught by someone, but the code that fired the event is long since finished firing it off and has gone on to other tasks. This is true in any programming language, but is especially true in LV, where multiple threads of operation are always in play (as opposed to C/C++/Java/etc, where extra threads are only spawned when the user explicitly asks for them). Debugging event systems generally requires that you include information in the event that records the source of the event. In LV, you might be using user-defined events. If so, make your data a cluster... part of that cluster is the data that you actually want to transmit. But also in the cluster, include a string. When you fire the event, set the string using the Call Chain primitive. That way, when you catch the event in the event structure, you know from whence the event came.

    The Value (signaling) method of throwing an event doesn't allow you to do this. So the other common trick is to take the string from the Call Chain primitive and add it to a Queue. When you catch the event, dequeue the string. Since events are caught in the order that they are fired, you can be guaranteed that the item at the front of the queue is the string for the event you just caught.
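
    If it helps to see the pattern outside of G, here is a rough Python analogue of the queue trick. The names and the threading scaffolding are purely illustrative:

        import queue
        import threading

        source_queue = queue.Queue()   # plays the role of the LabVIEW queue
        signal = threading.Event()     # plays the role of Value (signaling)

        def producer(name):
            # Enqueue the source string first, then fire the signal -- in
            # the real VI, a semaphore keeps these two steps atomic.
            source_queue.put(name)
            signal.set()

        def consumer():
            signal.wait()              # the Event Structure catches the event
            print("event came from:", source_queue.get())

        t = threading.Thread(target=consumer)
        t.start()
        producer("Loop A")
        t.join()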

    Here's a VI to show you how to detect where a Value (signaling) event is coming from. I've used name strings for the two loops instead of the Call Chain primitive so that I could put the whole demo on one VI diagram -- it makes it easier to understand what's going on. The semaphore locking is necessary to make sure that each loop does its value signaling and enqueuing without the other loop jumping in to do its own value signaling and enqueuing (possibly multiple times) in between.

    VI is saved in LV8.2

  5. QUOTE(PJM_labview @ Feb 14 2007, 03:54 PM)

    Note: same restriction that Darren mentioned (no dyn call).

    Just to clarify -- "no dynamic calls" does not apply to the dynamic dispatch VIs of LabVIEW classes. If you call into a dynamic dispatch subVI, the call stack will show up just fine, thank you. I think it also works when you use the Call By Reference Node, but I don't know for sure. I assume that PJM_labview and Darren are talking about VIs running with the "Run VI" method. Those are definitely not in the call stack because the Run VI method is equivalent to kicking off a new top-level VI.

  6. QUOTE(Tomi Maila @ Feb 14 2007, 07:04 AM)

    With LVClasses you can pass many kinds of data, but not built-in datatypes.

    Passing different types of data isn't what makes the graph change. It's a compile-time decision based on the data type provided. So, no, there is no workaround.

  7. Anyone considering multiple event structures should read the online help topic:

    "Caveats and Recommendations when Using Events in LabVIEW"

    Launch LV, use Help>>Search the LabVIEW Help... and use the Search tab to find the above topic. Of particular interest to you will be the section "Avoid Placing Two Event Structures in One Loop." It has further details about avoiding hangs when using multiple event structures in general.

  8. QUOTE(Tomi Maila @ Feb 13 2007, 10:16 AM)

    What would be a proper symbol for mixin classes (http://en.wikipedia.org/wiki/Mixin) that hopefully will be added to LabVIEW in a future release... ?

    That one sounds easy ... a cube of two or more colors that is half empty (you add the rest when you inherit).

    :P Some people might think of that as more of the "layered shot" class; a mixin should be a single color with chocolate chips in it. ;-)

  9. RFC == Request For Comments.

    I got a number of requests at NI Week for some standard iconography for LabVIEW classes. So, I've asked the iconographers internal to NI to produce some glyphs that could be used to annotate icons for classes that have particular functionality. Those glyphs will probably be available later this year. But one in particular is difficult to get right.

    An "abstract" class is one that you never intend to actually use for real data. It is an ancestor class that is defining a bunch of methods, but all the real work is done on child classes. So, what is a good glyph for "abstract"? I suggested a Campbell's soup can and Andy Warhol's signature, but that's a bit hard to fit in 16x16 (remember that this is a glyph to be applied to other icons, so we don't have 32x32 to play with). Iconographer Laura Hayden hit upon an interesting idea, and I'd like to get some feedback on it. Drawing on the idea that the abstract class is the ancestor of the real class, and is the prototype for the descendant, she came up with this glyph:

    [Attached image: post-5877-1171325346.png -- the proposed "abstract" glyph]

    The double-helix of DNA. I kinda like it. I had resigned myself to "abstract" getting an arbitrary symbol, the way that "create" has the yellow star glyph -- there's no natural association between a yellow star and create, but by repeated usage, the association has been built up in our collective minds. This almost has the right feel -- the genotype that will be shaped by the environment to create the phenotype of the descendant classes. Maybe that's carrying the metaphor too far, but it works for me. I'm curious how others feel.
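
    (Side note for anyone to whom "abstract class" is itself an abstract term -- here is the idea sketched in Python, with made-up names: the ancestor declares the methods, the descendants do the real work, and you never instantiate the ancestor itself.)

        from abc import ABC, abstractmethod

        class Instrument(ABC):               # abstract: never holds real data
            @abstractmethod
            def read(self) -> float: ...     # declares the method, does no work

        class Multimeter(Instrument):        # the descendant does the real work
            def read(self) -> float:
                return 4.2

        Multimeter().read()    # fine
        # Instrument()         # TypeError: can't instantiate abstract class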

    While I can see your point of view to a certain degree, I think it is not exactly helping.

    Wasn't trying to be helpful this time. :-) I don't have a helpful suggestion on this topic. Jim is right -- go ahead and use this trick. Perhaps someday it'll stop working. Or perhaps, since we've gotten along this long without someone filing a CAR, it'll just be enshrined. I haven't filed one after reading this conversation. There is some error checking, but it isn't comprehensive.

    although I do understand that maintaining that and keeping it complete is a thankless job.

    And you wonder why we have never released scripting... :-)

    Looks suspiciously like the 16-bit typestring array in older Flatten to String functions. The first word, 0x4070, is the actual type, with the higher 8 bits being flags such as indicator/control etc., and 0x70 being the typecode for any LabVIEW refnum. Then follows a refnum type word, and after that a variant (C union, not LabVIEW variant) construct depending on the refnum type. 0x001E seems to be the refnum type for LabVIEW classes; 0x0008, for instance, is for all kinds of VI Server refnums.

    Rolf is correct. LVClasses are not refnums, but internally I borrowed some of their syntax and it was easiest to make them a (get this) "by value form of refnum." That's not as much of a hack as it sounds -- refnum is sort of a misnomer at this point, and internally refnums would be better named as "any data type with a hierarchy". I wasn't even the first to use a by-value hierarchy -- credit for that goes to one of the DAQ groups.

    Bytes 8 thru 11 are a hex value that indicates which application instance this LV class is loaded in. It will be different every time the class loads into memory (and is ignored when flattening/unflattening to disk). Notice that I could have Alpha.lvclass loaded into one application instance which has as its private data two ints. In another application instance I could have a different Alpha.lvclass that has a string. The name of the class is not enough to distinguish the datatype -- we need to know which context to look at. The FFFFFFFF for LabVIEW Object is a special code for "union of all app instances" -- I only have a single instance of LabVIEW Object class that is used by all app instances. Occasionally you will see values other than FFFFFFFF in a LV Object type descriptor when data from a specific class has been upcast onto an LVClass wire.

    There's also 4 bytes for flags. This is pretty much left as room for future expansion of the data type. It's a reflex of mine -- when building something that will be very hard to mutate and where the penalty for bad design could have effects lingering for generations, leave yourself some extra bits to encode mutation. ;-)
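
    Putting Rolf's observations and mine together, here is a hedged Python sketch of picking apart such a type descriptor. The offsets are inferred from this thread, not from any published spec -- in particular, the position of the flag bytes right after the app-instance word is an assumption:

        import struct

        LVCLASS_REFNUM_TYPE = 0x001E

        def parse_refnum_type_descriptor(td: bytes):
            # Word 0: high byte = flags (indicator/control etc.),
            # low byte = typecode (0x70 = any LabVIEW refnum).
            # Word 1: the refnum type.
            word0, refnum_type = struct.unpack_from(">HH", td, 0)
            info = {"flags": word0 >> 8,
                    "typecode": word0 & 0xFF,
                    "refnum_type": refnum_type}
            if refnum_type == LVCLASS_REFNUM_TYPE:
                # Bytes 8-11: app instance ID (0xFFFFFFFF = union of all
                # app instances). Next 4 bytes assumed to be the flags.
                app_instance, flags = struct.unpack_from(">II", td, 8)
                info["app_instance"] = app_instance
                info["class_flags"] = flags
            return info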

    A type cast takes the bit pattern of whatever you wire into it and reinterprets it as the new data type. The actual data (as you saw in your example) does not change.

    In this case, the type cast is only necessary because the editor does not let you see the Text property when you use the StringConstant class. If you had wired the numeric class to the top terminal, you would have gotten the numeric class properties.

    Exactly. And that's why this is such a terrible bug for LV!!! I'll wager at least one of those casts creates a crash situation.

    This should *soooooo* be illegal. At the very least the property/invoke node should be type testing the refnum value and returning an error if it is an invalid type.
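
    For reference, the reinterpret-the-bits behavior quoted above has a direct analogue in most languages; a quick Python illustration (big-endian, to match how LabVIEW flattens data):

        import struct

        bits = struct.pack(">f", 1.0)          # the 4 bytes of a single-precision 1.0
        (as_int,) = struct.unpack(">I", bits)  # same bits, read as an unsigned int
        print(hex(as_int))                     # 0x3f800000 -- data unchanged, type changed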

    Let's pretend they do exist. Would you, AQ, think that in such an imaginary situation NI would change the context where the XNodes run, in a future release of LabVIEW, to be the same one where the VI using the XNode is open? Of course XNodes do not exist, but just try to imagine such a situation.

    It's hard to imagine this implementation aspect of XNodes changing. The XNode developers assure me this is a key aspect of XNodes -- I'm not privy to all the details of the why and wherefore. And since LVClasses would be the only feature really putting any pressure to change this, it would surprise me if anything changed. I think it is up to LVClass R&D and LVClass users to find all the workarounds we can. I know that an LVClass should be able to be used in the writing of an XNode so long as that same class is not part of the code that actually gets scripted into the XNode. The problem with wiring an LVClass to an XNode is that the scripting has to be done very gently to avoid the LVClass being loaded into the XNode context and then copied back into the original context (which occurs when the XNode is done with scripting in what I think of as the scratch pad context).

    I don't play with XNodes at all myself yet. It's an interesting tech, but not my primary focus, so most of what I know comes from others -- both internal and external -- who tell me about the problems they've encountered.

    I like this picture, may I use it in my signature... :) Although I wonder if I should use by-value objects instead...

    I just assumed that Jimi was a strict typedef of Tomi Maila for backward compatibility. I had no idea that we had actually changed wire type. I'll be sure to cast in the future. At least Tomi didn't inherit from Jimi and override the Name method -- although that would make wiring convenient, it would make him his own parent, and the recursive allocations would rapidly cause problems. ;-)

    The reason that the typecast solution works is that property/invoke nodes don't do type-checking at run-time (they rely on LabVIEW's type checking of wires in the editor) -- they only check to see if the input object has the properties/methods that are being accessed/called. And the StringConstant VI Server object does have a Text attribute; it simply hasn't been exposed yet in VI Server. This is akin to duck typing -- if a StringConstant has a Text attribute, then reading the Text attribute should work (even if the StringConstant is masquerading as a String control, as long as nobody checks to make sure that it is really a String).

    Oh dear lord. You're kidding, right? That is soooooooo a crash waiting to happen in some case where we (LV R&D) add a data field to the record of a FP control that we don't add to the constant... I can imagine a function where we assume we have a control in our hands, so we access a field (which isn't there because it is really a constant) and boom... down goes LV.

    I can see why you want this functionality to work -- I mean, having access to the text field and all -- but WOW, that is so bad it makes my teeth hurt. And we can't even fix this improper cast in any way that I can think of without breaking your code. Even if we were to add a Text property to the StringConstant, how could we find all the improper type casts and change the code to leave out the type cast and just call the StringConstant's new Text property directly? :o

    Wow. It's bugs like this that had me so paranoid when we released OO in LV8.2 -- what did I put in that is a bug, a dangerous hole, that will have to haunt us for the rest of time because there's no way to mutate away from it? Haven't found such a bug yet, but the possibility continues to plague me every time Jimi starts a new thread...

    :!: PS -- for those of you who are now building object hierarchies in LabVIEW, the correct way to handle an object hierarchy like this, where you don't have multiple inheritance but you want something to unify two branches, is for one branch to own an instance of the other branch as a data member. For example, the Constant class would have a Property of "Control" which would have all the control behavior that underlies the constant. The other possibility is for StringConstant and StringControl to be the same class, with a boolean property "Is Constant?" (or maybe a ternary "style" which returns Indicator, Control or Constant). A rough sketch of both designs follows.
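
    Here is the shape of those two designs in Python -- the class and property names are illustrative, not the actual VI Server hierarchy:

        from enum import Enum

        # Design 1: composition. The Constant owns a Control rather than
        # pretending (via a cast) to be one.
        class Control:
            def __init__(self, text=""):
                self.text = text              # the shared Text behavior lives here

        class Constant:
            def __init__(self, text=""):
                self.control = Control(text)  # has-a, not is-a: no unsafe cast

        # Design 2: one class with a style property instead of two classes.
        class Style(Enum):
            INDICATOR = 1
            CONTROL = 2
            CONSTANT = 3

        class StringElement:
            def __init__(self, text, style):
                self.text = text
                self.style = style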

    Type casting your types in this abusive manner is, as I said, a problem waiting to happen.

  16. The TCP/IP primitives and the VI Server prims already provide ways to do asynchronous signaling from one app context to another, whether on the same machine or a remote machine. Depending upon what you're doing, you can use those comm methods to spawn user-defined events for a local event structure. I suspect that writing a decent library for remote events is possible using the G available today.

    The possibility of remote events -- and other remote communications channels like remote queues -- does exist in the future of LV. Not in its immediate future, but the preparation for a more distributed LV is one of the big reasons for isolating the queues and events that exist today to a single app instance.

    Basic principle: An app instance should always be treated as a separate machine. If the code works in separate app instances, then it should work on separate machines. Behind the scenes, LV might cheat for two app instances on the same computer, but the same VIs must still work if the app instances aren't co-machine.
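
    A bare-bones Python sketch of that principle: carry the signal over TCP, then raise it locally on arrival. Because the transport is a network protocol, the same code works whether the two ends are co-machine or on separate machines. The port number and payload here are arbitrary:

        import socket
        import threading
        import time

        def listener(port, on_event):
            with socket.socket() as srv:
                srv.bind(("127.0.0.1", port))
                srv.listen()
                conn, _ = srv.accept()
                with conn:
                    # Translate the incoming bytes into a local "event"
                    on_event(conn.recv(1024).decode())

        def fire_remote(host, port, payload):
            with socket.create_connection((host, port)) as s:
                s.sendall(payload.encode())

        t = threading.Thread(target=listener, args=(5005, print))
        t.start()
        time.sleep(0.2)    # crude: give the listener a moment to bind
        fire_remote("127.0.0.1", 5005, "status changed")
        t.join()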

    The new behavior is the behavior 8.0 should have had. It was a bug that the queues, notifiers and events worked across contexts. In fact, they didn't always work, only most of the time, and they will work even less in the future as we move forward with planned features.

    The way to communicate data from Application Instance A to Application Instance B is using network protocols such as VI Server, the TCP/IP prims, DataSocket, etc. Basic rule of thumb: if a given communication method could be used to communicate between two different computers over the network, then it can be used for communicating between App Instances. Any feature of LV that allows data to be passed between App Instances on the same machine but does not work when the App Instances are on different machines should be reported as a bug.

  18. Greetings,

    Due to limited licenses, I cannot install LabVIEW on all the machines I would like. Consequently, development and deployment are often on separate computers. Does anyone know of a viewer which would open a VI and allow browsing of the front panel and code? The viewer would be like a '3-D model' viewer where you can only view the model and manipulate it, but not alter it. If not, does LabVIEW support introspection and reflection like Java -> build a LabVIEW exe that encapsulates the VI? ;) Thanks for your time.

    JMA

    We do have the introspection, but only on development clients. In the runtime engine, there is no support for looking at the source code. Indeed, when you deploy to a runtime engine, we strip the diagrams out of the save file and all that remains is the compiled assembly instructions, the front panel, and the dataspace. You can take images of your block diagrams (there are methods in the development system for getting PNG or other graphics file formats of the diagram) and send them along with your VIs if you want people to be able to look at the block diagram.
