
Aristos Queue

Members · 3,183 posts · 204 days won

Everything posted by Aristos Queue

  1. A) There's a known issue with building LVClasses into EXE/DLLs when choosing the various "remove" options. Classes were really intended to be coherent wholes and no provision was made for breaking them into pieces when building apps. It's being addressed, but for the time being, you have to include the typedefs (so that the private data control doesn't disappear) and not remove unneeded VIs (since the dynamic dispatch VIs aren't directly called by the caller VIs, the build system thinks they're not needed). B) Yes, the dynamic dispatch VIs save outside of the EXE/DLL. And, yes, it is cumbersome. The problem is that the DLL/EXE is effectively a single directory internally and so no two VIs of the same name can be saved inside the EXE/DLL. Of course, LVClasses almost always have two VIs of the same name -- that's how overrides work. So as a workaround, we made the dynamics save outside of the EXE/DLL. The diagrams and panels are stripped, as you noted. A future version of LV should address the file name limitations of EXE/DLLs.
  2. QUOTE(yen @ Feb 25 2007, 12:50 PM) Will you be at NI Week in August?
  3. QUOTE(JFM @ Feb 27 2007, 08:18 AM) QUOTE The TS can thus represent any point in time with the same accuracy. A floating point value can not do this, since a higher integer value (seconds), means less accuracy for the fractional part. Taking your two statements together... it would be logically impossible to have very high accuracy for a relative value. For relative offsets, all you need is a scalar value, for which floating point works just fine -- as it maintains very high precision.
  4. Yen asked me two questions. I'll answer them in reverse order... QUOTE(yen @ Feb 20 2007, 12:24 PM) Refactoring the Queue VIs (which used CINs) to be the Queue primitives was my first assignment when I joined LV R&D. Aristos, as others have mentioned, is Greek for "excellence" or "the best". It also is the root for "aristocracy," which in modern parlance has come to mean "ruling class". So "He who brings the best out of the Queues" would be a good translation. QUOTE(yen @ Feb 20 2007, 12:24 PM) the real question - Your signature is "A VI outside a class is a gun without a safety. Data outside a class is a target." and since you've had quite some time now with both the LVOOP beta and public releases I was wondering how much G coding do you do and how much of that uses LV classes (and is it all because you think that's the best tool or just because you want to work with your baby)? Checking my credentials, I take it. I think I've mentioned before that I don't get to do as much G programming as I like. There are many in R&D who program full time in G -- frequent LAVA poster Darren, for example. I am not one of those. On any given day, the majority of the hours are spent spelunking in the 2 million plus lines of C++ code that make LabVIEW and all of its various modules. My work on the LVClasses means that I've had to make many changes across lots of features, so I've wandered through most of those lines at some point in the last few years. When do I get to program in G? Mostly on test days -- when LV rolls into beta, the R&D team spend a good chunk of time developing whatever we choose to develop using G (and filing bug reports...). That's when a lot of example programs get written. I only have two large-scale projects that I've developed using LabVIEW. One is a game that I built early on in my LV career and have tweaked a bit from time to time, but it is still pretty poor style. 
The other is a 300 VI hierarchy that scans all the VIs on a computer (or multiple computers), parses their block diagrams and builds a database to discover which nodes are most frequently used and which nodes are used in conjunction with which others and what sort of conjunction (which are wired together, which are in the same struct, and which are in the same VI). I've used this tool to find performance hot spots that need optimization in LV and to change the palette layouts -- when I can show that two nodes are *always* used in conjunction then it strengthens the argument that they should be on the same palette. I don't program in G to the level of most of you. But I am a very strong programmer in whatever language I pick up, and G is no exception. More important to this conversation than how much I write VIs is how much I debug VIs. I get about one CAR every day from various users (more when we're in beta). The bug that the user files may be about some small corner of their code, but I see the entire VI, and frequently I can see corner or edge cases that aren't going to be handled properly. I get called in to debug problems with instrument drivers "that just have to be a bug in the LV compiler" that turn out to be a data value not being set properly. I don't get to write much in G, but I do know all the mistakes that can be made. And I think my LabVOOP team has provided a critical tool for preventing a wide swath of those bugs. Since the release of LV8.2, my G programming has increased substantially. Some of that is the demand for examples for OO, which not many people can write yet. More of it is that my own interest in using G has increased. How far can I push the LabVIEW classes? What can I do now that I couldn't do before? Is there some piece of code that was hard or confusing before that a class hierarchy can clarify? 
I've lately finished refactoring the General Error Handler.vi using LV classes, with positive effects on memory footprint, diagram complexity and functionality. I don't know if I'll actually ship that or not, but I like seeing the cleansing effect on some otherwise very tangled code. Do I use LVClasses in my own work? Yes. I worked for five years on that design because I considered it such a fundamental gap in the strengths of LV. In my opinion, LVClasses are the right tool more often than they are the wrong tool. The two tools that I've developed for my own use since August both use LV classes. I made sure that LVClasses were available in the Base edition of LabVIEW when a lot of others were pushing for them to be in the Pro edition only. Many other developers looked at LVClasses and saw an advanced programming tool that could only be used by those with years of experience. I look at LVClasses and see a tool that a customer should be learning in Basics I. Before they create their first VI they should create their first class. I don't honestly think that we're at that point yet, usability-wise, but I intend that LV classes will get there within the decade. I get a lot of pushback on this that the random scientist or engineer with no programming background has no clue what a class is or how to use it. I point out that they don't know what a front panel or a block diagram is either, and yet they figure it out from our interface. I don't think that I could've worked for 6 years on that project without a strong amount of belief that my team was bringing something fundamentally *better* to the table. Yes, I said better. You won't hear that from Marketing or from most CLDs, or, indeed, most users in general. The value of the classes to LV programming is something that is becoming apparent to one developer at a time. Darren had his epiphany only a couple of weeks ago when he was working on an XML parser. 
Within NI, the value of the classes is catching on -- the next version of LabVIEW will see LV classes at the heart of AppBuilder, the MathScript node and the Getting Started Window. In short, I believe this: LabVIEW classes are an outright replacement for clusters in most situations unless you have to be backward compatible with old code or you're passing across DLL boundaries. LabVIEW classes should replace many uses of the typedef (but not the strict typedef or the custom control). Wherever "type like" data is connected to a case structure -- an enum, or a string ID -- that is a place where classes can improve the readability, the maintainability and the probability of correctness of a VI. I do believe in classes. A lot. And now, if you'll excuse me, I have a bug to fix. A coworker has presented me with a 1260 VI hierarchy involving 49 LabVIEW classes. And one of those VIs doesn't want to load in the latest development build. "It worked fine last week, but now is broken. I tried to pare it down, but everything seems ok if I take away any of the other 1259...." :ninja:
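The "type-like data wired to a case structure" pattern described above is the classic candidate for dynamic dispatch in any OO language. Since G is graphical and can't be reproduced in text, here is a minimal Python sketch of the same refactoring; the shape names and area formulas are purely illustrative:

```python
# Before: a string ID selects behavior in one central case structure.
# Every new "case" means editing this function.
def area_case(shape_id, size):
    if shape_id == "circle":
        return 3.14159 * size * size
    elif shape_id == "square":
        return size * size
    raise ValueError("unknown shape: " + shape_id)

# After: each class owns its own case. Adding a new shape means adding
# a new child class -- no existing code is touched, and there is no
# central switch that can fall out of sync.
class Shape:
    def area(self):
        raise NotImplementedError  # dynamic dispatch point

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r * self.r

class Square(Shape):
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s * self.s
```

The payoff is the same one claimed for LV classes: the compiler (or runtime) guarantees every type handles every operation, instead of a case structure silently hitting a default frame.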
  5. Using File>>VI Properties>>Description is a good idea since the text entered here shows up in the Context Help window when people are using your VI as a subVI or browsing the palettes. That description is intended for VIs that will be used as subVIs for other programs. If you want to document how to use your VI for an end user, then there are several options. The obvious is putting comments directly on the front panel. You could also fill in the Help Path (again in File>>VI Properties>>Description) to point to any HTML file. If you do that, then the menu item Help>>Help for this VI will be ungrayed. Clicking on it will launch the HTML file in a browser. Also, if you popup on individual controls, you can set the description for that single control. This text will appear in the Context Help window (when the CH is open) when the mouse is over the given control.
  6. QUOTE(yen @ Feb 19 2007, 01:20 PM) See correction to my earlier post.
  7. QUOTE To do inheritance testing, use the To More Specific primitive. If it returns an error, then you don't inherit from the given type. That does not give you generic runtime type testing (you have "test if this runtime type inherits from this compile time type" but you don't have "test if this runtime type inherits from this runtime type") but it solves most cases. And, yes, it works in the runtime engine. QUOTE Runtime must be aware of the class hierarchy somehow, otherwise I wouldn't be able to unflatten data from a flattened string. How can I access this information? There must be a way. Don't get me started on this topic again. I've ranted enough elsewhere. Basically, I found out late in the release cycle for LV8.2 that *all* the .lvlib reflection API was dependent upon components that only exist in the development environment. Writing properties (aka scripting) was always dev-only, and that makes sense. But reading properties? That's generally available for all controls/VIs/etc at runtime. But not libraries. And because classes are types of libraries, we have the same deficiency. No one objected for libraries since they really only provide namespace control and a reflection API is much less useful for them. For classes, however, being only in dev is a major functionality loss. It was too close to release to change when I found out, so I couldn't raise objections, and now it is entrenched. Digging our way out of this is non-trivial. That's a big reason I've been so responsive in this thread -- trying to get as much runtime functionality as possible written in G as a workaround for the missing built-in functionality.
  8. On your original question of custom controls: In LV7.0 and later, open Example Finder and search for "animated gifs". There are examples showing integration of animated gifs into custom controls. Just one of many fascinating things you can do with custom controls, but one that is less well known.
  9. QUOTE(Tomi Maila @ Feb 17 2007, 12:17 PM) By design. No scripting works outside of the dev environment. You can build scripting into DLLs, but those DLLs will only operate when called within a development environment of LV. Reason for this: There is no compiler in the runtime engine... it is for running, not editing.
  10. QUOTE(Tomi Maila @ Feb 16 2007, 03:18 PM) That might be a second pattern. I'll think about the issue.
  11. QUOTE(Tomi Maila @ Feb 16 2007, 01:32 PM) You don't have to pass it anywhere. Just access it directly in the subVIs. The whole point is that there is but one of these so it doesn't have to be passed around. If this is not desirable for some application, then this is not the pattern for that application. There are a good number of times when absolutely guaranteeing that one and only one exists is valuable.
  12. QUOTE(Tomi Maila @ Feb 16 2007, 11:13 AM) Those are private functions? Really? I don't use the G code interface to classes much, but I thought all of the scripting access for classes was wide open public. But, to answer your question, no there's no way to get the hierarchy otherwise. Well, you could by brute force type testing, but that would be ridiculous. QUOTE(Tomi Maila @ Feb 16 2007, 11:13 AM) Does LabVIEW object have different name in different LabVIEW language versions. So if I interpret 0 to be LabVIEW Object, do I also have to find out what is the name of the LabVIEW Object in this language version or is LabVIEW Object a universal class name? I only have an English version of LabVIEW so I cannot test this. You know, I honestly have no idea.
  13. QUOTE(Tomi Maila @ Feb 15 2007, 11:17 AM) When a LVClass is flattened to a string, the first 4 bytes form an unsigned 32-bit integer of "number of levels of hierarchy flattened." So if I have a hierarchy "LV Object->Parent->Child" and I flatten an instance of Child, then this will be a 2. So the first thing to check is if this number is zero then you have an instance of LabVIEW Object in your hands. If you find a zero, there is no further data in the string. After the numLevels value comes an encoding of the fully qualified name of the class. It follows the same encoding used to encode the name in the int16-array version of the type descriptor.
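The header layout described above can be sketched as a small parser. This is a hypothetical helper, not an NI API; it assumes only what the post states (a leading unsigned 32-bit level count, with LabVIEW's usual big-endian flattening, followed by the encoded class name, which this sketch does not decode):

```python
import struct

def parse_flattened_class_header(data: bytes):
    """Read the leading u32 'number of levels of hierarchy flattened'
    from a flattened LVClass string. LabVIEW flattens numerics
    big-endian, hence the '>' format. Illustrative sketch only."""
    (num_levels,) = struct.unpack_from(">I", data, 0)
    if num_levels == 0:
        # An instance of LabVIEW Object: no further data in the string.
        return 0, None
    # The fully qualified class name follows, encoded as in the
    # int16-array type descriptor; left undecoded here.
    return num_levels, data[4:]
```

So for the "LV Object->Parent->Child" example in the post, a flattened Child instance would start with the bytes `00 00 00 02`.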
  14. QUOTE(John Rouse @ Feb 15 2007, 12:27 PM) Why is it dirty? You want them to be public then make them public. If you don't want them to be public, why are you putting them on the FP of VIs outside of your library? If you want a datatype that keeps its insides private, that's what LabVIEW classes are for. But if you use a typedef, you darn well better have the rights to use *all* of that typedef because all of it is hanging out in the open. File the bug report if you want, but it'll get closed "Not A Bug." :-)
  15. I had an interesting argument with a customer at NI Week a few years back. I'm a computer scientist by degree and software engineer by trade. I was giving a presentation on good software engineering practices in LabVIEW. Early in the presentation we had discussed not having diagrams larger than a single screen and the judicious use of "Create SubVI from Selection". One customer in the crowd had mentioned his complete test harness for his hardware was in a single VI which took up nine screens (arranged 3x3, not nine in a line scrolling left to right) and he argued that this was better because he had everything right there in front of him with no jumping around to different files every time he needed to look at a different part of the structure. It was during this exchange that he casually mentioned having 90+ controls on the front panel that displayed the status of his test harness as it ran. Later in the presentation, I was discussing software testing and how cool it was that every VI of LabVIEW was independently runnable. This would allow a programmer to run each function independently and test that each subVI worked correctly in isolation, a convenience not found in most programming languages. This made it easier to create a test harness to prove the correctness of the code. Well, that toasted this guy. "Are you telling me that I need a test harness for my test harness?!" I could only shrug and reply, "Well, if I had nine screens of VI code, and then I made a change somewhere in the middle of it, I'm not sure I'd trust myself to get it right without some way to test all the functionality." That got him to pause. See, for him, the code was, as you say, a means to an end. It was not itself an engineering project. The very concept of the test harness itself as something that could be engineered wrong -- where wrong in this case means "in such a way that errors are not detectable and modifications are not easily makable" -- hadn't really occurred to him. 
You mentioned that the software engineers seemed uninterested in how the system worked or the overall complexity. I say "seemed" because if they really were uninterested then they are lousy software engineers and you should fire them. However, those unfamiliar terms that you mentioned (core system architecture, class diagram, use case model) are the terms that a software engineer would use to understand a complex system. When you want to understand the structure of a building, you pull out the blueprints. When you want to understand software, you pull out the class diagram. When you want to know if a piece of hardware is ready to ship, you check off its test plan. When you want to do the same for software, you review the use case model. If you find a piece of software for which these items do not exist, then you have to generate them from the existing software before you can hope to make a significant refactoring of it and have any hope of maintaining the existing functionality. "Can we do this optimization? No, because it precludes this use case. Do we need this variable? Yes, because it provides this service. Can we simplify this data? Yes, by using this class hierarchy." Ad hoc software works. One of the major goals of LabVIEW is to provide an environment where non-programmers can do incredible work without the CS/SoftEngr degree, where you can learn dos and don'ts in on-the-job training. But there comes a point -- generally when the software reaches a particular size and/or needs to be handed off to a new programmer or becomes a team project -- where ad hoc just isn't enough. In much the same way, a hobbyist can solder wires to breadboards and make circuits and even entire computers, but eventually an electrical engineer has to step in to clean up. Software engineering is very much in its infancy, but there are "best practices" that have developed over the decades to help bring order to the chaos and manage the complexity. 
I have never met the software engineers that you're working with. But my guess is that they're no more or less "practical solution oriented" than hardware engineers. But they may well be judging "practical" on a whole different scale -- is it practical to have an ad hoc system that keeps growing over time and has no master architecture plan? I wish you luck. Major overhauls of software are always painful --- huge amounts of work all so that the system can do exactly what it currently does. If a refactoring is done well, then there is no perceivable benefit to the end user at all (maybe a performance boost, but that's it). Refactoring is generally done so that after the refactoring is finished, the system is in a state where it can once again be managed, extended and improved. I'd suggest giving these guys the benefit of the doubt, and let them analyze the system and rephrase it in their own terms. You may discover that they grasp more than you think about the software and its relation to the real world task it is performing. But, as I said, if you discover that they really are ignoring the software's end purpose then fire 'em. Or, if you can't do that, you might go online to your favorite bookseller and order a copy of Close to the Machine: Technophilia and Its Discontents by Ellen Ullman. It is the story of one programmer's education as she learns to understand the connection between the software she writes and the users it affects. I recommend it as reading to all programmers, and it might provide a subtle hint to your folks.
  16. QUOTE(geirove @ Feb 14 2007, 06:23 PM) That's a very different thing than the caller of a breakpoint. When an event executes, there is no caller per se -- the event fires off into space and is caught by someone, but the code that fired the event is long since finished firing it off and has gone on to other tasks. This is true in any programming language, but is especially true in LV where multiple threads of operation are always in play (as opposed to C/C++/Java/etc where extra threads are only spawned when the user explicitly asks for them). Debugging event systems generally requires that you include information in the event that records the source of the event. In LV, you might be using user-defined events. If so, make your data a cluster... part of that cluster is the data that you actually want to transmit. But also in the cluster, include a string. When you fire the event, set the string using the Call Chain primitive. That way, when you catch the event in the event structure, you know where the event came from. The Value (signaling) method of throwing an event doesn't allow you to do this. So the other common trick is to take the string from the Call Chain primitive and add it to a Queue. When you catch the event, dequeue the string. Since events are caught in the order they are fired, you can be guaranteed that the item at the front of the queue is the string for the event you just caught. Here's a VI to show you how to detect where a Value (signaling) event is coming from. I've used name strings for the two loops instead of the Call Chain primitive so that I could put the whole demo on one VI diagram -- it makes it easier to understand what's going on. The semaphore locking is necessary to make sure that each loop does its value signaling and enqueuing without the other loop jumping in to do its own value signaling and enqueuing (possibly multiple times) in between. VI is saved in LV8.2
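The pair-a-queue-with-the-event trick above translates directly to any language with events and queues. A minimal Python sketch (illustrative names; `events` stands in for the Value (signaling) channel, and the lock plays the role of the semaphore in the demo VI):

```python
import queue
import threading

events = queue.Queue()        # stands in for the Value (signaling) events
sources = queue.Queue()       # parallel queue carrying each event's source
fire_lock = threading.Lock()  # the "semaphore": keeps signal + enqueue atomic

def fire_value_signaling(source_name, value):
    # Signal the event and enqueue its source as one atomic step, so
    # another loop can't interleave its own signal/enqueue in between.
    with fire_lock:
        events.put(value)
        sources.put(source_name)

def handle_one_event():
    # Events are handled in the order they were fired, so the front of
    # the source queue always belongs to the event just caught.
    value = events.get()
    source = sources.get()
    return source, value
```

The invariant is the same as in the G version: because firing and enqueuing happen under the lock, the two queues stay in lockstep and each dequeued source string names the loop that fired the matching event.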
  17. QUOTE(PJM_labview @ Feb 14 2007, 03:54 PM) Just to clarify -- "no dynamic calls" does not apply to the dynamic dispatch VIs of LabVIEW classes. If you call into a dynamic dispatch subVI, the call stack will show up just fine, thank you. I think it also works when you use the Call By Reference Node, but I don't know for sure. I assume that PJM_labview and Darren are talking about VIs running with the "Run VI" method. Those are definitely not in the call stack because the Run VI method is equivalent to kicking off a new top-level VI.
  18. QUOTE(Tomi Maila @ Feb 14 2007, 07:04 AM) Passing different types of data isn't what makes the graph change. It's a compile time decision based on the data type provided. So, no, there is no workaround.
  19. Anyone considering multiple event structures should read the online help topic: "Caveats and Recommendations when Using Events in LabVIEW" Launch LV, use Help>>Search the LabVIEW Help... and use the Search tab to find the above topic. Of particular interest to you will be the topic about "Avoid Placing Two Event Structures in One Loop." It has further details about avoiding hanging when using multiple event structures in general.
  20. QUOTE(John Rouse @ Jan 15 2007, 11:18 AM) Here you go: http://jabberwocky.outriangle.org/LabVOOP_...gn_Patterns.pdf Sorry... my ISP changed how URLs work.
  21. Updated: http://forums.lavag.org/knowledgebase.html...;showarticle=93
  22. Is it possible that at some point in your EXE that you call a LV-built DLL that was built against LV 7.1? That's the only thing I can think of.
  23. Regarding Safari: When at work, I am not necessarily using my laptop. Regarding Mozilla and not Firefox: I use Composer a lot. That comes with Mozilla. Thunderbird replaced the e-mail client, Firefox replaced the browser, but there's no replacement for Composer yet.
  24. QUOTE(Tomi Maila @ Feb 13 2007, 10:16 AM) That one sounds easy ... a cube of two or more colors that is half empty (you add the rest when you inherit). Some people might think that as more of the "layered shot" class, and mixin should be a single color with chocolate chips in it. ;-)
  25. I just posted a new topic in the GOOP forum. I uploaded an image to that topic. I tried uploading three times with Mozilla (version number below). I got the screen that says "Uploading" with the spinning lines glyph. And it just sat there. So finally I tried MSIE and it worked fine. Mozilla 1.7.13 Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.13) Gecko/20060414