Everything posted by jdunham

  1. QUOTE(Nepomuk @ Jan 31 2008, 02:23 PM) I don't think so. You might want to read up on the OSI model (http://en.wikipedia.org/wiki/OSI_model). IP is at layer 3, the network layer. As the chart in the Wikipedia article shows, it shouldn't be possible for two hosts to share data without using one of the host layers (layers 5, 6, or 7). For example, your Ethernet cable itself is layer 1, but just because your machines are connected at layer 1 doesn't mean you can transmit any data without some implementation of layers 2, 3, 4, and 5 as well. If the other machine you are connected to has no specifications for layers 4 and 5, you are going to have to get that information from that machine's vendor. Have you tried entering the device's IP address in a web browser? Lots of small devices provide help and configuration through a small web server on the device. Can you ping the device? There are also tools like nmap which can scan for open ports, though a phone call to the vendor would be a lot faster.
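Since LabVIEW diagrams can't be shown in text, here is a minimal sketch of that reachability check in Python. The address and port below are placeholders; substitute your device's actual IP.

```python
# Hypothetical reachability check for a networked device. A successful
# TCP connect means *some* layer-4 service on the device answered;
# port 80 open often means it runs a small configuration web server.
import socket

def port_is_open(host, port, timeout=2.0):
    """Try a TCP connection; return True if something accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address):
# if port_is_open("192.168.1.50", 80):
#     print("try pointing a web browser at the device")
```

If this fails, nmap against the same address will enumerate any other open ports the vendor's protocol might use.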
  2. QUOTE(teck @ Jan 30 2008, 05:16 AM) Hi Teck: (OK, I get it!) Thanks for posting what you have tried already. It's a good start. The next thing you will want to check out is a FOR loop, which is described in the LabVIEW on-line help. Maybe you can post some code once you are able to access your array row by row and have tried some comparison functions, and then we can try to help some more.
  3. I'm just curious how you debug your code with all of those objects? I tried a few applications with objects, but I quickly found out that one can't see the data on the subVI front panels. It ended up taking me so much longer to debug the application than it does when I use regular old typedef clusters. If you are not using the benefits of inheritance, then there is even less reason to use them. As far as it being very slow, thanks for the heads up, and good luck getting some resolution. Is there anything in your objects like a DAQ or VISA refnum which might have a lot of overhead attached to the control?
  4. QUOTE(AdamRofer @ Dec 28 2007, 02:05 PM) In a legacy from MS-DOS days, you can also generate this character with ALT-248. My other favorite is ALT-241, which gets you plus-or-minus (±). You can use this method to generate any of the 'extended' 8-bit characters 0-255. Jason
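For reference, those legacy ALT codes map through the old IBM code page 437 (the MS-DOS OEM code page on US machines); in Unicode the same glyphs have their own code points. A quick Python check of the byte values, as an illustration:

```python
# The degree sign and plus-or-minus sign: Unicode code points vs. the
# legacy CP437 byte values used by the MS-DOS-era ALT codes.
degree = "\u00b0"      # ° (ALT-248 under code page 437)
plus_minus = "\u00b1"  # ± (ALT-241 under code page 437)

# Round-trip through CP437 to confirm the legacy byte values:
assert degree.encode("cp437") == bytes([248])
assert plus_minus.encode("cp437") == bytes([241])

print(f"25{degree}C {plus_minus}0.5")
```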
  5. Well I think you are on the right track. I wouldn't get hung up on that book that says the graying out should only be a response to user events. In a real-world I/O application, you are often using gray-out to display the availability of the model's functionality. If the model notifies you that some ability is currently unavailable, then it's obvious that the user interface should respond by disabling any inputs which invoke the ability. Graying stuff out is part of the way that you display your model state. It seems like the main reason to use the mediator pattern is to make your interface 'skinnable'. By funneling all interaction through the mediator, and requiring that no application logic is in the UI view, you are freed up to make different viewer programs that all use the same API (the mediator) for consistency and reuse. I think most LabVIEW applications don't need to be skinned, and writing the extra API and supporting code could be classified as 'a waste of time'. (contrary opinions are welcome!) That being said, I think that processing as little logic as possible in the View is a great idea. Too often I have found it difficult to change my code because some functionality was partly implemented in both my view and my model, but I think you are already hip to that problem, otherwise you wouldn't be writing this post.
  6. We use a simpler version of "option 2" with our spawned processes. I am a strong believer in leveraging LabVIEW's strong typing. Drop all of your parameters into a typedef cluster, and make a singleton queue (size=1) of those. That's so much easier than using a FOR loop and variants and parsing your own data. If you are spawning different VIs from the same diagram, then I would still make one cluster for each VI, and make a singleton variant queue. The dynamic VI should know what kind of variant to expect and you can flag an error if it gets the wrong input. A trick we use is that the caller passes the parameter cluster into the queue, starts the dynamic VI, and then puts another item into the queue. Since it's a singleton queue, that function will be blocked until the dynamic VI is up and running. At that point the spawning VI can know it is safe to destroy the queue and run any other code which needs to be sure the dynamic VI is alive. I think that kind of method is way easier and much safer than using control references by name. You can change your front panel text and you won't introduce any hard-to-detect bugs. Jason
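The startup handshake above can be sketched in text like this, with Python's `queue.Queue` standing in for a LabVIEW queue (all names here are illustrative, not from the original post):

```python
# Size-1 ("singleton") queue handshake: the caller's second enqueue
# blocks until the spawned worker has dequeued the parameters, which
# proves the worker is up and running.
import queue
import threading
import time

params_q = queue.Queue(maxsize=1)   # singleton queue: one element max

def spawned_vi():
    params = params_q.get()          # worker consumes its parameters
    time.sleep(0.05)                 # ... real work would go here ...

params_q.put({"rate_hz": 10})                 # 1. enqueue the parameter cluster
threading.Thread(target=spawned_vi).start()   # 2. launch the worker
params_q.put("sentinel")                      # 3. blocks until step 1 was dequeued
# Reaching this line proves the worker is alive; the caller can now
# safely destroy the queue and continue.
```

The key property is that step 3 cannot complete while the queue is still full, so no explicit "I'm alive" message from the worker is needed.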
  7. QUOTE(chrisinlf @ Dec 16 2007, 07:18 PM) If you are doing constant communication, then polling is no big deal. Of all the reasons to decide between LabVIEW and C#, that's just about the least important one. Just poll the serial port 10 times per second, and read all bytes available. If you're handling a little data, then 10Hz is no sweat. If you are handling tons of data (which means your baud rate is pretty high) then it's probably way more efficient to process the input at 10Hz than to run your code for every new byte. If you are reading the port in a subVI containing a loop with a wait function, then you may want to wait 0ms on the first iteration in case your data is already available at the serial port. The serial port is asynchronous anyway, so you can't count on any specific timing of the data arrival. That would seem like the main reason to use VISA events. If you simply must be event driven, then just write a polling loop, and put all the characters into a queue as they arrive. Then when NI comes up with a fix, you can replace your workaround.
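The polling strategy reads something like this Python sketch, with a fake in-memory port standing in for the serial driver (with pyserial you would use the port's `in_waiting` and `read` instead; all names here are illustrative):

```python
# 10 Hz polling: each iteration drains *all* bytes currently available
# and hands them to a queue for the consumer. Iteration 0 waits 0 ms
# in case data is already sitting in the buffer.
import queue
import time

class FakePort:
    """Stand-in for a serial port driver (pyserial-like surface)."""
    def __init__(self, data):
        self._buf = bytearray(data)
    @property
    def in_waiting(self):
        return len(self._buf)
    def read(self, n):
        chunk, self._buf = self._buf[:n], self._buf[n:]
        return bytes(chunk)

def poll_port(port, out_q, period_s=0.1, iterations=3):
    for i in range(iterations):
        if i > 0:                     # skip the wait on the first pass
            time.sleep(period_s)
        n = port.in_waiting
        if n:
            out_q.put(port.read(n))   # read everything available at once

rx = queue.Queue()
poll_port(FakePort(b"HELLO\r\n"), rx)
```

Reading all available bytes per pass is what keeps the loop cheap at high baud rates: the per-iteration overhead is paid 10 times per second regardless of how many bytes arrived.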
  8. QUOTE(Shaiq Bashir @ Dec 4 2007, 02:33 AM) Assuming you are using XP, make sure that the folder C:\Documents and Settings\All Users\Application Data\National Instruments\NIvisa exists. I have had several upgraded computers with VISA problems because the upgrade is missing this required folder. QUOTE(Shaiq Bashir @ Dec 4 2007, 02:33 AM) When i right click on any control lets say KNOB and click on its properties. It starts searching something in different folders of my Labview Installation and after taking quite some time, it displays its properties. I dont know why is it doing so? Again this wasnt the problem when i was using 8.1 version of Labview. i would like to bring into the knowledge of you people that 8.1 is still installed on my pc. I dint reinstall it. I just install 8.2, i thought it will upgrade it but the National instruments entry in the "add remove programs" of control panel shows that 8.1 is still there along with 8.2. So is this causing all these problems? Kindly help me as soon as possible! Hmm. Maybe you can try renaming your LabVIEW 8.0.1 folder and then Mass-compiling your VI.lib folder. Some of the VIs internal to LabVIEW (running the properties pages) may have been linked to the older 8.0.1 VIs. Just a guess.
  9. QUOTE(ibbuntu @ Nov 30 2007, 03:45 AM) I'm glad your View SubPanel thing is working. Be sure to post if you needed any special tricks. (Or better yet, some example code!) I would just make a new thread, and put a link to it here, and then in your new thread, start it with another link to this old thread, mentioning that you started a new thread slightly related to the old one.
  10. QUOTE(ibbuntu @ Nov 29 2007, 11:07 AM) Make sure you know about this: http://forums.lavag.org/Dynamic-Cloning-VIs-t8291.html
  11. QUOTE(Prakrutha @ Nov 29 2007, 06:40 AM) Forgive me for jumping to conclusions, but don't you have teaching assistants for your class? If you are really stuck at the beginning, you should get some one-on-one coaching.
  12. QUOTE(MrYoung @ Nov 28 2007, 08:00 PM) We have a big project and none of our builds worked when we tried to use our 8.2 lvproj files. They just failed with blank error descriptions. Everything was fine when we started from scratch (after a couple of wasted days and some tech support). The lvproj 8.2 -> 8.5 upgrade seems to be just hopelessly broken. :headbang:
  13. QUOTE(ibbuntu @ Nov 28 2007, 02:49 PM) Clarity and English will help your design. For example, now you can say, "My 'Instrument' is going to be required to save the common run data from my experiment." Now in the real world, your oscilloscope or your camera are not going to know anything about your specific apparatus, so in the real world, the previous sentence doesn't make any sense. That should tip you off that expecting your child camera class to save your metadata doesn't make much sense either. I'm not trying to be totally harsh about your design, but just trying to make the point that using good names can help clarify your design. QUOTE(ibbuntu @ Nov 28 2007, 02:49 PM) I agree, when I first came up with it I meant it to be an abstract class, but clearly forgot about that in the implementation. I have one problem, which is to do with a "get data" or "set data" function. Now, as each instrument can set its own data internally as in the "capture data" method I mentioned earlier, there isn't a problem. However with the "get data", which would be a method to return the data, there would be a problem as each seperate instrument would output a different data type, which isn't allowed. So they couldn't override a "get data" method from the abstract "diagnostic" class and consequently I couldn't have a dynamically dispatched "get data" method to send my data to a "Storage" class. ... I imagine that due to the problem I described above I will have to go along the route of each "diagnostic" saving its own data, using metadata given to it by the "Storage" class. This is why your data should be private and Aristos Queue was right to design LabVIEW such that public data is not allowed. If you have a routine which needs your class's data, and that data type is going to keep changing, then that routine should be a member of the child class and should be called via dynamic dispatch. 
In other words, what's the point of "returning the data" if the caller doesn't know how to handle it because it could be anything? You should only return a common type, like a Pass/Fail boolean, to a routine which doesn't understand your data. If you need to display the data, then pop up a window which is a member of the class, or use SubPanels which are filled in with a dynamic dispatch VI (I haven't tried that, but it should work). Rendering/Saving/Evaluating the data should be the responsibility of your class, whenever there's no way the caller can know what to expect. Your Storage class should invoke a dynamic dispatch method of Instrument to get the data written to the file. However, I don't know why Storage should pass the metadata to the Instrument rather than writing it itself. Are you going to store the metadata in some kind of comment field in the BMP file? If so, then any code that is going to recall your metadata is going to have to support all possible file formats. I think you would be better off storing the metadata in an application-specific format (like your own 'spreadsheet' file) and then including the name of the data file (BMP, WAV, CSV) produced by the child object. At that point the child is no longer involved since you can sic your Storage class on the problem of writing out that data, which makes a lot more sense, and supports the point I made in the first paragraph. You can say, "the Storage class is going to store the experiment metadata, and invoke instrument-specific VIs to store whatever data is unique to that instrument," and it sounds sensible. QUOTE(ibbuntu @ Nov 28 2007, 02:49 PM) Finally, I'm not sure I understand what you mean by "cast their data to the parent before writing it". I was having a brain spasm. I was thinking you can convert your object to the parent type inside some of the child methods, but the data is still private so that doesn't work. Some languages let you do that and call it protected data, but not LabVIEW. 
You can use an accessor method (Get/Set) inside the child VI to get at the common data, but I think you are better off keeping that data away from your child objects.
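The "return only a common type, and let each class handle its own data" idea can be sketched like this (a Python stand-in for the LabVIEW class design; class and method names are illustrative):

```python
# The caller never sees a child-specific data type. It only invokes a
# dynamically dispatched method that returns a common type (pass/fail);
# each child alone knows how to serialize its private data.
from abc import ABC, abstractmethod

class Instrument(ABC):
    @abstractmethod
    def save_data(self, path: str) -> bool:
        """Write this instrument's private data; return pass/fail."""

class Camera(Instrument):
    def __init__(self):
        self._image = [[0] * 4 for _ in range(3)]   # private image data
    def save_data(self, path):
        # only Camera knows how to write its image formats (e.g. BMP)
        return len(self._image) > 0

class Scope(Instrument):
    def __init__(self):
        self._trace = [0.0, 0.1, 0.2]               # private waveform data
    def save_data(self, path):
        # only Scope knows how to write its trace (e.g. CSV)
        return len(self._trace) > 0

# Storage-style code loops over heterogeneous instruments without ever
# touching their private data types:
results = [inst.save_data("run42") for inst in (Camera(), Scope())]
```

Note that nothing in the loop depends on what each instrument's data looks like, which is exactly why a common "get data" output type isn't needed.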
  14. QUOTE(ibbuntu @ Nov 28 2007, 11:56 AM) Hi Tom: I was just learning about this stuff recently, and I went through the same confusions. You are on the right track, but you have to think in a slightly different way. Diagnostic should be an abstract class. It probably won't have any private data. Actually I would probably call it "Instrument", but I will stick with your terms. Each of your derived objects should inherit from Diagnostic, and override its methods. Since the methods of your abstract class should never get called, you may want to do nothing on their block diagrams except throw an error. None of the standard error codes quite fit, but I chose error 1394 since it's close enough. QUOTE(ibbuntu @ Nov 28 2007, 11:56 AM) Which diagnostics are used depends on the experiment and what's happening in the experiment. So what I want to do is simply have an array of objects of a type which inherits from "diagnostic" and use dynamic dispatching to call all the relevant "capture data" methods. However, because the data is of different types I can't store the data in the parent class easily. At the moment I am converting the data to variant type and storing it in the "diagnostic" class' private data. So my first question is whether this is the right approach for solving this particular problem? No, you store the data in the child classes, which are different, and you don't need any variants. The whole point of this type of design is to get away from using variants. Objects let you use strong typing and still decide at runtime which routines to run. QUOTE(ibbuntu @ Nov 28 2007, 11:56 AM) Not only do I want to capture the data, I also want to store it. Now, each diagnostic should know nothing about how its data is to be stored, as different experiments will want to store data differently but might want to use the same diagnostics as another experiment. So I have another "Storage" object which can be inherited from for a particular experiment. 
So somehow I need to get the data from the diagnostic object to the storage object and have the storage object save it. Again I would like to do this in a loop where each diagnostic passes its data to a "save data" method of the storage object. Here I hit a problem, different types of data need to be saved in different ways, say to a .bmp file, or a csv file, but I want to only call one VI so I can do it in a loop. You have to define the interface somewhere. Usually I would imagine you do this in the derived class. If you have a camera, it's going to give you an image, and it's reasonable for the camera object to understand the handful of image file formats you want to support. If you need metadata which is specific to your experiment, you probably don't want your camera object to know about it, but that is probably a separate object anyway (the experimental data) which hopefully does not need to know what kind of instrument generated the data, otherwise your loop is going to get really messy no matter how you slice it. This common data could be in your parent class, or else it could just be in a different class entirely and passed into the saving routines as a separate input. QUOTE(ibbuntu @ Nov 28 2007, 11:56 AM) I have thought of some solutions to this, but I can't decide which is the best way to go, if any. I thought of having a lot of different storage objects to deal with the different types of data, but then I need some way of identifying what type of data is being wired in and dynamically dispatching on that (don't really know how that would work). That idea also conflicts with the idea of what a storage object is, as it is supposed to be specific to the experiment, with information such as the root directory and run number, this data would then have to be common to all the derived storage classes. 
Another way to go would be to have each diagnostic be responsible for saving its own data, but then this loses the flexibility of being able to save the data differently for different experiments. Also the diagnostic shouldn't have to know about storage specific information, such as the filename and save location of the data, this should be passed to it from the storage object. I would think that each diagnostic shouldn't know about the common experiment data. You could make each of them call a method of the parent and cast their data to the parent before writing it in the header of the file. Another way is more of a database-type approach where each stored object is listed in a master file listing the common data, along with a file name which points to a child-specific file which could be images or text files or whatever. I hope that helps, Jason
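The database-type approach from the last paragraph can be sketched in Python (a stand-in for the LabVIEW design; all names are hypothetical): a master index holds the common experiment metadata plus the name of each child-specific data file, which the instrument itself produced.

```python
# Master-index storage: Storage owns the common experiment metadata
# (root directory, run number) and records one entry per instrument,
# pointing at the child-specific file (BMP, CSV, WAV, ...). The
# instruments never see the common metadata at all.
class Storage:
    def __init__(self, root, run_number):
        self.root = root
        self.run = run_number
        self.master = []    # master index of the common metadata

    def record(self, instrument_name, data_filename):
        """Add one row to the master index for a child-specific file."""
        self.master.append({
            "run": self.run,
            "instrument": instrument_name,
            "file": data_filename,
        })

store = Storage("/data", run_number=42)
# each instrument saved its own file in its own format; Storage just
# indexes the results:
store.record("camera", "camera_042.bmp")
store.record("scope", "scope_042.csv")
```

A real implementation would write `master` out as the experiment's 'spreadsheet' file; the point of the sketch is only that the common data and the instrument-specific data live in separate places.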
  15. I haven't done much of this but 'SubPanels' are LabVIEW's way of doing panels embedded in other panels, specified at run-time. You may also want to search on "plug-in architecture" for ideas.
  16. Do you have a conditional disable structure on the diagram or in a subvi? If you have changed the conditional symbol definition in your project, then the same VI could definitely be working inside the project but broken outside of it.
  17. QUOTE(hugh @ Nov 14 2007, 01:01 PM) Most companies (but not all, I guess) are sensible enough to throw that ton of money at you and get your insight and the code at the same time, rather than sending the same ton of money to Russia and hoping they can tease your algorithms out of the code. Nevertheless, it is reasonable to want to protect trade secrets in your code. I don't know whether subVI names are still accessible from a built EXE. I had thought this potential hole was plugged. Looking a little bit, you can't use VI Server to get a list of all VIs in memory, but you can use it to get a specific VI's callers and callees. If you make a good guess about a vi.lib VI which might have been used, and use it as a starting point, you may be able to rebuild the hierarchy of all of the VI names in a built executable. To get around this, you could write a VI Server routine to rename all of your subVIs with random names right before building. It seems like your time would be better spent finding venture capital partners who can provide funding and lawyers to fight off the poachers. Aside from getting the VI names out of the executable, I don't think there is much someone could learn by attacking the EXE file, but that would really be a question for National Instruments since the system and the file formats are proprietary. Jason
  18. When you build an EXE, your VI diagrams are stripped away, so that gets you pretty far. By nature, someone could put the EXE through a disassembler to see the compiled code, but I can't imagine it would be easy to make sense of a disassembled LabVIEW program. I have no idea what magic your software does, but most people savvy enough to reverse engineer your LabVIEW program would probably not be buying LabVIEW-based EXEs in the first place. Mikael is right that you may want to make sure your VI names don't tell the whole story. Good luck with your product, Jason
  19. QUOTE(jlokanis @ Nov 13 2007, 01:36 PM) You could also use the Map Class (http://forums.lavag.org/Map-implemented-with-classes-for-85-t8914.html), but unfortunately it uses recursion, which is not permitted in LabVIEW 8.2.
  20. QUOTE(Michael_Aivaliotis @ Nov 12 2007, 08:42 PM) Umm, if you author a copyrightable work for hire (in the USA), then the employer owns it. However this is merely the default. That ownership can be transferred or shared by mutual agreement in the form of the employment contract. Money can also be used to grease the skids. If a programmer (or her union) can negotiate a direct royalty or retention of the copyright, then bully for her. AQ pointed out that different industries have different norms on getting salaries, advances, royalties, etc. I suspect that NI has convinced its programmers that they are better off taking salary and stock options than royalties.
  21. QUOTE(Aristos Queue @ Nov 11 2007, 04:58 PM) OK, I get it, but I don't think it's a big deal. The big terminals look like icons, which is why I never liked them. That's what people are confused about. A different idea, which I predict you will not like, is to have LabVIEW always display built-in types with old-style terminals, and always display typedefs and lvclasses as icons, and remove the option. This is close to what I was suggesting before, and by taking away the option, you simplify LabVIEW, and you also display some information which you didn't expose before. Regarding better visual indication of dynamic dispatch VIs, I spoke with my C++ colleague, and I understand better why the class's user is not supposed to care whether or not a method is dynamically dispatched. However when you look at the definition of a C++ function, you can see right away whether it is declared virtual, but in LabVIEW it's a lot harder to see that a certain method is dynamic dispatch. I think the block diagram wire for that object has a funny gray 'sleeve' around it but it took me a while to notice that; it's very subtle. And there is the dotted line around the terminal pane connections (which is very, very helpful), but I still think there is room for more hinting about the nature of the VI. Maybe something is needed in the project view, or the help window. I still think it would make life easier for class authors (and code reviewers) to see this behavior at a glance rather than digging around for it. I also think some kind of display of reentrancy and recursion would be extremely helpful. Thanks for listening, Jason
  22. QUOTE(bono02 @ Nov 12 2007, 01:26 AM) SubVI calls should take much less than a microsecond. They are very efficient. Do NOT use that as an excuse to let your VIs do too much. Your loops will be governed by your I/O speed, not your diagram code, unless you do bad things with memory management (changing/building arrays in a fast loop), and even then you might get away with it. With a motion control system, your moves will probably take 0.5 - 60 seconds. If it is on the long side, you will need to be able to cancel the motion. The old fashioned way to do it was to make it a state machine, looping at 10Hz or so, and check your stop condition, all of your GUI, and the state of your I/O. This VI would get very big and hard to debug, and would have to run at a fixed rate to handle everything in your system, often resulting in some performance compromises. What I am suggesting is that you write a very simple VI to start it, and another simple VI to poll for completion, checking the I/O and also a notifier from your GUI (the "cancel motion" notifier), and any other termination conditions you need. After it works, do the next one, and then when all those are done, make your top-level application. Make sure you understand your 1 µs requirement. It is probably impossible to write a PC-based application with reliable 1-100 µs deterministic closed-loop performance. RT will help, but your I/O calls will still be slower than that (though NI has been working on this since I last tried), and so you won't have much time for processing. In motion control, your physical processes (motion stage inertia etc.) are usually slower than this, so I don't understand what you need. If you are measuring signals, you can use NI hardware-timed acquisition which is much faster, but only process the results at 100Hz or so and then there would probably be no need for an RT system. Without more info about your application I just couldn't say any more. 
QUOTE(bono02 @ Nov 12 2007, 01:26 AM) Did you create a user event for passing messages from both layers? What do you think about the performance? I learned from s/w engineering lecture, design is the most important part! I agree with this, but I have to race with the small time that I have. I am also not used to design everything in detail, just small chunk of algorithm, I usually write down first on paper, it helps me to write the code faster. Well, certainly do what works for you. But when I try to write the GUI or the top operational level first, I usually get it wrong. Even if I think I understand the problem, I usually don't. By coding the low-level pieces and beating on them until they work, I usually gain an understanding of what matters and what doesn't and then when I am ready to write the GUI, I know what to do. QUOTE(bono02 @ Nov 12 2007, 01:26 AM) Ok, here we also have NI-Elvis. Not me is using it, but my friend. He's teaching undergraduate students. He told me that Elvis is too slow, too many top-level VI which slow the flow of the program. From your experience, what is the maximum level for a sub-VI should be? I don't know anything about NI-Elvis. If their code is too slow, then it is a design issue, not a problem with excessive VI calls. It is what they are doing in those VIs which must be the problem. I'm not sure what you mean about maximum level. If it is nesting then there is no max. QUOTE(bono02 @ Nov 12 2007, 01:26 AM) I have tried the LV FSM toolkit, but for me I don't like it, I can not read the program clearly (maybe because it is using stacked sequence). I am afraid that once I code, a single small change will ruin everything. My sentiments exactly. In C it was easy, you can insert any code in any line you want, but in LabVIEW I feel it is a little bit difficult. 
Thank you for your suggestion, I will try to use the producer/consumer coding technique. 3) Thanks, I am using two monitors now, though I connect to the PXI controller using remote desktop! But still the zoom in/out function I think is more comfortable, hehe.. 4) Ok. Thx. Note: Why does my reply always end up merged with my reply to Jason? That's on purpose, but it also removed your text formatting, which is a bug. Read more about both here.
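The start-then-poll pattern from the first paragraph of this post can be sketched in Python, with `threading.Event` standing in for the hardware's completion status and the GUI's "cancel motion" notifier (all names are illustrative):

```python
# One small routine starts the move; a second polls for completion
# while also checking a cancel notifier from the GUI and a timeout.
import threading
import time

cancel = threading.Event()      # GUI sets this to abort the motion
done = threading.Event()        # driver/hardware sets this when finished

def start_move():
    def fake_motion():
        time.sleep(0.05)        # stand-in for a 0.5 - 60 s physical move
        done.set()
    threading.Thread(target=fake_motion).start()

def poll_until_done(period_s=0.01, timeout_s=1.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if cancel.is_set():
            return "cancelled"
        if done.is_set():
            return "complete"
        time.sleep(period_s)    # a ~10 Hz poll is plenty for a GUI
    return "timeout"

start_move()
status = poll_until_done()
```

Because start and poll are separate small routines, the top-level application can interleave other work (or other axes) between polls instead of living inside one giant state machine.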
  23. QUOTE(bono02 @ Nov 11 2007, 06:57 PM) You mean there's another one somewhere? Anyway, welcome to LabVIEW! Here are my opinions, but I can guarantee they are not universally shared. I hate state machines. Sometimes they are necessary, but most of the time they are not. I do like a producer-consumer architecture for handling events. Events, UI or programmatic, can happen at any time, and you need to handle them promptly (less than 1/4 sec) so that your user feedback is snappy. However the part of your code which actually gets stuff done is often slow (more than 1 sec, often way more), so you need to let those operations run their course, undisturbed, in some kind of dataflow VI. A corollary to this is to keep all of your user interfaces out of the operational code. Each VI gives you a user interface (awesome for testing) but you should resist the temptation to handle user interface needs with those VIs which are actually operating your hardware. Use your producer-consumer architecture to pass all the messages between those two worlds. I like to use notifiers to send progress updates and results back from the operational code to the presentation layer. However if you are brand-new to LabVIEW, feel free to ignore all of that and show your results right on the operational VIs. I don't use UML, I don't design on paper first, and I don't start by roughing out the GUI. Those are worthy techniques, but I like to do a bottom-up design. Code up all of your functionality one VI at a time, and stick to one piece of I/O at a time. If you need to read and write from files, write a VI for each of those. If your motion control moves in a straight line, write a VI to do that and one to monitor your motion status until it is complete. If you are looking for the limit switch, write a VI to detect that, and then another one to wait for the limit switch with a timeout function. Then put those together for whatever complex motion/measurement function you are trying to achieve. 
If you use the RS232 port, make a VI for each message you send, and include a check for errors or return messages in that VI. When you are all finished, you will have a much better understanding of what user interface you will need to set your tasks in motion, monitor the progress, and display the results. Then you can build the GUI and figure out what you need to add to the operational VIs to get the information back and forth. If you do all of this, your VIs should never get so big and unwieldy (though we've all been there), and you shouldn't need to zoom in or out. As a C programmer, you have one file for each module, but remember, LabVIEW is one file per function, which is one layer down. You have to resist the temptation to stuff lots of functionality into each VI, and don't be put off by having to make an icon and a terminal pane for each function. It just doesn't take that long. Don't let any of the diagrams get too big or have too many structures: While, For, Case, Sequence, etc. If you have more than 3 structures in a VI, you should think about why you are trying to do so much and consider making each structure the basis of a new VI. If you are doing motion control and anything else interesting, and following my advice, you should end up with between 20 and 100 VIs. Keep them organized and give them all sensible names so you can find them. OK, I'm done for now. Good luck! Jason Dunham
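The producer-consumer split described in this post can be sketched in Python, with queues standing in for LabVIEW queues and notifiers (names are illustrative): the UI side stays responsive because it only enqueues commands, while the operational loop does the slow work and reports back.

```python
# Producer-consumer: one queue carries commands from the UI to the
# operational code, and a second queue (the "notifier") carries
# progress/results back the other way.
import queue
import threading

commands = queue.Queue()    # UI -> operational code
progress = queue.Queue()    # operational code -> UI

def operational_loop():
    while True:
        cmd = commands.get()
        if cmd == "quit":
            break
        # the slow hardware work (seconds) would happen here, safely
        # out of the UI's way
        progress.put(f"done: {cmd}")

worker = threading.Thread(target=operational_loop)
worker.start()
commands.put("home_axis")       # UI events arrive at any time...
commands.put("move_to_limit")   # ...and are handled promptly
commands.put("quit")
worker.join()
results = [progress.get_nowait() for _ in range(progress.qsize())]
```

The UI never blocks for longer than an enqueue, so user feedback stays snappy no matter how long each operation takes.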
  24. QUOTE(Aristos Queue @ Nov 10 2007, 04:48 PM) Well I was using different wire patterns to show the different clusters, but I agree it could get confusing. The problem with pastels is that the bundle/unbundle nodes use the wire color for the text, and with pastel colors they are unreadable. The other LabVOOP work I had done recently was basically a factory pattern use of classes, and so I have gray for the abstract class, and orange, blue, and green for the three different types of my object, and there aren't many other objects running around those diagrams, and it works very well. There are so many ways to use lvclasses that I think this will remain a case-by-case design decision. QUOTE(Aristos Queue @ Nov 10 2007, 04:48 PM) I've discussed elsewhere several times why dynamic dispatch subVIs aren't called out on the diagram by some glyph, or halo, or something. Search those posts out. Summary: When reading the caller diagram, I really don't care about the underlying implementation of the subVI. Whether it is a static dispatch subVI or a dynamic dispatch subVI is irrelevant -- the icon should tell me what the call does, not how it does it. Similar commentary applies to recursion. Why don't we change the name of functions that call DLLs? Is it important that the caller knows that the subVI uses an external library? No. These are implementation details. If a LabVOOP user develops a brand new set of data types for a LV toolkit and gives them to the LabVIEW user to use, the LV user shouldn't have to understand or be distracted by concepts of dynamic dispatch that have no impact on his/her use of the toolkit. Same goes for recursion. Or DLL calls. (A counter example: When a subVI is an implementation of a LV2-style global, that *is* something that should be noted on the icon. It impacts the caller to know that this VI has global effect, and its value may be affected by other callers.) Actually, I couldn't find those other posts, but I get your drift. 
I believe we have different purposes here. You are concerned about the API of your class, and how to document it, and I'm glad that you care. I care about the readability of my diagrams, and those of the other developers whose VIs I am code-reviewing. After many years of LabVIEW, I can look at most diagrams, and run the code in my head. If stuff is hidden from my eyes, then I can't see the bugs. With a dynamic dispatch VI, the VI I'm looking at on the block diagram may or may not be the one which executes in any given iteration. I have to double-click on the VI to figure out whether seeing is believing. Now it's an order of magnitude longer for me to figure out how this VI works (not the subVI, but just how the caller is working and whether it's working). I'm on the prowl for bugs, so if there are many subVIs wired to some object's wire, then a visual indication of which ones are static and which are dynamic will cue me to double-click on the ones which have to be opened to get the full story. I don't think your argument about the end-user holds much water anyway. In the Map Class we worked on, the end-user API did not have any dynamic dispatch VIs. If you expect the user not to know or even understand them, then it's inappropriate to include them in the API without some wrappers, which is what you did. If they need to use a VI which is dynamically dispatched, you have to make it as clear as possible, because honestly it's always going to be more confusing than the use of a static VI. For example in the Map Class, if you change the key type from string to anything else, you have to make sure the CompareKey.vi function, which is dynamic, still works with your new Key Class, so you have to get all dirty in it. If you have a dynamic VI on a diagram, I don't believe you can be shielded from how it works and still make and debug your VI. (I actually did leave it shielded with a static wrapper, which I totally regret now). Recursion is pretty much the same. 
In your original implementation of Map Class, Map:Insert Pair.vi is not recursive. It calls Node:Node Insert Pair.vi, which may or may not dispatch to itself or to Branch:Node Insert Pair.vi. That in turn calls Branch Node:Insert on Left/Right.vi (statically), which then makes a recursive call to Node:Node Insert Pair.vi. All those icons look pretty much the same, and honestly it took me the better part of an hour to figure out what was going on. I actually started with some recursive mouse clicking of my own before I realized I was going in circles. My time was wasted because the block diagram doesn't go to any effort to illustrate these relationships, even though that's its prime directive. It would have been great if I could see that the first call to Node:Node Insert Pair was not recursive (it initiates the recursion), and the other call is recursive, which could only be accomplished with an overlay. I added a glyph to my icon, and did some renaming, so I could at least flag that the situation is confusing and needs close inspection when debugging.

QUOTE(Aristos Queue @ Nov 10 2007, 04:48 PM) QUOTE(jdunham @ Nov 9 2007, 05:00 PM) 6. Use old-style BD terminals for built-in data types (including the built-in generic object), and newer icon terminals for lvclass terminals. That doesn't make sense to me. Use one style or the other. Several users have told me that when the types are mixed on one diagram, they get confused about what is a node and what is a terminal. I know it has happened to me too. I find the large icons have lots of useful information for non-LVClass types -- the conpane of VI References, the type of refnums, and typedef identification. It's a style choice whether you like large or small, but pick one. Anyone who learns to read the G language is going to find out in short order that the block diagram terminals can be displayed with or without icons, and that the two are equivalent.
At this point there are several million VIs with the old style terminals, and most new NI examples seem to use the new ones and there are tons of dead trees in Amazon.com's warehouse with pictures of each kind. So any confusion is already out there and I can't fix it. So then the decision boils down to which view conveys more information, and what conventions lead to the quickest code ingestion. For my part, I am only going to use the icon, which is very eye-catching (too much so, IMO) when it cues the reader that this terminal is something out of the ordinary. So I might use it for an interesting typedef which relates to my application, and probably for most lvclasses. All of my enums are typedefs, but that big icon is not going to add any insight into the code (and doesn't even prove it's a typedef) so I'm going to leave it small. The error cluster is the error cluster, and using an icon that is enormous is not going to help me understand that. But the lvclass icons really pop, and it was helpful for me to easily differentiate them on the diagram using the icon view. You suggest that I pick one style, but by using both I can convey even more information, namely, which terminals are "interesting" and which are not particularly so. I used to think the new icons were pointless but now I am a believer. Thanks for helping me clarify my views, Jason
  25. Zoinks! I was just refactoring an application and converting some typedef clusters to lvclasses, now that I am getting more accustomed to LabVOOP. At the same time I was reading a stale info-labview post by Stephen Mercer (actually posted by Matthew Brooks on 5OCT2007) about how great LabVIEW is QUOTE It hit me like a thunderbolt. The price I would pay for making all of my cluster typedefs into lvclasses is losing this feature: the ability to test any VI by typing in the values and pressing the Run button (or using Suspend When Called and making a few changes). However, you can't enter data into an lvclass object on a panel, because it's just an icon instead of a cluster. How do you test your class's methods without writing twice as many VIs? Even within a class, I can't open up the private data cluster (the .ctl file), copy its contents, and paste them into any other icon of that lvclass. I can probe the wire and see its contents, but similarly, I can't copy from the probe into an lvclass control, even in the same VI. Even if the probe copy worked, it would not be sufficient, because I need to be able to modify the data easily. I have to make a new private method to populate the class with test data and copy/paste from that, or write a test wrapper. The copy/paste problem is bad enough on its own, and even a workaround for it would still be a poor substitute for direct data entry. At this point, I just can't see converting my large application to LabVOOP without a fix for this. The other developers on the project will have my head on a plate if I take away the ability to test the subvis and to use Suspend When Called to view and modify the data during runtime. With traditional typedefs and enforcing encapsulation on your own, you get almost all of the benefits of LabVOOP, and you can still leverage the power of LabVIEW (in other words, you can get your work done).
This is my suggestion: if a VI is owned by a class, then its front panel should allow you to see and use the private data cluster instead of the object icon. Extending this idea a bit, one should be able to make a reasonable user interface out of an lvclass method VI, which should behave like a traditional (non-LabVOOP) VI with a strict typedef cluster. Whether the cluster is displayed when the lvclass is locked could be a right-click option. If the class is used in a public scope (or as member data of another class), then the icon should be shown like it is now, and the user should only be able to copy and paste the data. In that case there ought to be an easy way to open up a new VI, add it to the class, and paste the contents of the clipboard so that the value can be seen, modified, and pasted back to the public VI. It would be even cooler if you could crack open an unlocked class's icon and modify the data when necessary, but that sounds hard, whereas the other suggestions should be pretty easy to add to LabVIEW.

I still plan to use some LabVOOP for some by-reference objects, like the Singleton design pattern, since those are hard to debug on front panels and I'm pretty comfortable with the queue and notifier functions. But I think this makes lvclasses unusable for by-value objects, which are otherwise a great idea.