Everything posted by drjdpowell

  1. Ah, I see, you have "internal to the UI component" messages in addition to messages such as "GoToCenterPosition" that are sent out of the component. Personally (illustrating alternate possible styles of programming that might use LapDog) I would probably try to write such a UI in a way that combines your top three loops in a single event-driven UI loop (using a User Event instead of a queue). This would eliminate "Inp:LoadBtnReleased" messages entirely. Your way is more flexible, I imagine, and allows the full use of Queue functionality (so far, I'm happy with the limits of User Events). -- James BTW: is that a timed loop that reads a global variable? This is not your preferred architecture, I would hazard to guess?
  2. My development group of one is very good at standardizing. The advantage of having a "VarMessage" as a standard part of the library is that you could add it to the enqueue polymorphic VI (in analogy to how you currently allow data-less messages), simplifying the wiring for those who use Variant messages. One can easily extend the polymorphic VI oneself, but then one has a customized LapDog library, which makes upgrading to a new LapDog version trickier. Command Messages are different, I think, because they are inherently non-reusable (unless I'm mistaken, one would have a different tree of command messages for each application). A VarMessage might also be an easier way into LapDog messaging for those used to text-variant messages. My experience is limited to much smaller projects than yours, and they are scientific instruments where one does need direct control of many things. And "abstraction layers" seem less attractive if you're the only person on both sides of the layer. Also, I was more imagining a bottom-up approach, where the meaningful process variables are propagated up into the UI control labels. And one isn't constrained to do this; one has the flexibility to abstract things as needed. Currently, for example, I'm implementing control of a simple USB2000 spectrometer. I've written it as an active object that exposes process variables such as "IntegrationTime". In my simple test UI, I just dragged the process variables into the UI diagram, turned them into controls and wired them to send messages in the generic variants way I described in my examples. In the actual UI, which is a previously written program from a few years ago, the IntegrationTime message is sent programmatically based on other UI settings. Making a specific IntegrationTimeMessage class would have made writing the test UI much more work, without gaining me anything in the actual UI. BTW, you don't send "MyButtonClicked" messages, surely? Isn't that exactly the kind of UI-level concept ("Button", "Clicked") you don't want propagating down into functional code? I certainly see the advantages of the "one message, one class" approach. I'm just arguing for variants as a better generic approach over "one simple data-type, one class". -- James
  3. Yes. Yes. Huh?!? What's this got to do with a default state? The last time I looked at your code, the QSM structure was set up such that it always enqueued another state. Normally, the QSM doesn't do this. When there is no further state in the queue, the dequeue just waits until a new command comes in (or it hits the defined timeout). Thus, these QSM designs sometimes have "timeout" states, but they never have "idle" states, nor is "default" used for anything other than catching typos. OK, I should have read your whole post before starting to reply. So, yes, that's better. But as I've pointed out before, you're developing your QSM design in the middle of trying to get up to speed on dynamic launching AND debugging what I'm sure is a complex FPGA/imaging project. Doing all that at once is fraught with difficulty. -- James
  4. Well, you're already using generic message "names" that are text and can thus be misspelled; you'll need some mechanism to handle such issues, such as producing "unknown message" errors in the receiving code. The same error will be triggered the first time the code runs with a changed control label. New developers will quickly learn to use something called a "caption" instead of messing with the label. That brings up a question only you can answer: is LapDog intended to support and encourage a particular style of design, or is it to be of more widespread use to developers with differing styles? Personally, I would think having the control label match the name of the process variable controlled, with generic code connecting them, is an advantage for readability and testing, no? Note that using generic code for some controls doesn't preclude individual treatment of others (the "all controls" example I gave is just an example). Well, compile-time checking isn't possible at all in a messaging system, is it? If you send a U32 message to a loop expecting I32, or "BeginProces" to a loop expecting "BeginProcess", you'll learn about this error at runtime. Similarly for your "SetTemp" to 20psi message. BTW, it was the ground software, used to calculate how long to run the thrusters, that had the wrong-unit bug. The Mars Orbiter computer executed its suicidal instructions flawlessly! -- James
  5. Hi Alex, I think you're being way too ambitious, and trying to develop many advanced concepts simultaneously. Personally, I couldn't learn and use dynamic VIs, queues, QSM architecture, etc. on top of learning FPGA and your imaging equipment itself. Introducing one new thing is a good learning experience; introducing several is a terrible experience, as you'll never untangle the nest of interacting bugs. And it's heavily over-engineered. This code is to collect images and display and/or save them, right? Why does it need six separate loops (four dynamically launched) to do this simple sequential thing? For example, your "Listen to Image FIFO Stream" loop loads its image data into a set of three queues and sends it to your "Display and Save Image Data" loop; you could easily do that in one VI and save yourself the (buggy, BTW) implementation of the queues. You could probably do this program with a single QSM, or at most a QSM with one extra loop controlled by a single notifier (as I suggested in your other thread). The best course of action I can suggest is: 1) Get a basic prototype working that does the basic image collection and display functions you want. ONE VI, ONE loop, no QSM, no architecture, no queues, no control references. Simple data flow only; it's just a prototype. 2) Use that experience to get a set of basic subVI's such as "Get Image", "Save Image", "Trigger", etc. (no loops internally; these just do one action each). At this stage, think about clustering related information (type-def clusters or objects). 3) NOW start fresh on an architecture, using your new "driver" of subVI's. It would be best to use someone else's already debugged, and well thought-out, template (such as the JKI-QSM, which is what I use). I suspect you might only need one loop (with the Trigger->Get->Save steps in the timeout case of the JKI-QSM), but if not, use a separate loop on the same diagram controlled by ONE incoming communication method (not multiple polled control references and global variables). If you want to continue with your current monster, here are some issues I can see: 1) In the QSM design, every state calls "idle", including "idle", which causes the loop to execute the "idle" state (which does nothing and has no delay) at maximum speed, monopolizing the CPU (what's the "idle" state even for?). 2) Your three-queues-to-send-one-image design is buggy, since the queues can get out of step with each other when errors occur. Also, your queues will fill up and crash the program if the receiving "Display and Save Image Data" VI isn't running. And "Display and Save Image Data" will display and save default data if it doesn't receive an image every 5 ms (the timeout you've added to each queue). 3) Your "Stop Stream Refnum" design neither starts, nor stops, your "Listen to Image FIFO Stream" VI. It doesn't actually do anything. As I said, simultaneous bugs on multiple levels are very difficult to work through. Personally, I only use dynamic VIs via a template (worked out and debugged on simple test cases), and use someone else's template for a QSM (JKI). Combined with an initial prototype to learn the new functionality (often I just play with the example VI's provided), this makes debugging manageable. -- James
  6. Just got LabVIEW 2011, and it is because of small differences in the values of the floating point numbers. Increase the display format to 19 significant figures and you'll see you are actually asking if 1 mm is equal to 1.000000000000000890 mm. The correct answer is "no". In general, one should never use the "equals" operation with floating point numbers, because an insignificant (to us) difference is still a difference to the computer. Instead, one should subtract the two numbers, take the absolute value of the difference, and see if that is less than some small "tolerance" number.
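     A minimal sketch of that tolerance comparison, written here in Python since text code is easier to show than a LabVIEW diagram (the tolerance value is only an illustration, not a recommendation):

        # Never compare floats with "equals"; compare the difference against a tolerance.
        def nearly_equal(a, b, tolerance=1e-9):
            """True if a and b differ by less than the tolerance."""
            return abs(a - b) < tolerance

        print(0.1 + 0.2 == 0.3)               # False: an insignificant difference is still a difference
        print(nearly_equal(0.1 + 0.2, 0.3))   # True: equal to within the tolerance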
  7. Alex, your project zip is missing the two dynamically-launched VI's. Can you upload a new zip that includes them? If so, I'll have a look at it.
  8. Hi Alex, It would be better if you continued your original topic, rather than starting a new one. Conversations like this serve as a resource for later readers (I've learned lots from reading conversations on LAVA), and splitting up the conversation across many topics makes it confusing and less readable. While dynamically launching a VI as a parallel process ("daemon") certainly works, it's a bit tricky and often overkill for what you need. I would really recommend you use a simpler solution with separate loops on the same block diagram, with queue/notifier/UserEvents connecting them, like the Notifier-to/UserEvent-from design I suggested in your other topic, which does everything you want. Note that you can easily convert your simple solution to a dynamically-launched VI at a later date, but this is mainly worth doing only if you want to reuse the component in another program or have the ability to "launch" an arbitrary number of them. -- James
  9. The notifier can pass the information. My example shows a notifier with just a boolean, but you can instead use a cluster of information needed by the consumer loop. I've done exactly this kind of thing in the past. Use a User Event (again, a cluster) to pass the results back to the JKI loop event structure. -- James
  10. Excuse my ignorance, but isn't this an extremely common pattern, outside of OOP as much as in it? It's just a simplifying "wrapper" API around a more complex lower-level API. Many "LabVIEW drivers" are facades of other APIs such as ActiveX, VISA, or some dll or other. -- James
  11. I don't have LabVIEW 2011 and can't open the VI's; but could this be an issue with floating point numbers and the "equals" operation? One should never do "equals" with floating point, because two numbers from different calculations can be a tiny bit different even if the calculations should mathematically be identical. -- James
  12. It might be better to use a notifier: This design will start and stop the inner loop with the same notifier (and destroying the notifier in the upper loop will shut down the lower loop when you want the program to stop). Using a User event is more complicated. -- James
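     Roughly the idea in text form (Python, with a queue standing in for the notifier; the names and the stop sentinel are illustrative assumptions, not the actual design in the image):

        import queue
        import threading
        import time

        notifier = queue.Queue()   # stands in for the LabVIEW notifier
        STOP = object()            # destroying the notifier ~ sending a final stop sentinel

        def inner_loop():
            running = False
            while True:
                try:
                    msg = notifier.get(timeout=0.1)   # wait briefly for a new notification
                except queue.Empty:
                    msg = None
                if msg is STOP:
                    break                 # upper loop has shut everything down
                if msg is not None:
                    running = msg         # the same notifier both starts and stops the work
                if running:
                    pass                  # one iteration of acquisition/processing goes here

        worker = threading.Thread(target=inner_loop)
        worker.start()
        notifier.put(True)     # start the inner loop
        time.sleep(0.5)
        notifier.put(False)    # stop (pause) it
        notifier.put(STOP)     # "destroy the notifier": shut the lower loop down
        worker.join()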
  13. I'm not sure I understand. The event loop figures out which control triggered the event and sends a different message for each control ("Set>>{control name}" in my example). Now, this example is only for controls that map well onto the required messages that the UI needs to send (User changed the pressure to 50PSI? --> Send "Set>>Pressure; 50PSI" message). If the relationship is more complex, then one needs custom logic in the UI for each control. -- James
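     In text-language terms, the event-loop logic amounts to something like this (Python sketch; the send function and control names are hypothetical):

        def send(message_name, data):
            # stand-in for enqueueing a message to the functional loop
            print(f"{message_name}; {data}")

        def on_value_change(control_name, new_value):
            # the event structure identifies which control fired; build a generic message from its name
            send(f"Set>>{control_name}", new_value)

        on_value_change("Pressure", "50PSI")    # -> Set>>Pressure; 50PSI
        on_value_change("Temperature", 21.5)    # -> Set>>Temperature; 21.5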
  14. Tell that to the people who flew the Mars Climate Orbiter INTO the planet, rather than into orbit, because one of their software packages was outputting in pound-seconds what the rest of the program thought was Newton-seconds. It's a Volt-Kelvin. Volt = Joules/Coulomb; Joule = kg m^2 s^-2; Coulomb = Amp-second --> Volt = kg m^2 s^-3 A^-1. The base unit is often unintelligible to humans, but as soon as you create an indicator and set it to "Watts", or do any math operation that requires consistent units (add, subtract, greater than, etc.) you'll get a broken wire and realize you multiplied the wrong things. And you'll get an error from your power module if you send it a VarMessage to set power at 1668.9 Volt-Kelvin. Basically, the computer is too dumb to tell you your unit is weird, but it's far better than you at identifying when the physical dimension is not the same between two quantities. Units can be some trouble, and internal to a module you might want to not use them, but for public communication between modules (possibly written at different times or by different programmers) they extend the bug-preventing value of type-checking to physical dimensions and eliminate the need to remember what units other modules expect things in. -- James BTW: the link to the prior conversation is actually here.
  15. Here's another modification of my example of Variant messaging, this time to send an Enumerated Constant to another module that doesn't actually know what the Enum's definition is. Imagine the sending module has a configuration Enum that it sends to the UI module to allow the User to select a configuration. Coupling would be looser if the UI didn't need to depend on the definition of the Enum. We can do this by having the UI module use the OpenG "Get Strings from Enum" VI, and setting a Ring control to match the Enum's strings. When the User selects a configuration, the UI sends back the corresponding Ring value (U16), which is automatically cast back into the Enum by the "Variant to Data" primitive in the sending module: -- James
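     The same decoupling, sketched in Python (the Enum definition and names are invented for illustration): the sender publishes only the strings, the UI returns an index, and the sender casts it back into its own Enum.

        from enum import IntEnum

        # Sending module owns the Enum definition.
        class Configuration(IntEnum):
            IDLE = 0
            CALIBRATE = 1
            MEASURE = 2

        # Send only the strings (like OpenG "Get Strings from Enum"), so the UI never
        # needs the Enum type itself.
        strings_for_ui = [c.name for c in Configuration]

        # UI module: a Ring-like selector built from the strings; the User picks one.
        selected_index = strings_for_ui.index("MEASURE")

        # Sending module casts the returned value back into the Enum ("Variant to Data").
        chosen = Configuration(selected_index)
        print(chosen)    # Configuration.MEASURE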
  16. In practice I wouldn't register all controls on a front panel like that; instead, controls (or groups of controls) which need special code would have their own case. But multiple controls can be registered for the same case, either statically or dynamically as an array of references. A DBL with an attached unit is a different type than a DBL (no unit). The wire will break if you wire them together (or if the units are incompatible). So to use units with messages you need separate message types for each base unit. Or use a variant. You could just not use units, sending raw DBL messages and relying on the receiver to know what the unit is, but numbers with units are useful for preventing bugs. Probably, is there a LapDog discussion group? -- James
  17. Could be worse; look at a project of mine from a few years ago, before I learned anything about architecture (the labels you can read are in 106pt font!!!): -- James PS: One thing you need to attend to, though, is your wire routing. You have many places where your wires go under other structures in very confusing ways (I think one wire is running backwards under another wire!). Learn to use the "Clean Up Wire" menu option.
  18. I'm afraid I'm not getting much further. I can upload text into the binary container (though not download it) with ODBC, but trying to upload binary (ADO datatype adBinary or adLongVarBinary) returns "Invalid type". I'm using the Database Toolkit. The admin who runs the Filemaker hasn't been able to do it with ODBC (non-LabVIEW) but he has uploaded binary with JDBC.
  19. Here's an image of a recent large (for me at least) project: This is the top level of the program, which is a "Database Viewer" for displaying several different types of measurement records in a database. It is based on a free template available from JKI, a form of something known as a "Queued State Machine" or QSM (though Queued Operation Machine would be a more apt name). In a QSM, a large program is organized into various frames of the outer case structure, with some queuing mechanism for calling the frames in order, and some kind of memory for organizing data available to all frames. Google "JKI state machine" for more info. Not so obvious, but seen in the image is another technique: clustering closely-related information on one wire and having a set of subVI's that act on that wire. This allows the top-level diagram to be much clearer and simpler. The best way to do this is with LVOOP "Classes", though one can instead use type-def clusters. In the image you can see the use of a "Record" class, and most of the complexity of the program is hidden in that class's subVI's. -- James
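     For readers who haven't met the pattern, a very stripped-down queued-state-machine loop might look something like this in a text language (Python; the state names are invented and this is only the skeleton, not the JKI template itself):

        from collections import deque

        states = deque(["Initialize"])   # queue of operations to execute, in order
        data = {}                        # "memory" shared by all frames

        while states:
            state = states.popleft()
            if state == "Initialize":
                data["records"] = 0
                states.extend(["Load Record", "Display"])   # enqueue the next operations
            elif state == "Load Record":
                data["records"] += 1
            elif state == "Display":
                print("displaying", data["records"], "record(s)")
            else:
                raise ValueError(f"unknown state: {state}")  # catches misspelled state names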
  20. I've tried adding a dynamically-dispatched method called something like "Text Description" to my classes, which outputs a human-readable summary of the object data. "Text Description" in child classes calls the parent's "Text Description" and adds to or modifies it, building it up (possibly through several levels of inheritance). Then this method can be used in a single probe that works on all child classes. There are some probes of mine in this image from another conversation: The probes themselves only work with Text. It works well if each level of the class hierarchy isn't too complicated, and can be meaningfully summarized in a few words. -- James
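     The idea, sketched as Python classes (class and method names are hypothetical): each child's description calls its parent's and appends to it, so one generic text probe works for the whole hierarchy.

        class Message:
            def __init__(self, name):
                self.name = name
            def text_description(self):
                return f"Message '{self.name}'"

        class TemperatureMessage(Message):
            def __init__(self, name, degrees_c):
                super().__init__(name)
                self.degrees_c = degrees_c
            def text_description(self):
                # call the parent's description and add to it
                return super().text_description() + f", {self.degrees_c} degC"

        def probe(obj):
            # a single text-based probe that works on any class in the hierarchy
            print(obj.text_description())

        probe(Message("Stop"))
        probe(TemperatureMessage("Set>>Temperature", 21.5))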
  21. I guess one of my problems is that my own messaging design has too many polymorphic VIs to add to. In the quest to make the wiring simple I have polymorphic VIs for Write(create), Extract(get), Send, Reply, Query, and Notify Observers. So using the inbuilt polymorphism of the variant-to-data primitive is attractive. For example, here's a quick rewrite of the lower receiving loop in my previous example, where the "Set>>..." messages set the internal data values of a cluster in a shift register (a form I use a lot): To extend this to accepting Set>>Temperature messages, I just need to duplicate the frame and select the right element of the shift-register data cluster. I don't need to change the message type. Or alternately I could use the OpenG Variant tools and write one case to handle all "Set>>..." messages: The "Set>>" subVI: Now I can add new "settable" items to the shift-reg cluster (and drop corresponding controls in the UI) without any new wiring to do. -- James Oops: just noticed I left the "Get Var" subVI with the original "Get LVObj" icon; hope that isn't confusing.
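     The "one case handles all Set>> messages" trick looks roughly like this in Python (the settable names are illustrative): the trailing part of the message name is used as the key into the shift-register cluster of values.

        # The "shift register" cluster of settable values.
        settings = {"Pressure": 0.0, "Temperature": 20.0}

        def handle_message(name, value):
            if name.startswith("Set>>"):
                key = name[len("Set>>"):]     # "Set>>Pressure" -> "Pressure"
                if key in settings:
                    settings[key] = value     # one case covers every settable item
                else:
                    raise ValueError(f"unknown message: {name}")

        handle_message("Set>>Pressure", 50.0)
        handle_message("Set>>Temperature", 21.5)
        print(settings)    # {'Pressure': 50.0, 'Temperature': 21.5}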
  22. I see the great value in using custom message objects instead of custom cluster typedefs (particularly if you use the "command pattern" and dynamically dispatch off the message), but I'm talking about using variants for standard LabVIEW datatypes. From my (more limited) programming experience, I more often use variants to write "generic" code that handles multiple simple datatypes, not large typedef clusters. Now, a large number of simple-type classes (combined with large polymorphic VIs, as Steve mentions) will perhaps do everything Variants do, BUT isn't this reinventing the wheel? Variants have a lot of nice features, are pre-existing and supported by NI, and have lots of utility code already written for them (love the OpenG stuff). Using Variants lets one interface more cleanly with LabVIEW features that already use variants to represent simple data types. For example, here's some simple code for a test UI, where all front-panel controls are connected to a message queue such that all "Value Change" events on any control are sent out as "Set>>{control name}" messages in a VarMessage: Here, I'm directly connecting to LabVIEW's ability to handle multiple control types in a single event frame, via variants. If I want to add a control, I just drop it on the front panel and give it the right name, and it's done. Writing code to receive variant messages and update the appropriate control (by lookup of the control name) is only slightly more complicated. To do this UI without variants, I would have to stop and create new message types for the pressure and temperature unit DBLs (it's going to be a big polymorphic VI, Steve, once you get to all the different possible units!), then configure an event frame for each control. But again, isn't this reinventing the wheel? Your goal is to send data, and variants are designed to send data. I like things to be as "easy as hooking up a...wire". I imagine one could design a "SimpleTypeVarMessage" that would produce an error if one connected a cluster to it; that might address your dependency issues. Custom messages I see the advantages of, but that seems like a lot of work. Also, isn't the use of custom messages a "dependency" in and of itself? I haven't worked with them enough to know, but it seems like needing custom messages to talk to a module is a dependency. But variants ARE a pre-LVOOP solution to the problem of a wire carrying multiple datatypes. Wrapping that solution in a message class is a lot easier than reinventing it. -- James
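     In text-language terms a "VarMessage" is just a named message whose data field can hold any simple type, with the receiver converting it back to the type it expects; a minimal Python analogue (the class and function names are assumptions, not LapDog's actual API):

        class VarMessage:
            """A message name plus a payload of any simple type (the 'variant')."""
            def __init__(self, name, data=None):
                self.name = name
                self.data = data

        def variant_to_data(message, expected_type):
            # analogue of "Variant to Data": a mismatch shows up at runtime, not as a broken wire
            if not isinstance(message.data, expected_type):
                raise TypeError(f"{message.name}: expected {expected_type.__name__}, "
                                f"got {type(message.data).__name__}")
            return message.data

        pressure = variant_to_data(VarMessage("Set>>Pressure", 50.0), float)   # fine
        try:
            variant_to_data(VarMessage("Set>>Enable", True), float)            # wrong type
        except TypeError as err:
            print("runtime type error:", err)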
  23. Question: Why don't you have a "VariantMessage" Type? You have an I32Message, and one could extend that with U32Message, I16Message, DBLMessage, etc. etc. etc., but that's a lot of message types! Is there a reason not to use a single VariantMessage Type to send all simple data messages, rather than a long list of native types? I ask because I've been experimenting with my own (similar to LapDog) message design, and there seems to me to be a lot to be said for using VariantMessages. -- James
  24. I'm currently trying to upload binary data from LabVIEW into a Filemaker 11 Container field via ODBC and I'm getting nowhere. I wondered if you were able to do this? I can upload non-binary fields fine.
  25. The other two methods aren't precise enough to see the <0.5 ms variation in times. The ms timer can only resolve to the millisecond, and the date/time value is even less precise.