Everything posted by drjdpowell

  1. 'Cause you hit the button again. Stopping the loop doesn't stop the event queue inherent to the event structure. It locks the front panel, as instructed, until the event is handled. That it will never be handled because the loop is stopped is immaterial.
  2. First thing to do is put a Wait in the upper loop, which is currently running at max CPU and starving other processes. That might have strange effects.
  3. It serves as a starting point for adding functionality. For example, if your "Car" needs to always have its headlights on when the engine is on, you would override "Start Engine" and add turning on of the lights to it, while calling the parent method to actually start the engine.
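The override-and-call-parent pattern described above is the same in any OO language; here is a minimal Python sketch of it (the class and method names mirror the forum analogy, not any real LVOOP code):

```python
class Vehicle:
    def __init__(self):
        self.engine_on = False

    def start_engine(self):
        self.engine_on = True


class Car(Vehicle):
    """A Car is a type of Vehicle that adds behaviour to Start Engine."""

    def __init__(self):
        super().__init__()
        self.headlights_on = False

    def start_engine(self):
        self.headlights_on = True  # added functionality
        super().start_engine()     # parent method still starts the engine
```

The override adds its own step, then delegates to the parent so the original behaviour is preserved rather than duplicated.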
  4. To extend the analogy let's replace your numerics with something vehicles have. A vehicle has an engine, and a method to "Start Engine". You've created a "car", which being a vehicle has an engine that can be started, and you've given it a second engine (!) and were surprised when the "Start Engine" method started the first engine. OK, so, why did you add a second engine? -- James BTW. The "parent/child" terminology of OO is actually confusing; "Car" is not a child of "Vehicle", it is a type of vehicle.
  5. I believe so. It's "garbage collection", where LabVIEW frees up the resources of the VI.
  6. Alex, you can guess my advice: do away with the over-complexity of dynamic launching and the like. But ignoring that... Whatever way you do this, it is better to think of the consumer making the data pipeline and getting that reference to the producer, rather than the other way round. The consumer is dependent on the queue, the producer is not. If I were designing something like this with my messaging design shown above, the producer would send its data to an "ObserverSet", which by default is empty (data sent nowhere). The consumer would create a "messenger" (queue) and send it to the producer in a "Hey, send me the data!" message (alternately, the higher-level part of the program that creates both consumer and producer would create the queue and the message). The producer would add the provided messenger to the ObserverSet, allowing data piping to commence. In the below example, the consumer code registers its own queue with the producer and is piped 100 data points. The Actor Framework is rather advanced. You might want to look at "LapDog" though, as it is a much simpler application of LVOOP that one can "get" more easily (I am surprised AQ thinks the Actor Framework is simpler). -- James
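The consumer-registers-with-producer idea can be sketched outside LabVIEW too. Below is a minimal Python analogy (the `Producer`/`register` names are illustrative, not from any real framework): the producer holds an "ObserverSet" that is empty by default, and the consumer creates its own queue and hands it to the producer, so the dependency points the right way.

```python
import queue


class Producer:
    def __init__(self):
        self.observers = []  # "ObserverSet": empty means data is sent nowhere

    def register(self, q):
        # The "Hey, send me the data!" message from a consumer
        self.observers.append(q)

    def publish(self, item):
        for q in self.observers:
            q.put(item)


# The consumer creates the queue and gives the reference to the producer,
# rather than the producer handing a queue to the consumer.
p = Producer()
my_q = queue.Queue()
p.register(my_q)
for i in range(100):
    p.publish(i)
data = [my_q.get() for _ in range(100)]
```

With no registered observers, `publish` simply drops the data, matching the "empty ObserverSet" default described above.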
  7. Hi Ravi, Darren's VI seems to be missing from the R4 zip file. -- James
  8. You might want to instead have a numeric control of the clone number to search up to. Zero would mean disabled, the default could be 100 or so, and Users could increase the number if they were working with higher-numbered clones. BTW, an idea to improve the tool is to group VI's by class/library, like this: Not sure if that's easy to do, but it would greatly improve readability for LVOOP projects that tend to have large numbers of identically-named VIs (as well as not particularly descriptive names such as "Do.vi"). -- James
  9. Ravi, Here, if you would like to use it, is a "Find Clones" VI that searches for clones up to some number. It can be inserted into the list of VIs after filtering. Find Clones.vi
  10. I downloaded the 2011 Actor Framework example (that uses Async Call by Ref) and it keeps increasing the numbers (even though the clones leave memory between runs) but sequentially. Rather strangely though, the numbers 1 and 2 get reused. Curiouser and curiouser... For the older software, with the "Run VI" method, the numbers don't increase.
  11. Not necessarily. If the original clones have left memory when the new ones are created, the numbering seems to start at the lowest available number. It's only when clones stay in memory for some reason (Front Panel still open, for example) that the numbers start marching forward. I was having the issue of Actor.vi clones staying in memory until I saved everything in LV2011 (you're right, it is an older version of your framework that I am running) after which all the clone numbers start from 1 regardless of how many times I restart the application. I don't fully understand the issue, and unfortunately I don't fully understand why LabVIEW sometimes keeps clones in memory.
  12. I don't know your Actor Framework very well, but when playing around with it using the Task Manager I found that the dynamically launched "Actor.vi" clones showed up as "RunTop" and I was able to Abort them from the task manager. Running "Pretty UI" with 2 sensors added.
  13. Here's a quick hack where I test for clones, looking at AQ's Active Object Demo: Searching for clones up to xxxxx:100 doesn't take much time (up to xxxxx:10000 had a delay of about 5 seconds for this project). One could decrease the likelihood of missing clones while still being fast(ish) with a search procedure such as: search up to 100; if clones are found, also search 101-1000; if more clones are found, search 1001 to 10,000. -- James
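The tiered search procedure above can be expressed as a small generic routine. A Python sketch follows; `clone_exists(n)` is a hypothetical stand-in for whatever check opens a reference to "SomeVI.vi:n" in the real tool:

```python
def find_clones(clone_exists, tiers=(100, 1000, 10000)):
    """Search clone numbers tier by tier, widening the search only
    while each tier keeps turning up hits.

    clone_exists(n) -> bool is an assumed predicate standing in for
    checking whether clone number n is currently in memory.
    """
    found, start = [], 1
    for limit in tiers:
        hits = [n for n in range(start, limit + 1) if clone_exists(n)]
        found.extend(hits)
        if not hits:       # a tier with no clones: stop widening
            break
        start = limit + 1
    return found
```

Since clone numbers normally count up from 1, a quiet tier is good evidence there is nothing beyond it, which is what keeps this fast in the common case.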
  14. How fast can they be checked? Because in my experience, they start at 1 and count up, so in the majority of cases trying every number up to, say, 100 will find them all (I think I got the count up to 50-odd a couple of times). If you create your message queue in the calling VI and pass it to the dynamic callee, then when the caller goes idle it invalidates the queue, which can be used to trigger shutdown of the callee. No good if you need the callee to be fully independent of the caller, but most of my dynamic VI's are "owned" by the process that starts them. -- James
  15. The only alternate idea I can think of is to add the debug feature to whatever framework you are using to either dynamically launch the VIs, or to pass messages. The launching subVI could keep a list of references to the VIs that it launches (for access by the debug tool). Or if there is a common "enqueue" or "send message" subVI, that could be commanded by the debug tool to pause its own caller. But surely there must be a way to get a reference to all running VI's, including clones? -- James
  16. Darn "delete" browser button ate my first post! My rewrite will be entirely different in focus because my initial post wasn't that interesting anyway. Ah yes, the synchronous call to the unpredictable "User" process, I have been bitten by that before. I've written dialog boxes that beep loudly or that auto-cancel after a short while. I've toyed with the idea of making an asynchronous dialog box that works with command-pattern-style messages (since what is a dialog box but a way of sending a message and getting a response) but haven't put the effort in. Your multi-loop design is certainly robust against such issues, but how do you handle information that the UI Input loop might need? What if you have a dialog box that needs to be configured using your "DeltaZ" value, for example? My Users are always asking for dialog boxes that redisplay all the configuration controls for them so they can realize what they forgot to set. I don't see how your UI input loop could implement such a dialog. Just as an aside, has anyone heard of a design for a message-passing system where messages are flagged by what should be done if they are undeliverable for a significant time? Most of my messages convey current state information, and are rendered obsolete in a few seconds when the next message of that type arrives. Having the queue fill up with obsolete messages and crash the program seems silly. The ideal queueing system would know this and only save the last version for most messages. One could use a size-limited lossy queue, but unfortunately some messages are of the type that must be delivered. -- James
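The "ideal queueing system" asked about above — lossy for state updates, lossless for must-deliver messages — is straightforward to sketch. This Python toy (names like `SmartQueue` and `send_state` are invented for illustration) keeps only the latest value per state topic while preserving every command:

```python
from collections import OrderedDict, deque


class SmartQueue:
    """Latest-value semantics for state messages; FIFO delivery for commands."""

    def __init__(self):
        self.state = OrderedDict()  # topic -> latest value; older versions overwritten
        self.commands = deque()     # must-deliver messages, never dropped

    def send_state(self, topic, value):
        self.state[topic] = value
        self.state.move_to_end(topic)  # treat the update as the freshest entry

    def send_command(self, msg):
        self.commands.append(msg)

    def receive(self):
        if self.commands:
            return self.commands.popleft()
        if self.state:
            topic, value = self.state.popitem(last=False)
            return (topic, value)
        return None
```

Because obsolete state versions are overwritten rather than queued, the backlog is bounded by the number of topics, so a slow consumer cannot make the queue grow without limit.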
  17. That was my first thought, but I don't think that list includes dynamically-launched clones (which AQ will be using, I think). At least, it didn't seem to work under LabVIEW 8.6 when I tried it briefly. -- James
  18. Ah, I see, you have "internal to the UI component" messages in addition to messages such as "GoToCenterPosition" that are sent out of the component. Personally (illustrating alternate possible styles of programming that might use LapDog) I would probably try and write such a UI in a way that combines your top three loops in a single event-driven UI loop (using a User Event instead of a queue). This would eliminate "Inp:LoadBtnReleased" messages entirely. Your way is more flexible, I imagine, and allows the full use of Queue functionality (so far, I'm happy with the limits of User Events). -- James BTW: is that a timed loop that reads a global variable? This is not your preferred architecture, I would hazard to guess?
  19. My development group of one is very good at standardizing. The advantage of having a "VarMessage" as a standard part of the library is that you could add it to the enqueue polymorphic VI (in analogy to how you currently allow data-less messages), simplifying the wiring for those who use Variant messages. One can easily extend the polymorphic VI oneself, but then one has a customized LapDog library which makes upgrading to a new LapDog version trickier. Command Messages are different, I think, because they are inherently non-re-use (unless I'm mistaken, one would have a different tree of command messages for each application). A VarMessage might also be an easier way in to LapDog messaging for those used to text-variant messages. My experience is limited to much smaller projects than yours, and they are scientific instruments where one does need direct control of many things. And "abstraction layers" seem less attractive if you're the only person on both sides of the layer. Also, I was more imagining a bottom-up approach, where the meaningful process variables are propagated up into the UI control labels. And one isn't constrained to do this; one has the flexibility to abstract things as needed. Currently, for example, I'm implementing control of a simple USB2000 spectrometer. I've written it as an active object that exposes process variables such as "IntegrationTime". In my simple test UI, I just dragged the process variables into the UI diagram, turned them to controls and wired them to send messages in the generic variants way I described in my examples. In the actual UI, which is a previously written program from a few years ago, the IntegrationTime message is sent programmatically based on other UI settings. Making a specific IntegrationTimeMessage class would have made writing the test UI much more work, without gaining me anything in the actual UI. BTW, you don't send "MyButtonClicked" messages, surely?
Isn't that exactly the kind of UI-level concepts ("Button", "Clicked") you don't want propagating down into functional code? I certainly see the advantages of the "one message, one class" approach. I'm just arguing for variants as a better generic approach over "one simple data-type, one class". -- James
  20. Yes. Yes. Huh?!? What's this got to do with a default state? The last time I looked at your code, the QSM structure was set up such that it always enqueued another state. Normally, the QSM doesn't do this. When there is no further state in the queue, the dequeue just waits until a new command comes in (or it hits the defined timeout). Thus, these QSM designs sometimes have "timeout" states, but they never have "idle" states, nor is "default" used for anything other than typos. OK, should have read your whole post before starting to reply. So, yes, that's better. But as I've pointed out before, you're developing your QSM design in the middle of trying to get up to speed on dynamic launching AND debugging what I'm sure is a complex FPGA/imaging project. Doing all that at once is fraught with difficulty. -- James
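The point about a proper QSM blocking on its queue (with an optional "timeout" case but no spinning "idle" state) translates directly into any queue-driven loop. A minimal Python sketch, under the assumption that states are simple strings and "Exit" ends the loop:

```python
import queue


def qsm(cmd_queue, timeout=0.1):
    """Queue-driven state machine loop: blocks on the dequeue, runs a
    'timeout' case only when nothing arrives, and never burns CPU in
    an 'idle' state."""
    handled = []
    while True:
        try:
            state = cmd_queue.get(timeout=timeout)
        except queue.Empty:
            handled.append("timeout")  # periodic housekeeping goes here
            continue
        if state == "Exit":
            break
        handled.append(state)
    return handled
```

The blocking `get` is what distinguishes this from a design that endlessly re-enqueues an "idle" state: when the queue is empty the loop simply sleeps until a command or the timeout arrives.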
  21. Well, you're already using generic message "names" that are text and can thus be misspelled; you'll need some mechanism to handle such issues, such as producing "unknown message" errors in the receiving code. The same error will be triggered the first time the code runs with a changed control label. New developers will quickly learn to use something called a "caption" instead of messing with the label. That brings up a question only you can answer: is LapDog intended to support and encourage a particular style of design, or is it to be of more widespread use to developers with differing styles? Personally, I would think having the control label match the name of the process variable controlled, with generic code connecting them, is an advantage for readability and testing, no? Note that using generic code for some controls doesn't preclude individual treatment of others (the "all controls" example I gave is just an example). Well, compile-time checking isn't possible at all in a messaging system, is it? If you send a U32 message to a loop expecting I32, or "BeginProces" to a loop expecting "BeginProcess", you'll learn about this error at runtime. Similarly for your "SetTemp" to 20psi message. BTW, it was the ground software, used to calculate how long to run the thrusters, that had the wrong-unit bug. The Mars Orbiter computer executed its suicidal instructions flawlessly! -- James
  22. Hi Alex, I think you're being way too ambitious, and trying to develop many advanced concepts simultaneously. Personally, I couldn't learn and use dynamic VIs, queues, QSM architecture, etc. on top of learning FPGA and your imaging equipment itself. Introducing one new thing is a good learning experience; introducing several is a terrible experience as you'll never untangle the nest of interacting bugs. And it's heavily over-engineered. This code is to collect images and display and/or save them, right? Why does it need six separate loops (four dynamically launched) to do this simple sequential thing? For example, your "Listen to Image FIFO Stream" loop loads its image data into a set of three queues and sends it to your "Display and Save Image Data" loop; you could easily do that in one VI and save the (buggy, BTW) implementation of the queues. You could probably do this program with a single QSM, or at most a QSM with one extra loop controlled by a single notifier (as I suggested in your other thread). The best course of action I can suggest is: 1) Get a basic prototype working that does the basic image collection and display functions you want. ONE VI, ONE loop, no QSM, no architecture, no queues, no control references. Simple data flow only; it's just a prototype. 2) Use that experience to get a set of basic subVI's such as "Get image", "Save Image", "Trigger", etc. (no loops internally, these just do one action each). At this stage, think about clustering related information (type-def clusters or objects). 3) NOW start fresh on an architecture, using your new "driver" of subVI's. It would be best to use someone else's already debugged, and well thought-out, template (such as the JKI-QSM, which is what I use).
I suspect you might only need one loop (with the Trigger->Get->Save steps in the timeout case of the JKI-QSM) but if not, use a separate loop on the same diagram controlled by ONE incoming communication method (not multiple polled control references and global variables). If you want to continue with your current monster, here are some issues I can see: 1) In the QSM design, every state calls "idle", including "idle", which causes the loop to execute the "idle" state (which does nothing and has no delay) at maximum speed, monopolizing the CPU (what's the "idle" state even for?). 2) Your three-queues-to-send-one-image design is buggy, since the queues can get out of step with each other when errors occur. Also, your queues will fill up and crash the program if the receiving "Display and Save Image Data" VI isn't running. And "Display and Save Image Data" will display and save default data if it doesn't receive an image every 5ms (the timeout you've added to each queue). 3) Your "Stop Stream Refnum" design neither starts, nor stops, your "Listen to Image FIFO Stream" VI. It doesn't actually do anything. As I said, simultaneous bugs on multiple levels are very difficult to work through. Personally, I only use dynamic VIs via a template (worked out and debugged on simple test cases), and use someone else's template for a QSM (JKI). Combined with an initial prototype to learn the new functionality (often I just play with the example VI's provided), this makes debugging manageable. -- James
  23. Just got LabVIEW 2011, and it is because of small differences in value of the floating point numbers. Increase the display format to 19 significant figures and you'll see you are actually asking if 1 mm is equal to 1.000000000000000890 mm. The correct answer is "no". In general, one should never use the "equals" operation with floating point numbers, because an insignificant (to us) difference is still a difference to the computer. Instead, one should subtract the two numbers, take the absolute value of the difference, and see if that is less than some small "tolerance" number.
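The tolerance comparison described above is identical in text-based languages. A short Python illustration (the function name and the 1e-9 tolerance are arbitrary choices for the example):

```python
def approximately_equal(a, b, tol=1e-9):
    # Never use == on floats: compare the absolute difference
    # against a small tolerance instead.
    return abs(a - b) < tol


print(0.1 + 0.2 == 0.3)                     # False: the values differ in the last bits
print(approximately_equal(0.1 + 0.2, 0.3))  # True
```

The right tolerance depends on the magnitudes involved; for values far from 1, a relative tolerance (scaling `tol` by the size of the inputs) is usually safer than a fixed absolute one.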
  24. Alex, your project zip is missing the two dynamically-launched VI's. Can you upload a new zip that includes them? If so, I'll have a look at it.
  25. Hi Alex, It would be better if you continued your original topic, rather than starting a new one. Conversations like this serve as a resource for later readers (I've learned lots from reading conversations on LAVA) and splitting up the conversation across many topics makes it confusing and less readable. While dynamically launching a VI as a parallel process ("daemon") certainly works, it's a bit tricky and often overkill for what you need. I would really recommend you use a simpler solution with separate loops on the same block diagram, with queue/notifier/UserEvents connecting them. Like the Notifier-to/UserEvent-from design I suggested in your other topic, which does everything you want. Note that you can easily convert your simple solution to a dynamically-launched VI at a later date, but this is mainly worth doing only if you want to reuse the component in another program or have the ability to "launch" an arbitrary number of them. -- James