Posts posted by drjdpowell

  1. To make sure I understand, I've made another screenshot... I've created a class called Vehicle and a child class called Car.

    To extend the analogy, let's replace your numerics with something vehicles have. A vehicle has an engine, and a method to "Start Engine". You've created a "Car", which, being a Vehicle, has an engine that can be started, and you've given it a second engine (!) and were surprised when the "Start Engine" method started the first engine.

    OK, so, why did you add a second engine?

    -- James

    BTW, the "parent/child" terminology of OO is actually confusing: "Car" is not a child of "Vehicle"; it is a type of Vehicle.

  2. A quick question though: when the VI that creates a queue is shut down, obviously it destroys that queue reference, as all other VIs subsequently return an error. Is this the same as calling the "Release reference" queue command in terms of memory behaviour? In other words, if the queue reference is lost this way, is the memory still freed up?

    I believe so. It's "garbage collection", where LabVIEW frees up the resources of the VI.

  3. I can't figure out how to create a scenario where a dynamically launched sub-vi creates a data pipeline queue (for offloading processing of high-speed data to another asynchronous parallel process), then registers the queue to the mediator which passes it to a sub-vi that calls for it.

    Alex, you can guess my advice: do away with the over-complexity of dynamic launching and the like. But ignoring that...

    Whatever way you do this, it is better to think of the consumer making the data pipeline and getting that reference to the producer, rather than the other way round. The consumer is dependent on the queue; the producer is not. If I were designing something like this with my messaging design shown above, the producer would send its data to an "ObserverSet", which by default is empty (data sent nowhere). The consumer would create a "messenger" (queue) and send it to the producer in a "Hey, send me the data!" message (alternatively, the higher-level part of the program that creates both consumer and producer would create the queue and the message). The producer would add the provided messenger to the ObserverSet, allowing data piping to commence.

    In the below example, the consumer code registers its own queue with the producer and is piped 100 data points.

    [attached screenshot]
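    (For anyone reading along without LabVIEW: here is a rough text sketch of the same pattern, in Python with made-up names. The point is that the consumer creates the pipeline queue and registers it with the producer, whose "ObserverSet" is empty by default.)

        import queue, threading

        def producer(observer_set):
            # The producer just sends to whatever observers have registered;
            # by default the set is empty and the data goes nowhere.
            for i in range(100):
                for obs in list(observer_set):
                    obs.put(("Data", i))

        # Consumer side: make the data pipeline and hand it to the producer.
        observer_set = []              # stands in for the producer's "ObserverSet"
        my_pipe = queue.Queue()        # the consumer creates and owns this queue
        observer_set.append(my_pipe)   # the "Hey, send me the data!" registration

        threading.Thread(target=producer, args=(observer_set,)).start()
        for _ in range(100):
            print(my_pipe.get())       # consumer is piped 100 data points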

    I had a look over AF, but I just don't think I'm ready (no free time atm) to battle with the OOP concepts.

    The Actor Framework is rather advanced. You might want to look at "LapDog" though, as it is a much simpler application of LVOOP that one can "get" more easily (I am surprised AQ thinks the Actor Framework is simpler).

    -- James

  4. I'll add this VI to the Task Manager with a checkbox on the UI to enable/disable the search.

    You might want to instead have a numeric control of the clone number to search up to. Zero would mean disabled, the default could be 100 or so, and Users could increase the number if they were working with higher-numbered clones.

    BTW, an idea to improve the tool is to group VIs by class/library, like this:

    [attached screenshot]

    Not sure if that's easy to do, but it would greatly improve readability for LVOOP projects, which tend to have large numbers of identically-named VIs (as well as not particularly descriptive names such as "Do.vi").

    -- James

  5. It appears the Async call does not use sequential numbering:

    I downloaded the 2011 Actor Framework example (that uses Async Call by Ref) and it keeps increasing the numbers (even though the clones leave memory between runs) but sequentially. Rather strangely though, the numbers 1 and 2 get reused. Curiouser and curiouser...

    [attached screenshot]

    For the older software, with the "Run VI" method, the numbers don't increase.

  6. Everyone: The checking for clones problem gets worse the more times you run your application. Statically allocated clones will generally keep their number. Dynamically allocated clones will move further down the number line. That poses a problem for the "test a few" idea.

    Not necessarily. If the original clones have left memory when the new ones are created, the numbering seems to start at the lowest available number. It's only when clones stay in memory for some reason (Front Panel still open, for example) that the numbers start marching forward. I was having the issue of Actor.vi clones staying in memory until I saved everything in LV2011 (you're right, it is an older version of your framework that I am running), after which all the clone numbers start from 1 regardless of how many times I restart the application.

    I don't fully understand the issue, and unfortunately I don't fully understand why LabVIEW sometimes keeps clones in memory.

  7. Second: Affects both remote VI Server calls and local Asynch Call By Ref calls. You can't abort subVIs. The Abort VI method will return error 1000 unless the VI is the top-level VI. So what's the problem? When you launch a VI using the Asynch Call By Reference using "Fire & Forget" mode, it launches as a subVI, even though it will keep running if its caller quits. That means that even if the VI is not reentrant, so you can get a reference to it, you still can't tell it to abort. And there is no way to get the caller VI because the caller VI is a fake proxy (you can see it in the VI Hierarchy window). When you launch a VI remotely using the Run VI method or ACBR, you also have a proxy caller that isn't abortable.

    I don't know your Actor Framework very well, but when playing around with it using the Task Manager I found that the dynamically launched "Actor.vi" clones showed up as "RunTop" and I was able to Abort them from the task manager.

    [attached screenshot]

    Running "Pretty UI" with 2 sensors added.

  8. Here's a quick hack where I test for clones, looking at AQ's Active Object Demo:

    [attached screenshot]

    Searching for clones up to xxxxx:100 doesn't take much time (up to xxxxx:10000 had a delay of about 5 seconds for this project). One could decrease the likelihood of missing clones while still being fast(ish) with a search procedure such as: search up to 100; if clones are found, also search 101 to 1,000; if more clones are found, search 1,001 to 10,000.
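    (Sketched in Python-style pseudocode; "clone_exists" is just a stand-in for trying Open VI Reference on "SomeVI.vi:n" and seeing whether it returns an error.)

        def find_clones(clone_exists):
            # Escalating search: check 1..100 first, and only widen the range
            # if the previous range turned up at least one clone.
            found = []
            for lo, hi in [(1, 100), (101, 1000), (1001, 10000)]:
                hits = [n for n in range(lo, hi + 1) if clone_exists(n)]
                if not hits:
                    break          # nothing in this range, so stop widening
                found.extend(hits)
            return found

        # Example: clones 1 to 7 are in memory.
        print(find_clones(lambda n: n <= 7))   # [1, 2, 3, 4, 5, 6, 7]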

    -- James

  9. You can open a VI ref to the clones by using Open VI Reference and passing a name in like "XYZ.vi:1". I tried checking each value sequentially until I got an error, only to discover that the numbers are not sequential. They can be any number up to MAX_Int32 (roughly 2 billion), so the "guess and check" method is out.

    How fast can they be checked? Because in my experience, they start at 1 and count up, so in the majority of cases trying every number up to, say, 100 will find them all (I think I got the count up to 50-odd a couple of times).

    The best workaround solution is to build a mechanism into your reentrant and dynamically launched VIs that allows them to be messaged by some tool to stop them -- like if they use a queue, register that queue with a central data store somewhere that you could run to kill all those queues, which would make those VIs exit on their own.

    If you create your message queue in the calling VI and pass it to the dynamic callee, then when the caller goes idle it invalidates the queue, which can be used to trigger shutdown of the callee. No good if you need the callee to be fully independent of the caller, but most of my dynamic VIs are "owned" by the process that starts them.
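    (A very rough analogy in Python, with made-up names: LabVIEW invalidates the queue refnum automatically when its creator goes idle, which I've faked here with an explicit release() call.)

        import queue, threading

        class QueueReleased(Exception):
            """Stands in for the 'queue refnum invalid' error seen by the callee."""

        class CallerOwnedQueue:
            def __init__(self):
                self._q = queue.Queue()
                self._valid = True
            def enqueue(self, msg):
                self._q.put(msg)
            def dequeue(self):
                while True:
                    try:
                        return self._q.get(timeout=0.1)
                    except queue.Empty:
                        if not self._valid:
                            raise QueueReleased   # caller has gone idle
            def release(self):
                self._valid = False

        def dynamic_callee(msg_q):
            try:
                while True:
                    print("handling", msg_q.dequeue())
            except QueueReleased:
                print("queue gone -> shutting down")   # callee exits on its own

        q = CallerOwnedQueue()
        threading.Thread(target=dynamic_callee, args=(q,)).start()
        q.enqueue("DoSomething")
        q.release()   # the caller going idle invalidates the queue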

    -- James

  10. I need this to debug an event handler and to keep the parallel VIs from continuing to generate events. A couple of times, I have gotten very close to the source of a bug only to have LV run out of memory because the background VIs continued to generate events and queue stuff up. It's really annoying.

    The only alternate idea I can think of is to add the debug feature to whatever framework you are using to either dynamically launch the VIs, or to pass messages. The launching subVI could keep a list of references to the VIs that it launches (for access by the debug tool). Or, if there is a common "enqueue" or "send message" subVI, that could be commanded by the debug tool to pause its own caller.

    But surely there must be a way to get a reference to all running VIs, including clones?

    -- James

  11. Darn "delete" browser button ate my first post! My rewrite will be entirely different in focus because my initial post wasn't that interesting anyway.

    For example, what happens if a dialog box blocks the thread? No display updates are shown until the dialog box is cleared.

    Ah yes, the synchronous call to the unpredictable "User" process, I have been bitten by that before. I've written dialog boxes that beep loudly or that auto-cancel after a short while. I've toyed with the idea of making an asynchronous dialog box that works with command-pattern-style messages (since what is a dialog box but a way of sending a message and getting a response) but haven't put the effort in.

    Your multi-loop design is certainly robust against such issues, but how do you handle information that the UI Input loop might need? What if you have a dialog box that needs to be configured using your "DeltaZ" value, for example? My Users are always asking for dialog boxes that redisplay all the configuration controls for them, so they can realize what they forgot to set. I don't see how your UI input loop could implement such a dialog.

    The event queue will keep growing until eventually LV crashes with an out-of-mem...

    Just as an aside, has anyone heard of a design for a message-passing system where messages are flagged by what should be done if they are undeliverable for a significant time? Most of my messages convey current state information, and are rendered obsolete in a few seconds when the next message of that type arrives. Having the queue fill up with obsolete messages and crash the program seems silly. The ideal queueing system would know this and only save the last version for most messages. One could use a size-limited lossy queue, but unfortunately some messages are of the type that must be delivered.
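    (To sketch the kind of thing I mean, in Python with invented names: "state" messages keep only the newest value per message name, while must-deliver messages queue up normally. This isn't any existing library, just the idea.)

        from collections import OrderedDict, deque

        class LatestValueQueue:
            def __init__(self):
                self._latest = OrderedDict()   # message name -> newest data only
                self._fifo = deque()           # messages that must all be delivered

            def send(self, name, data, must_deliver=False):
                if must_deliver:
                    self._fifo.append((name, data))
                else:
                    self._latest.pop(name, None)    # drop the now-obsolete version
                    self._latest[name] = data

            def receive(self):
                if self._fifo:
                    return self._fifo.popleft()
                if self._latest:
                    return self._latest.popitem(last=False)
                return None

        q = LatestValueQueue()
        q.send("Temperature", 20.1)
        q.send("Temperature", 20.3)                    # older reading becomes obsolete
        q.send("SaveData", "run7.tdms", must_deliver=True)
        print(q.receive())   # ('SaveData', 'run7.tdms')
        print(q.receive())   # ('Temperature', 20.3)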

    -- James

  12. Ah, I see, you have "internal to the UI component" messages in addition to messages such as "GoToCenterPosition" that are sent out of the component. Personally (illustrating alternate possible styles of programming that might use LapDog), I would probably try and write such a UI in a way that combines your top three loops in a single event-driven UI loop (using a User Event instead of a queue). This would eliminate "Inp:LoadBtnReleased" messages entirely. Your way is more flexible, I imagine, and allows the full use of Queue functionality (so far, I'm happy with the limits of User Events).

    -- James

    BTW: is that a timed loop that reads a global variable? This is not your preferred architecture I would hazard to guess?

  13. I'm not saying using the control label as part of the message name is wrong. If that's a practice your development group standardizes on and everybody is comfortable with it then there's no problem. I'm just saying I don't like to use it for the reasons I stated.

    My development group of one is very good at standardizing. :)

    All the LapDog components (currently Messaging is the only released package, but Collections and Data Structures are around too) are intended to be usable by LVOOP developers regardless of their particular style. I don't provide everything needed to implement a given style, such as a Variant message, or a Command message, but they are very easy for developers to add on their own.

    The advantage of having a "VarMessage" as a standard part of the library is that you could add it to the enqueue polymorphic VI (in analogy to how you currently allow data-less messages), simplifying the wiring for those who use Variant messages. One can easily extend the polymorphic VI oneself, but then one has a customized LapDog library, which makes upgrading to a new LapDog version trickier. Command Messages are different, I think, because they are inherently non-reusable (unless I'm mistaken, one would have a different tree of command messages for each application).

    A VarMessage might also be an easier way into LapDog messaging for those used to text-variant messages.

    Readability, probably yes. Testing, maybe. Flexibility, no. A lot of it depends on the project specifics, who the users are, and how much the user interface abstracts the implementation. If your software controls a system of valves and sensors and the operators are engineers who want direct control over everything, then yes, using control labels for message names might make sense. For higher level applications my fp events trigger "MyButtonClicked" messages. What that message actually does is handled elsewhere.

    Propagating fp control names down into functional code follows fairly naturally if you're using a top-down approach. In my experience the top-down approach leads to less flexible and tighter coupled code. Abstraction layers tend to leak. Using the bottom-up approach I name messages according to what makes sense in the context of that particular component, not based on what a higher level component is using it for.

    My experience is limited to much smaller projects than yours, and they are scientific instruments where one does need direct control of many things. And "abstraction layers" seem less attractive if you're the only person on both sides of the layer. Also, I was more imagining a bottom-up approach, where the meaningful process variables are propagated up into the UI control labels. And one isn't constrained to do this; one has the flexibility to abstract things as needed.

    Currently, for example, I'm implementing control of a simple USB2000 spectrometer. I've written it as an active object that exposes process variables such as "IntegrationTime". In my simple test UI, I just dragged the process variables into the UI diagram, turned them into controls, and wired them to send messages in the generic variants way I described in my examples. In the actual UI, which is a previously written program from a few years ago, the IntegrationTime message is sent programmatically based on other UI settings. Making a specific IntegrationTimeMessage class would have made writing the test UI much more work, without gaining me anything in the actual UI.

    BTW, you don't send "MyButtonClicked" messages, surely? Isn't that exactly the kind of UI-level concepts ("Button", "Clicked") you don't want propagating down into functional code?

    Eliminating (or significantly reducing) run-time typecasting errors was my original goal of LapDog.Messaging. Creating a unique type (class) for each message and encoding the message name as part of the type eliminates a lot of the risk, though not all of it. Even though I've adopted a more generic approach with the current LDM release, users can still implement a "one message, one class" approach if they want.

    I certainly see the advantages of the "one message, one class" approach. I'm just arguing for variants as a better generic approach over "one simple data-type, one class".

    -- James

  14. I understand why the states are unstable. The fact that idle can enqueue itself, combined with the fact that the queue can receive a "Run" command while the idle state is happening and then the idle state enqueues itself, means the states swap enqueuing themselves back and forth.

    Yes.

    As mentioned above, I think the key is to remove the idle state.

    Yes.

    The problem with this is, I can't then figure out how to handle typo errors in the state commands without the default structure having that error handling ability. That's fine for me, as I can be really rigorous, but if I want to give this to anyone else, for sure they'll make a typo at some stage. Anyone got any ideas?

    Huh?!? What's this got to do with a default state? The last time I looked at your code, the QSM structure was set up such that it always enqueued another state. Normally, the QSM doesn't do this. When there is no further state in the queue, the dequeue just waits until a new command comes in (or it hits the defined timeout). Thus, these QSM designs sometimes have "timeout" states, but they never have "idle" states, nor is "default" used for anything other than typos.
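    (The shape of such a loop, sketched in Python rather than G; the command names are invented, but the point is that an empty queue just waits, the timeout case is optional, and "default" only ever catches typos.)

        import queue

        def qsm_loop(cmd_q):
            while True:
                try:
                    state = cmd_q.get(timeout=1.0)   # wait for the next command...
                except queue.Empty:
                    state = "timeout"                # ...or hit the defined timeout
                if state == "exit":
                    break
                elif state == "timeout":
                    pass                             # optional periodic work goes here
                elif state == "acquire":
                    print("acquiring data")
                else:
                    print("unknown state (typo?):", state)   # the 'default' case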

    Report: It doesn't, at least on the VI I tested, yay! I realised that I actually can still have the typo error handling. When each parallel process starts up, it runs an Init state (w/e is needed), then goes into wait mode. Running the action command goes into a loop of continuous action, until a pause command is sent, then it goes to a pause state which flushes the queue, and then waits again. I'm pretty happy with that actually, can anyone see any obvious holes I'm missing or ways to break that pattern?

    OK, I should have read your whole post before starting to reply. So, yes, that's better. But as I've pointed out before, you're developing your QSM design in the middle of trying to get up to speed on dynamic launching AND debugging what I'm sure is a complex FPGA/imaging project. Doing all that at once is fraught with difficulty.

    -- James

  15. Labels are one of those things that--in my experience--developers tend to change without thinking too much. Writing algorithms that unnecessarily depend on label text causes brittle code.

    Well, you're already using generic message "names" that are text and can thus be misspelled; you'll need some mechanism to handle such issues, such as producing "unknown message" errors in the receiving code. The same error will be triggered the first time the code runs with a changed control label. New developers will quickly learn to use something called a "caption" instead of messing with the label.

    This is one area we differ. I'm more concerned with creating code that is sustainable rather than code that is easy to write. Often these goals exert opposing forces on your design decisions. In fact, there is a noticeable decrease in my code's sustainability when I try to make code that is "easy" to write. Readability, testability, extendability, etc., all take priority over writability.

    That brings up a question only you can answer: is LapDog intended to support and encourage a particular style of design, or is it to be of more widespread use to developers with differing styles? Personally, I would think having the control label match the name of the process variable controlled, with generic code connecting them, is an advantage for readability and testing, no? Note that using generic code for some controls doesn't preclude individual treatment of others (the "all controls" example I gave is just an example).

    This is only true when the data from the two modules are linked together directly. If you drop a value in a variant message to send it to a different loop, the compiler can't verify the run-time type propagation is correct. In the hypothetical world where LabVIEW ran the Mars Orbiter, using typed units may not have prevented the problem.

    Well, compile-time checking isn't possible at all in a messaging system, is it? If you send a U32 message to a loop expecting I32, or "BeginProces" to a loop expecting "BeginProcess", you'll learn about this error at runtime. Similarly for your "SetTemp" to 20psi message.

    BTW, it was the ground software, used to calculate how long to run the thrusters, that had the wrong-unit bug. The Mars Orbiter computer executed its suicidal instructions flawlessly!

    -- James

  16. I've attached a zip of my project as it currently stands.

    Hi Alex,

    I think you're being way too ambitious, and trying to develop many advanced concepts simultaneously. Personally, I couldn't learn and use dynamic VIs, queues, QSM architecture, etc. on top of learning FPGA and your imaging equipment itself. Introducing one new thing is a good learning experience; introducing several is a terrible experience, as you'll never untangle the nest of interacting bugs.

    And it's heavily over-engineered. This code is to collect images and display and/or save them, right? Why does it need six separate loops (four dynamically launched) to do this simple sequential thing? For example, your "Listen to Image FIFO Stream" loop loads its image data into a set of three queues and sends it to your "Display and Save Image Data" loop; you could easily do that in one VI and save the (buggy, BTW) implementation of the queues. You could probably do this program with a single QSM, or at most a QSM with one extra loop controlled by a single notifier (as I suggested in your other thread).

    The best course of action I can suggest is:

    1) Get a basic prototype working that does the basic image collection and display functions you want. ONE VI, ONE loop, no QSM, no architecture, no queues, no control references. Simple data flow only; it's just a prototype.

    2) Use that experience to build a set of basic subVIs such as "Get image", "Save Image", "Trigger", etc. (no loops internally; these just do one action each). At this stage, think about clustering related information (type-def clusters or objects).

    3) NOW start fresh on an architecture, using your new "driver" of subVIs. It would be best to use someone else's already debugged, and well thought-out, template (such as the JKI-QSM, which is what I use). I suspect you might only need one loop (with the Trigger->Get->Save steps in the timeout case of the JKI-QSM), but if not, use a separate loop on the same diagram controlled by ONE incoming communication method (not multiple polled control references and global variables).

    If you want to continue with your current monster, here's some issues I can see:

    1) In the QSM design, every state calls "idle", including "idle", which causes the loop to execute the "idle" state (which does nothing and has no delay) at maximum speed, monopolizing the CPU (what's the "idle" state even for?).

    2) Your design of three queues to send one image is buggy, since the queues can get out of step with each other when errors occur. Also, your queues will fill up and crash the program if the receiving "Display and Save Image Data" VI isn't running. And "Display and Save Image Data" will display and save default data if it doesn't receive an image every 5 ms (the timeout you've added to each queue).

    3) Your "Stop Stream Refnum" design neither starts, nor stops, your "Listen to Image FIFO Stream" VI. It doesn't actually do anything.

    As I said, simultaneous bugs on multiple levels are very difficult to work through. Personally, I only use dynamic VIs via a template (worked out and debugged on simple test cases), and use someone else's template for a QSM (JKI). Combined with an initial prototype to learn the new functionality (often I just play with the example VIs provided), this makes debugging manageable.

    -- James

  17. Just got LabVIEW 2011, and it is because of small differences in the values of the floating-point numbers. Increase the display format to 19 significant figures and you'll see you are actually asking if 1 mm is equal to 1.000000000000000890 mm. The correct answer is "no". In general, one should never use the "equals" operation with floating-point numbers, because an insignificant (to us) difference is still a difference to the computer. Instead, one should subtract the two numbers, take the absolute value of the difference, and see if that is less than some small "tolerance" number.
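    (In text form, the safe comparison looks something like this; Python here, but the LabVIEW diagram is the same subtract/absolute-value/less-than idea, and the tolerance value below is just an example you'd pick to suit your measurement.)

        def nearly_equal(a, b, tolerance=1e-9):
            # Subtract, take the absolute value, and compare against a tolerance,
            # instead of using the "equals" operation on floating-point numbers.
            return abs(a - b) < tolerance

        print(1.0 == 1.000000000000000890)              # False: a tiny but real difference
        print(nearly_equal(1.0, 1.000000000000000890))  # True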

  18. Hi Alex,

    It would be better if you continued your original topic, rather than starting a new one. Conversations like this serve as a resource for later readers (I've learned lots from reading conversations on LAVA) and splitting up the conversation across many topics makes it confusing and less readable.

    While dynamically launching a VI as a parallel process ("daemon") certainly works, it's a bit tricky and often overkill for what you need. I would really recommend you use a simpler solution with separate loops on the same block diagram, with queue/notifier/UserEvents connecting them, like the Notifier-to/UserEvent-from design I suggested in your other topic, which does everything you want. Note that you can easily convert your simple solution to a dynamically-launched VI at a later date, but this is worth doing mainly if you want to reuse the component in another program or have the ability to "launch" an arbitrary number of them.

    -- James

  19. Yeah, there's a problem in that I also need to pass information to the "consumer" loop along with the "go" type instruction. I need to pass a reference to an FPGA vi.

    The notifier can pass the information. My example shows a notifier with just a boolean, but you can instead use a cluster of information needed by the consumer loop. I've done exactly this kind of thing in the past. Use a User Event (again, a cluster) to pass the results back to the JKI loop event structure.

    -- James

  20. I found a nasty bug where a control that has units associated with it somehow appears to get corrupted. When a corrupted control and a normal control are fed into the equals function, it returns false. It's not limited to controls either; a corrupted and a normal constant will return as not being equal. Example VIs are attached.

    I don't have LabVIEW 2011 and can't open the VIs, but could this be an issue with floating-point numbers and the "equals" operation? One should never do "equals" with floating point, because two numbers from different calculations can be a tiny bit different even if the calculations should mathematically be identical.

    -- James
