Aristos Queue

Posts posted by Aristos Queue

  1. A) I definitely would not provide default values for those input terminals. Make them required. I can't believe that the majority use case is going to be coin flipping between zero and 1. If you want to write a specialized "Coin Flip.vi", fine, but give that one a boolean output.

    B) What happens if High is less than Low? Perhaps an absolute value node should be added to the subVI?
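    For illustration, here is one way the bounds question might be handled, written as a minimal Python sketch (LabVIEW itself is graphical, so this is only an analogy, and the function name is hypothetical). Swapping reversed bounds is an alternative to the absolute-value idea:

```python
import random

def random_int_in_range(low: int, high: int) -> int:
    """Return a uniformly random integer between low and high, inclusive."""
    if high < low:
        # Tolerate reversed bounds by swapping them -- an alternative
        # to inserting an absolute value node in the subVI.
        low, high = high, low
    return random.randint(low, high)
```

    Swapping preserves the intended range even for negative bounds (e.g. low = -1, high = -5), which taking the absolute value of each input would not.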

    • Like 2
  2. It's not garbage collection in the usual technical sense of the word. "Garbage collection" is used in systems where two objects both use a third object. When the first object disappears, everything reachable from the top-level objects (like the second object) is traversed, and any objects that don't get visited are thrown out of memory. Since the third object gets visited, it stays. When the second object disappears, all objects are again checked. It's used in systems where there isn't a reference count to determine whether or not to throw the item out of memory, because incrementing and decrementing a reference count is too much thread friction. The trick is that the garbage collector isn't actually run after every top-level object gets thrown away... it runs periodically during downtime and cleans up large swaths of available memory.

    LabVIEW just deallocates the queue when the VI that created it goes idle. There's no garbage collector algorithm that goes hunting for all possible queues to see which ones should live and which ones should die.

    And, yes, what the VI does when it goes idle is identical to calling Release Queue. The same function is invoked under the hood.
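    The traversal described above -- mark-and-sweep -- can be sketched in a few lines of Python. This is illustrative only (the names are made up), and it is emphatically *not* what LabVIEW does; as noted, LabVIEW simply deallocates on idle:

```python
# Minimal mark-and-sweep sketch of the traversal described above.
class Obj:
    def __init__(self, *refs):
        self.refs = list(refs)  # objects this object uses

def mark(obj, visited):
    """Recursively visit everything reachable from obj."""
    if obj in visited:
        return
    visited.add(obj)
    for r in obj.refs:
        mark(r, visited)

def sweep(heap, roots):
    """Keep only objects reachable from the top-level (root) objects."""
    visited = set()
    for root in roots:
        mark(root, visited)
    return [o for o in heap if o in visited]

# third is used by both first and second; when first disappears,
# third survives because second still reaches it.
third = Obj()
first = Obj(third)
second = Obj(third)

kept = sweep([first, second, third], roots=[second])  # "first" went away
assert third in kept and first not in kept
```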

    • Like 1
  3. > Or could we hack some update code in conjunction with NI?

    You don't have to hack.

    Suppose Alpha.vi calls Beta.vi. When Alpha.vi saves, it writes down the path to Beta.vi in its save image. When you load Alpha.vi, it tries to load Beta.vi at that path. If it doesn't find Beta.vi, it searches disk.

    Now, modify Beta.vi to be inside Yoda.lvlib. Now load Alpha.vi. What happens this time is that Alpha.vi goes to the location it recorded and finds a file named Beta.vi. It loads it, and discovers that the file is actually Yoda.lvlib:Beta.vi. That's ok. Normally on load we require that qualified names match exactly, but we make an exemption: "the caller was last saved looking for a qualified name with no library at all, and the caller has now found -- at *exactly* the same path (no searching) -- a VI of the right file name but a different library name, so count that as a match." Now, if Alpha.vi is saved looking for Yoda.lvlib:Beta.vi and you then move Beta.vi out of Yoda.lvlib and into Yolanda.lvlib, then when you load Alpha.vi, it finds Beta.vi at the right path, but the qualified name doesn't match, so it considers the subVI to be missing, and searches accordingly.

    This is why upgrading from no library to library is easy. It is why upgrading to a different library ownership requires the same machinations as renaming the VI file itself.

    > ...though I do now appreciate having the smaller libraries, as loading any one VI from a .lvlib loads the whole library. (If anyone does know how to do this, please tell me -- I'll feel silly for 5 minutes, then really appreciate it!).

    There are five library types: lvlib, xctl, lvclass, lvsc and xnode (the last of these being somewhat mythical). Lvlib and lvsc do NOT load all of their member VIs. The others do. They do list all their VIs in the project tree, but the VIs are not loaded into memory. All of them do load their nested libraries into memory.

    > I wouldn't want the polymorphic VIs to be public and make the specific instances private, main reason is that Real-time cannot cope with variants, and you couldn't bypass the public polymorphic VI by calling directly into the specific instance you need.

    Um, yes, that's the whole point. You couldn't bypass the poly VI. Under what conditions is that undesirable?

    • Like 2
  4. AlexA: You are reinventing the wheel. No, actually, not the wheel. Messaging frameworks are fairly complex and way outstrip the wheel. But the jet engine... yes, you're reinventing the jet engine. And the problem is, when reinventing the wheel, most people get it right. Not so much with the jet engine. This is NOT meant to insult your skills. It's just that I've spent years reviewing many people's messaging frameworks, and all of the ones rolled by individuals had some serious lurking timing issues. The frameworks that were built, shared with the community and refined are solid. I don't know what you're building, but I pretty much guarantee that LapDog, JAMA or Actor Framework can handle it. AF maintains simplicity relative to the other frameworks. LapDog is more powerful, and JAMA is more powerful than that. Each power jump comes with additional complexity. Collectively, we give you enough thrust to reach the heights you aim for... or enough rope to hang yourself, depending upon your point of view. :-)

    Since AF is my baby, let me lay out how you'd use it for this case:

    • Prime actor is Mediator. You send "Launch Producer" or "Launch Consumer" messages to Mediator.
    • When sending "Launch Producer", you include in the message some ID of the type of data this producer produces.
    • Producers generate data. They collect that data in their local state unless/until they receive a "Here's A Consumer" message, at which point they send all their pent up messages into the consumer's queue. This is important -- and is a bug that I'm guessing exists in what you proposed -- because the lifetime of the queue is tied to the consumer, not the producer, so you don't have vanishing queue problems when a producer disconnects before the consumer is done processing all the values.
    • Consumers consume data. When Mediator spins one up, he gives the consumers the list of producer types and the producer send queues (does this directly during spin up... no need to pass a message to the consumer). The consumer picks the one she wants and sends "Here's a Consumer" message -- no need to go back to the Mediator. Thereafter, consumer just sits in the loop and eats data until she gets a Stop message.
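    The queue-lifetime rule in the steps above can be sketched in Python. This is an analogy only, not the Actor Framework API -- every name here is hypothetical -- but it shows why tying the queue to the consumer avoids the vanishing-queue bug:

```python
import queue

class Producer:
    def __init__(self):
        self.pending = []       # local state: pent-up data, no consumer yet
        self.consumer_q = None

    def produce(self, item):
        if self.consumer_q is None:
            self.pending.append(item)   # buffer until a consumer arrives
        else:
            self.consumer_q.put(item)

    def heres_a_consumer(self, q):
        """The 'Here's A Consumer' message: flush the backlog into the
        consumer's queue, then send directly from now on."""
        self.consumer_q = q
        for item in self.pending:
            q.put(item)
        self.pending.clear()

class Consumer:
    def __init__(self):
        # The queue's lifetime is tied to the consumer, not the producer,
        # so a producer disconnecting can't strand unprocessed values.
        self.q = queue.Queue()

p = Producer()
p.produce("a"); p.produce("b")   # buffered: no consumer yet
c = Consumer()
p.heres_a_consumer(c.q)          # backlog flushed in order
p.produce("c")
assert [c.q.get_nowait() for _ in range(3)] == ["a", "b", "c"]
```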

    If you give me a couple days, version 3 of the AF all polished up will be posted at the site I linked earlier. If you decide to use one of the others, great. Just don't build a custom system unless you just cannot make the others work. Please. We lose too many programmer heroes when they get sucked into jet engines.

    • Like 2
  5. Darren Nattinger had a different approach to the "find the clones" problem. His solution gets all the statically referenced clones.

    Get All VIs in Memory Including statically-referenced Reentrant Clones.vi

    > Would one always know if he is working with higher-numbered clones? AQ suggested otherwise!

    My experience is that when working with dynamic clones, the count keeps going up on each debug run of the VIs, so you might know, but it is easy to lose track.
  6. > But surely there must be a way to get a reference to all running VIs, including clones?

    Surely not. See my post here.

    > Should there be an option to let a pause condition in a clone pause only the other clones of the same VI, instead of pausing all VIs registered with the Simultaneous Breakpoint Manager?

    Not sure. The sad part about all this is that my interest extends to solving one particular bug. I'm not sure how much utility all of this will generally have in the future, so I'm not recommending anyone pour too much time into it. Having said that, I'm thrilled to have what you've built thus far!
  7. Ok, I see what you did in the Pause/Unpause code. You tried to fix what seemed like a bug in the UI -- the button as I had it just toggled the state, so if I had a mixed bag of some paused VIs and some unpaused VIs, hitting the button would pause some and unpause others.

    The toggle behavior was actually a feature to me: it lets me freeze all the server VIs, allow the clients to run for a while, then flip the two halves, giving the server time to run. I can see the use of a flat-out Pause button to just select a bunch and freeze them without accidentally unfreezing something else, but I'd still like my (Un)Pause button as a third button.

    Minor side issue: fix the label of the "Resume" button.

    Otherwise, looks very good.

  8. > I'm not sure really how big a deal this is. If abort fails on a subVI, can't we simply open its BD and hit the abort button? Am I missing something here?

    The reason I started building these tools is that I have anywhere from 10 up to a couple hundred independently running clones. Hitting all those Abort buttons takes a while. My workaround has been to close the project, which throws away the entire application instance, but that loses all of my probes, which I found frustrating.

  9. drjdpowell: Your post unwittingly highlights something VERY interesting. You're working with the older version of the AF, where we used the "Run VI" method. In the 2011 NI Week release, we're using the Asynch Call By Reference because it makes it much simpler and faster to launch copies. But the speed improvement is marginal, and ACBR makes us lose the ability to abort the subVIs. Going back to the Run VI method would make them open as top-level. That's something worth considering!

    Everyone: The checking for clones problem gets worse the more times you run your application. Statically allocated clones will generally keep their number. Dynamically allocated clones will move further down the number line. That poses a problem for the "test a few" idea.

  10. This topic is incredibly timely for me, as I've been building my own tool to do this recently. On this topic, I have some bad news. But before that... Ravi, I liked your UI, so I integrated a few features I had in my app into yours. I hope you don't mind.

    Added:

    • Tracking of execution highlight and ability to toggle it on all selected VIs
    • Tracking of paused VIs and ability to toggle pause on all selected VIs
    • Selection of project/target
    • Filtering of vi.lib
    • Filtering of global VIs
    • Filtering of control templates (.ctt)
    • Sorting by library name
    • Compressed the column text for some entries so more columns fit on the screen

    Here's the modified VI:

    LabVIEW Task Manager AQ Revision.zip

    Just a bit of tweaking and it can be added directly to the Project directory and launched from the Tools menu. Sorry... it's 2am and I'm not doing that extra bit of tweaking right now.

    Now, on to the bad news...

    I've spent the last week working on exactly this sort of tool, and I've had a series of problems that I could not solve. I looked at the Task Manager uploaded by Ravi Beniwal, thinking he might have overcome the issues somehow. His tool has the same bugs, though he may not realize it. After consultation with other LV R&D folks today, I can now say that these bugs cannot be fixed in LV 2011 or earlier. And I'm doubtful we can do anything about them in the time remaining for LV 2012 features.

    Two insurmountable problems I found...

    -----------------------------

    First: Affects reentrant VIs. There is no effective way to get a list of the clone VIs... you know, the ones that are named "XYZ.vi:1" or higher numbers.

    These are the VIs that you often most need to abort, since one of the more common patterns in LV is to kick off reentrant clones running independently and then, when something goes wrong, you need to kill them. But you cannot get a list of all the reentrant clones that a VI has. You can open a VI ref to the clones by using Open VI Reference and passing a name in like "XYZ.vi:1". I tried checking each value sequentially until I got an error, only to discover that the numbers are not sequential. They can be any number up to MAX_Int32 (roughly 2 billion), so the "guess and check" method is out.

    -----------------------------

    Second: Affects both remote VI Server calls and local Asynch Call By Ref calls. You can't abort subVIs. The Abort VI method will return error 1000 unless the VI is the top-level VI. So what's the problem? When you launch a VI using the Asynch Call By Reference using "Fire & Forget" mode, it launches as a subVI, even though it will keep running if its caller quits. That means that even if the VI is not reentrant, so you can get a reference to it, you still can't tell it to abort. And there is no way to get the caller VI because the caller VI is a fake proxy (you can see it in the VI Hierarchy window). When you launch a VI remotely using the Run VI method or ACBR, you also have a proxy caller that isn't abortable.

    -----------------------------

    I've talked to multiple R&D staff with really deep knowledge of this issue, and there is no solution. When I raised the topic, it was basically the first time that those involved collectively realized just how problematic this is to solve. I'd like to say something will be better in 2012, but that seems unlikely at this late date (yes, a month after 2011 released is late in our dev cycle; we do have to finish development pretty far in advance to get the testing solid).

    The best workaround is to build a mechanism into your reentrant and dynamically launched VIs that allows them to be messaged by some tool to stop them -- for example, if they use a queue, register that queue with a central data store somewhere so that you could run a tool to kill all those queues, which would make those VIs exit on their own. Unfortunately, that led me to discover a third issue. There's no way in LV to have code that is conditionally compiled in only when debugging is enabled on the VI. Code that registers the VI with the central system is useful while debugging, but probably shouldn't be there when you release -- it's only going to create performance overhead and take up memory. You'll have to either remove it manually before you build your application OR create a custom conditional disable symbol in every project you write. I find this solution distasteful at best.

    Anyway, that's my sucky news on the LV Task Manager front. I hope the edits I added are useful.

    • Like 1
  11. At some point, a debug problem becomes so thorny that a programmer must declare that the current debug tools are insufficient and spend time writing new tools before proceeding. In my current hobby-at-home project, I've hit that point. I spent this entire weekend writing a couple of tools for tackling parallel debugging.

    One tool that I haven't written (yet) is a custom probe to simultaneously pause parallel VIs, and I want to know if anyone has already written such a beast and would share it so I don't have to build it.

    What I'm imagining is this:

    A custom probe that has on its panel an array of paths to other VIs. When the custom probe trips, it passes True for its boolean output (to pause the running VI) and it also calls Pause VI method on all the VIs listed in the array. There would be an option for each path to pause all the reentrant clones of the path or a particular subset of reentrant clones.

    I need this to debug an event handler and to keep the parallel VIs from continuing to generate events. A couple of times, I have gotten very close to the source of a bug only to have LV run out of memory because the background VIs continued to generate events and queue stuff up. It's really annoying.

    Basically, I need this probe to do what a breakpoint typically does when there are parallel loops on the VI's diagram -- it pauses both loops. But put both of those loops into subVIs and now you have my problem.

    PS: Bonus points if the custom probe registers the pause with some central debug tool that has a list of all the paused VIs so that all of them can be told to unpause as a group rather than finding and hitting the Pause button on every diagram (which in my case could be 10 to 20 VIs).

    • Like 1
  12. For OO in general, no, it's not a problem. In C# and Java, the *only* way to pass objects around is by reference.

    In LV specifically, it's not a no-no for classes any more than it is for any other LV data type. Passing around references is generally bad in a parallel environment like LabVIEW -- they tend to cause a lot of overhead and are the source of the single hardest-to-solve class of bugs. Data flow is preferred for all LV activities, regardless of the data type.

  13. Short answer: How would you create an integer in LabVIEW that you could access from two different VIs? Whatever answer you choose for that question, the same answer would apply to your object question.

    Longer answer: In LabVIEW, parallel access to the same block of data is a separate topic from classes. You can use a global VI, a single-element queue, a data value reference, or many other strategies for sharing any piece of data between two parallel parts of your code.

    For straight up "I want to modify this object from two parallel VIs", a Data Value Reference is going to be the correct answer for sharing an object. For a more sophisticated approach that prevents some of the pitfalls of simultaneous access to the same shared data, check out this approach from NI Week 2011. There are multiple variations on that theme posted online (see Lapdog and JAMA) -- the tradeoff is both are more complicated, but at the same time, more powerful/flexible.
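    As a rough analogy for readers more familiar with text languages, a Data Value Reference behaves something like a lock-protected shared value: one copy of the data, with access serialized so parallel writers can't collide. A hedged Python sketch (the class name is made up; in LabVIEW, the In Place Element structure plays the role of the `with` block):

```python
import threading

class DataValueRef:
    """Illustrative stand-in for a LabVIEW Data Value Reference."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # serializes access to the one copy

    def modify(self, fn):
        """Atomically read-modify-write the wrapped value."""
        with self._lock:
            self._value = fn(self._value)

    def read(self):
        with self._lock:
            return self._value

# Two "parallel VIs" incrementing the same shared object.
ref = DataValueRef(0)
workers = [threading.Thread(
               target=lambda: [ref.modify(lambda v: v + 1) for _ in range(1000)])
           for _ in range(4)]
for t in workers: t.start()
for t in workers: t.join()
assert ref.read() == 4000   # no updates lost
```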

    Ok... that's the desktop/RT system. I presume that the host target is where your FPGA communications VIs will be living. If you're actually on the FPGA target, none of the above works. Go back to my "short answer": how would you share a single integer? The answer, to the best of my knowledge about FPGA, is that it isn't really possible. FPGA has no concept of pointers or shared data. Instead, you write your VIs such that only one section of the code needs the integer at any given moment, and you pass the integer into that function.

    But for your host target needs, I think that should be enough info to get you started.

  14. The reason that we don't just use the private data control typedef is that there are several types -- mostly refnums -- where the display for probing is not the same as the control on the FP. The most common case is displaying an integer, but some of them, like the queue refnums, display as strings populated with details about the queue. We decided that the generic probe for classes should follow the exact same behavior as the generic probe for all other types and let users build custom probes as needed for particular cases. At the time, it seemed the best way to handle it.

    Probes don't automatically inherit. Honestly, I can't remember why... it was almost 8 years ago that we made that decision, and in all these years, I don't think anyone has asked me about it. Can anyone think of a good reason we would have done that? Given the way classes are constructed in LV, it probably took effort to keep that property from inheriting, which means there was probably a reason --- not necessarily a good reason, just a reason. :book:

    You can use the same VI that is in your parent as the probe for your child, you just can't set it as the default probe for the class. You can even select and use it if the VI is private in the parent class. Probes are debug, not part of your actual application, and as such, they just ignore the access scope rules. That way if your parent has defined something to explore the private parts of the parent, you can use that probe to explore the private parts defined by the parent on the child.

  15. > I always figured there was some magical subtlety I was missing that made the pattern worthy of a special name.

    Nope. Patterns are patterns, even the very common ones. Having said that, it is amazing how often in software design people go to great lengths to solve problem XYZ, and someone else pipes up with, "Why don't you just do MNO?" where MNO is some common action. That's often the case with the facade pattern... rather than trying to call an API directly, create a facade that you call through.

    • Like 1
  16. You're going to be forcing a relink of any existing VIs that expect to find their subVIs in user.lib but now find them in vi.lib. That's going to be a source-code level change for caller VIs (updating their saved subVI paths). It also means anyone loading any of these VIs dynamically needs to update their code. This is, in many ways, a change that breaks backward compatibility.

    Does VIPM give you any way to give feedback to users about the effects of a particular upgrade at the time that they choose to upgrade? If so, you probably want to employ that mechanism.

    Given that, I can imagine some people wanting to have both the user.lib and the vi.lib versions installed on the same LabVIEW. You could choose to not call this an upgrade but instead issue brand new packages, deprecating the ones in user.lib. That way people don't see these new versions as "upgrades" but actual new tools, which they would only adopt for new projects, and might continue to use the existing user.lib packages for existing projects.

  17. Jim Kring wrote:

    > which is (I believe) much less efficient

    I can confirm your belief. The native recursion performs better.

    Greg Sands wrote:

    > Perhaps it would be a good opportunity to evaluate whether a non-recursive implementation would be better

    All recursive algorithms can be re-written iteratively. You would be evaluating this trade-off on a case-by-case basis. As far as readability is concerned, in general, the more parameters that the recursive function has, the more the recursive solution is better than the iterative. For something with very few input parameters, the iterative is often easier to understand.

    The performance tradeoff is more clear-cut in LabVIEW -- the iterative solution will generally outperform the recursive one. The LabVIEW compiler's dataspace call structure is different from the stack-based approach of most compilers. Our structure allows for better cooperative multitasking, but it does mean a relatively high overhead for recursive calls, since we have to actually allocate heap space instead of just moving a stack pointer. However, I'm sure there is a level of complexity where the recursive solution wins, but it is probably fairly high up.
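    To make the rewrite concrete, here is the same computation written both ways in Python (an analogy only -- LabVIEW's dataspace allocation differs from Python's call frames, but the shape of the transformation is the same: an explicit stack replaces the call frames):

```python
def sum_tree_recursive(node):
    """node = (value, [children]); each call allocates its own frame."""
    value, children = node
    return value + sum(sum_tree_recursive(c) for c in children)

def sum_tree_iterative(node):
    """An explicit stack replaces the call frames; one loop, no recursion."""
    total, stack = 0, [node]
    while stack:
        value, children = stack.pop()
        total += value
        stack.extend(children)
    return total

tree = (1, [(2, []), (3, [(4, [])])])
assert sum_tree_recursive(tree) == sum_tree_iterative(tree) == 10
```

    With only one parameter threading through the calls, the iterative version stays readable; as the recursive function accumulates parameters, the explicit-stack bookkeeping grows and the recursive form starts to win on clarity, matching the rule of thumb above.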
