
A query on queues: how to retrieve the number of remaining refnums



I just bounced this question off an AE, but I thought I might try the think tank as well: I have a named SEQ that holds a .NET reference shared by several modules; when releasing the queue in any of these modules, I'd like to know if the queue was destroyed so I can clean up my .NET reference as well.

I can:

1) re-open the queue and see if it was created and react to that, or

2) wrap the Obtain/Release functions and have my own counter, or

3) re-enqueue the reference every time I obtain the queue to piggy-back the Get Queue State node to see the number of times it's been obtained (this kills the SEQ).
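For concreteness, option 2 (wrapping Obtain/Release with a private counter) might look something like the following textual sketch; Python stands in for the LabVIEW wrapper VIs, and all names here are hypothetical:

```python
class CountedQueueWrapper:
    """Sketch of option 2: wrap Obtain/Release and keep a private counter.
    Concurrency is deliberately ignored here; two callers hitting these
    methods at once could still race on the counter."""

    def __init__(self, create_resource, destroy_resource):
        self._create = create_resource    # runs when the count goes 0 -> 1
        self._destroy = destroy_resource  # runs when the count goes 1 -> 0
        self._count = 0
        self._resource = None

    def obtain(self):
        if self._count == 0:
            # First client: create the shared resource (e.g. the .NET refnum).
            self._resource = self._create()
        self._count += 1
        return self._resource

    def release(self):
        self._count -= 1
        if self._count == 0:
            # Last client gone: dispose of the shared resource.
            self._destroy(self._resource)
            self._resource = None
        return self._count  # the caller can tell whether it was the last one
```

The count returned from `release` is exactly the piece of information the built-in Release primitive doesn't pass back.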

I'm not thrilled about any of these options - I would have sworn on some holy literature that some of the Release functions passed back that kind of information but looking now I can't find one that does.

Is this a flawed paradigm?


I think your options are undesirable, not flawed. They may be the best way to handle the issue without a major rewrite.

On such things I have a "create" and "destroy" VI. The create runs before anything else; failure creates an error that causes everything downstream to fall through. The destroy does not run until I've verified all other code has terminated normally (or timed out waiting for such).


In reality, the .NET reference actually represents what should be another LV class, but up until this point it was unnecessary to encapsulate it that way with its own Create/Destroy methods called by the class which utilizes it. And now, of course, it doesn't make sense to spend the time on that encapsulation because it needs to Just Work.


No. 1 won't work, since if the queue is not created, you don't know if it's the last one or not (there may be 10).

There is a No.4

Only allow 1 queue reference so that you do not need to count. This is the method I use which means that you only have to call "Destroy" at the end of everything (it also means you don't get "runaway references").


No. 1 won't work, since if the queue is not created, you don't know if it's the last one or not (there may be 10).

I'm not sure I understand what you're saying, but the modules will never try to clean up without having initialized. During cleanup, the module knows the queue must exist because it (the module) will have created it during initialization.

I wrote this up quickly for illustration's sake:

[attached image: example snippet]

There is a No.4

Only allow 1 queue reference so that you do not need to count. This is the method I use which means that you only have to call "Destroy" at the end of everything (it also means you don't get "runaway references").

This doesn't work for my scenario because the .NET reference needs to be concurrently accessible between modules and cannot be destroyed/re-created in the interim.

In terms of wrapping your own, is the extensible session framework any use? This already has it all wrapped up, including this counter.

I hadn't heard of this before but I'll take a look at it. It's probably too heavy for this implementation (which ultimately revealed to me that the .NET reference in question actually deserved its own LV class).

I think no matter the method, this is impossible to do without a race condition.


I'm not sure I understand what you're saying, but the modules will never try to clean up without having initialized. During cleanup, the module knows the queue must exist because it (the module) will have created it during initialization.

I wrote this up quickly for illustration's sake:

[attached image: example snippet]

Then you don't need a count because you know there is always 1 Queue reference.

This doesn't work for my scenario because the .NET reference needs to be concurrently accessible between modules and cannot be destroyed/re-created in the interim.

The data (the .NET reference) won't be destroyed until you explicitly call the "Destroy" method in the wrapper. The only thing each call destroys is the Queue reference, and then only if the queue already exists (so during execution of the wrapper the queue ref count is 2). This means that the Queue wrapper can be called from anywhere in the app, by any module, in any order, and will always maintain the queue with a single reference (which keeps the data alive). Your .NET reference will be fine ;) but you won't have to count anything or worry about releasing anything, as the Queue reference is created and destroyed, on demand, without destroying the data.
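As a rough textual analogue of this wrapper (Python standing in for the graphical code; all names are made up), it behaves like a store with one persistent reference and a single explicit teardown:

```python
class SingleRefStore:
    """Sketch of the single-reference wrapper: one persistent 'reference'
    keeps the data alive; callers get at the data on demand from anywhere,
    in any order, and nothing is freed until destroy() is called."""

    def __init__(self, create_resource):
        self._create = create_resource
        self._resource = None
        self._alive = False

    def obtain(self):
        # Creates the store on first use; afterwards just hands back
        # the same data, without any counting by the caller.
        if not self._alive:
            self._resource = self._create()
            self._alive = True
        return self._resource

    def destroy(self):
        # The one explicit teardown, called at the end of everything.
        self._resource = None
        self._alive = False
```

The point of the pattern is that callers never release anything themselves; only the final, deliberate `destroy` touches the data.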

Edited by ShaunR

Then you don't need a count because you know there is always 1 Queue reference.

I guess I haven't been clear. There is a single, named queue (of questionable length) but multiple VIs will obtain it - this is what leads to the terminology "multiple references." What I ultimately need to know is whether the queue is still valid in any context, which is determined by a count of how many different contexts are subscribed.

The data (the .NET reference) won't be destroyed until you explicitly call the "Destroy" method in the wrapper. The only thing each call destroys is the Queue reference, and then only if the queue already exists. This means that the Queue wrapper can be called from anywhere in the app, by any module, in any order, and will always maintain the queue with a single reference (which keeps the data alive). Your .NET reference will be fine ;) but you won't have to count anything or worry about releasing anything, as the Queue reference is created and destroyed, on demand, without destroying the data.

Technically, your first sentence is correct, but it's irrelevant if I release all the queue handles, because I no longer have access to the .NET reference (and presumably it is automatically cleaned up, eventually). I completely understand that releasing the queue has no direct effect on the .NET reference, but again, this is irrelevant if I can no longer dequeue the .NET reference to act on it. If I were maintaining a reference to the .NET object anywhere else but this queue, that'd be great; however, the reference is only peeked from the queue when it is needed and never passed around. There is no common place to keep the queue reference open, as the modules may drop in and out of existence at any time. So, ultimately, it's not about the data (.NET reference) accidentally being destroyed, it's about correctly disposing of the .NET reference when no one else is interested, while also making it publicly available when one or more modules are interested.

Hopefully that is more clear?


I think no matter the method, this is impossible to do without a race condition.

The example you posted does have a race condition, but I don't think this does.

[Edit - I was wrong, it does. This race condition manifests as a dangling DVR rather than a memory leak. At least a dangling DVR can be detected and you can recreate it in the rare case it happens.]

[attached image: code snippet]

However, personally I think your code structure requires the module to know information it shouldn't care about. Namely, it has to know if any other modules are still using the .Net object. I think creating the queue one time in external code and injecting it into the module when it is created/initialized is a much cleaner solution. Then the module just uses it as a resource and doesn't have to worry about clean up--that's handled by the calling code.

[Edit - Having just seen your explanation about modules dropping in and out at any time, I understand why you are making each module responsible for obtaining and releasing the queue.]

This doesn't work for my scenario because the .NET reference needs to be concurrently accessible between modules and cannot be destroyed/re-created in the interim.

Shaun's method will work. Not all the queue refs are released--one persists until Destroy is called. But it doesn't solve your problem of requiring the module to know if everyone is done with the queue. It would still need a RefCounter or a single Destroy placed in the calling code when all the modules are done executing.


Shaun's method will work. Not all the queue refs are released--one persists until Destroy is called. But it doesn't solve your problem of requiring the module to know if everyone is done with the queue. It would still need a RefCounter or a single Destroy placed in the calling code when all the modules are done executing.

Indeed. In fact, the only safe way is to have a specific Destroy (I'll be eagerly looking at the solutions here, since I have not found one other than the wrapper).

If you use a ref counter and clean up when the count reaches zero (within the modules), what happens if the modules get called sequentially?

The first module will create the queue, release it, and therefore destroy the data since the counter is now 0 (no other modules have obtained a ref). When the next module in the sequence needs it, it will be gone. This is the race condition that occurs when they are all running asynchronously.

Edited by ShaunR

When the next module in the sequence needs it, it will be gone.

I believe this is intentional behavior in asbo's code. During initialization each module checks to see if the .Net reference exists. If it does not, the module creates it. Presumably persisting the .Net refnum when no modules need it isn't necessary. I can imagine situations where one might choose to use that strategy. Maybe a db connection or some other resource bottleneck. I am curious what kind of data this .Net refnum represents...

I'll be eagerly looking at the solutions here, since I have not found one other than the wrapper

The fundamental problem causing the race conditions is that the test for cleanup and the cleanup itself are not an atomic operation. There's always the chance Module 2 will do something--after Module 1 tests for cleanup but before the cleanup is performed--that will make the test results invalid. Depending on the circumstances and implementation you'll end up with either a memory leak or a dangling reference.

Here are some snippets illustrating the race conditions. The first is asbo's code showing how a memory leak can occur, and the second is my code showing how a dangling reference can occur. Asbo's code could also have a dangling reference if Module 2 was initialized between Module 1's test and cleanup. I don't think it's possible for my code to end up with a memory leak, but I'm not certain about that.

[attached image: asbo's snippet - memory leak race]

[attached image: Daklu's snippet - dangling reference race]

You need some way to make sure other Modules don't invalidate the cleanup test before the cleanup is performed. They can't be allowed to increment or decrement the RefCounter (whether explicit or implied). You can use a semaphore like in my example above or wrap the Init and Cleanup processes in an action engine. I'm pretty sure it's possible to add a semaphore to asbo's code and avoid an explicit RefCounter, but I haven't thoroughly analyzed that. As near as I can tell, you can't avoid race conditions using only unwrapped queue prims.
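In textual terms, the protected test & response block amounts to holding a lock across the whole check-then-act sequence. A hedged Python sketch (the lock plays the role of the semaphore; names are hypothetical, not the posted LabVIEW code):

```python
import threading

class GuardedRefCounter:
    """The test for cleanup and the cleanup itself run under one lock,
    so no other module can increment or decrement between them."""

    def __init__(self, create, destroy):
        self._create, self._destroy = create, destroy
        self._lock = threading.Lock()  # stands in for the semaphore
        self._count = 0
        self._resource = None

    def init(self):
        with self._lock:               # acquire before the existence test
            if self._count == 0:
                self._resource = self._create()
            self._count += 1
            return self._resource

    def cleanup(self):
        with self._lock:               # test and destroy are now atomic
            self._count -= 1
            if self._count == 0:
                self._destroy(self._resource)
                self._resource = None
```

Because no increment or decrement can occur between the test and the response, neither the memory-leak nor the dangling-reference interleaving is possible.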


I believe this is intentional behavior in asbo's code. During initialization each module checks to see if the .Net reference exists. If it does not, the module creates it. Presumably persisting the .Net refnum when no modules need it isn't necessary. I can imagine situations where one might choose to use that strategy. Maybe a db connection or some other resource bottleneck. I am curious what kind of data this .Net refnum represents...

That is correct. The .NET refnum shouldn't persist because there will be nothing to clean it up if the last module doesn't take care of it. In this case, the .NET reference is for a factory for a USB driver which handles device filtering and identification. Design constraints don't allow each module to have its own instance of the factory, so all modules in existence must share whatever instance exists. If it doesn't exist, the module creates it and uses the queue to make it public.

The fundamental problem causing the race conditions is that the test for cleanup and the cleanup itself are not an atomic operation. There's always the chance Module 2 will do something--after Module 1 tests for cleanup but before the cleanup is performed--that will make the test results invalid. Depending on the circumstances and implementation you'll end up with either a memory leak or a dangling reference.

FWIW, I ended up implementing Option 3, which I'm pretty certain defeats the clean-up race conditions; there's only one at start-up (which is actually present in all the options) where two modules might try to create their own factory. In this instance, I think it's a noop because the .NET assembly /should/ return the same refnum it already dished out. If Option 3 isn't clear, I can draft an example real quick for illustration.
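A rough Python sketch of Option 3 (the `queue.Queue` stands in for the named LabVIEW queue, peeking at `shared.queue[0]` approximates Preview Queue Element, and the check-then-act steps are still unprotected, as the rest of the thread discusses; names are hypothetical):

```python
import queue

shared = queue.Queue()  # stands in for the named LabVIEW queue

def module_init(create_factory):
    """Each starting module enqueues a copy of the reference, so the
    element count mirrors the number of live modules."""
    if shared.qsize() == 0:
        ref = create_factory()   # first module: create and publish
    else:
        ref = shared.queue[0]    # peek the existing reference (Preview)
    shared.put(ref)              # this module's 'subscription'
    return ref

def module_close(destroy_factory):
    """Remove our copy; if the queue is then empty, we were the last
    module and must dispose of the shared resource."""
    ref = shared.get()
    if shared.qsize() == 0:
        destroy_factory(ref)
        return True              # resource was disposed
    return False
```

The queue itself is being used as the reference counter: its length answers "how many contexts are still subscribed?"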


...and there's only one at start-up (which is actually present in all the options)

There is in the sample code you posted because you're creating the .Net reference before obtaining the queue. It's easily fixed by creating the .Net reference in a case statement after obtaining the queue if the queue was newly created, along with the code to enqueue the refnum the first time.

In this instance, I think it's a noop because the .NET assembly /should/ return the same refnum it already dished out.

The returned .Net object is a singleton?

If Option 3 isn't clear, I can draft an example real quickly for illustration.

If you don't mind...

There is in the sample code you posted because you're creating the .Net reference before obtaining the queue. It's easily fixed by creating the .Net reference in a case statement after obtaining the queue if the queue was newly created, along with the code to enqueue the refnum the first time.

Actually, that's exactly how I implemented it.

The returned .Net object is a singleton?

Actually, I'm not sure. I'm kind of black-boxing at this point, and also marginalizing that question because it's very unlikely to be a problem in my use case; what I said about modules coming in and out at any time isn't /completely/ accurate, as there will be some semblance of order during inits.

Okay, so here's my implementation of Option 3; first Init, then Close:

[attached image: Init snippet]

[attached image: Close snippet]

There is a small-ish race condition in the shutdown that I noticed when writing this up. Assume a last existing module A is shutting down while a new module B is starting up. If B obtains the queue (thus, created? = false) between when A queues the last reference and when A releases the queue, B will wait indefinitely on the Dequeue call in the False case of the Init process, as there will never be an element to dequeue (unless another module starts up). This is remedied by getting the queue status during Init and running the True case if there are zero elements on the queue.

Thoughts?


Thoughts?

I don't think this code will work the way you expect. As written, if you have four active modules then the first module that shuts down empties the queue and releases the .Net reference. In the cleanup code you're dequeuing the (one) element and immediately checking the queue status. It's going to return a count of zero.

This is remedied by getting the queue status during Init and running the True case if there are zero elements on the queue.

That doesn't fix the problem; you're just substituting one test for another test. The race conditions have nothing to do with the quality or specificity of the test being done. There is still a time delay between when Module A conducts the test and when it takes action based on the result of the test. During that time delay Module B can perform an action that makes the results of Module A's test incorrect, but A is already committed to a response so you'll end up with a bug.

Look back at the race conditions in the snippets here. Those are the best example I have right now that illustrate how parallel code executing between the test and the response invalidates the test results. You need to protect the test & response code block and make sure no parallel code can invalidate the test result before the response code executes. Then look at how the semaphores in this snippet only allow one complete test & create/destroy code block at a time.


My natural instinct here would say: if you try to implement a singleton, which you apparently do, why not replace the queue with a Functional Global Variable VI that exposes Obtain and Release methods? That also allows you to include any internal refcounting that you may require. The Obtain case creates your .NET object when the refcount is 0 and always increments the refcount, returning that .NET refnum, and the Release case decrements the refcount and closes the .NET refnum when it reaches 0. Since everything from testing the refcount to acting on it accordingly happens inside the FGV, the problem of potential race conditions doesn't even exist.

I'm sure this could be done with the singleton LVOOP pattern too, but functional global variables are ideal in LabVIEW to implement singletons. No need to do any semaphores or what else to avoid potential race conditions.

A SEQ may seem great to implement a storage of a singleton object without having to do a separate VI, but if you need any kind of control over this objects lifetime, a FGV is the preferred choice since it allows implementing the lifetime management inside the FGV without the danger of creating race conditions.
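The FGV behaves like a single non-reentrant function that owns its state. A rough Python rendering (an explicit lock emulates the call serialization LabVIEW provides for free on non-reentrant VIs; all names are hypothetical):

```python
import threading

_lock = threading.Lock()  # emulates non-reentrant VI call serialization
_refcount = 0
_refnum = None

def fgv(action, create=None, destroy=None):
    """Action-engine style singleton: 'obtain' creates the resource when
    the refcount is 0 and always increments; 'release' decrements and
    destroys the resource when the count reaches 0. Everything from the
    refcount test to the action happens inside one serialized call."""
    global _refcount, _refnum
    with _lock:
        if action == "obtain":
            if _refcount == 0:
                _refnum = create()
            _refcount += 1
            return _refnum
        elif action == "release":
            _refcount -= 1
            if _refcount == 0:
                destroy(_refnum)
                _refnum = None
            return _refcount
```

Because the test and the action are inside the same serialized call, no caller can observe or change the refcount between them.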

[attached image: FGV example]


I don't think this code will work the way you expect. As written, if you have four active modules then the first module that shuts down empties the queue and releases the .Net reference. In the cleanup code you're dequeuing the (one) element and immediately checking the queue status. It's going to return a count of zero.

A SEQ may seem great to implement a storage of a singleton object without having to do a separate VI, but if you need any kind of control over this objects lifetime, a FGV is the preferred choice since it allows implementing the lifetime management inside the FGV without the danger of creating race conditions.

It looks like both of you missed that Option 3 no longer utilizes a SEQ - every module which starts enqueues a copy of the .NET reference on the queue. The point of this is that it lets me utilize the number of elements on the queue as an indicator of how many modules are running. What is still fundamentally missing is semaphores around the queue operations (which would be more correctly refactored into a FGV as rolfk mentioned).

I can't right now, but I'll digest the rest of your posts later.


It looks like both of you missed that Option 3 no longer utilizes a SEQ - every module which starts enqueues a copy of the .NET reference on the queue.

Doh! That's a Preview Queue function in the False case, not a Dequeue function. My mistake. (I didn't realize that until I was coding up a more detailed explanation of why you were wrong. :unsure: )

In any case, the strategy probably will work, but imo it obfuscates the code's intent. In addition to the confusion over what the code is doing (preview vs dequeue), once that is figured out the natural follow-up question is why a copy of the reference is being put back on the queue. You can insert comments to explain all that, but there are simpler implementations that are much easier to understand. Your original code snippet much more clearly implements a singleton. This solution strikes me as cleverness trumping clarity.

a FGV is the preferred choice

which would be more correctly refactored into a FGV

More cookie poking? :lol:

I agree a FG is one way to prevent race conditions. I disagree it is "the preferred" or "more correct" implementation. Semaphores exist for the purpose of preventing code blocks from executing simultaneously. Using one explicitly acknowledges the possibility of a race condition and prevents it from occurring. There is a lot of value in using software constructs for their intended purpose. Future developers will look at the code, see the semaphore, and instantly think "race condition prevention." The ability to protect code blocks using a FG is (imo) a side effect of setting a VI to be non-reentrant, but it isn't the primary purpose of a FG--else it would be called a Functional Semaphore. Besides, FGs have other nasty habits, like playing havoc on your dependency tree and not extending as cleanly as other implementations.


I agree a FG is one way to prevent race conditions. I disagree it is "the preferred" or "more correct" implementation. Semaphores exist for the purpose of preventing code blocks from executing simultaneously. Using one explicitly acknowledges the possibility of a race condition and prevents it from occurring. There is a lot of value in using software constructs for their intended purpose. Future developers will look at the code, see the semaphore, and instantly think "race condition prevention." The ability to protect code blocks using a FG is (imo) a side effect of setting a VI to be non-reentrant, but it isn't the primary purpose of a FG--else it would be called a Functional Semaphore. Besides, FGs have other nasty habits, like playing havoc on your dependency tree and not extending as cleanly as other implementations.

Well, an FGV is the most trivial solution in terms of needed code. It's not as explicit as a specific semaphore around everything and not as OOP as a true singleton class, but in terms of LabVIEW programming, it is something which is truly tried and proven. I would also think that it is probably the most performant solution, as the locking around non-reentrant VIs is a fully inherent operation of LabVIEW's execution scheduling, and I doubt that explicit semaphore calls can be as quick as this.

Also for me it is a natural choice since I use them often, even when the singleton functionality isn't an advantage but a liability, simply because I can whip them out in a short time, control everything I want and don't need to dig into how LVOOP does things.

And reading about people's complaints of unstable LabVIEW IDEs when used with LVOOP doesn't exactly make me want to run for it either. ;) I know this sounds like an excuse, but fact is that I have apparently trained myself to use LabVIEW in a way that exposes very little instability, unless I'm tinkering with DLLs, and especially self-written DLLs during debug time, but that is something I can't possibly blame LabVIEW for.


Also for me it is a natural choice...

I have absolutely no problem with it being your preferred solution, and I'm not disputing the possibility that it can be written faster and execute quicker than an equivalent OOP implementation. I'm not even claiming a FG is an inappropriate solution.

I'm specifically challenging the notion that it is "the preferred" or "more correct" implementation. There are lots of things about them that make them less desirable to people than alternatives. I haven't used a FG in years, primarily because of the dependency tree effect and lack of extensibility I mentioned. Both of those things are vitally important to keeping code sustainable. Preserving sustainability is easily worth the extra 3 minutes to write a LVOOP solution and a couple microseconds of execution time in my applications.


I have absolutely no problem with it being your preferred solution, and I'm not disputing the possibility that it can be written faster and execute quicker than an equivalent OOP implementation. I'm not even claiming a FG is an inappropriate solution.

I'm specifically challenging the notion that it is "the preferred" or "more correct" implementation. There are lots of things about them that make them less desirable to people than alternatives. I haven't used a FG in years, primarily because of the dependency tree effect and lack of extensibility I mentioned. Both of those things are vitally important to keeping code sustainable. Preserving sustainability is easily worth the extra 3 minutes to write a LVOOP solution and a couple microseconds of execution time in my applications.

Daklu, I'm not trying to be difficult here but rather would like to understand how a LVOOP singleton would have much less dependency tree effect here. Yes Obtain and Release would be two different method VIs and the data dependency between these two would be through the LVOOP object wire instead of encapsulated in the single FGV. But that would decouple only the Obtain and Release operation as far as the hierarchy tree is concerned, not the fact that you use this object in various, possibly very loosely coupled clients. I say loosely coupled since they obviously have at least one common dependency, namely the protected resource in the singleton. And while easy extensibility is always nice to have, I'm not sure I see much possibility for that in such singleton objects.

Would you care to elaborate on the dependency tree effect in this specific case and maybe also give an example of a desirable extension that would be much harder to add to the FGV than the LVOOP singleton?

