
Memory leak, but only in the exe?



So I have a LabVIEW application, which works fine in the IDE, but when built into an executable it suffers from a memory leak on the order of 5-10 kB/s.

 

The code uses several dynamically spawned components, including an acquisition component, a controller component, a user interface (viewer) component, and a couple of viewer sub-components. These all communicate through user events, which use a class factory pattern for the datatype.

 

In this situation I'm simply launching the executable and not performing any hardware interfacing or data gathering etc.; my code is just sitting there waiting for me to start something. During this idleness there are still some user events flying around, but not many (up to two or three per second, to about 5 subscribers), and the contained data of each user event class instance is either nothing or a few bytes at most. Therefore, even if each event's contained data were duplicated per subscriber, I estimate that about 100 bytes of data is being allocated per second (and that assumes the memory is never deallocated after use).

 

I cannot fathom how the code is gobbling up around 5 to 10 kB/s, and especially why this should only occur when built as an executable. Any thoughts, anyone?

 

A bit of cross-post from here, but the topic changed as I discovered more about the issue: http://lavag.org/topic/17825-bad-crash-under-xp-embedded/


I tried that. The memory leak is slightly reduced with debugging enabled, but still exists. In fact, the numbers above are from my latest tests, which have debugging enabled. So in the original exe the leak rates were higher :-/

 

I'm stuck for ideas now. I'm thinking I might try remote debugging, but not sure how useful that will be...

 

Edit: Remote debugging wasn't much help. I'm now trying to backport it to LV2012...

Edited by Thoric

I too experience a memory leak of seemingly 4-16 kB/s on an XP machine with LabVIEW 2013 RT. The leak causes a very slow application after several hours (approx. 5-6 h), even though memory is far from full (still more than 2 GB left). I remember something about queues allocating memory whenever you try to access them by name, about 4 bytes each time or so...

 

Found it: http://digital.ni.com/public.nsf/allkb/1EBEA74610577B8A86257156006985CB

 

Searching the internet further shows interesting topics (didn't try anything yet). AQ gave an interesting answer here: http://forums.ni.com/t5/LabVIEW/Obtain-Queue-memory-leak/m-p/1644744#M590534

 

Question is: Do we rapidly obtain queues by name without releasing them?

Also: since semaphores use queues internally, they too are worth checking.

 

I'm just throwing my thoughts in here, but maybe you are experiencing a similar or even the same issue.


I have one Notifier, for which I obtain and hold open just two references. No named queues.

However I do have lots of event structures with dynamic user events, and lots of generated user events that send class instances around. Could it be that the class instances aren't being released after they're handled by the event structure? That could partly explain the ever-increasing memory allocations.


Apparently it's due to Dynamic Events! I've trimmed my code down, painfully removed all classes and DVRs (legacy fears), and the memory leak was still apparent. Now I've realised the rate of memory absorption is exactly proportional to the rate of User Event generation.

 

I didn't think unhandled User Events caused any issues, but in any case I handle all my dynamic events in every Event Structure (one per component), so there are no unhandled dynamic events here. But for some reason LabVIEW is gobbling up RAM for each user event I generate. The user event is now just a cluster control on the primary caller, which shares a Create User Event reference with all the dynamically called components so that they can message each other using the cluster content (which includes an enum subject and variant data).

 

That's progress, but now what!?

Apparently it's due to Dynamic Events! I've trimmed my code down, painfully removed all classes and DVRs (legacy fears), and the memory leak was still apparent. Now I've realised the rate of memory absorption is exactly proportional to the rate of User Event generation.

 

Ouch... that is quite worrying!


After modifying Jack's EventsAndSundry vi #02, I must amend my statement on the NI forum. If a dynamic event is registered but not handled, that does not directly cause a leak. The leak starts if either: the loop in which the event structure resides stops; or the event structure handles events slower than they are generated.

After modifying Jack's EventsAndSundry vi #02, I must amend my statement on the NI forum. If a dynamic event is registered but not handled, that does not directly cause a leak. The leak starts if either: the loop in which the event structure resides stops; or the event structure handles events slower than they are generated.

Yes, I can see how not running the event structure would cause the queue to build up, or indeed if the events are created faster than they can be processed. But neither is the case in my code; it runs as expected.

 

Is your app working normally while this memory is building up? If one of your components' event loops were somehow paused (by something that happens only in an exe), then user events would build up as observed.

Yes, my app is working fine.

 

One of the events was "new value available", raised by the hardware acquisition component, which runs at 100 Hz and provides the latest value to all listeners (UI component, logger etc.). When I remove this event and use a named notifier for sharing this value, the memory leak is reduced but still there (there are other low-rate events still happening). If I create a new event and raise it at 1 kHz, the memory leak increases.

 

So last night I created a wholly new project, with a main VI that generates User Events, dynamically calling another VI that registers to the same User Event, which calls another dynamically launched VI which also registers to the User Event. Build it into an exe and....... no memory leak. Gahh!

The following has just occurred to me, and may be important - hopefully someone with more knowledge than I can advise:

 

In my User Interface component (which is called dynamically) I have a subpanel. The subpanel shows one of four VIs, all four of which are loaded into memory and swapped into and out of the subpanel control as required. Two of these subpanel VIs have event structures in them that are registered to the User Event. When I load these subpanel VIs I set them running using the invoke method "Run VI" with "Wait Until Done" set to false, so they're all running independently of whether they're actually shown in the subpanel or not. But, could it be possible that the three subpanel VIs that are not in the subpanel control are unable to react to the user events, and hence have an ever-increasing queue stack?

But, could it be possible that the three subpanel VIs that are not in the subpanel control are unable to react to the user events, and hence have an ever-increasing queue stack?

I doubt it, but a good debug mode is to have all your sub-UI VIs open their front panels instead of inserting them selectively in the subpanel. Try that and see if that makes the memory issue go away (and check that all are functioning and not blocked when they are "closed").


Oh my God I've been a complete idiot!  :oops:

 

The source of the problem does lie with one of the subpanel VIs. I had the Register For Events node inside the while loop that contains the event structure, so it was creating a new event registration with each iteration. Rookie error, but such an easy thing to miss when you're looking hastily at the code for a problem.
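For readers outside LabVIEW, here's a rough text-language analogue of that mistake -- a hedged Python sketch (all names are illustrative, not LabVIEW API), where the subscription is allocated inside the receive loop instead of once before it:

```python
import queue

class Publisher:
    """Minimal stand-in for a user event: fans messages out to subscriber queues."""
    def __init__(self):
        self.subscribers = []

    def register(self):
        q = queue.Queue()           # analogous to Register For Events
        self.subscribers.append(q)  # the publisher holds this reference forever
        return q

    def generate(self, msg):
        for q in self.subscribers:
            q.put(msg)

pub = Publisher()

# Buggy handler: registers INSIDE the loop, like dropping the Register For
# Events node inside the while loop that contains the event structure.
for _ in range(100):
    reg = pub.register()   # a new registration every iteration -> leak
    pub.generate("tick")
    reg.get()              # only the newest queue is drained; old ones keep filling

print(len(pub.subscribers))        # 100 stale registrations instead of 1
print(pub.subscribers[0].qsize())  # 99 unhandled messages piled up in the oldest
```

Moving the single `pub.register()` call above the loop makes both counts drop to the expected values, which is the same fix as moving the Register For Events node outside the while loop.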

 

Thanks everyone for your attention. This CLA is now off to question his qualification and career choice...  :P

 

 


In all the testing I did, the problem didn't initially appear to occur in the IDE, but later I realised it was just harder to spot. Task Manager isn't ideal for determining memory usage, which didn't help, and I suspect the LabVIEW IDE's memory manager is taking care of a lot more business than the exe's, so memory usage fluctuates by amounts in excess of the leak rate, which I guess made the slow increase difficult to identify. In the exe it was much easier to see.

Edited by Thoric

Thorough and informative, as ever, Jack :-)

My buggy scenario was Handle Event and Leak as true (no shift register to store the registration, but then I didn't expect to need one, as event registration was supposed to be outside the while loop).

Interesting idea Piranha Brigade. I like the idea of a troop of asynchronous slaves chomping away at tasks from a single publisher. What advantages would you expect events to have over queues though?

Queues are many-to-one. Events are one-to-many.

 

Not explicitly true -- for Queues, an enqueuer and a dequeuer are both accessing the same Queue object, whereas with Events, the enqueuer and dequeuer are interfacing with two separate objects -- the event publisher and the event registration/subscription.

 

Any number of writers may write to a Queue, and you may have any number of readers as well; stated another way, contrary to popular belief, you may have multiple asynchronous dequeuers pulling out of the same queue.
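That claim is easy to demonstrate outside LabVIEW; a sketch in Python (illustrative names), with four threads all dequeuing from one shared queue, each element being consumed exactly once:

```python
import queue
import threading

jobs = queue.Queue()
for i in range(100):
    jobs.put(i)

results = []
lock = threading.Lock()  # guards the shared results list

def worker():
    while True:
        try:
            item = jobs.get_nowait()  # each element is removed by exactly one dequeuer
        except queue.Empty:
            return                    # queue drained: this worker goes idle
        with lock:
            results.append(item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))                         # 100: nothing lost
print(sorted(results) == list(range(100)))  # True: nothing handled twice
```

Whether this behaviour is "wanted" is exactly the point of contention below: for a worker pool it is the desired semantics, for a broadcast it is a bug.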

 

Events have slightly different semantics compared to Queues in LabVIEW -- there must be one and only one Handler bound to an Event Registration at any one time, yet there may be [0,N] Registrations per publisher. With this extra level of indirection, I don't know if we can even properly classify Events as being something-to-something, but rather (N publishers)-to-(M registrations)-to-(0 or 1 handlers bound to a registration at a time).

 

Breaking it down:

 

Queues are many-to-many, one-to-many, one-to-one, or many-to-one, depending on what makes sense for your application domain.

 

Events support the same configurations, though with one additional level of indirection (which enables a host of additional design patterns), and the caveat that there may never be multiple readers (Event Handler Structures) simultaneously bound to one Event Registration. This caveat is the one I would like to see lifted, that Events might act more like Queues, in order to better support concurrent processing.

Interesting idea Piranha Brigade. I like the idea of a troop of asynchronous slaves chomping away at tasks from a single publisher. What advantages would you expect events to have over queues though?

 

Actually the other way around -- Queues have the advantage in LabVIEW for the time-being for the Piranha Brigade pattern, because Event Registrations do not yet allow multiple handlers to be concurrently bound to a single registration. The run time behavior is literally undefined -- utterly random results in the total number of events handled, at least as of LV2012.

Actually the other way around -- Queues have the advantage in LabVIEW for the time-being for the Piranha Brigade pattern, because Event Registrations do not yet allow multiple handlers to be concurrently bound to a single registration. The run time behavior is literally undefined -- utterly random results in the total number of events handled, at least as of LV2012.

 

Jack, can you go into more detail about why you would like to see UEs behave more like queues with regard to registration? Is it purely so that communication mechanisms all behave as one would expect, or do you see some sort of functional advantage? I assume the former, but I just want to clarify.


My mind is buzzing with new insight with regard to user event semantics. I hadn't previously appreciated the potential that the Register for Events node provides us. The ability to nullify the registration, change to a different source, etc. gives you much more control over how your system components can behave. I do remember your presentations, Jack, on the foibles of User Events, and I can see better why you spent time delving into all this. And I agree that multiple handlers per registration has many valid use cases, so let's hope NI is attempting to clean up this particular oversight. Is there an Idea Exchange entry for this, or a campaign thread?


You're in danger here of becoming an architecture astronaut. :D

Your typical architecture astronaut will take a fact like "Napster is a peer-to-peer service for downloading music" and ignore everything but the architecture, thinking it's interesting because it's peer to peer, completely missing the point that it's interesting because you can type the name of a song and listen to it right away.

 
So. Breaking it down.  :) 
 

Not explicitly true -- for Queues, an enqueuer and a dequeuer are both accessing the same Queue object, whereas with Events, the enqueuer and dequeuer are interfacing with two separate objects -- the event publisher and the event registration/subscription.

 
OK. Let's make a little change here:
-- the event enqueuer and the event dequeuer of that Event Registration's queue. Let's not get confused by extra abstraction layers.

 

So you are half right. Queues do access a single object. But so do events, which have access to their own queue, which just happens to get populated indirectly rather than directly by the enqueuer.
 

Any number of writers may write to a Queue, and you may have any number of readers as well; stated another way, contrary to popular belief, you may have multiple asynchronous dequeuers pulling out of the same queue.

 
Not really. You can only have one dequeuer, as dequeuing destroys the element (readers and dequeuers are being confused here), and sure, you can "peek" the queue, but that does not destroy the element. So having "multiple dequeuers pulling out of the same queue" is unpredictable and results in unwanted behaviour for the most part. In fact, it is a common rookie bug. This is encapsulated by the axiom that queues are "many-to-one", and if you keep to that, you will be fine.
 

Events have slightly different semantics compared to Queues in LabVIEW -- there must be one and only one Handler bound to an Event Registration at any one time,

 
Why is that? Is it because the Event Registration primitive is, in fact, the semantics for a unique queue? If you add multiple handlers, do you not end up with the problem I described previously about multiple dequeuers? I would say they have exactly the same semantics as Queues (just different primitives), and you can only have one outbound (aka dequeue) at any one time. The difference arises from the way the Registration queue is populated.

 

yet there may be [0,N] Registrations per publisher. With this extra level of indirection, I don't know if we can even properly classify Events as being something-to-something, but rather (N publishers)-to-(M registrations)-to-(0 or 1 handlers bound to a registration at a time).

 
Of course we can. Come back down to the troposphere for a second.  :P 

A fairly good approximation for Events in LabVIEW is a queue (let's call it the "event_queue") feeding a number of queues (let's call them the "handler_queues"), where the event_queue re-transmits each enqueued element to the other queues before the element is destroyed. In this approximation, we need a VI that adds a queue reference to the event_queue and registers a unique ID for a handler_queue, so that when an element is enqueued to the event_queue it is copied onto each registered handler_queue (iterating through handler_queues 0..N). Each handler (just the usual while loop, dequeue element and a case structure) waits on and dequeues from its respective queue. So we can create event-like behaviour using queues, but we have to do a lot more programming to realise it.
 
So we have one (event_queue) - to - many (handler_queues). This is essentially the Event system in LabVIEW.
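The approximation above can be sketched in Python (illustrative names; a background thread plays the role of the re-transmit VI):

```python
import queue
import threading

event_queue = queue.Queue()
handler_queues = {}  # unique ID -> handler_queue

def register(handler_id):
    handler_queues[handler_id] = queue.Queue()
    return handler_queues[handler_id]

def retransmit_loop():
    """Dequeue from event_queue and copy each element onto every
    registered handler_queue before the original element is destroyed."""
    while True:
        element = event_queue.get()
        if element is None:  # sentinel, just to end this sketch
            return
        for q in handler_queues.values():
            q.put(element)

ui = register("ui")
logger = register("logger")

t = threading.Thread(target=retransmit_loop)
t.start()
event_queue.put("new value")  # one enqueue ...
event_queue.put(None)
t.join()

msg_ui, msg_log = ui.get(), logger.get()
print(msg_ui, msg_log)        # ... delivered to both handlers
```

Each registered handler would then run its own dequeue-and-case-structure loop against its private queue, as described above.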

We do exactly this all the time with TCP/IP servers, where events would be a much better solution but are, sadly, lacking.
 

 
Breaking it down:
 
Queues are many-to-many, one-to-many, one-to-one, or many-to-one, depending on what makes sense for your application domain.


I will counter-argue that if you are trying to use queues for anything other than many-to-one (one-to-one being one of the intersection edge cases I mentioned in my previous post), use something else. I have already outlined how you can use queues for event-like behaviour, but what's the point when the language handles it for you in Events? Just because you can doesn't mean you should -- the right tool for the job.

Many-to-many: only practically realisable with architecture.
One-to-many: Events. Notifiers.
One-to-one: anything you like (Queues, Events, Notifiers, FP terminals, Globals et al.).
Many-to-one: Queues.
 
 

 Events support the same configurations, though with one additional level of indirection (which enables a host of additional design patterns), and the caveat that there may never be multiple readers (Event Handler Structures) simultaneously bound to one Event Registration. This caveat is the one I would like to see lifted, that Events might act more like Queues, in order to better support concurrent processing.

 

As I have said already, they behave exactly like queues, which is why you cannot have multiple dequeuers (Event Handler Structures) attached to the same registration, just as you cannot have multiple dequeuers for a queue without unpredictable and unwanted results.

 

I think the issue here is confusion between reading (aka peeking) and dequeuing, which has been lost as you've gone further up the abstraction ladder. The destruction of the element is a key attribute of queues, and once it is omitted you need other mechanisms to either filter or remove elements. The difference between events and queues isn't how they are consumed (events use queues too); it is how they are populated, and this gives rise to the ability to create "one-to-many" (events) from multiple "many-to-one" (queues), all wrapped up in a nice, neat package.

Jack, can you go into more detail about why you would like to see UEs behave more like queues with regard to registration? Is it purely so that communication mechanisms all behave as one would expect, or do you see some sort of functional advantage? I assume the former, but I just want to clarify.

 

It's for both reasons -- to satisfy the Principle of Least Astonishment, and to allow handler processes to subscribe to multiple concurrent messengers. In the case of the Piranha Brigade, you might typically want two registrations per piranha (worker) -- the Job Queue and the Abort registration. The poor man's Abort is simply to flush the Job Queue, which causes all the workers to fall into their idle/stopped condition.
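A minimal Python sketch of that poor man's Abort (Python's queue.Queue has no native flush, so a get_nowait loop stands in for LabVIEW's Flush Queue; names are illustrative):

```python
import queue

job_queue = queue.Queue()
for job in range(10):
    job_queue.put(("job", job))

def flush(q):
    """Stand-in for Flush Queue: remove every pending element at once."""
    drained = []
    while True:
        try:
            drained.append(q.get_nowait())
        except queue.Empty:
            return drained

# Workers block on job_queue.get(); once the queue is flushed there is
# nothing left to hand out, so they fall into their idle/stopped condition.
flushed = flush(job_queue)
print(len(flushed), job_queue.empty())  # 10 True
```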

 

 

Is there an Idea Exchange for this, or campaign thread?

 

Not that I know of -- but it might be helpful to start a discussion focused on this topic. I'm personally not running into roadblocks that can't currently be solved another way, but the ability would definitely clean up existing syntax.

 

You're in danger here of becoming an architecture astronaut. :D

 

Nah, strictly the opposite! Solving existing problems such as load balancing with the absolute minimal syntax possible! Multiple readers consuming from one job queue is about as simple a load balancer as can be developed in LabVIEW. The queue acts as the endpoint into which one or many sources queue jobs, and multiple asynchronous workers gobble away at those jobs -- the queue acts as a passive load balancer, rather than needing to handle routing and scheduling between workers.

 

A fairly good approximation for Events in LabVIEW is a queue (let's call it the "event_queue") feeding a number of queues (let's call them the "handler_queues"), where the event_queue re-transmits each enqueued element to the other queues before the element is destroyed. In this approximation, we need a VI that adds a queue reference to the event_queue and registers a unique ID for a handler_queue, so that when an element is enqueued to the event_queue it is copied onto each registered handler_queue (iterating through handler_queues 0..N). Each handler (just the usual while loop, dequeue element and a case structure) waits on and dequeues from its respective queue. So we can create event-like behaviour using queues, but we have to do a lot more programming to realise it.

 

To clarify -- it's an improper mental model to consider the User Event publisher a "queue". When a User Event object is created, there is no underlying queue of messages that grows with the "Generate User Event" method, and so there exists nothing to "re-transmit" to the handler queues. A better mental model is to consider the Generate User Event method as a synchronous method that enqueues directly into each of [0,N] subscriber queues. It's not an asynchronous method that enqueues an event into its own queue which is then asynchronously re-transmitted to handler queues. This is why Event semantics are so much more powerful than Queues with regard to decoupling systems in LabVIEW, and why native Queues don't make good mental models for how native Events work -- a publisher does not create a memory leak in the case of zero subscribers.

(This mental model is just that -- a mental model. In reality, the underlying implementation has a more sophisticated memory-saving technique, providing one globally-scoped-to-the-context subscription queue per Event, where each message exists as only one copy with pointers recording which registration queues have not yet handled it. Consider 100 messages, each of size 1 unit. Regardless of whether there exists 1 subscription or 10, only 100 units of memory are necessary to hold all subscriptions, plus the relatively small overhead of references to each subscriber per message. If there are 0 subscribers, then 0 memory is allocated or queued, and each of the 100 messages fizzles into the ether synchronously with the posting of the message.)

Said another way, the union of Queues and Events in LabVIEW comprises The Superset Of Semantic Awesomeness, and I wish all merits were accessible through one transport-mechanism API without having to compromise.
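That mental model -- Generate User Event as a synchronous fan-out with no queue of its own -- might be sketched like this in Python (a toy model only, emphatically not NI's implementation):

```python
import queue

class UserEvent:
    """Mental-model sketch: the publisher owns no message queue of its own."""
    def __init__(self):
        self._registrations = []

    def register(self):
        reg = queue.Queue()             # one queue per subscriber registration
        self._registrations.append(reg)
        return reg

    def generate(self, msg):
        # Synchronous fan-out straight into each of the [0,N] subscriber queues.
        # With zero subscribers the loop body never runs: nothing is allocated.
        for reg in self._registrations:
            reg.put(msg)

ev = UserEvent()
for _ in range(100):
    ev.generate("unheard")  # no subscribers: the messages fizzle, no leak

sub = ev.register()
ev.generate("heard")
print(sub.qsize())          # 1: only messages posted after registration arrive
```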

 

We do exactly this all the time with TCP/IP servers, where events would be a much better solution but are, sadly, lacking.

 

Heartily concur. This can be generalized to say, lots of different APIs in LabVIEW would benefit from providing asynchronous output streams that adhere to these Events pub/sub semantics. Again, this is in the spirit of enabling concurrent systems development in LabVIEW, which converges to Actor design and the current topic of asynchronous dataflow on Expressionflow.

Not really. You can only have one dequeuer, as dequeuing destroys the element (readers and dequeuers are being confused here), and sure, you can "peek" the queue, but that does not destroy the element. So having "multiple dequeuers pulling out of the same queue" is unpredictable and results in unwanted behaviour for the most part. In fact, it is a common rookie bug. This is encapsulated by the axiom that queues are "many-to-one", and if you keep to that, you will be fine.

I think Jack is referring to a 1-to-N "worker pool", where one wants one (and only one) worker to handle each task. This is simple with a Queue but can't be done with User Events. You're talking about a 1-to-N multicast, where every receiver gets a copy. This is easy with User Events but requires an array of queues to achieve with Queues.

 

-- James

To clarify -- it's an improper mental model to consider the User Event publisher a "queue". When a User Event object is created, there is no underlying queue of messages that grows with the "Generate User Event" method, and so there exists nothing to "re-transmit" to the handler queues. A better mental model is to consider the Generate User Event method as a

<snip>

Heartily concur. This can be generalized to say, lots of different APIs in LabVIEW would benefit from providing asynchronous output streams that adhere to these Events pub/sub semantics. Again, this is in the spirit of enabling concurrent systems development in LabVIEW, which converges to Actor design and the current topic of asynchronous dataflow on Expressionflow.

 

Actually, looking closer, my initial "thought" was wrong. Event registrations aren't equivalent to "Dequeue" (i.e. destroying the element), since you get multiple elements in multiple Event Structures. So if your "semantic awesomeness" is advocating they behave as I described, then we are in agreement.

As I have said already, they behave exactly like queues, which is why you cannot have multiple dequeuers (Event Handler Structures) attached to the same registration, just as you cannot have multiple dequeuers for a queue without unpredictable and unwanted results.

Multiple dequeuers on a Queue are predictable; one and only one dequeuer will get each element. This can be wanted if what you want is a worker pool. But multiple Event Structures drawing from the same Event Registration are unpredictable, because at least one, but possibly multiple, of the structures will receive the event. This is always unwanted behaviour, so one cannot build a worker pool with User Events.

