
What Method Would You Suggest?



I want to launch up to 12 cloned VIs from a top-level VI, launching and then stopping them at different times.

I want them to run as Preallocated Clone Reentrant Execution.

I intend these clones to run for extended periods of time, months to maybe years.

And finally, I want to send information between the clones and the top-level VI.

What method of communication to and from the clones would you suggest?

Thanks for your thoughts...


Why the "Preallocated"; are you trying to have a firm limit of 12 clones due to the long-term running?  With "Shared Clones" this stuff has been implemented by numerous frameworks, but I have not seen a fixed-size pool of clones done.  I would implement such a thing with my existing Messenger-Library "actors", but with an additional "active clone count" to keep things to 12.

1 hour ago, drjdpowell said:

Why the "Preallocated"; are you trying to have a firm limit of 12 clones due to the long-term running?  With "Shared Clones" this stuff has been implemented by numerous frameworks, but I have not seen a fixed-size pool of clones done.  I would implement such a thing with my existing Messenger-Library "actors", but with an additional "active clone count" to keep things to 12.

If they have internal state memory then they have to be pre-allocated.

18 hours ago, rharmon@sandia.gov said:

What method of communication to and from the clones would you suggest?

TCP/IP is the most flexible; it works between executables and across networks. Events and queues work if they are in the same application instance.
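To make the trade-off concrete, here is what the TCP/IP route looks like in skeleton form -- sketched in Python rather than G, since text can't show a diagram. The framing (a 4-byte length prefix) and the port number are arbitrary choices for the example:

```python
import socket
import struct

def recv_exact(sock, n: int) -> bytes:
    # Keep reading until exactly n bytes have arrived (TCP is a byte stream).
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def send_msg(sock, payload: bytes) -> None:
    # Length-prefixed framing: 4-byte big-endian length, then the payload.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Top-level process: listen, accept one clone, exchange a message.
if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 6340))   # port is invented for the example
    server.listen()
    conn, _ = server.accept()
    send_msg(conn, b"START")           # command down to the clone
    print(recv_msg(conn))              # e.g. b"ACK" back from the clone
```

The clone side mirrors this: connect, recv_msg, act on the command, send_msg a reply. Inside a single application instance, queues and events skip all of this framing, which is why they are the lighter choice there.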

On 1/29/2020 at 10:51 AM, ShaunR said:

If they have internal state memory then they have to be pre-allocated.

If you have some way to distinguish between them, they don't have to be pre-allocated. For example, DQMH uses "shared clone re-entrant execution", and when you launch one you get a Module ID back which you can use to address the clone.
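The bookkeeping behind that pattern is tiny -- essentially a lookup table from ID to the clone's input queue. A minimal sketch in Python (names are illustrative only, not DQMH's actual API):

```python
import itertools
import queue

# Hypothetical registry mapping a Module ID to a clone's input queue;
# this mirrors the idea of addressing shared clones by ID.
_next_id = itertools.count(1)
_clones: dict[int, queue.Queue] = {}

def launch_clone() -> int:
    module_id = next(_next_id)
    _clones[module_id] = queue.Queue()   # the clone would service this queue
    return module_id                     # caller addresses the clone by this ID

def send_to_clone(module_id: int, message: str) -> None:
    _clones[module_id].put(message)

def stop_clone(module_id: int) -> None:
    send_to_clone(module_id, "STOP")
    del _clones[module_id]
```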


Whining mode: I know they are your and Jeff's baby, but the last time I had a small (1 VI doing some pipelined processing) greenfield opportunity to try them out, they were pretty cool, seemed to work well in the dev environment (if a bit slow to do all the scripting steps), and I was excited to be using them... and then app builder happened. I quickly gave up trying to make my single VI build, tore out channels, and replaced them with your other baby, and it built right away. It's unfortunate, because the concept seemed to work as nicely as claimed, but... app builder. App builder truly ruins all good things 😢

On 1/28/2020 at 7:28 PM, rharmon@sandia.gov said:

I intend these clones to run for extended periods of time, months to maybe years.

Helpful mode: This is the key line -- I wouldn't necessarily trust myself to write an application to run standalone for that long without any sort of memory bloat/leak or crash. My personal recommendation would actually be to spawn them as separate processes. You can build an exe which allows multiple instances and pass parameters (like a TCP/IP port) to it. If one fails, you can restart it using .NET events or methods (or presumably Win32 if you, like Shaun, have an undying hatred of .NET). You can also use a tool like NSSM to automatically restart the supervisor process (assuming Windows doesn't break in this time period).
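A minimal supervisor for that process-per-clone approach might look like this -- Python standing in for whichever language does the supervising, with worker.exe and the port scheme invented for the example:

```python
import subprocess
import sys
import time

# Hypothetical worker exe; the real name comes from your build spec.
WORKER = ["worker.exe"]              # an exe built to allow multiple instances
PORTS = [6341 + i for i in range(12)]  # one TCP port per clone, 12 clones

def spawn(port: int) -> subprocess.Popen:
    # Pass the TCP port as a command-line parameter, one process per clone.
    return subprocess.Popen(WORKER + [f"--port={port}"])

workers = {port: spawn(port) for port in PORTS}

while True:
    time.sleep(5)
    for port, proc in workers.items():
        if proc.poll() is not None:      # process has exited
            print(f"worker on port {port} exited ({proc.returncode}); restarting",
                  file=sys.stderr)
            workers[port] = spawn(port)  # restart just that worker
```

If the OS kills a worker, everything it held (DLL handles, sockets, leaked memory) goes with it, which is the whole appeal of the approach.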

13 hours ago, smithd said:

Helpful mode: This is the key line -- I wouldn't necessarily trust myself to write an application to run standalone for that long without any sort of memory bloat/leak or crash. My personal recommendation would actually be to spawn them as separate processes. You can build an exe which allows multiple instances and pass parameters (like a TCP/IP port) to it. If one fails, you can restart it using .NET events or methods (or presumably Win32 if you, like Shaun, have an undying hatred of .NET). You can also use a tool like NSSM to automatically restart the supervisor process (assuming Windows doesn't break in this time period).

Even that way is hard, as you have to detect the problem to trigger a restart, and it is hard to come up with a foolproof detection method for all potential failure modes.
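One common partial answer is a heartbeat: each worker reports in periodically, and anything that goes quiet for too long gets restarted. It only catches failures that actually stop the heartbeat -- which is exactly the point about foolproof detection. A sketch in Python:

```python
import time

# Heartbeat-based detection: catches hangs as well as crashes,
# but only failure modes that stop the heartbeat from arriving.
HEARTBEAT_TIMEOUT = 30.0                 # seconds; tune to the worker loop rate
last_seen: dict[int, float] = {}         # worker id -> time of last heartbeat

def record_heartbeat(worker_id: int) -> None:
    # Called whenever a heartbeat message arrives from a worker.
    last_seen[worker_id] = time.monotonic()

def stale_workers() -> list[int]:
    # Workers that have been silent past the timeout and should be restarted.
    now = time.monotonic()
    return [wid for wid, t in last_seen.items()
            if now - t > HEARTBEAT_TIMEOUT]
```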


Fair enough, but I guess as a bottom-level statement I (perhaps misguidedly) trust Windows to do a better job of cleaning up failed processes than LabVIEW to clean up dead clones. This is especially true if the workers are doing anything that isn't completely core to LabVIEW -- for example, calling third-party DLLs (or IMAQ).


I have an app that uses a watchdog built into the motherboard. Failure to tickle the watchdog will trigger a full reboot, with the app automatically restarting and continuing without human intervention. In addition, failure to get data will also trigger a restart as a recovery strategy. It still failed at 55 days, due to an issue that prevented a Modbus client from connecting and actually getting the data from the app. That issue would have been cleared up by an automatic reboot, but detection of it was not considered.
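The lesson generalizes: only tickle the watchdog while the app is demonstrably doing its job, so a "running but useless" state also forces the reboot. Roughly, with the watchdog call and the data check stubbed out since they are hardware-specific:

```python
import random
import time

DATA_TIMEOUT = 60.0   # no new data for this long => stop tickling, let it reboot

def tickle_watchdog() -> None:
    # Stand-in for the real hardware call (e.g. a driver call to the
    # motherboard watchdog); here it just logs.
    print("tickle")

def got_new_data() -> bool:
    # Stand-in for the real acquisition check; simulates occasional data.
    return random.random() < 0.5

last_data = time.monotonic()
while True:
    if got_new_data():
        last_data = time.monotonic()
    if time.monotonic() - last_data < DATA_TIMEOUT:
        tickle_watchdog()    # healthy: keep the hardware reboot at bay
    # else: deliberately stop tickling and let the watchdog reboot the machine
    time.sleep(1.0)
```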


We do this in most of our sensor monitoring applications; each sensor you add to your system, for example, is handled by such a clone. Every serial port we use is likewise shared between sensor clones through cloned brokers. Client connections with various other systems are another part handled by preallocated clones, dynamically spawned on incoming connections.

Communication internally is mostly handled through functional globals (CVTs, circular buffers, Modbus registers, etc.), queues, and notifiers. Externally it's mostly through Modbus TCP, Modbus RTU, OPC UA, or application-specific TCP-based protocols. These applications have run 24/7 for years without any intervention, on Windows computers (real or virtual) and sbRIO/cRIO targets. In some of them we use DQMH, and have not run into any issues with that so far either.
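For anyone unfamiliar with the circular-buffer flavour of functional global: the text-language equivalent is just a fixed-size buffer behind a lock that every clone shares. A Python sketch (sizes and names are illustrative, not our actual implementation):

```python
import threading
from collections import deque

class CircularBuffer:
    """Fixed-size, thread-safe buffer -- the text-language analogue of a
    functional-global circular buffer shared between clones."""

    def __init__(self, size: int):
        self._data = deque(maxlen=size)   # oldest samples fall off the far end
        self._lock = threading.Lock()

    def write(self, sample: float) -> None:
        with self._lock:
            self._data.append(sample)

    def read_all(self) -> list[float]:
        with self._lock:
            return list(self._data)

# One shared, module-level instance, like an FGV: sensor clones write
# into it, consumer loops read snapshots out of it.
pressure_history = CircularBuffer(size=1000)
```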

If a memory leak is too small to be detected within hours or a few days of testing, it is probably so small that it will not cause a crash for years either.

13 hours ago, Mads said:

If a memory leak is too small to be detected within hours or a few days of testing, it is probably so small that it will not cause a crash for years either

My app failed due to Queues created but not closed under a specific condition. Memory use was trivial, and logged app memory did not increase, but at 55 days the app hit the hard limit of a million open Queues and stopped creating new ones.

6 hours ago, drjdpowell said:

My app failed due to Queues created but not closed under a specific condition. Memory use was trivial, and logged app memory did not increase, but at 55 days the app hit the hard limit of a million open Queues and stopped creating new ones.

One thing we do to help during memory leak testing is to place exaggerated or artificial memory allocations at critical points in the code, to make it more obvious when a resource is created and destroyed (or not...) 🕵️‍♀️ That is not an option for the native functions... 🙁 But, depending on the code, you might be able to run an accelerated life test instead...
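An accelerated life test can be as simple as driving the suspect create/destroy path far more often than production ever would, and checking whether resource usage returns to baseline. A Python-shaped sketch, with the cycle under test stubbed out:

```python
import tracemalloc

def one_cycle() -> None:
    # Stand-in for the code path under test: create a resource, use it,
    # destroy it. A leak here compounds with every call.
    buf = [0] * 1000
    del buf

tracemalloc.start()
one_cycle()                               # warm-up so one-time allocations settle
baseline, _ = tracemalloc.get_traced_memory()

for _ in range(100_000):                  # years of production cycles in minutes
    one_cycle()

current, peak = tracemalloc.get_traced_memory()
print(f"baseline={baseline} current={current} peak={peak}")
# If `current` keeps climbing with the cycle count, something in one_cycle leaks.
```

For handle-type leaks like the Queue case above, the same idea applies with a handle count instead of bytes: a limit you would hit at 55 days in production falls over in minutes under this kind of loop.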

