
[LVTN] Messenger Library



Hi @drjdpowell,

Before I reinvent the wheel, do you have any examples of python modules sending messages to one of your LabVIEW TCP Event Messenger Servers (or a simple UDP Receiver / Sender pair)?

I'm interested in exposing an existing LabVIEW Messenger actor to the folks on the Python side as an API. I'm not sure of the best way for them to send messages, but a simple one would be to send a string with the format (Message Label>>Msg Param1, Msg Param2....)
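For illustration, here is a minimal Python sketch of what that simple string protocol might look like over UDP. The host, port, and the ">>" / comma delimiters are assumptions taken from the format above, not anything defined by the Messenger Library.

```python
# Minimal sketch: send a "Label>>Param1,Param2" style message string to a
# LabVIEW UDP receiver. Host, port, and delimiters are assumptions here.
import socket

def send_message(label, *params, host="192.168.1.10", port=61557):
    """Format one message as a plain UTF-8 string and send it over UDP."""
    payload = f"{label}>>{','.join(str(p) for p in params)}".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

if __name__ == "__main__":
    send_message("Set Temperature", 25.0, "Celsius")
```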

On 10/23/2023 at 9:14 PM, bbean said:


Take a look here: https://github.com/VITechnologies/RemoteLabVIEWInterface

It works great! It uses JSON to package data on the Python side and sends it over ZeroMQ.
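As a generic sketch of that JSON-over-ZeroMQ approach (not the actual RemoteLabVIEWInterface API; the endpoint address and message keys here are assumptions):

```python
# Generic sketch: pack a command as JSON and send it to a LabVIEW-side
# ZeroMQ server using a request/reply socket. The endpoint address and the
# "command"/"args" keys are assumptions, not the RemoteLabVIEWInterface API.
import json
import zmq

def send_command(command, **arguments):
    context = zmq.Context.instance()
    socket = context.socket(zmq.REQ)           # request/reply pattern
    socket.connect("tcp://192.168.1.10:5555")  # assumed LabVIEW endpoint
    socket.send_string(json.dumps({"command": command, "args": arguments}))
    reply = json.loads(socket.recv_string())   # assumes a JSON reply
    socket.close()
    return reply

if __name__ == "__main__":
    print(send_command("Read Temperature", channel=3))
```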

 

 


Dear colleagues and Dr. Powell (hopefully),

 

We've been developing a pretty large project (17 actors) with the Messenger Library for the past few months. It is a great tool and serves our purposes well. However, when we deployed it on an NI cRIO-9042 and ran a long-term test, we noticed a slow but steady CPU load increase over time, enough that, with all actors running, it crashes the system overnight.

  • It seems that each actor contributes to this, which suggests it is an issue common to the framework
    • Possibly related more to self-addressed messages (how the Messenger Library does periodic actions) than messages between actors
  • Early testing suggests that using timed loops and queues does ‘not’ have this issue

 

General Testing procedure:

  • Run actors with configurations as described here and noted in the title of each plot
  • Have the SystemMonitor actor log the average CPU usage and Free Memory over time
  • Plot those values
    • Note that the dX/dt values are just taken from the max/min values of each series, plotting the change over the elapsed time, so they are only an approximation (see the sketch after this list).
  • Limitations:
    • We always have the logger writing to the file system. Based on the Test 4 conclusions, I ‘think’ this is not the core of any problems.
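For reference, a rough Python sketch of that kind of dX/dt approximation is below. It assumes the SystemMonitor log is a simple CSV with timestamp, cpu_percent, and free_memory_mb columns; those column names and the file layout are assumptions, not the actual log format.

```python
# Rough sketch of the dX/dt approximation described above: take the min and
# max of each logged series and divide the change by the elapsed time.
# The CSV layout (timestamp, cpu_percent, free_memory_mb) is an assumption.
import csv
from datetime import datetime

def rate_of_change(csv_path, column):
    times, values = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(datetime.fromisoformat(row["timestamp"]))
            values.append(float(row[column]))
    elapsed_hours = (max(times) - min(times)).total_seconds() / 3600.0
    return (max(values) - min(values)) / elapsed_hours  # change per hour

if __name__ == "__main__":
    print("dCPU/dt ~", rate_of_change("system_monitor.csv", "cpu_percent"), "%/h")
    print("dMem/dt ~", rate_of_change("system_monitor.csv", "free_memory_mb"), "MB/h")
```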

 Test 1:

  • Just for developing my plotting script

 Test 2:

  • Pretty minimal-actor test with GUI
  • Showed usage going up over time

 Test 3:

  • Repeat of test 2, but without the GUI
  • Still showed usage going up over time, though a little bit slower
  • Conclusion: usage is not some bug within the GUI actor

Test 4:

  • Run test with all actors
  • Saw usage go up much faster
  • Conclusion: There must be some CPU increase for each actor, which means it is ‘maybe’ not a bug introduced into any single actor, but something common to them all

Test 5:

  • Run with all actors, with their ‘read’ rates doubled
    • This would mean that actors would try to read any devices more often, and would involve more self-addressed messages
    • In most cases, this should ‘not’ have increased the amount of messages sent between actors
  • CPU usage went up much faster
  • Conclusion: CPU usage tied to message passing

Test 6:

  • Run all actors, with ‘read’ rates restored to normal (hopefully; it’s possible I missed some), but ‘publish’ rates doubled
    • This ‘publish’ rate is for the ‘state’ messages each actor sends to the GUI for display
  • CPU usage went up about the same as in test 4
  • Conclusion: messages addressed to other actors didn’t seem to increase CPU usage

Test 7:

  • Cut out the main body of the Logger and SystemMonitor actors, and run them within timed loops (i.e. no Messenger Library)
    • Repeat of test 2, though it’s possible the CPU usage comes from part of the default Messenger Library template that I did not copy
    • Using Queues to pass data from the read loops to GUI updates, so we still have data passing
    • Running the loops at 10x the main ‘read’ rate for these actors. This also means about 10x the GUI updates.
  • Status:
    • I ran this over the weekend. When I came back in on Monday, the CPU usage seemed to still be steady at 0.20%
    • I made a mistake in how the logging was set up, so I don’t have the logs for this
    • I’m re-running the test now. Judging from Test 2, a few hours should be enough to see an increase
  • Early Possible Conclusion: seeing the CPU not go up over the weekend makes me think this is indeed an issue within the Messenger Library

 

I am looking for any information that can shed light on high CPU load with the Messenger Library, and for suggestions on how to mitigate the problem.

I hope you can share your thoughts and ideas and steer me in the right direction. At this point we are too far into the project, and have too little time, to start over. Please help!

 

Thank you in advance.

 

Attached plots: 02_test.png, 03_test.png, 04_test.png, 05_test.png, 06_test.png

On 7/29/2024 at 2:58 PM, Sam Dexter said:

Dear colleagues and Dr. Powell (hopefully),

I'm sorry; you posted just before I went on a full month of holiday, and I never saw this. Do you still need help? Any further info?


Hey James,

Could you go into a bit more detail about the reasoning behind the EventDVR zero-copy code? It looks to me like the message still goes through a queue until it reaches the async EventDVR Forwarder, so I don't see how this provides any benefit over dequeuing directly at the recipient, since it has to pass through a queue anyway. It also seems the coder is still responsible for making sure there is only one recipient of the large data packet; otherwise, any additional observers necessarily create a copy, which makes the use of the DVR moot.

 

Edit: I came across this thread: https://forums.ni.com/t5/JDP-Science-Tools/Notes-on-Memory-optimization/td-p/4412690. Your points there seem to imply that the DVR is more about overcoming a limitation of event-based communication, which necessarily creates a copy whether it is warranted or not. (Me speculating) This limitation is unique to events and is not an issue for queues. Is that correct?

Edited by Conner P.