
Single-threaded math .dll killing CPU and main VI



Hi,

I've inherited a .dll that analyzes a 2D array containing 4 waveforms, 32 KB to 80 KB of data. The main LabVIEW VI continuously streams the 4 DAQ channels, analyzes, and displays, with everything executing in parallel. At higher sample rates the main VI becomes starved, and processes running in parallel with the .dll slow way down, falling up to 30 seconds behind. I've attached a Task Manager graphic showing an asymmetry between the cores of a Core 2 Duo while the VI is overloaded by the .dll; I always see this asymmetry in Task Manager when the main VI has big latencies.

[Attachment: Task Manager screenshot showing the unbalanced core usage]

I've seen this exact behaviour before in a different VI on a different project, where LabVIEW math subVIs were wired serially instead of in parallel. Once the VIs were rewired to run in parallel, everything ran smoothly with the cores balanced.

My challenge now is to convince someone else to refactor their .dll, and they think the best approach is to optimize the single-threaded .dll code so it simply runs faster. Do I have all my options listed below? What is my best argument to convince all the stakeholders to go with a solution that balances the analysis CPU load across cores? (And is that really the best direction to take?) Thanks, cc

Options:

1. Port the .dll to LabVIEW.

2. Refactor the .dll to be multithreaded so it runs on multiple cores in a balanced way.

3. Mess around with the priority of the subVI containing the offending .dll.

4. Refactor the .dll to run faster, but still in a single thread on one core.

2 weeks later...

Options:

1. Port the .dll to LabVIEW.

2. Refactor the .dll to be multithreaded so it runs on multiple cores in a balanced way.

3. Mess around with the priority of the subVI containing the offending .dll.

4. Refactor the .dll to run faster, but still in a single thread on one core.

Well, I guess they are all possible avenues, but!

1) is best if you have the entire algorithm available and also have some good test cases. Porting the algorithm is one thing, but proving that it really does the same thing as the original is quite a different beast (and can often take a lot more time than (re)writing the algorithm itself).

2) Depending on the algorithms used and the way the code is written, this could be a complete rewrite as well, maybe even requiring a different algorithm (if one exists) to actually allow parallelization. (There's a rough sketch of what this might look like below, after point 4.)

3) is the quickest and dirtiest solution.

4) is probably quite a bit less effort than 1) or 2), but whether there is much performance left to gain depends entirely on the algorithm used.
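For what it's worth, here is a rough sketch of what option 2 could look like on the C side, assuming the four channels can be analyzed independently of each other. The function names and the array layout are placeholders for whatever the existing .dll actually does, not its real API:

    #include <stddef.h>
    #include <omp.h>

    /* Hypothetical per-channel analysis routine; stands in for the
       existing single-threaded code inside the .dll. */
    void analyze_waveform(const double *samples, size_t n_samples, double *result);

    /* Fan the four rows of the 2D array out across the available cores.
       OpenMP manages the worker threads, so the exported function is
       still a single synchronous call as far as LabVIEW is concerned. */
    void analyze_all_channels(const double *data, size_t n_channels,
                              size_t n_samples, double *results)
    {
        #pragma omp parallel for
        for (long ch = 0; ch < (long)n_channels; ch++) {
            analyze_waveform(data + (size_t)ch * n_samples, n_samples, &results[ch]);
        }
    }

If the per-sample work dominates, splitting by channel like this is usually the least invasive way to spread the load across cores; if the algorithm has dependencies across channels or across time, it may indeed need the more fundamental rewrite mentioned above.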

