ChrisClark Posted July 22, 2010

Hi,

I've inherited a .dll that analyzes a 2D array containing 4 waveforms, 32 KB to 80 KB of data. The main LabVIEW VI continuously streams the 4 DAQ channels, analyzes, and displays, with everything executing in parallel. At higher sample rates the main VI becomes starved, and processes running in parallel with the .dll slow way down, falling up to 30 seconds behind.

I've attached a Task Manager graphic showing an asymmetry between the cores of a Core 2 Duo while the VI is overloaded by the .dll. I always see this asymmetry in the Task Manager when the main VI has big latencies. I've seen this exact behaviour before in a different VI on a different project, where LabVIEW math subVIs were wired serially instead of in parallel. Once the VIs were rewired to run in parallel, everything ran smoothly with balanced cores.

My challenge now is to convince someone else to refactor their .dll, and they think the best approach is to optimize the single-threaded .dll code to make it run faster. Do I have all my options listed below? What is my best argument to convince all the stakeholders to go with a solution that balances the analysis CPU load across cores? (And is this really the best direction to take?)

Thanks, cc

Options:
1. Port the .dll to LabVIEW.
2. Refactor the .dll to be multithreaded and run on multiple cores in a balanced way (see the sketch below).
3. Mess around with subVI priority for the subVI containing the offending .dll.
4. Refactor the .dll to work faster but still run in only one thread on one core.
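For what it's worth, here is roughly how I picture option 2 looking inside the .dll: a minimal C sketch using OpenMP, assuming the four channels can be analyzed independently of each other. `analyze_channel` is a hypothetical stand-in for the real analysis routine, which I don't have the source for:

```c
/* Minimal sketch of option 2, assuming the four channels are
   independent. analyze_channel() is a hypothetical stand-in for
   the real routine in the .dll. Build with /openmp (MSVC) or
   -fopenmp (gcc). */
#include <omp.h>

#define NUM_CHANNELS 4

/* Hypothetical per-channel analysis entry point. */
extern void analyze_channel(const double *samples, int count,
                            double *result);

void analyze_all_channels(const double *data[NUM_CHANNELS],
                          int count,
                          double results[NUM_CHANNELS])
{
    int ch;
    /* Each channel's analysis runs on its own core instead of
       all four running serially on one core. */
    #pragma omp parallel for
    for (ch = 0; ch < NUM_CHANNELS; ch++)
        analyze_channel(data[ch], count, &results[ch]);
}
```

If the per-channel work is roughly equal, splitting by channel like this should spread the load evenly across both cores, which is exactly the balance I'm after.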
Rolf Kalbermatter Posted August 3, 2010

Quote:
Options:
1. Port the .dll to LabVIEW.
2. Refactor the .dll to be multithreaded and run on multiple cores in a balanced way.
3. Mess around with subVI priority for the subVI containing the offending .dll.
4. Refactor the .dll to work faster but still run in only one thread on one core.

Well, I guess they are all possible avenues, but!

1) This is best if you have the entire algorithm available and also have some good test cases. Porting the algorithm is one thing, but proving it does indeed do the same thing as the original is quite a different beast (and can often take a lot more time than (re)writing the algorithm itself; see the test sketch below).

2) Depending on the algorithms used and the way the code is written, this could be a complete rewrite as well, maybe even employing a different algorithm (if possible) to actually allow parallelization.

3) This is the quick-and-dirty solution.

4) In terms of effort this is probably quite a bit smaller than 1) and 2), but whether there is really much performance to gain depends on the algorithm used.
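To illustrate what I mean by test cases for 1): something along these lines, a minimal sketch that replays the same recorded waveform through both implementations and compares the results within a tolerance. `original_analyze` and `ported_analyze` are hypothetical stand-ins for the real entry points, which I obviously don't know:

```c
/* Minimal sketch of a golden-data regression test for option 1.
   original_analyze() and ported_analyze() are hypothetical
   stand-ins: one linked against the existing .dll, one against
   the rewritten/ported code. */
#include <math.h>
#include <stdio.h>

extern double original_analyze(const double *samples, int count); /* existing .dll */
extern double ported_analyze(const double *samples, int count);   /* new version   */

/* Returns 1 if both implementations agree within tol, 0 otherwise. */
int compare_on_recorded_data(const double *samples, int count, double tol)
{
    double a = original_analyze(samples, count);
    double b = ported_analyze(samples, count);

    if (fabs(a - b) > tol) {
        printf("MISMATCH: original=%.12g ported=%.12g\n", a, b);
        return 0;
    }
    return 1;
}
```

Run something like that over a good pile of recorded waveforms (including the nasty edge cases) before anyone trusts the port; collecting those recordings is usually the bigger part of the work.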