stever Posted February 4, 2009

Hello,

I would like to be able to set different parts of my LabVIEW code to run exclusively on different (and isolated) CPU cores (e.g. a quad-core machine doing 1) random OS stuff, 2) data acquisition, 3) data processing, and 4) visualization tasks). My trusty NI sales engineer informed me there's no way to do this, but now that I have LabVIEW (8.6), I have found the Timed Loop. While not exclusively for this purpose, it seems to be the only available solution (is this right?). So when I put a timed loop on a block diagram with some random CPU-intensive task and wire an integer to the loop's Processor input, I can see the CPU load move from one core to another.

My concern is the Timing Source. With just regular LabVIEW I apparently only have access to the 1 kHz source (no RT target stuff to get the 1 MHz clock). I just want this loop to run as fast as it can, as if it were a regular while loop in the block diagram; I don't want it limited to running at 1 kHz. Is there a way to fix this? Can I tell a Timed Loop to "just run as fast/often as you can"?

Thank you!
Steve
Neville D Posted February 4, 2009

QUOTE (stever @ Feb 3 2009, 04:01 PM)
I just want this loop to run as fast as it can, as if it were a regular while loop in the block diagram; I don't want it limited to running at 1 kHz. Is there a way to fix this? Can I tell a Timed Loop to "just run as fast/often as you can"?

No. But in my experience, if you just use regular loops and make any shared VIs re-entrant, the OS/LV will take care of scheduling the processor cores quite well. Alternately, you could add some NI hardware and use its clock source (you can do that in Windows as well), but I think it won't make a huge difference. I saw an NIWeek demo where going from 4 simultaneous FFTs on Windows to RT improved performance from 3.8x to 4x.

N.
Chris Davis Posted February 5, 2009

I've been working on this topic recently as well. NI introduced this in LabVIEW 8.5, although I get crashes when I try to set four separate while loops running on different cores. I haven't upgraded to 8.5.1, so your mileage may vary. In 8.6 I was able to get four timed loops to work on four separate cores.

Anyway, back to your original question. I believe if you prototype your work you will see that for most code the timed loop will work and run as fast as possible when setting a dt of 0. But while you are prototyping you should try putting a "regular" while loop inside a single-run timed while loop. This will allow you to use the timed loop to only set processor affinity. Your code will be slightly "messier", but you may find it does exactly what you want.

Enjoy,
Chris
Neville D Posted February 5, 2009

QUOTE (Chris Davis @ Feb 3 2009, 05:50 PM)
But while you are prototyping you should try putting a "regular" while loop inside a single-run timed while loop. This will allow you to use the timed loop to only set processor affinity. Your code will be slightly "messier", but you may find it does exactly what you want.

That's a very cool idea. But did you see any performance benefit from manually farming out processing to different cores?

N.
Grampa_of_Oliva_n_Eden Posted February 5, 2009

QUOTE (stever @ Feb 3 2009, 07:01 PM)
Hello, I would like to be able to set different parts of my LabVIEW code to run exclusively on different (and isolated) CPU cores... I have LabVIEW (8.6)... I just want this loop to run as fast as it can, as if it were a regular while loop in the block diagram... Thank you! Steve

I don't remember which version it was released in, but the Timed Sequence also lets you specify the CPU affinity.

Ben
stever (Author) Posted February 5, 2009

QUOTE
I believe if you prototype your work you will see that for most code the timed loop will work and run as fast as possible when setting a dt of 0.

I was looking at that. My only issue is that I don't know how the Timing Source interacts with the dt. You have to select something for Timing Source, and the only option is the 1 kHz clock. If you set dt to 0, does the structure run as fast as possible, or at 1 kHz? Documentation from NI is sorely lacking in this regard.

QUOTE
But while you are prototyping you should try putting a "regular" while loop inside a single-run timed while loop. This will allow you to use the timed loop to only set processor affinity. Your code will be slightly "messier", but you may find it does exactly what you want.

That's a good idea. What I think I'll do is use a regular while loop (which will run as fast as it can), inside of which is a Timed Sequence whose Timing Source is "1 kHz Clock <reset at structure start>". Inside this Timed Sequence is my LabVIEW code. I think this is equivalent to what you suggested.

Thanks for the suggestion!
JustinThomas Posted February 6, 2009

QUOTE (Chris Davis @ Feb 4 2009, 07:20 AM)
while you are prototyping you should try putting a "regular" while loop inside a single-run timed while loop. This will allow you to use the timed loop to only set processor affinity.

Really neat trick.

QUOTE (stever @ Feb 5 2009, 12:33 AM)
That's a good idea. What I think I'll do is use a regular while loop (which will run as fast as it can), inside of which is a Timed Sequence whose Timing Source is "1 kHz Clock <reset at structure start>". Inside this Timed Sequence is my LabVIEW code. I think this is equivalent to what you suggested.

I do not think this is the same as Chris' suggestion. You would have the overhead of calling the timed structure on every iteration. I think it should be the other way around: the code goes inside the while loop, which sits inside the timed loop.
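For readers coming from text-based languages, the "pin once, then free-run" arrangement Chris and Justin describe is roughly analogous to the C sketch below, written against the Win32 API rather than LabVIEW itself. It is only a conceptual approximation: the choice of core 1, the one-second run time, and the empty loop body are placeholders for illustration.

```c
/* Conceptual analogue (not LabVIEW): set processor affinity once at the
 * top of a worker thread, then let an untimed loop run as fast as it can. */
#include <windows.h>
#include <stdio.h>

static volatile LONG g_stop = 0;        /* set from elsewhere to end the loop */

static DWORD WINAPI Worker(LPVOID param)
{
    /* "Timed loop" part: pin this thread to one core, exactly once. */
    DWORD core = (DWORD)(UINT_PTR)param;
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << core);

    /* "Regular while loop" part: free-running, no 1 kHz tick involved. */
    unsigned long long iterations = 0;
    while (!InterlockedCompareExchange(&g_stop, 0, 0)) {
        iterations++;                   /* CPU-intensive work would go here */
    }
    printf("core %lu: %llu iterations\n", core, iterations);
    return 0;
}

int main(void)
{
    /* Pin one worker to core 1 (hypothetical choice for illustration). */
    HANDLE h = CreateThread(NULL, 0, Worker, (LPVOID)(UINT_PTR)1, 0, NULL);
    Sleep(1000);                        /* let it spin for a second */
    InterlockedExchange(&g_stop, 1);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```

The point of the arrangement is the same in both worlds: the affinity cost is paid once at startup, while the hot loop itself carries no per-iteration scheduling overhead.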
stever (Author) Posted February 6, 2009

You're right - thanks for pointing that out. And thanks to Chris and everybody!
Chris Davis Posted February 7, 2009

QUOTE (Neville D @ Feb 4 2009, 12:18 PM)
That's a very cool idea. But did you see any performance benefit from manually farming out processing to different cores?
N.

Sorry I haven't gotten back to you, Neville. Honestly, I didn't see much performance gain when I was the one assigning tasks to a specific processor. But I could see where tasks switching processors would cause my program to take a hit (albeit not a big one). Others, running more intensive tasks, might see different results. But since I had tried it, and the original poster had a question about timed loops, I thought I would throw my two cents in.
Neville D Posted February 7, 2009

QUOTE (Chris Davis @ Feb 5 2009, 07:17 PM)
Honestly, I didn't see much performance gain when I was the one assigning tasks to a specific processor. But I could see where tasks switching processors would cause my program to take a hit (albeit not a big one).

Yes, that's exactly what I saw as well. It doesn't seem to make any difference; better to let the OS/LV optimize CPU switching. Thanks for the reply!

N.
damon1100 Posted May 25, 2009

Hi friend, I guess I have the same problem you have. Have you found any solution for it yet? I hope so.

Thank you,
damon

QUOTE (stever @ Feb 4 2009, 12:01 AM)
Hello, I would like to be able to set different parts of my LabVIEW code to run exclusively on different (and isolated) CPU cores... Can I tell a Timed Loop to "just run as fast/often as you can"?
stevea1973 Posted June 2, 2009

One of the reasons that something like this might be useful is if you can do something like I have done before:

- Create a thread
- Set the thread's affinity to one CPU
- Bump the thread priority to realtime

This stops interrupts (e.g. mouse, keyboard) and runs the thread pretty much to the exclusion of all else on that CPU (to be honest, it has been a while; I may have done it via a second process, with some IPC). You have to be careful to only do this on a multi-processor machine, and also to ensure that you don't set realtime priority on more than n-1 CPUs, as otherwise the OS can't get in to process anything. I never did anything more than a proof of concept with it; we decided not to bother in the end.
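A minimal sketch of the steps described above, assuming a Windows machine and the Win32 API; the worker's task, the choice of core 1, and the sleep duration are illustrative only, and raising a process to realtime class requires sufficient privileges.

```c
/* Sketch: pin a thread to one core and raise it to realtime priority,
 * leaving at least one core free for the OS. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI RealtimeWorker(LPVOID param)
{
    (void)param;
    /* Confine this thread to core 1, leaving core 0 for the OS. */
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << 1);
    /* With the process in REALTIME_PRIORITY_CLASS, TIME_CRITICAL gives the
       highest scheduling priority; the OS will rarely preempt this thread. */
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    /* ... latency-sensitive work would go here ... */
    Sleep(1000);
    return 0;
}

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Only escalate if there is at least one spare core for the OS;
       otherwise mouse, keyboard, and the rest of the system starve. */
    if (si.dwNumberOfProcessors < 2) {
        fprintf(stderr, "need a multi-processor machine\n");
        return 1;
    }

    /* May silently fall back to HIGH_PRIORITY_CLASS without privileges. */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);

    HANDLE h = CreateThread(NULL, 0, RealtimeWorker, NULL, 0, NULL);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```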