JoeQ Posted August 27, 2014
I recently had a need to use a GPU and saw NI has a $1000 package for LabVIEW. Looking at it, am I missing something, or is there more to the library than just a wrapper for CUDA? I ended up just writing the code in C and calling my DLL from LabVIEW. Still curious what you get for that $1000?
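For anyone curious what the DLL route looks like, here is a minimal sketch of a C-callable wrapper around a CUDA kernel, built into a DLL so LabVIEW can call it through a Call Library Function Node. The function and kernel names are made up for illustration; this is not NI or toolkit API.

```cuda
// Hypothetical example (names are made up): a C-callable wrapper around a
// CUDA kernel, built into a DLL so LabVIEW can call it through a
// Call Library Function Node.
#include <cuda_runtime.h>

__global__ void scale_kernel(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Exported entry point: plain C signature, flat arrays, CUDA error code as return.
extern "C" __declspec(dllexport)
int ScaleArrayOnGpu(float *host_data, int n, float factor)
{
    float *dev_data = NULL;
    cudaError_t err = cudaMalloc((void **)&dev_data, n * sizeof(float));
    if (err != cudaSuccess)
        return (int)err;

    cudaMemcpy(dev_data, host_data, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale_kernel<<<blocks, threads>>>(dev_data, factor, n);

    cudaMemcpy(host_data, dev_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_data);
    return (int)cudaGetLastError();
}
```

On the LabVIEW side you would configure the Call Library Function Node to pass the array as an array data pointer and n as an I32.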
hooovahh Posted August 27, 2014
Still curious what you get for that $1000?
Support, documentation, and examples are a few things I know you get, without ever having used it. If you're experienced with CUDA (which it sounds like you are), you might not see enough value in it. Download a trial and try it out, and if you do, please report back your honest opinion of the toolkit for others to see; I get the feeling few have ever used it.
JoeQ Posted August 27, 2014
Using the link you provided, I attempted to download the 32-bit version for 2012 and it failed. So I tried the 2013 version and answered a couple of questions, then was blocked by our system... I'll try it from home and let you know how it works out.
Gateway Anti-Virus Alert
This request is blocked by the Firewall Gateway Anti-Virus Service.
Name: MalAgent.H_1081 (Trojan)
shoneill Posted August 28, 2014
So apparently you get a virus for your 1000 dollars?
Antoine Chalons Posted August 28, 2014
So apparently you get a virus for your 1000 dollars?
That's not too bad... A nasty virus can end up costing you a lot more than that!
JoeQ Posted August 28, 2014
I downloaded it from home, selecting the 2013 version; however, what it sent was the 2014 version. I scanned both the downloader and the installer for viruses and nothing was found. Are the older versions archived somewhere where I can download them? I looked through their FTP site and could not find them.
Jordan Kuehn Posted August 28, 2014
Try this link: http://ftp.ni.com/evaluation/labview/ekit/other/downloader/2013GPUAnalysis-32bit_downloader.exe
You can find some software at ni.com/downloads, which is where I was able to get this link.
JamesMc86 Posted August 29, 2014
I think the NI toolkit has some functions like FFTs pre-wrapped, so you don't have to get into the C code for some standard operations.
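For context, this is roughly what such a pre-wrapped FFT has to do under the hood using NVIDIA's cuFFT library; presumably the toolkit hides steps like these behind a single VI, though without the source that's an assumption. A minimal 1D complex-to-complex sketch in plain CUDA/cuFFT, not the toolkit's actual code:

```cuda
// Minimal cuFFT sketch: in-place 1D complex-to-complex FFT on the GPU.
// Plain CUDA/cuFFT, not the toolkit's actual implementation.
#include <cuda_runtime.h>
#include <cufft.h>

int gpu_fft_1d(cufftComplex *host_signal, int n)
{
    cufftComplex *dev_signal = NULL;
    cudaMalloc((void **)&dev_signal, n * sizeof(cufftComplex));
    cudaMemcpy(dev_signal, host_signal, n * sizeof(cufftComplex),
               cudaMemcpyHostToDevice);

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);        // one 1D transform of length n
    cufftExecC2C(plan, dev_signal, dev_signal,  // forward FFT, in place
                 CUFFT_FORWARD);

    cudaMemcpy(host_signal, dev_signal, n * sizeof(cufftComplex),
               cudaMemcpyDeviceToHost);
    cufftDestroy(plan);
    cudaFree(dev_signal);
    return (int)cudaGetLastError();
}
```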
JoeQ Posted September 2, 2014
It took a few days to get the license server set up for 2014. My plan is to evaluate all of the latest tools; LV2014 is going in now.
I think the NI toolkit has some functions like FFTs pre-wrapped, so you don't have to get into the C code for some standard operations.
My hope is that they have something like this, above and beyond an FFT. I would actually like to see something that would convert the LabVIEW code to CUDA and then call the compiler for you. It would seem that writing code for optimal GPU performance could be quite complex, and I am not sure how they would go about this; just how best to partition the design could be a problem. Looking forward to seeing what that $1000 package is.
hooovahh Posted September 2, 2014
My hope is that they have something like this, above and beyond an FFT. I would actually like to see something that would convert the LabVIEW code to CUDA and then call the compiler for you. ... Looking forward to seeing what that $1000 package is.
Don't get your hopes up: I'm quite certain this functionality does not exist in this toolkit, but I too would love something like that. Our testers don't generally have lots of graphics horsepower, but if they do, offloading some of that work to a GPU seems like a great idea.
Gary Rubin Posted September 2, 2014
I was hoping for that too. When I looked at this about 5 years ago, it was just memory access (peeks and pokes) to the CUDA-compatible GPU card.
JoeQ Posted September 2, 2014
The 2014 installation took about three and a half hours but went smoothly, and adding the CUDA toolkit was simple enough as well. There were only seven days left on the evaluation, but NI allowed me to extend it to 45 days.
I started out running various programs with 2014. I did not see a whole lot of difference between it and 2011: it appears to run at about the same speed, and editing seems about as fast. Serial ports are still broken, so I am not expecting any big bug fixes. The new tan/brown icon stands out. A friend noticed there was no longer a sequence displayed in the icon, and that alone was worth the upgrade. On the plus side, it did not appear that they broke anything major that would prevent me from using it. I can't always say that.
I brought up the GPU examples. There are four of them. The first just reads the information from the board; they also show an FFT and a heat equation solver. If you load the solver example and display the hierarchy, you get a feel for just how complex it is. Pushing into the program, they lock you out of viewing the source without a license. IMO, the whole point of the evaluation is to see if they offer something that could be used, and I would expect to be able to code something up with the trial version.
Another thing I do not see in the demo (you can't develop code, so it's a demo, not a trial) is some sort of benchmark. I would have expected to see some different algorithms coded in native LabVIEW, C, maybe threaded C, and then their CUDA code. Even if they locked you out of the CUDA, at least you could get an idea of the performance gains between them.
The source code to read the board's information and other simple examples are included in NVIDIA's CUDA development tools, and Microsoft offers an Express version of Visual Studio; both of these are free. Making calls to a DLL is no big deal with LabVIEW, so I am still at a loss as to what this $1000 toolkit is getting you. Does it somehow help you develop code for the GPU faster? Is the code they come up with better than what you could code in C? What are they hiding with their locked VI functions?
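If you want to roll your own comparison against the DLL route, the usual way to time just the GPU side is with CUDA events. A minimal sketch in plain CUDA (not toolkit API); the op callback is a stand-in for whatever kernel or library call you want to measure:

```cuda
// Minimal CUDA event-timing sketch for benchmarking the GPU side of an algorithm.
#include <cuda_runtime.h>
#include <stdio.h>

// `op` stands in for whatever kernel launch or library call you want to measure.
void time_gpu_op(void (*op)(void))
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    op();                          // the work being measured
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);    // wait until the GPU has actually finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("GPU time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
}
```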
azhew Posted September 3, 2014
I guess it would help you write code for the GPU without actually having to write CUDA. I have installed the entire toolkit (apparently you can with an academic license) and here is what I see:
- VIs to allocate memory, free memory, copy your data to the device, etc.
- VIs built on the cuBLAS library (linear algebra: matrix product, triangulation...)
- VIs for FFT and IFFT
- a "GPU SDK", but I have not yet looked at what you can do with it
So if you do large matrix manipulation, maybe this would be a straightforward way to accelerate your application with a GPU without needing to know CUDA. If you already know CUDA and have a lot of code written in it, you are probably better off making a library to use with LabVIEW. I will look more into the SDK later.
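For comparison, here is roughly what the allocate / copy / matrix-product chain looks like in raw cuBLAS; presumably this is what those VIs wrap, though without the source that is an assumption. Plain NVIDIA API, column-major matrices:

```cuda
// Raw cuBLAS sketch: C = A * B for n x n single-precision matrices.
// Roughly the steps an "allocate / copy / matrix product" VI chain would cover.
#include <cuda_runtime.h>
#include <cublas_v2.h>

int gpu_matmul(const float *A, const float *B, float *C, int n)
{
    size_t bytes = (size_t)n * n * sizeof(float);
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // cuBLAS uses column-major storage: C = alpha * A * B + beta * C
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(C, dC, bytes, cudaMemcpyDeviceToHost);
    cublasDestroy(handle);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
    return (int)cudaGetLastError();
}
```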
JoeQ Posted September 3, 2014
- VIs to allocate memory, free memory, copy your data to the device, etc.
Good points. When looking at the memory VIs, for example, I did not see a way to define whether I wanted it to be shared, pinned, etc., and I didn't see a way to move data from pinned to shared. It seems like all I can define is the type and size. Maybe it is all done for you, but it's hard to believe they could get any performance this way. With no source, I can't really say what they are doing. It would be interesting to do some benchmarks using only the libraries they have made available.
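For reference, the distinction in raw CUDA: pinned (page-locked) host memory comes from cudaMallocHost and is what makes fast, asynchronous host-to-device transfers possible, while shared memory is declared inside kernels and isn't something a host-side allocation call would normally expose. Whether the toolkit pins anything under the hood isn't visible without the source. A minimal sketch in plain CUDA, not toolkit API:

```cuda
// Pinned (page-locked) vs. pageable host memory in plain CUDA.
// Pinned buffers are required for truly asynchronous host<->device copies.
#include <cuda_runtime.h>

int copy_with_pinned_buffer(int n)
{
    float *pinned_host = NULL, *dev = NULL;
    cudaMallocHost((void **)&pinned_host, n * sizeof(float));  // page-locked host buffer
    cudaMalloc((void **)&dev, n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // The async copy can overlap with kernels only because the host buffer is pinned.
    cudaMemcpyAsync(dev, pinned_host, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(dev);
    cudaFreeHost(pinned_host);
    return (int)cudaGetLastError();
}
```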