
I recently had a need to use a GPU and saw that NI has a $1000 package for LabVIEW.  Looking at it, am I missing something, or is the library more than just a wrapper for CUDA?

 

I ended up just writing the code in C and calling my DLL from LabVIEW.  Still curious what you get for that $1000?
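For anyone curious, the DIY route really isn't much code. Here is a rough sketch (the function names and the trivial kernel are illustrative, not from any NI library) of a CUDA kernel exported from a DLL so LabVIEW's Call Library Function Node can call it; you'd build it with something like `nvcc -shared`:

```cuda
// Hypothetical sketch: a CUDA kernel wrapped in a DLL export for LabVIEW.
#include <cuda_runtime.h>

__global__ void scale_kernel(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// extern "C" keeps the symbol unmangled so LabVIEW can find it by name.
extern "C" __declspec(dllexport)
int ScaleArray(float *host_data, float factor, int n)
{
    float *dev_data = NULL;
    if (cudaMalloc(&dev_data, n * sizeof(float)) != cudaSuccess)
        return -1;
    cudaMemcpy(dev_data, host_data, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_kernel<<<blocks, threads>>>(dev_data, factor, n);

    cudaMemcpy(host_data, dev_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_data);
    return (int)cudaGetLastError();  // 0 on success
}
```

On the LabVIEW side you'd configure the Call Library Function Node with the array passed as a pointer to data plus a separate length parameter.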


Still curious what you get for that $1000?  

Support, documentation, and examples are a few things I know you get without ever having used it.  If you're experienced with CUDA (which it sounds like you are), you might not see enough value in it.  Download a trial and try it out, and if you do, please report back your honest opinion of the toolkit for others to see; I get the feeling few have ever used it.


Using the link you provided, I attempted to download the 32-bit version for 2012 and it failed.  So I tried the 2013 version and answered a couple of questions.  Then I was blocked by our system...   I'll try it from home and let you know how it works out.

 


 

Gateway Anti-Virus Alert
  This request is blocked by the Firewall Gateway Anti-Virus Service. Name: MalAgent.H_1081 (Trojan)


I downloaded it from home, selecting the 2013 version.  However, what it sent was the 2014 version.   I scanned both the downloader and the installer for viruses and found nothing.   Are the older versions archived somewhere I can download them?   I looked through their FTP site and could not find them.


I think the NI toolkit has some functions like FFTs pre-wrapped, so you don't have to get into the C code for some standard operations.
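For a sense of scale, here is roughly the host-side boilerplate a pre-wrapped FFT would hide. This is a hedged sketch using NVIDIA's cuFFT library (presumably what any such wrapper sits on), with error checking omitted:

```cuda
// Sketch of a raw 1-D cuFFT forward transform, the kind of plumbing
// a pre-wrapped FFT VI would hide. Error checks omitted for brevity.
#include <cufft.h>
#include <cuda_runtime.h>

void forward_fft(const cufftComplex *host_in, cufftComplex *host_out, int n)
{
    cufftComplex *dev = NULL;
    cudaMalloc(&dev, n * sizeof(cufftComplex));
    cudaMemcpy(dev, host_in, n * sizeof(cufftComplex), cudaMemcpyHostToDevice);

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);          // 1-D complex-to-complex plan
    cufftExecC2C(plan, dev, dev, CUFFT_FORWARD);  // in-place transform

    cudaMemcpy(host_out, dev, n * sizeof(cufftComplex), cudaMemcpyDeviceToHost);
    cufftDestroy(plan);
    cudaFree(dev);
}
```

So a wrapper VI that takes an array in and hands an array back does save some work, even if it's not magic.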


It took a few days to get the license server set up for 2014.   My plan is to evaluate all of the latest tools.  LV2014 is going in now.

 

I think the NI toolkit has some functions like FFTs pre-wrapped, so you don't have to get into the C code for some standard operations.

 

 

My hope is that they have something like this, above and beyond an FFT.  I would actually like to see something that converts the LabVIEW code to CUDA and then calls the compiler for you.

 

It would seem that writing code for optimal GPU performance could be quite complex, and I am not sure how they would go about it.   Just how best to partition the design could be a problem.

 

Looking forward to seeing what that $1000 package is.

  


My hope is that they have something like this, above and beyond an FFT.  I would actually like to see something that converts the LabVIEW code to CUDA and then calls the compiler for you.

 

...

 

Looking forward to seeing what that $1000 package is.

Don't get your hopes up, but I'm quite certain this functionality does not exist in this toolkit, though I too would love something like that.  Our testers don't generally have a lot of graphics horsepower, but if they do, offloading some work to a GPU seems like a great idea.


I was hoping for that too.  When I looked at this about 5 years ago, it was just memory access (peeks and pokes) to the CUDA-compatible GPU card.


The 2014 installation took about three and a half hours but went smoothly.  Adding the CUDA toolkit was simple enough as well.  There were only seven days left on the evaluation, but NI allowed me to extend it to 45 days.

 

I started out running various programs with 2014.  I did not see a whole lot of difference between it and 2011.   It appears to run at about the same speed, and editing seems about as fast.  Serial ports are still broken, so I am not expecting any big bug fixes.   The new tan/brown icon stands out.   A friend noticed there is no longer a sequence structure displayed in the icon, and that alone was worth the upgrade.   On the plus side, it did not appear that they broke anything major that would prevent me from using it.   I can't always say that.

 

I brought up the GPU examples.   There are four of them.  The first just reads the information from the board.   They also show an FFT and a heat-equation solver.  If you load the solver example and display the hierarchy, you get a feel for just how complex it is.   Pushing into the program, they lock you out of viewing the source without a license.   IMO, the whole point of the evaluation is to see if they offer something that could be used; I would expect to be able to code something up with the trial version.

 

Another thing I do not see in the demo (you can't develop code; it's a demo, not a trial) is some sort of benchmark.   I would have expected to see a few different algorithms coded in native LabVIEW, C, maybe threaded C, and then their CUDA code.  Even if they locked you out of the CUDA, at least you could get an idea of the performance gains between them.

 

The source code to read the board's information, along with other simple examples, is included in NVIDIA's CUDA development tools.  Microsoft offers an Express version of Visual Studio.  Both of these are free.   Making calls to a DLL is no big deal in LabVIEW, so I am still at a loss as to what this $1000 toolkit gets you.  Does it somehow help you develop code for the GPU faster?   Is the code they come up with better than what you could write in C?   What are they hiding with their locked VIs?

 


I guess it would help you write code for the GPU without actually having to write CUDA. I have installed the entire toolkit (apparently you can with an academic license), and here is what I see:

 

- VIs to allocate memory, free memory, copy your data to the device etc...

- VIs built on the CUBLAS library (linear algebra: matrix product, triangulation...)

- VIs for FFT and IFFT

- a "GPU SDK", but I have not looked yet what you can do with it...

 

So if you do large matrix manipulation, this might be a straightforward way to accelerate your application with a GPU without needing to know CUDA.

If you already know CUDA and have a lot of code written in it, you are probably better off making your own library to use with LabVIEW.
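For anyone weighing that trade-off, here is a hedged sketch of the raw CUBLAS path those matrix VIs presumably wrap: C = A × B for single-precision, column-major n × n matrices, with error checks omitted. The function name is illustrative:

```cuda
// Sketch of a matrix product via CUBLAS, the kind of call sequence
// a "matrix product" VI would presumably hide. Error checks omitted.
#include <cublas_v2.h>
#include <cuda_runtime.h>

void gpu_matmul(const float *A, const float *B, float *C, int n)
{
    size_t bytes = (size_t)n * n * sizeof(float);
    float *dA = NULL, *dB = NULL, *dC = NULL;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, no transposes, column-major layout.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(C, dC, bytes, cudaMemcpyDeviceToHost);
    cublasDestroy(handle);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
}
```

If the VIs let you keep data resident on the device between calls, that would be where most of the real speedup comes from, since the copies above often dominate a single operation.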

 

I will look more into the SDK later.


 

- VIs to allocate memory, free memory, copy your data to the device etc...

 

Good points.   Looking at the memory VIs, for example, I did not see a way to specify whether I want it shared, pinned, etc.  I didn't see a way to move data from pinned to shared.    It seems like all I can define is the type and size.    Maybe it is all done for you, but it's hard to believe they could get much performance this way.   With no source, I can't really say what they are doing.   It would be interesting to do some benchmarks using only the libs they have made available.
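For reference, a hedged sketch of the allocation flavors being discussed, using the CUDA runtime API. One caveat: "shared" memory in CUDA (`__shared__`) is on-chip, per-block memory declared inside a kernel, so a host-side VI couldn't expose a pinned-to-shared copy anyway; pinned (page-locked) host memory is the one that matters for transfer speed:

```cuda
// Sketch of the three host-visible allocation flavors in play.
#include <cuda_runtime.h>
#include <stdlib.h>

void allocation_flavors(int n)
{
    size_t bytes = n * sizeof(float);

    // Ordinary pageable host memory: cheap, but transfers must be staged.
    float *pageable = (float *)malloc(bytes);

    // Pinned (page-locked) host memory: enables DMA, faster host<->device copies.
    float *pinned = NULL;
    cudaHostAlloc((void **)&pinned, bytes, cudaHostAllocDefault);

    // Device global memory.
    float *device_buf = NULL;
    cudaMalloc(&device_buf, bytes);

    // A copy from pinned memory is typically noticeably faster than from pageable.
    cudaMemcpy(device_buf, pinned, bytes, cudaMemcpyHostToDevice);

    cudaFree(device_buf);
    cudaFreeHost(pinned);
    free(pageable);
}
```

If the toolkit's allocate VI only takes a type and size, it may just be calling `cudaMalloc` and defaulting to pageable host buffers, which would cost some transfer performance; without the source that's speculation.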

