Maca Posted November 24, 2008

For those of you not as geeky as me who don't check Slashdot at least once every five minutes: http://tech.slashdot.org/tech/08/11/23/068234.shtml It's a toolkit for some of NVIDIA's GPUs that allows offloading computation to the GPU. There is next to no information on the site about using LabVIEW, but I believe it is possible; see the following thread: http://forums.nvidia.com/index.php?showtopic=65111&st=0 I would give it a go, but let's face it, I am lazy and all my computers have onboard Intel display chipsets.
mje Posted November 24, 2008

I've been interested in this since it came out, but have yet to find a case where it would be useful to me, so I haven't played with it yet. I suspect that since a lot of LabVIEW usage is instrument control and communication, the applications would be few and far between. That doesn't mean it wouldn't be fun to try it out, though! I also worry how difficult it would be to work around the by-value nature of LabVIEW so that data copying and memory re-allocation don't destroy any gains you get from the parallelism of CUDA: as with all things CUDA, the benefits of implementing it would be very case specific.
Rolf Kalbermatter Posted December 7, 2008

QUOTE (MJE @ Nov 23 2008, 08:18 AM): I've been interested in this since it came out, but have yet to find a case where it would be useful to me, so I haven't played with it yet. I suspect that since a lot of LabVIEW usage is instrument control and communication, the applications would be few and far between. That doesn't mean it wouldn't be fun to try it out, though! I also worry how difficult it would be to work around the by-value nature of LabVIEW so that data copying and memory re-allocation don't destroy any gains you get from the parallelism of CUDA: as with all things CUDA, the benefits of implementing it would be very case specific.

When you call DLL functions and configure the parameters correctly, LabVIEW WILL pass the data pointer to the DLL and will NOT copy all the memory before calling the function or returning from it. I haven't looked at CUDA and am not sure how it works, but if what you mean by parallelism is that you can call several functions in parallel to work on the same data, then yes, you would get a problem calling that from LabVIEW. LabVIEW manages its memory in its own, very dynamic way. It will make sure that a memory pointer passed to a DLL function remains locked in place and valid for the duration of the function call, but once the function returns, that memory can be reallocated, moved, copied, overwritten, deallocated and whatever else you can think of, at LabVIEW's will and at any time. So if you need to call functions that hold on to memory buffers beyond the function call itself, you can't use LabVIEW's Call Library Node without some good LabVIEW-C voodoo. All in all, the perfect case for creating a wrapper DLL to interface between LabVIEW and such libraries. However, the biggest drawback I see is that I have never owned an NVIDIA-powered computer. That makes such a solution very narrowly scoped in terms of possible deployments, and I'm sure I'm not the only person whose computer has a graphics processor from someone other than NVIDIA.

Rolf Kalbermatter
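To make the wrapper-DLL idea above concrete, here is a minimal sketch of such a layer. The exported function LV_ScaleArray and the trivial scaling kernel are hypothetical, invented for illustration; they are not part of any NI or NVIDIA API. The point is the structure: LabVIEW's Call Library Function Node passes the array as a data pointer plus its length, and because the copy to the GPU, the kernel launch, and the copy back all complete before the function returns, the pointer is only touched while LabVIEW guarantees it is valid.

```cuda
// lv_cuda_wrapper.cu -- hypothetical wrapper DLL between a LabVIEW
// Call Library Function Node and CUDA, built with nvcc as a shared
// library. All GPU work finishes before the function returns, so the
// LabVIEW-owned array pointer is never held beyond the call.
#include <cuda_runtime.h>
#include <stddef.h>

// Trivial device kernel: scale every element in place.
__global__ void scale_kernel(float *data, size_t n, float factor)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Exported entry point. LabVIEW passes the array as an "Array Data
// Pointer" plus its length; the return value is a cudaError_t code
// (0 on success) that the calling VI can check.
extern "C" __declspec(dllexport)
int LV_ScaleArray(float *host_data, size_t n, float factor)
{
    float *dev_data = NULL;
    cudaError_t err = cudaMalloc((void **)&dev_data, n * sizeof(float));
    if (err != cudaSuccess)
        return (int)err;

    // Copy in, compute, copy out -- all inside this single call.
    err = cudaMemcpy(dev_data, host_data, n * sizeof(float),
                     cudaMemcpyHostToDevice);
    if (err == cudaSuccess) {
        int threads = 256;
        int blocks  = (int)((n + threads - 1) / threads);
        scale_kernel<<<blocks, threads>>>(dev_data, n, factor);
        err = cudaGetLastError();
    }
    if (err == cudaSuccess)
        err = cudaMemcpy(host_data, dev_data, n * sizeof(float),
                         cudaMemcpyDeviceToHost);

    cudaFree(dev_data);
    return (int)err;
}
```

On the LabVIEW side the VI would simply configure the Call Library Function Node with an array data pointer, a length, and a numeric return value; no buffer outlives the call, which is exactly the constraint Rolf describes.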
Neville D Posted December 9, 2008

QUOTE (Maca @ Nov 23 2008, 04:27 AM): There is next to no information on the site about using LabVIEW but I believe it is possible, see the following thread: http://forums.nvidia.com/index.php?showtopic=65111&st=0

In fact, NI researchers already have working LabVIEW code using CUDA on NVIDIA processors for high-performance applications, but I don't know when they will officially release it. Here is an excerpt (see the last paragraph in the link): "In addition to the Dell proof of concept, a prototype in which NVIDIA's CUDA technology enables LabVIEW has been thoroughly benchmarked with impressive computational results."

QUOTE (rolfk @ Dec 6 2008, 11:50 AM): However, the biggest drawback I see is that I have never owned an NVIDIA-powered computer. That makes such a solution very narrowly scoped in terms of possible deployments

Sure, that's how NVIDIA hopes to increase sales! BTW, Macs use NVIDIA graphics (in addition to Intel integrated graphics), so potentially this could increase performance on Mac machines in the near future as well. Anyway, I think it's a good direction for processor-hungry applications like Vision or control.

Neville.
Rolf Kalbermatter Posted December 10, 2008

QUOTE (Neville D @ Dec 8 2008, 03:51 PM): Sure, that's how NVIDIA hopes to increase sales!

Nothing against that. I'm just saying that betting an application on CUDA limits the platforms you can deploy that application to.

QUOTE: BTW, Macs use NVIDIA graphics (in addition to Intel integrated graphics), so potentially this could increase performance on Mac machines in the near future as well. Anyway, I think it's a good direction for processor-hungry applications like Vision or control.

The software would still need seamless support for processing everything on the host CPU to make this useful in most applications. That makes such a solution a bit hard to do through the Call Library interface, and IMO it is much better to integrate it at a much lower level, such as in the graphics routine kernel of a Vision library, or in the analysis library. However, the latter may be a little hard for the LabVIEW Advanced Analysis Library, since it mostly uses the Intel Math Kernel Library, and I'm not sure Intel is going to integrate CUDA into it, nor even make it easy for others to do so.

Rolf Kalbermatter
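Rolf's point about needing seamless host-CPU processing can be sketched in the same hypothetical wrapper: probe for a CUDA device at call time and fall back to a plain loop on the host when none is found, so the calling VI behaves identically on machines without NVIDIA hardware. LV_ScaleArrayAuto, and the reuse of the LV_ScaleArray function from the earlier sketch, are assumptions for illustration only.

```cuda
// Hypothetical fallback layer for the wrapper above: use the GPU when
// one is present, otherwise run the same operation on the host CPU.
#include <cuda_runtime.h>
#include <stddef.h>

// GPU path from the earlier sketch (hypothetical).
extern "C" int LV_ScaleArray(float *host_data, size_t n, float factor);

static int cuda_available(void)
{
    int count = 0;
    // cudaGetDeviceCount fails cleanly when no CUDA driver or device exists.
    return cudaGetDeviceCount(&count) == cudaSuccess && count > 0;
}

extern "C" __declspec(dllexport)
int LV_ScaleArrayAuto(float *host_data, size_t n, float factor)
{
    if (cuda_available())
        return LV_ScaleArray(host_data, n, factor);

    // Host CPU fallback: identical result, no NVIDIA hardware required.
    for (size_t i = 0; i < n; i++)
        host_data[i] *= factor;
    return 0;
}
```

A real library would likely cache the probe result and choose the crossover point (the array size below which the CPU path wins) empirically, rather than always preferring the GPU.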
shoneill Posted December 10, 2008

Won't Larrabee, Fusion, >insert marketing name here< and so on make CUDA a short-lived thing? Aren't we moving towards general-purpose GPUs already, so that a single standardised interface (à la OpenGL) would be the way to go? Maybe NVIDIA will have an epiphany and make CUDA an open specification... :headbang: I remember the early days of 3D acceleration in games, where there were different game binaries for each graphics card. OpenGL was (and may again be) the solution to those problems. Otherwise the idea is fascinating. Levenberg-Marquardt optimisation on my GPU. That would be cool.

Shane.
mje Posted December 10, 2008

I think there will be a place for CUDA for quite some time. There are applications where the cost of an NVIDIA card, be it a $100 graphics card or a $10,000 Tesla, is literally a drop in the bucket when the cost of the mated hardware and software is considered. The fact that more general-purpose GPUs from other manufacturers are on the horizon is nice, but solutions already exist today from NVIDIA, and ignoring a solution that's available now is not really a good idea in some cases. Having an open specification would be ideal, as it would allow a programmer to detach the program logic from an implementation. However, as it currently stands, there is only one implementation, so I see no loss in adopting CUDA. I have no doubt, though, that things will be different in a few years. Alas, my BBCode-fu is weak and I'm not sure how to embed this, so I'll link:
Gary Rubin Posted December 10, 2008

I've just started looking into this from the Matlab side. The Tesla ($1700 on TigerDirect) has a GPU, but is not actually a graphics card (i.e. it has no graphics output). It uses the GPU architecture, with 240 processing cores, to provide general-purpose processing. What I'm still unsure of is how high-level languages like Matlab and LabVIEW can take advantage of the massive parallelization that the hardware is capable of providing.
Rolf Kalbermatter Posted December 10, 2008

QUOTE (Gary Rubin @ Dec 9 2008, 12:03 PM): I've just started looking into this from the Matlab side. The Tesla ($1700 on TigerDirect) has a GPU, but is not actually a graphics card (i.e. it has no graphics output). It uses the GPU architecture, with 240 processing cores, to provide general-purpose processing. What I'm still unsure of is how high-level languages like Matlab and LabVIEW can take advantage of the massive parallelization that the hardware is capable of providing.

Some of the Advanced Analysis Library or Vision functions could offload work from inside those functions when the CUDA library and hardware are found to be present. However, there is of course the issue of data transfer. I suppose offloading large data sets to do small, discrete operations on the GPU wouldn't be too efficient, because of the necessary transport of data to and from the GPU. So it would mostly be worthwhile for specific, very computationally intense algorithms. I'm not sure whether a simple FFT would already be enough for that.

Rolf Kalbermatter
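The transfer overhead Rolf mentions is easy to see in a sketch using NVIDIA's cuFFT library (the thread doesn't name a specific library; cuFFT is just an assumed example here). The two cudaMemcpy calls bracket the actual transform, so for a single small FFT the PCIe transfers can easily dominate the arithmetic.

```cuda
// Minimal cuFFT host-side sketch: one complex-to-complex forward FFT,
// with explicit copies to and from the device on either side of it.
#include <cuda_runtime.h>
#include <cufft.h>

int gpu_fft(const cufftComplex *host_in, cufftComplex *host_out, int n)
{
    cufftComplex *dev = NULL;
    if (cudaMalloc((void **)&dev, n * sizeof(cufftComplex)) != cudaSuccess)
        return -1;

    cufftHandle plan;
    if (cufftPlan1d(&plan, n, CUFFT_C2C, 1) != CUFFT_SUCCESS) {
        cudaFree(dev);
        return -1;
    }

    // Host -> device copy (overhead), transform, device -> host copy (overhead).
    cudaMemcpy(dev, host_in, n * sizeof(cufftComplex), cudaMemcpyHostToDevice);
    cufftExecC2C(plan, dev, dev, CUFFT_FORWARD);
    cudaMemcpy(host_out, dev, n * sizeof(cufftComplex), cudaMemcpyDeviceToHost);

    cufftDestroy(plan);
    cudaFree(dev);
    return 0;
}
```

Batching many transforms per transfer, or keeping intermediate results on the device across several operations, is typically what tips the balance in the GPU's favor.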
Gary Rubin Posted December 10, 2008

QUOTE (rolfk @ Dec 9 2008, 01:42 PM): Some of the Advanced Analysis Library or Vision functions could offload work from inside those functions when the CUDA library and hardware are found to be present. However, there is of course the issue of data transfer. I suppose offloading large data sets to do small, discrete operations on the GPU wouldn't be too efficient, because of the necessary transport of data to and from the GPU. So it would mostly be worthwhile for specific, very computationally intense algorithms. I'm not sure whether a simple FFT would already be enough for that.

Rolf, you are exactly right. See here: http://www.ll.mit.edu/HPEC/agendas/proc08/Day1/12-Day1-PosterDemoA-Bash-abstract.pdf I suspect that the performance enhancement of simply running on a GPU is marginal, as this paper indicates. I would expect you could see the huge speedups once you figure out how to parallelize your calculations over 240 cores.

Gary
shoneill Posted December 10, 2008

And as if someone had read my earlier post: Khronos has released the OpenCL 1.0 specification. Apparently ATI and NVIDIA are on board. OpenCL should be in Mac OS X 10.6.

Shane.