Gary Rubin Posted September 13, 2009 I seem to recall that NI used to recommend writing really time-critical subroutines in C, then calling from LabVIEW. Is that still recommended (for LV8.6)? If so, is it better to call as a DLL or a CIN? Thanks, Gary
mzu Posted September 13, 2009 According to the LabVIEW documentation: "Assuming the underlying code is the same, the calling speed is the same whether you use a Call Library Function Node or a CIN." After version 8.2, there is almost no incentive to write a CIN instead of a DLL. There was a beautiful explanation as to why: http://expressionflow.com/2007/05/09/external-code-in-labview-part1-historical-overview/, http://expressionflow.com/2007/05/19/external-code-in-labview-part2-comparison-between-shared-libraries-and-cins (For some reason the links did not paste correctly.) Edited September 13, 2009 by mzu
mzu Posted September 13, 2009 I seem to recall that NI used to recommend writing really time-critical subroutines in C, then calling from LabVIEW. Is that still recommended (for LV8.6)? If so, is it better to call as a DLL or a CIN? Thanks, Gary For a wonderful example where C is much faster, see this forum topic.
Gary Rubin Posted September 14, 2009 After version 8.2, there is almost no incentive of writing a CIN instead of a dll. Thanks. That's nice to hear, as I know how to call a DLL, but have never done a CIN.
mzu Posted September 14, 2009 Gary, it is pretty much the same deal. In case you do need a CIN (e.g., using LabVIEW older than 8.2), you're welcome to use my CIN Wizard for VS 2003: http://code.google.c...dvs2003labview/
Rolf Kalbermatter Posted September 14, 2009 Gary, this is pretty much the same deal. In case you need a CIN (using LabVIEW less then 8.2, etc) you're welcome to use my CIN Wizard for VS .2003 http://code.google.c...dvs2003labview/ Even pre-8.2 there is seldom, if ever, a need for a CIN. Unless you work in a LabVIEW version prior to about 5.1, you can pass native data to a DLL too. Having to convert native data to C types is the main reason a DLL call can get slower. The only CIN feature that was not really available to DLLs before 8.2 concerned asynchronous driver implementations (the CINAbort() function), and that has nothing to do with performance; it is about creating drivers that can be aborted when the LabVIEW context goes idle. Even that is not a magic bullet: it needs to be designed into the CIN or DLL very deliberately from the start. In terms of performance there is basically no difference between calling a CIN and a DLL, provided both implement the same functionality in the same way. That said, for many algorithms you cannot gain much by moving them into C, since LabVIEW itself is a compiled language. LabVIEW does not optimize to the same degree that highly optimized C code can, so for some routines there is a gain to be had; but in general, before going down this path, you should first look at the implementation of your LabVIEW code, as there is a good chance you simply implemented the easiest algorithm in LabVIEW rather than the most optimal one. The main reasons to use C code are, in fact, that you already have the C code, or that the C API you want to interface to is too complicated to interface with LabVIEW directly.
Choosing to pass native LabVIEW data types to the C code can make those calls even faster, provided you know how to deal properly with those data types in the C code; a bad implementation can very easily cause memory leaks and/or end up even slower than having LabVIEW pass a C pointer to the function instead. In every case except asynchronous driver abortion, there is no reason a DLL wouldn't work just as well, unless you work in the pre-5.1 stone age. Rolf Kalbermatter
Gary Rubin Posted September 14, 2009 but in general if you start to look into this you should first have a look at the implementation of your LabVIEW code, as there is a good chance that you simply did the most easy algorithm in LabVIEW instead of the most optimal one. The code in question performs a running median. Because the length of my window is 4, I've removed the Sort by calculating the median as (sum(Array)-max(Array)-min(Array))/2. It already runs very fast (~3.8 µs), but considering how often it runs, that adds up to a majority of my processing time. I could maybe add some bookkeeping to avoid finding Max and Min each time, but I'm not sure how much that would help, given my short window size. I guess the first thing to do is code up the slowest part in C and see how it compares to LV, then worry about transitioning the whole thing, if necessary. Edited September 14, 2009 by Gary Rubin