Everything posted by bsvingen

  1. Hello. The speed of floating point operations is important to my application, but how well does G optimize this? I have therefore made a small test program that tests several different methods to see which is fastest. The tests consist of doing 2D vertex rotations of the form:

     x' = x*cos(theta) + y*sin(theta)
     y' = -x*sin(theta) + y*cos(theta)

     The methods tested are: the built-in NI sub VI (LV8), an ordinary sub VI, an ordinary sub VI flagged as subroutine, a call-by-reference sub VI, "inlining" the diagram without any function call, a formula node with arrays, a formula node without arrays, a C DLL called directly per vertex, a C DLL that takes whole arrays, and the same DLLs made in FORTRAN. I also tried a MathScript node, but that was several thousand times slower than the others, so I just removed it altogether. Some of the results are very obvious, but some are very surprising (to me at least). The version with the diagram "inlined" (directly in the main VI) is set to 100% and the others are given as percentages of it (smaller is better):

     Method          Time (%)
     NI built-in     546
     Sub VI          205
     Subroutine      123
     Reference       508
     "Inline"        100
     FormulaN 1      177
     FormulaN 2       91
     DLL C           177
     DLL C Arr        64
     DLL F95         179
     DLL F95 Arr      64

     In general a G diagram takes 30-40% longer to execute than C or FORTRAN (this I knew), but very surprisingly the formula node is faster than the diagram (10% faster).
     However, when arrays are put into the formula node (FormulaN 1) it slows down, and the slowdown seems directly proportional to the amount of array indexing being done. Using a subroutine is only about 25% slower than a direct diagram, but it already takes twice the time of C/FORTRAN. Using an ordinary sub VI takes twice the time of "inlining" the diagram, and execution now takes 3 times longer than C/FORTRAN. Using a call-by-reference node really bogs things down: a factor of 5 compared with the "inline" diagram and a factor of 8 compared with C/FORTRAN. Finally, the included VI from NI is dead slow; it is in fact very hard to build a simple routine like this that executes that slowly (I explain a bit more below). A direct call to a DLL is slower than "inlining" the code for this simple routine, but it is still faster than using an ordinary sub VI, which is very strange. It means the overhead of calling a DLL is less than the overhead of calling a sub VI. Why? The included VI from NI is in fact calling a DLL, so why is it so slow? The reason is that it calls a DLL that is obviously optimized to take millions of vertices and rotate them all by the same angle (for instance when rotating a picture). Using it with a different angle each time therefore means wrapping each double in a one-element array, doing the same for the output, and using a routine that is optimized for something totally different from what you are trying to do. Used as intended, with millions of vertices and one angle, it is extremely efficient, but used with varying angles it is unbelievably inefficient. One can only wonder why NI didn't take the extra 5 minutes to code this instance of the polymorphic VI using ordinary methods. For the DLLs I used the lcc C compiler and Salford FORTRAN 95; both are free for non-commercial use. I downloaded the FORTRAN compiler today and was extremely impressed with the ease of making DLLs.
     It was just a matter of writing an ordinary subroutine, then compiling and linking (I used ordinary F77 code, as I have no idea how to write F95 code). All the hieroglyphic things associated with DLLs are completely hidden from the user. Anyway, these tests indicate that the fastest possible G coding is to use formula nodes, but to do any array indexing outside the node. This will be faster than any other method in LabVIEW. Earlier, my impression was that formula nodes were slow, but that was probably because I did a lot of array indexing inside them. For pure speed a DLL is the way to go, and with the ease this can be done in Salford, hmmm. The program is included in a zip file. Just unzip and run "Test 2D Rotate.vi". LV8 only. Download File: post-4885-1148575817.zip
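     To make the comparison concrete, here is a minimal sketch of what the C DLL routines in the test above might look like. The function names and signatures are my own assumptions, not the actual code from the attached zip; a per-vertex entry point corresponds to "DLL C", the whole-array version to "DLL C Arr", and the fixed-angle version illustrates the kind of routine NI's built-in VI apparently wraps (on Windows one would also add `__declspec(dllexport)` or a .def file to export these).

     ```c
     /* Hypothetical 2D rotation routines for a DLL, matching the formulas
        x' =  x*cos(theta) + y*sin(theta)
        y' = -x*sin(theta) + y*cos(theta)
        Names and signatures are illustrative, not from the original test. */
     #include <math.h>

     /* "DLL C": one call per vertex; the call overhead is paid every time. */
     void rotate2d(double x, double y, double theta, double *xr, double *yr)
     {
         double c = cos(theta), s = sin(theta);
         *xr =  x * c + y * s;
         *yr = -x * s + y * c;
     }

     /* "DLL C Arr": rotate n vertices in one call, one angle per vertex.
        A single call amortizes the LabVIEW-to-DLL overhead over n vertices. */
     void rotate2d_array(double *x, double *y, const double *theta, int n)
     {
         for (int i = 0; i < n; i++) {
             double c = cos(theta[i]), s = sin(theta[i]);
             double xi = x[i], yi = y[i];
             x[i] =  xi * c + yi * s;
             y[i] = -xi * s + yi * c;
         }
     }

     /* The same-angle case the NI DLL is presumably optimized for:
        sin/cos are hoisted out of the loop, so rotating millions of
        vertices by one angle costs only two trig calls in total. */
     void rotate2d_fixed_angle(double *x, double *y, double theta, int n)
     {
         double c = cos(theta), s = sin(theta);
         for (int i = 0; i < n; i++) {
             double xi = x[i], yi = y[i];
             x[i] =  xi * c + yi * s;
             y[i] = -xi * s + yi * c;
         }
     }
     ```

     The fixed-angle version makes the post's point visible: called with one angle and a huge array it is very cheap per vertex, but forcing a varying-angle workload through such an interface means one DLL call (and array wrapping) per vertex.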
  2. He he. I see the point about the C compiler: it would make LabVIEW something it is not supposed to be (although the point is more of a religious one than a practical one). But then again, what's the point of a MATLAB clone (MathScript)? I would think that is an even greater undertaking, and it is certainly making LV something it is not :laugh:
  3. No, it's not that important. Right now the core floating-point-intensive stuff is programmed in one large chunk. It's fast (fully optimized C code is only approx. 150% faster), but not particularly readable, although I have added comments all around. Adding too many comments also tends to confuse more than clarify. The problem is future modifications adding more complex things, since the complexity is now right on the edge of what is practical without dividing the code into more readable chunks, at least if anyone else is to understand it within a reasonable amount of time. Doing this with sub VIs makes no sense performance-wise, so the only alternative is a DLL written in C. Using a DLL is of course a very good alternative. I had that as the default for some time, but decided against it simply to keep a single platform and a single compiler. There are two things that could solve this. One is a complete overhaul of the formula node, making it a complete C compiler within LabVIEW, as it should be IMO. Right now it is too slow (probably just some scripting mechanism?) and has no way to define functions, but I use formula nodes a lot since they are very practical and not too slow for small things. The other alternative is inline functionality within the G compiler. The MathScript nodes in LV 8.0 are probably a nice thing if you are used to MATLAB (which I am not); besides, they are way too slow in both execution and compile speed to be of any real use to me. Anyway, I would believe an inlining facility should be fairly simple. You would only use it for sub VIs that would otherwise be flagged as subroutines: no user interaction, strictly specified inputs and outputs, and only simple functions, like geometric transformation routines and the like.
  4. I have had the opposite experience. It is true that the total call overhead in absolute time for 1 million calls is on the order of 100-500 ms, so it does not seem like a problem other than from an academic point of view. However, within such a sub VI you can put an incredibly large amount of floating-point code before the total floating-point time exceeds the time of a single call to that sub VI. What seems like an academic problem is in fact a huge problem when doing floating-point math. In fact, in most cases it is practically impossible to write loop-free numeric code that takes longer to execute than it takes to call it. Below is a snippet from a post I made on the NI site: For applications doing extensive floating-point operations, the overhead of making function calls makes no sense at all, simply because it is not necessary. Getting rid of the function call overhead can speed up the application by anywhere from 200 to several thousand percent, depending on the amount of looping and the number of function calls. C/C++ has the inline keyword, while FORTRAN inlines subroutines and functions by default, and this is not without reason. I have developed a fairly large simulation program in LabVIEW, and I have compared LabVIEW code using pure LabVIEW primitives (add, divide, multiply, subtract etc. on vectors) with an optimized C routine. The optimized C routine is approx. 150% faster for pure floating point, but when trying to build sub VIs to make the code more readable and easier to maintain, the added overhead makes it a waste of time, even with the subroutine flag. The penalty is huge. It is much easier, faster and more maintainable to make a DLL from C, even though there is (presumably) some fair amount of overhead in calling that DLL from LabVIEW. The reason is mainly that C has this inline functionality. My point is that inlining is a very simple trick, but it does wonders for floating-point math.
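     As a small illustration of the C-side mechanism the post refers to (names here are my own, not from the original): marking a tiny function `static inline` lets the compiler paste its body into each call site, so a helper that exists only for readability costs nothing at run time; this is exactly the option the post says G sub VIs lack.

     ```c
     /* Illustrative sketch of C inlining: a small helper kept for
        readability that the compiler can expand in place. */
     #include <math.h>

     /* With 'static inline' (C99) the compiler may substitute the body
        at each call site, eliminating the call overhead entirely. */
     static inline double dot2(double ax, double ay, double bx, double by)
     {
         return ax * bx + ay * by;
     }

     /* After inlining, this compiles to straight-line floating-point
        code with no function call for dot2. */
     double norm2(double x, double y)
     {
         return sqrt(dot2(x, y, x, y));
     }
     ```

     An equivalently small sub VI in G keeps its call overhead on every invocation (even as a subroutine), which is why the post argues the readability/performance trade-off comes out so differently in LabVIEW than in C.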
  5. Hello. Would it be possible to write a small app that uses scripting to preprocess a VI so that sub VIs are inlined, like in other programming languages? When execution speed is important as well as code readability, inline functions are essential. Unfortunately, function calling in LabVIEW is very slow and there is no option to inline the code, so to make it fast the code becomes unreadable. But with scripting I guess it would be possible to offer inlining as a sort of preprocessor option that just copies and pastes the diagram of the sub VI into the main program? The sub VIs to inline could be recognized, for instance, by an _inline_ in their names. Would this be possible? It would be extremely useful for apps that need to run fast. BSvingen