crelf Posted October 25, 2006
I also benchmarked this some time ago and got varying results: depending on the data, I got better results with either explicit or implicit coercion. The few times I was really trying to shave down the microseconds, I measured better speed with the coercion dot. Can we please get a definitive answer from NI on this one?
Guillaume Lessard Posted October 25, 2006
Forgive me if I'm being dense here, but how is that different from doing this? As you found out, they both produce the same result. In terms of algorithmic efficiency, though, using the 'sort' function is slower:
- The 'sort' function scales in time with array size n as n*log(n).
- This shuffling algorithm (the Knuth shuffle, also known as the Fisher-Yates shuffle) scales linearly with array size -- in addition to not requiring additional memory.
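For readers without LabVIEW handy, here is a minimal C sketch of the shuffle being described (C stands in for the block diagram, which can't be shown in text; rand() is used purely for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* In-place Fisher-Yates (Knuth) shuffle: one linear pass, O(n) time,
   and no auxiliary array -- unlike the sort-based approach. */
static void shuffle(double *a, size_t n)
{
    if (n < 2)
        return;
    for (size_t i = n - 1; i > 0; i--) {
        /* pick j uniformly from 0..i */
        size_t j = (size_t)((double)rand() / ((double)RAND_MAX + 1.0) * (double)(i + 1));
        double tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

int main(void)
{
    double a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    srand((unsigned)time(NULL));
    shuffle(a, 8);
    for (size_t i = 0; i < 8; i++)
        printf("%g ", a[i]);
    putchar('\n');
    return 0;
}
```

Each element is swapped at most once per pass, which is where the linear scaling comes from.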
PJM_labview Posted October 25, 2006
"Hmmm. The few times I was really trying to shave down the microseconds, I measured better speed with the coercion dot. As I recall, I was testing a histogram-like binning algorithm. The coercion in question was for an array index that was calculated in floating-point. For some reason, an explicit conversion to I32 ran very slightly, but quite consistently, slower than leaving the coercion dot. Still, I almost always do my coercions explicitly anyway. -Kevin P."

Hmm, the last benchmarks I ran were on either LV 7.0 or LV 7.1. At the time I consistently got better results (not by much, though; I think it was on the order of a couple of percent) using the explicit coercion. It is possible that the implicit coercion has been improved in subsequent LV versions. PJM
Gary Rubin Posted October 25, 2006
Now I'm curious. In my 10 years or so of technical computing, I'm having trouble thinking of many times when I've come across a need for randomizing an array. A LabVIEW implementation of Boggle that I did on a long flight and a card-shuffling exercise in college come to mind, and obviously those don't need to be really fast. So, just out of curiosity, what types of real applications require such fast and efficient randomization?
syrus Posted October 25, 2006
"Now I'm curious. In my 10 years or so of technical computing, I'm having trouble thinking of many times when I've come across a need for randomizing an array. A LabVIEW implementation of Boggle that I did on a long flight and a card-shuffling exercise in college come to mind, and obviously those don't need to be really fast. So, just out of curiosity, what types of real applications require such fast and efficient randomization?"

I have implemented a number of artificial neural network models in LabVIEW. While performing stochastic optimization, i.e. "training the neural networks", I will often process the "training set" many times. This data set consists of thousands to hundreds of thousands of input/output pairs in which both the input and the output are vectors of floating-point numbers. I include the option to process the training set in a random order each time it is used. To do this random processing, I implemented the random permutation in LabVIEW.
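One cheap way to get that per-epoch random ordering is to permute an index array and use it to address the pairs, rather than moving the much larger records themselves. A hedged C sketch, with train_on_pair as a hypothetical stand-in for whatever the per-pair training step actually is:

```c
#include <stdlib.h>

extern void train_on_pair(size_t idx);   /* hypothetical: process one input/output pair */

void train_one_epoch(size_t n_pairs)
{
    if (n_pairs == 0)
        return;
    size_t *order = malloc(n_pairs * sizeof *order);
    if (order == NULL)
        return;
    for (size_t i = 0; i < n_pairs; i++)
        order[i] = i;                          /* identity permutation */
    for (size_t i = n_pairs - 1; i > 0; i--) { /* Fisher-Yates on the indices */
        size_t j = (size_t)((double)rand() / ((double)RAND_MAX + 1.0) * (double)(i + 1));
        size_t tmp = order[i];
        order[i] = order[j];
        order[j] = tmp;
    }
    for (size_t i = 0; i < n_pairs; i++)
        train_on_pair(order[i]);               /* visit pairs in shuffled order */
    free(order);
}
```

Shuffling indices instead of records keeps the per-epoch cost at O(n) swaps of a single word each, no matter how large the vectors in each pair are.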
Gary Rubin Posted October 25, 2006
"This data set consists of thousands to hundreds of thousands of input/output pairs in which both the input and the output are vectors of floating-point numbers."

I see... You must have a lot of memory on that computer...
syrus Posted October 25, 2006
"I see... You must have a lot of memory on that computer..."

Yep. I've got 4GB on my workstation and have access to a server with 24GB of RAM. Unfortunately, LabVIEW is limited to just under 2GB per instance, so I play some games with ramdisks and file I/O when dealing with large data sets. I'm really looking forward to the 64-bit version of LabVIEW.
Mellroth Posted October 25, 2006
"...I include the option to process the training set in a random order each time it is used. To do this random processing, I implemented the random permutation in LabVIEW."

Have you considered using random numbers with a seed specified, e.g. by using Uniform White Noise.vi? That way you could run exactly the same sequence of "random" tests with a new model. /J
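In text form, the reproducibility idea looks something like the sketch below. It is an assumption-laden illustration only: srand/rand stand in for whatever seeded generator is actually used (such as Uniform White Noise.vi in LabVIEW).

```c
#include <stdlib.h>

/* A fixed seed yields the same "random" permutation every run, so a new
   model can be trained on exactly the same shuffled sequence as the old one. */
void shuffle_seeded(double *a, size_t n, unsigned seed)
{
    srand(seed);                 /* same seed -> same sequence of rand() values */
    if (n < 2)
        return;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)((double)rand() / ((double)RAND_MAX + 1.0) * (double)(i + 1));
        double tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}
```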
Aristos Queue Posted October 26, 2006
"Can we please get a definitive answer from NI on this one?"

It should help if you have multiple coercion dots on the same wire. This answer is not definitive... I know that the above is one situation where explicit coercion helps. There may be others.
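A loose C analogue of the "multiple coercion dots on one wire" situation, assuming the point is that one explicit conversion can replace a per-sink implicit one. (In C every narrowing conversion is spelled the same way, so treat this purely as an illustration of converting once versus converting at each use.)

```c
#include <stddef.h>

extern double a[], b[], c[];     /* hypothetical lookup tables */

/* One wire, three "coercion dots": the float-to-integer conversion
   is repeated at every sink that needs an index. */
double pick_implicit(double x)
{
    return a[(size_t)x] + b[(size_t)x] + c[(size_t)x];
}

/* One explicit conversion, after which the already-integer value branches. */
double pick_explicit(double x)
{
    size_t i = (size_t)x;
    return a[i] + b[i] + c[i];
}
```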
Gary Rubin Posted October 26, 2006
"This answer is not definitive... I know that the above is one situation where explicit coercion helps. There may be others."

Hmm.... I decided to check this out a bit more, using variations of the attached VI. First of all, for this particular operation, I found that the explicit coercion was faster. In my test, I always coerced a scalar. Maybe tomorrow, I'll try coercing an array. I noticed that the speed difference between the explicit and automatic coercion has a lot to do with the type of coercion being done. It appears that what you're coercing to has more of an impact than what you're coercing from. Depending on the original and final data types, I saw that the explicit coercion was between 5 and 35% faster than the automatic coercion (i.e. the coercion dot). See the attached Excel doc. Download File:post-4344-1161823302.zip
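The attached VI can't be reproduced here, but the shape of the two-frame test is roughly the following. This is a sketch, not the posted benchmark, and note the caveat: a C compiler emits identical machine code for both loops, so this mirrors only the methodology, not the LabVIEW result.

```c
#include <stdio.h>
#include <time.h>

#define N 100000000L

int main(void)
{
    volatile int sink;            /* keeps the optimizer from deleting the loops */
    volatile double x = 123.456;  /* the value coerced on every iteration */
    clock_t t0;

    t0 = clock();
    for (long k = 0; k < N; k++)
        sink = x;                 /* implicit conversion: the "coercion dot" frame */
    printf("implicit: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (long k = 0; k < N; k++)
        sink = (int)x;            /* explicit conversion frame */
    printf("explicit: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    return 0;
}
```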
Kevin P Posted October 27, 2006
"Depending on the original and final data types, I saw that the explicit coercion was between 5 and 35% faster than the automatic coercion (i.e. the coercion dot). See the attached Excel doc."

Hmmm, curiouser and curiouser... I've only toyed around briefly, so I don't have systematic charts for all the variations. But I kept seeing smaller (faster) times for Frame 1, the implicit coercion. For example, by simply taking the code as posted, enabling auto-indexing on the array at the For Loop boundaries, and making the input array large enough to matter, I got the screenshot below. I ran this on both LV 7.1 and 8.2 with similar behavior. I wonder if it's CPU-dependent somehow -- mine's an AMD Athlon XP... Maybe EVERYBODY gets to be right! :thumbup: -Kevin P.
Gary Rubin Posted October 27, 2006
"I ran this on both LV 7.1 and 8.2 with similar behavior. I wonder if it's CPU-dependent somehow -- mine's an AMD Athlon XP... Maybe EVERYBODY gets to be right! :thumbup: -Kevin P."

Good point. The numbers I put in the spreadsheet came from my IBM Celeron laptop. My P4 desktop seems to execute the implicit coercion faster.