-
Posts posted by Oakromulo
-
-
Rule of thumb for both LV and MS Windows: never install before SP1 unless you really need it.
-
-
Neat!
I see a prime candidate for Coerce to Type in there; this, and removing the decrement by pulling the ENUM output before the increment could tidy up that last bit of syntax.
+1 -1 that's so funny... after I read your reply, I still needed a few seconds to find this lame operation! +1 Kudo for the Coerce to Type operator. This situation seems to appear in almost every state-machine-like VI!
... and here's the Sakamoto method. A little bit afraid to say, but the C code looks so much more readable in your example! I mean, it makes me afraid because this kind of comment lends itself very easily to love-hate posts...
-
I know the LV timestamp engine could easily be used to determine the day of the week for a specific date. However, for those interested in a "behind the scenes" solution, the Zeller algorithm is really nice and possibly better optimized for certain applications.
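For reference, here is a minimal C sketch of the Sakamoto method mentioned above (assuming Gregorian dates; the 0 = Sunday convention and the function name are my own choices):

```c
/* Sakamoto's method: day of week for a Gregorian date.
   Returns 0 for Sunday through 6 for Saturday. */
int day_of_week(int y, int m, int d)
{
    static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
    if (m < 3)
        y -= 1;   /* January and February count as months of the previous year */
    return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
}
```

For example, `day_of_week(2000, 1, 1)` returns 6, since 2000-01-01 was a Saturday.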
-
Thanks again Darin!
Fortunately, when it matters to me, I never use code that I can't trace back to the source, so I have always used my own implementations. I really wish my boss could hear that.
-
Since LV lacks a primitive for this operation, I've written a quick & dirty VI to do random line permutations. I generate a random 1D array of I32 indexes, copy the 2D array, and then do the line permutations one by one. Unfortunately, my code to generate the random I32 array inside the VI seems to have an average O(n²) complexity. Do you know a simple way to do the line permutations in O(n), or maybe O(n log n), instead? Just out of curiosity, the randperm function from MATLAB has recently shifted from polynomial complexity to O(n log n); they've probably implemented some variation of the Fisher-Yates shuffle algorithm.
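For what it's worth, the Fisher-Yates shuffle builds a uniform random permutation of n indices in a single O(n) pass; a C sketch (the modulo bias of `rand()` is ignored here for brevity):

```c
#include <stdlib.h>

/* Fill idx[0..n-1] with a random permutation of 0..n-1 in O(n)
   using the Fisher-Yates shuffle. rand() % (i + 1) has a slight
   modulo bias for large n; fine for a sketch. */
void random_permutation(int *idx, int n)
{
    for (int i = 0; i < n; i++)
        idx[i] = i;
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);   /* pick from the not-yet-fixed prefix */
        int tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
}
```

The resulting index array can then drive the line swaps on the 2D array one by one, keeping the whole permutation linear in the number of rows.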
-
LabVIEW for Everyone was an amazing tool for my first two months in my new job. Definitely worth the read, though a little bit boring after some time if you have an intermediate programming background in textual languages.
Don't forget to take a very comprehensive look at the NI CLAD material. Highlight every question you get wrong on the sample exams. The NI "Webcast" below is also highly recommended.
http://zone.ni.com/w...oc/p/id/wv-1950
It's a very straightforward exam. Hard to pass without making a single mistake, but easy to reach the 70% mark. My manager, with very little LV experience beyond demoing stuff, was able to pass on his first try. Please come back later to tell us how your experience went!
P.S. It's not going to help with your exams, but if you want to know a little bit more about the RIO platform, you should spend some time with the NI CompactRIO Developers Guide:
-
So you 'Can't have your cake and eat it too'
Fast or beauty. Pick one.
(and I don't find the formula box 'beautiful' )
Ton
No free meal after all! At least there's Darin's Math Node to save development time...
Another major bug I found in the formula node: a program I have would not run in LV 2011+; it would only run pre-2011. I couldn't figure out what was wrong, since the results only gave NaN in 2011+ but were OK in 2010. I eventually made a special "NaN finder" to track it down. The error was inside a formula node, and it was a major bug in LV: the formula node up to and including 2010 did not work according to IEEE.
The function is z = (a/x)*exp(1/x), where a is a number from 0 to 1 (often, but not always, 0) and x is a number from 0 to 1.5. When a = 0, the physical meaning should be z = 0. I thought this was mathematically correct because 0 * [any number] = 0. In 2010 and earlier, the result was 0 when a = 0 for any x (except x = 0), and everything was OK. The point is that this is not correct.
If x is less than approximately 0.00145, then exp(1/x) = Inf due to double precision. Then z = 0 * Inf = NaN (according to IEEE), while the formula node in 2010 gives z = 0. In 2011+ this is fixed, and the formula node gives the correct result, z = NaN. But I have seen no mention of this bug, and no mention that it was fixed. Another wrinkle: if the formula node only calculates z = a * x, and a is 0 and x is Inf, the result is NaN even in 2010. So what exactly is going on inside the formula nodes is a great mystery.
I've been through a simpler, yet very annoying, bug in LV 2010, when NI released the feature that let us wire error clusters directly to a boolean operator. In the first release, it would always return false in every FPGA VI, no matter the cluster value. At the same time, I had a C Series module with a clock problem and couldn't track down the underflow error. A very skilled NI Support guy found the bug after almost a week full of compiles and equivalent-code replacements. After that, no more "new" LV before SP1, just like Windows releases...
There are several things you can do to that formula node. The last time I checked (some years back), the formula node does no optimizing at all; you have to do it manually. For instance, the pow function is slow in any language. x*x = pow(x,2) mathematically, but x*x is much faster, x*x*x is faster than pow, etc. I use an ancient Maple (Maple R4) for that. A hopefully faster, but mathematically equivalent, formula node would (or could) be something like:
t1 = alpha*dedys; t4 = t1*(y-y*s); t6 = w/b; t7 = x-c; t8 = s*s; t14 = t7*t7;
A[0] = c-t4*t6*t7/t8; A[1] = s-t4*t6*t14/t8/s; A[2] = p-t1*t6*x; A[3] = q-t1*t6;
Where the array is your result from the formula. I had to use array only for Maple to work.
Yeah... I'd forgotten about all the possible optimizations and redundancies inside the node. Anyway, after this topic, I'll save the formula nodes for when performance isn't much of a concern, or for those meet-the-godsent-manager moments: "hey, your code is as messy as your desk".
-
Oh no... same thing with the next critical part of the neurofuzzy controller algorithm! The machine-learning algorithm (the performance bottleneck) with primitive array operators is again 4x faster (and uglier) than the formula node version.
Formula Node
Primitives
-
That's the main reason why most LV developers need an external monitor with a much lower pixel density when working with a laptop...
-
I can't speak for RT FIFOs, but I have read that the normal queues are implemented as circular buffers. Not sure how/when re-allocation happens, but I assume that once the buffer grows in size, it doesn't shrink back down. Hence the N-element enqueue/flush trick.
It makes total sense. RT FIFOs are probably implemented as standard fixed-size arrays.
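To make the circular-buffer idea concrete, here is a minimal fixed-capacity FIFO in C. This is only a sketch of how such a structure works in general; the actual internals of NI's queues and RT FIFOs are not public:

```c
/* Minimal fixed-size circular buffer (capacity 8), sketching the
   general structure a preallocated FIFO might use. */
#define FIFO_CAP 8

typedef struct {
    double buf[FIFO_CAP];
    int head, tail, count;
} Fifo;

/* Returns 1 on success, 0 if the buffer is full. */
int fifo_push(Fifo *f, double v)
{
    if (f->count == FIFO_CAP)
        return 0;                       /* full: no reallocation ever happens */
    f->buf[f->tail] = v;
    f->tail = (f->tail + 1) % FIFO_CAP;
    f->count++;
    return 1;
}

/* Returns 1 on success, 0 if the buffer is empty. */
int fifo_pop(Fifo *f, double *v)
{
    if (f->count == 0)
        return 0;
    *v = f->buf[f->head];
    f->head = (f->head + 1) % FIFO_CAP;
    f->count--;
    return 1;
}
```

Because the storage is a fixed array, pushes and pops are constant time and never allocate, which is exactly the property that makes this layout attractive for deterministic RT code.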
-
I would think that RT FIFOs are faster, but you can preallocate a queue by adding N dummy elements of correct size to the queue and then flush the queue.
That's very interesting. I'd always thought the flush operation would reduce the queue size to zero. By the way, do you know if LV Queues are implemented internally as something like a C++ STL Queue or a Forward List?
http://www.cplusplus.com/reference/queue/queue/
http://www.cplusplus.com/reference/forward_list/forward_list/
-
Ok... I'll categorize and put them there for evaluation.
-
A local variable will be the fastest except for putting the indicator outside (and won't kick in that particular optimisation as long as you read it somewhere, I think). The queues, however, will have to reallocate memory as the data grows, so they are better if you want all the data, but a local or notifier would be preferable as they don't grow memory.
I'd forgotten that only RT FIFOs are pre-allocated and therefore allow constant-time writes. This time I replaced the queue with a DBL functional global variable.
-
Yes. That's what you want, right? Fast? Also, LV has to task-switch to the UI thread. UI components kill performance, and humans can't see any useful information at those sorts of speeds anyway (10s of ms). If you really want to show some numbers whizzing around, use a notifier or local variable and update the UI in a separate loop every, say, 150 ms.
Sure! I added the indicators just to avoid the "unused code/dangling pin" compiler optimization. You're right, it wasn't very clever; the queue idea is much better. The slow random number generator inside the for loop is there for the same reason, to avoid unfair comparisons between the formula node SubVI and the standard one.
-
I just realized now that the percentiles have been calculated in a very wrong way. I invite you all to check the new comparison below with a queue structure.
Now with parallelized for loops and queue, the primitives were a full 4 times faster than the formula node SubVI!
-
Currently, I have tons of custom LV controls/icons in many different resolutions. Most of them have been adapted from Gnome and KDE. Does this kind of package apply for the community repository? Or is it just for block diagram stuff?
-
Move the indicators out of the for loops.
With output auto-indexing disabled, wouldn't the indicators outside the loop kick in compiler optimizations? Anyway, a queue in this case seems a better option.
-
If we're talking about NI Industrial PCs, another possibility would be to have a Certified PC platform to run NI ETS/Pharlap.
-
Even if some code doesn't actually respond to user interface it could still be useful to model it using a state machine pattern. Useful in a sense that it could actually improve both code readability and performance.
For example: imagine a VI with a "latch when released" switch and a boolean indicator. Every push of the switch toggles an automatic blinking pattern on the output. I have implemented this both with and without state machines. Take a look below. Which one do you prefer?
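In pseudo-C form, the blink-toggle example above reduces to a tiny two-state machine (the state names and event handlers here are illustrative, not taken from the original VIs):

```c
/* Two-state machine for the blink toggle: IDLE holds the output low,
   BLINKING flips it on every tick. */
typedef enum { IDLE, BLINKING } State;

typedef struct {
    State state;
    int output;        /* the boolean indicator */
} Blinker;

/* Event: the "latch when released" switch fired. */
void on_switch_released(Blinker *b)
{
    if (b->state == IDLE) {
        b->state = BLINKING;
    } else {
        b->state = IDLE;
        b->output = 0;              /* stop blinking with the light off */
    }
}

/* Event: one blink period elapsed. */
void on_tick(Blinker *b)
{
    if (b->state == BLINKING)
        b->output = !b->output;
}
```

The readability win is that each event handler only has to consider the current state; the without-state-machine version tends to smear the same logic across shift registers and nested case structures.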
-
Another test: the comparison with parallelized for loops with 4 instances (number of physical + HT cores) on the i5 laptop came out an amazing 89% faster!
-
Ton,
Same thing here... I ran the first comparison again in my laptop at work and it was just 5% faster too!
Desktop (home):
AMD Phenom II 965BE C3 @ 3.7 GHz (quad core)
8GB DDR3-2000 CL5
Laptop (work):
Intel Core i5 M540 @ 2.53 GHz (dual core, Hyper Threading enabled)
6GB DDR3-1333 CL8
Both with Win7 x64 and LV2011.
bsvingen,
I think I'm going to try an equivalent DLL to be called from LV. I have little to no experience with DLLs on LV apart from the system ones.
vugie,
If I push the code inside a timed loop with manual affinity, is it safe to say it runs only in a single core?
-
Tim,
Probably not... though it'd be interesting to know a little bit more about what happens behind the node.
-
LaTeX --> G... that's awesome. Definitely should become a core LV feature! It'd be nice to meet Darin at the next NI Week...
By the way, the equations represent a simplified First Order Sugeno Fuzzy Inference System. Always a good idea to add them to the VIs!
Programmatically check if VI is Desktop, Real-Time and FPGA compatible?
in LabVIEW General
Posted
Nice question... I'll call the NI Support team to check that. The ability to know programmatically whether a VI is compatible with VxWorks or Pharlap would be really useful for some generic LV classes/libraries.