Posts posted by GregSands
-
Have you already looked at Gavin Burnell's Scripting Tools on the LAVA Code Repository? It has a number of routines that are really useful for creating XNodes, including "Copy and Wire Tagged Element" which is almost exactly what you're looking for - often, GenerateCode can simply be a call to this VI.
-
There doesn't seem to be a problem in LabVIEW 2012 (lvanlys.dll 12.0.0.3) or LabVIEW 2011 (lvanlys.dll 11.0.1.2), checked for both 32-bit and 64-bit. This is on a machine with only those versions installed.
-
What I usually do in these sorts of cases is to add an image of the equations to the BD of the VI.
Another option might be to use a Math Node.
-
Q: What is the value of a kudo? A: Not much.

I've made an interesting observation. Roughly a month ago, two ideas were proposed within a day of each other: my suggestion was one of them, and Darin suggested the other. Both ideas were fairly straightforward, both are coding-related, both had a simple image and clear explanation, and as of now, both have attracted about the same number of comments (12 vs 16) and kudos (65 vs 70). I haven't fully compared the kudos, but it appears there's even about the same number of NI voters and "high-rank" voters for each.

However, and I don't think this is just my opinion, Darin's idea is infinitely more useful and valuable than mine. It's an idea that would allow faster and easier programming, and would be a noticeable improvement. Whereas error wire layering - it would be "nice" if it were implemented, but it's just cosmetic, not a game-changer. Yet they've attracted about the same number of kudos.

So I can now understand when AQ and other NI reps say that the popularity of an idea is a pretty poor indication of its value.

PS - go vote for Darin's idea if you haven't already.
-
Drop a matrix control onto the FP.
Nice trick Darin - you get a numeric control which actually has vertical sizing. I never knew that.
-
I tried this on the logs and the smoothing assigned NaN values to some of the Y-Points. Do you know what could be causing this?
The final result is that the output curve has many broken regions. I have attached various screen shots that shows this.
Hard to tell without seeing the data, but if your screen-shot is correct, you have a "Weighting Fraction" equal to zero, so I wonder if that is causing the problem. I'm pretty sure that it should be greater than zero - it's the fraction of the dataset to use for fitting at each point.
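If it helps to see why a zero fraction could produce NaNs, here's a rough Python sketch of tricube weighting as used in lowess-style smoothing. The function name and details are my own illustration, not NI's lvanlys code:

```python
import math

def tricube_weights(x, x0, frac):
    # Hypothetical sketch: `frac` plays the role of the "Weighting
    # Fraction" input - the fraction of the dataset used for the
    # local fit at x0. Not NI's actual implementation.
    n = len(x)
    r = int(math.ceil(frac * n))        # points in the local window
    if r == 0:
        # frac = 0 -> empty window -> nothing to fit -> NaN output
        return [0.0] * n
    # distance to the r-th nearest neighbour sets the window half-width
    dists = sorted(abs(xi - x0) for xi in x)
    h = dists[min(r, n) - 1] or 1.0     # guard against h = 0
    w = []
    for xi in x:
        u = abs(xi - x0) / h
        w.append((1 - u**3)**3 if u < 1 else 0.0)
    return w

x = [0.0, 1.0, 2.0, 3.0, 4.0]
print(sum(tricube_weights(x, 2.0, 0.0)))       # 0.0 - every weight zero
print(sum(tricube_weights(x, 2.0, 0.5)) > 0)   # True - a usable fit
```

With all weights zero, the weighted least-squares solve at each point has nothing to work with, which is exactly the sort of situation that produces NaN outputs.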
-
Thoughts? Criticism? Praise? Ideas to make it better? Good for OpenG?
Without downloading or running, shouldn't this be "seconds to wait" x 100, not "divided by"?
-
Just had a cursory glance, but it looks like you are calculating the coefficients and passing the XY parms for the linear fit twice with the same data (it's only the weightings that change from the first "fit" to the second). You could pre-calculate them in a separate loop and just pass them into the other loops.
I used a separate loop to start with, but the speed improvement was minimal, and the memory use would be increased fairly significantly.
-
Greg,
My first thought would have been to truncate the weighting calculation and fitting to only a region around the point where the weights are non-negligible. Currently, the algorithm uses the entire data set in the calculation of each point even though most of the data has near-zero weighting. For very large datasets this will be very significant.
Yes, in fact the weighting calculation already truncates the values so that they are set to zero outside of a window around each point. However with a variable X-spacing, the size of the window (in both samples and X-distance) can vary, so it would be a little more complicated to work out that size for each point.
Just had a quick go - with a fixed window size, you get a further 10x speedup, but if you have to threshold to find the appropriate window, it's only another 2-3x. Still, that's around 40x overall, with further increases using multiple cores.
I'd only ever used it for arrays up to about 5000 points, so it had been fast enough. Interestingly, the greatest speedup still comes from replacing the Power function with a Multiply -- never use x^y for integer powers!
Any further improvements?
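To illustrate the Power-vs-Multiply point: the tricube weight (1-u³)³ only needs integer powers, so it can be written entirely with multiplies. Here's a rough Python sketch of the idea - the actual gain was measured in LabVIEW by swapping the Power primitive for Multiply nodes, and CPython's `**` is already fairly cheap, so the ratio here will be smaller:

```python
import timeit

def tricube_pow(u):
    # general Power function, as in the original VI
    return (1.0 - u**3)**3

def tricube_mul(u):
    # integer powers expanded into multiplies
    t = 1.0 - u * u * u
    return t * t * t

# identical results to floating-point precision...
assert abs(tricube_pow(0.37) - tricube_mul(0.37)) < 1e-12

# ...but the multiply form avoids the general pow routine
t_pow = timeit.timeit(lambda: tricube_pow(0.37), number=200_000)
t_mul = timeit.timeit(lambda: tricube_mul(0.37), number=200_000)
print(f"pow: {t_pow:.3f}s  mul: {t_mul:.3f}s")
```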
-
I also like the Savitzky-Golay, but it only works for uniformly spaced data, whereas the above utility is also good for non-uniform X spacing.
I had already rewritten this utility for my own use, and checking it against the original I see it has a ~15x speedup on a single core, and ~25-30x on my 2-core laptop -- it should be even greater with more cores. Here are the main things I changed:
- passing individual X and Y arrays rather than a cluster
- replacing the Power function in the weighting routine with a multiply - this makes the most difference, about 8x
- turning off debugging - gives almost another doubling in speed
- moving some functions outside the loops, and sometimes removing loops altogether
- using parallel loops and sharing clones for subVIs inside parallel loops
If you can get away with SGLs rather than DBLs, you'll get a further speedup, and if your data is evenly spaced but you still want to use this algorithm, then you shouldn't need to recompute the weighting function throughout your data - it only changes towards the start and end.
You're welcome to use this rewritten code.
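On the evenly-spaced case: away from the edges, the weight vector over a fixed window is identical at every point, so it only needs computing once. A small Python sketch to show this (illustrative names, not the VI's):

```python
def window_weights(x, i, half):
    # Tricube weights over a fixed window of `half` neighbours each
    # side of point i (hypothetical sketch, not the actual VI).
    lo, hi = max(0, i - half), min(len(x), i + half + 1)
    h = max(abs(x[j] - x[i]) for j in range(lo, hi)) or 1.0
    return [(1 - (abs(x[j] - x[i]) / h) ** 3) ** 3 for j in range(lo, hi)]

x = [0.1 * k for k in range(100)]   # evenly spaced
# interior windows all give the same weights...
w20 = window_weights(x, 20, 5)
w50 = window_weights(x, 50, 5)
print(all(abs(a - b) < 1e-9 for a, b in zip(w20, w50)))   # True
# ...only the truncated edge windows differ
print(len(window_weights(x, 0, 5)), len(w20))   # 6 11
```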
-
Here's a template that doesn't use the IPES, so should work in 8.6. It should be almost as efficient.
-
New version of OpenG Array XNodes uploaded with Crelf's R4 version as the templates.
Gavin -- there's a small bug in this implementation which I've described in the support topic.
-
The latest version 1.3.0 (based on these revisions) has broken the "Remove Duplicates" function by not correctly retaining the Array Split in the In Place Element Structure. I presume this was caused by back-saving to 8.6, whereas this functionality is only in 2009+. That appears to be the only function affected.
-
"Barely imperceptible sadistic tendencies."
Don't you mean "Beautifully Admirable Sadistic Tendencies And Radical Diatribes"?
-
-
All work fine for me on Win 7/LV 2012. Have you tried remapping the shortcuts in Tools/Options/Menu Shortcuts? In fact, trying to set new shortcuts there will show you whether the keystrokes are getting through to LabVIEW or not.
-
I think it is intended by precedence. In fact I'm totally surprised that List Directory also filters directories based on the pattern. So what do you get if you want to list *.txt files? No directories at all?
Yes, my expectation is that you'd get no directories at all (unless the directory name was Something.txt which is always possible, if unlikely). asbo's suggestion is 2nd most logical, that you'd get a list of the directories that contain files matching the pattern (though that can be gleaned from the file paths). But I can't see any logic in returning all directories.
But changing the default is not really an option since it could and likely would break quite a few OpenG Tools such as the OpenG Package Builder, Commander and its descendant, the VIPM.
I can see that now the utility is written, it's pretty difficult to change it, although it could always be deprecated with a new version written. Failing that, perhaps a boolean, or a 3-way enum - e.g. All (default), Files Match, Directories Match - could be added. Or I guess a polymorphic function is another way to extend/correct it.
-
Oooh, I read a little too much between the lines of what you said. Interesting; does the original node return directories and files or just directories?
The built-in List Directory function returns all files and directories which match the provided pattern. My point is that List Directory Recursive recursively finds all files in all sub-directories which match the pattern, but it returns all sub-directories whether they match the pattern or not. I don't know whether this was intended, or whether it is a bug in the implementation. If it was intended it would be interesting to know why, and if not, I think it should be changed - though that would break any existing usage.
-
I would agree that including any subdirectories which do not contain a matching file is a bug.
That's slightly different again - I was thinking it should contain all sub-directories whose names match the pattern, irrespective of any files in those directories. To me, that's the closest parallel to the core List Directory node.
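In Python terms, the behaviour I'd expect from List Directory Recursive would be something like this - purely illustrative (fnmatch patterns rather than LabVIEW wildcards, and obviously not the OpenG implementation):

```python
import fnmatch
import os
import tempfile

def list_recursive(root, pattern):
    # Filter BOTH the file names and the directory names by the
    # pattern, mirroring the built-in List Directory node (sketch only).
    files, dirs = [], []
    for base, subdirs, names in os.walk(root):
        dirs += [os.path.join(base, d) for d in subdirs
                 if fnmatch.fnmatch(d, pattern)]
        files += [os.path.join(base, f) for f in names
                  if fnmatch.fnmatch(f, pattern)]
    return files, dirs

# tiny demo tree (hypothetical)
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "logs.txt"))   # a directory that matches
os.makedirs(os.path.join(root, "misc"))       # one that doesn't
open(os.path.join(root, "a.txt"), "w").close()
open(os.path.join(root, "misc", "b.log"), "w").close()

files, dirs = list_recursive(root, "*.txt")
print([os.path.basename(p) for p in files])   # ['a.txt']
print([os.path.basename(p) for p in dirs])    # ['logs.txt']
```

Under these semantics the directory array is pattern-filtered on name alone, irrespective of the files inside - which is the closest parallel to the core node.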
-
I have just observed that the array of directory paths returned by List Directory Recursive is always every sub-directory of the root Path, no matter what pattern is used. My expectation (and what I'd wanted to use it for) had been that this output would contain only the sub-directories which matched the pattern, in the same way as the array of file paths is generated. Is this result the intended behavior, or a bug in the implementation?
-
Hah, I do not have any problem with further improvements (although this one is relatively minor I would say) - as long as I'm not the one to update all those polymorph instances again ;-)
Not just updating them all, but choosing what instances are provided - I use arrays of Images, but those are not useful for most people.
If there's ever a reason to have "generic" terminals, then this sort of polymorphic-heavy function gives a great example. Writing them as XNodes (thanks Gavin) means there's no need to code all the polymorphic instances, but there's a lot of extra coding still to do around the function itself. So yes, I know that generic terminals don't officially exist, and won't exist at all soon, and XNodes are somewhat hidden, but I hope there's a third alternative in the wings. Or that there's a slightly easier way to create XNodes.
-
Greg: Yes, making the subVI inline does make it possible for the output to avoid calculation entirely. Doing it on any subVI call is something that the compiler optimization team has on its list of possible optimizations, but not currently implemented.
So, in the case described here, does that include not allocating the "Indices of removed elements" array before the For loop? If so, I'm pretty impressed.
-
In my experience, keeping memory allocation to a minimum is at least as important as execution time. The original array should be reused for the output (where possible), and every array access checked that it is performed in-place (with or without the IPES). In addition, output arrays of indices should be optional as to whether they are generated, with the default being false.
As a related note - wouldn't it be nice if LabVIEW could choose whether to create an array depending on whether the sub-VI output is wired or not? I think LabVIEW does take wiring into account within a single VI, but not in terms of connections to sub-VIs - please correct me if I'm wrong. I wonder if marking the subroutine as "inline" would make any difference?
Posted in Hidden Primitives (Development Environment (IDE)):
There's a couple of others I found here: http://lavag.org/topic/15895-new-vi-objects/