Everything posted by GregSands

  1. There are a couple of others I found here: http://lavag.org/topic/15895-new-vi-objects/
  2. Is this close enough? https://decibel.ni.com/content/docs/DOC-13859
  3. Have you already looked at Gavin Burnell's Scripting Tools on the LAVA Code Repository? It has a number of routines that are really useful for creating XNodes, including "Copy and Wire Tagged Element" which is almost exactly what you're looking for - often, GenerateCode can simply be a call to this VI.
  4. There doesn't seem to be a problem in LabVIEW 2012 (lvanlys.dll) or LabVIEW 2011 (lvanlys.dll), checked for both 32-bit and 64-bit. This is on a machine with only those versions installed.
  5. What I usually do in this sort of case is to add an image of the equations to the block diagram of the VI. Another option might be to use a Math Node.
  6. Q: What is the value of a Kudo in the Idea Exchange? A: Not much.

     I've made an interesting observation. Roughly a month ago, two ideas were proposed within a day of each other. My suggestion was that Error Wires should be placed under other wires, and Darin suggested that the Read/Write status of property nodes should be determined by how you wire them. Both ideas were fairly straightforward, both are coding-related, both had a simple image and clear explanation, and as of now, both have attracted about the same number of comments (12 vs 16) and kudos (65 vs 70). I haven't fully compared the kudos, but it appears there's even about the same number of NI voters and "high-rank" voters for each.

     However, and I don't think it would be just my opinion, Darin's idea is infinitely more useful and valuable than mine. It's an idea that would allow faster and easier programming, and be a noticeable improvement. Whereas error wire layering would be "nice" if it was implemented, but it's just cosmetic, not a game-changer. Yet they've attracted about the same number of kudos. So I can now understand when AQ and other NI reps say that the popularity of an idea is a pretty poor indication of its value.

     PS - go vote for Darin's idea if you haven't already.
  7. Nice trick Darin - you get a numeric control which actually has vertical sizing. I never knew that.
  8. Hard to tell without seeing the data, but if your screenshot is correct, you have a "Weighting Fraction" equal to zero, so I wonder if that is causing the problem. I'm pretty sure that it should be greater than zero - it's the fraction of the dataset used for fitting at each point.
  9. Without downloading or running, shouldn't this be "seconds to wait" x 100, not "divided by"?
  10. I used a separate loop to start with, but the speed improvement was minimal, and the memory use would be increased fairly significantly.
  11. Yes, in fact the weighting calculation already truncates the values so that they are set to zero outside of a window around each point. However, with a variable X-spacing, the size of the window (in both samples and X-distance) can vary, so it would be a little more complicated to work out that size for each point. Just had a quick go - with a fixed window size, you get a further 10x speedup, but if you have to threshold to find the appropriate window, it's only another 2-3x. Still, that's around 40x overall, with further increases using multiple cores.

      SmoothCurveFit_subset.zip

      I'd only ever used it for arrays up to about 5000 points, so it had been fast enough. Interestingly, the greatest speedup still comes from replacing the Power function with a Multiply -- never use x^y for integer powers! Any further improvements?
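The "never use x^y for integer powers" point isn't specific to LabVIEW's Power function. As a rough text-language illustration (Python here, not the attached LabVIEW code), a general-purpose power call is noticeably more expensive than explicit multiplies for a small integer exponent:

```python
import timeit

def cube_pow(xs):
    # general-purpose power: dispatches through pow(), slower for integer exponents
    return [x ** 3 for x in xs]

def cube_mul(xs):
    # explicit multiplies: same result to within floating-point rounding, cheaper
    return [x * x * x for x in xs]

xs = [0.01 * i for i in range(10_000)]
t_pow = timeit.timeit(lambda: cube_pow(xs), number=100)
t_mul = timeit.timeit(lambda: cube_mul(xs), number=100)
print(f"x**3: {t_pow:.3f}s   x*x*x: {t_mul:.3f}s")
```

The exact ratio varies by language and runtime; the ~8x figure in the post applies to the LabVIEW weighting routine specifically.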
  12. I also like the Savitzky-Golay, but it only works for uniformly spaced data, whereas the above utility is also good for non-uniform X spacing. I had already rewritten this utility for my own use, and checking it against the original I see it has a ~15x speedup on a single core, and ~25-30x on my 2-core laptop -- it should be even greater with more cores. Here are the main things I changed:

      • passing individual X and Y arrays rather than a cluster
      • replacing the Power function in the weighting routine with a multiply - this makes the most difference, about 8x
      • turning off debugging - gives almost another doubling in speed
      • moving some functions outside the loops, and sometimes removing loops altogether
      • using parallel loops
      • sharing clones for subVIs inside parallel loops

      If you can get away with SGLs rather than DBLs, you'll get a further speedup, and if your data is evenly spaced but you still want to use this algorithm, then you shouldn't need to recompute the weighting function throughout your data - it only changes towards the start and end.

      SmoothCurveFit.zip

      You're welcome to use this rewritten code.
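For readers unfamiliar with the algorithm being discussed, here is a minimal Python sketch of the underlying idea - a locally weighted (tricube-kernel) average over non-uniformly spaced X, as in LOWESS-style smoothing. This is only an analogy to illustrate the math, not the attached LabVIEW code, and the fixed `half_width` window is an assumption (the utility derives its window from the Weighting Fraction):

```python
def smooth_nonuniform(x, y, half_width):
    """Locally weighted mean of y over non-uniformly spaced x.

    half_width is the X-distance around each point that receives
    non-zero weight (tricube kernel, as in LOWESS-style smoothing).
    """
    out = []
    for xi in x:
        num = den = 0.0
        for xj, yj in zip(x, y):
            d = abs(xj - xi) / half_width
            if d < 1.0:
                # tricube weight, written with multiplies rather than d**3
                u = 1.0 - d * d * d
                w = u * u * u
                num += w * yj
                den += w
        out.append(num / den)  # den > 0: the point itself always has weight 1
    return out
```

Note the weight goes to zero outside the window, which is what makes the fixed-window truncation discussed above pay off: only nearby points need to be visited at all.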
  13. Here's a template that doesn't use the IPES, so should work in 8.6. It should be almost as efficient. OpenG Remove Duplicates from 1 D Array Template.vi
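For reference, the operation the template performs is order-preserving duplicate removal with an "indices of removed elements" output. A rough text-language equivalent in Python (the actual implementation is the attached VI):

```python
def remove_duplicates(arr):
    """Return (deduped, removed_indices), keeping first occurrences in order."""
    seen = set()
    out = []
    removed = []
    for i, v in enumerate(arr):
        if v in seen:
            removed.append(i)  # record where the duplicate sat in the input
        else:
            seen.add(v)
            out.append(v)
    return out, removed
```

For example, `remove_duplicates([3, 1, 3, 2, 1])` gives `([3, 1, 2], [2, 4])`.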
  14. Gavin -- there's a small bug in this implementation which I've described in the support topic.
  15. The latest version 1.3.0 (based on these revisions) has broken the "Remove Duplicates" function by not correctly retaining the Array Split in the In Place Element Structure. I presume this was caused by back-saving to 8.6, whereas this functionality is only in 2009+. That appears to be the only function affected.
  16. Don't you mean "Beautifully Admirable Sadistic Tendencies And Radical Diatribes"?
  17. This may be of interest: http://forums.ni.com/t5/Machine-Vision/ADV-Toolkit-Imageprocessing-on-NVidia-GPU/td-p/1130963
  18. All work fine for me on Win 7/LV 2012. Have you tried remapping the shortcuts in Tools/Options/Menu Shortcuts? In fact, trying to set new shortcuts there will show you whether the keystrokes are getting through to LabVIEW or not.
  19. Yes, my expectation is that you'd get no directories at all (unless the directory name was Something.txt, which is always possible, if unlikely). asbo's suggestion is the second most logical: that you'd get a list of the directories that contain files matching the pattern (though that can be gleaned from the file paths). But I can't see any logic in returning all directories. I can see that now the utility is written, it's pretty difficult to change it, although it could always be deprecated with a new version written. Failing that, perhaps a boolean, or a 3-way enum - e.g. All (default), Files Match, Directories Match - could be added. Or I guess a polymorphic function is another way to extend/correct it.
  20. The built-in List Directory function returns all files and directories which match the provided pattern. My point is that List Directory Recursive recursively finds all files in all sub-directories which match the pattern, but it returns all sub-directories whether they match the pattern or not. I don't know whether this was intended, or whether it is a bug in the implementation. If it was intended it would be interesting to know why, and if not, I think it should be changed - though that would break any existing usage.
  21. That's slightly different again - I was thinking it should contain all sub-directories whose names match the pattern, irrespective of any files in those directories. To me, that's the closest parallel to the core List Directory node.
  22. I have just observed that the array of directory paths returned by List Directory Recursive is always every sub-directory of the root Path, no matter what pattern is used. My expectation (and what I'd wanted to use it for) had been that this output would contain only the sub-directories which matched the pattern, in the same way as the array of file paths is generated. Is this result the intended behavior, or a bug in the implementation?
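The behavior this post expected can be sketched in a few lines of Python, assuming shell-style wildcard patterns like those the LabVIEW function accepts. This is a hypothetical helper illustrating the expectation, not the OpenG implementation: both file names and directory names are filtered by the same pattern.

```python
import os
from fnmatch import fnmatch

def list_recursive(root, pattern):
    """Recursively return (files, dirs) under root whose names match pattern.

    Unlike the OpenG behavior described above, the directory output is
    filtered by the same pattern as the file output.
    """
    files, dirs = [], []
    for parent, dirnames, filenames in os.walk(root):
        files += [os.path.join(parent, f) for f in filenames if fnmatch(f, pattern)]
        dirs += [os.path.join(parent, d) for d in dirnames if fnmatch(d, pattern)]
    return files, dirs
```

With this version, a pattern of `*.txt` returns only files named like `*.txt` and only the (unlikely but possible) directories named like `Something.txt`.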
  23. Not just updating them all, but choosing what instances are provided - I use arrays of Images, but those are not useful for most people. If there's ever a reason to have "generic" terminals, then this sort of polymorphic-heavy function gives a great example. Writing them as XNodes (thanks Gavin) means there's no need to code all the polymorphic instances, but there's a lot of extra coding still to do around the function itself. So yes, I know that generic terminals don't officially exist, and won't exist at all soon, and XNodes are somewhat hidden, but I hope there's a third alternative in the wings. Or that there's a slightly easier way to create XNodes.
  24. So, in the case described here, does that include not allocating the "Indices of removed elements" array before the For loop? If so, I'm pretty impressed.
  25. In my experience, keeping memory allocation to a minimum is at least as important as execution time. The original array should be reused for the output (where possible), and every array access checked that it is performed in-place (with or without the IPES). In addition, output arrays of indices should be optional as to whether they are generated, with the default being false. As a related note - wouldn't it be nice if LabVIEW could choose whether to create an array depending on whether the sub-VI output is wired or not? I think LabVIEW does take wiring into account within a single VI, but not in terms of connections to sub-VIs - please correct me if I'm wrong. I wonder if marking the subroutine as "inline" would make any difference?
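The in-place idea translates only loosely outside LabVIEW (Python lists aren't LabVIEW arrays), but as a sketch of both suggestions - reuse the original buffer for the output, and generate the index array only when asked, defaulting to off:

```python
def remove_duplicates_inplace(arr, want_indices=False):
    """Compact arr in place, keeping first occurrences; reuses arr's buffer.

    Indices of removed elements are collected only when requested,
    mirroring the 'optional output, default false' suggestion above.
    """
    seen = set()
    removed = [] if want_indices else None
    write = 0
    for i, v in enumerate(arr):
        if v in seen:
            if want_indices:
                removed.append(i)
        else:
            seen.add(v)
            arr[write] = v  # overwrite within the original buffer
            write += 1
    del arr[write:]  # truncate in place; no second output array allocated
    return removed
```

The two-pointer compaction here is the text-language analogue of replacing elements in place rather than building the output array from scratch.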