Posts posted by GregSands

  1. Q: What is the value of a Kudo in the Idea Exchange?
    A: Not much.

    I've made an interesting observation.

    Roughly a month ago, two ideas were proposed within a day of each other: my suggestion about error wire layering, and another idea from Darin. Both ideas were fairly straightforward and coding-related, both had a simple image and a clear explanation, and as of now both have attracted about the same number of comments (12 vs 16) and kudos (65 vs 70). I haven't fully compared the kudos, but there appear to be about the same number of NI voters and "high-rank" voters for each.

    However, and I don't think this is just my opinion, Darin's idea is infinitely more useful and valuable than mine. It's an idea that would allow faster and easier programming, and would be a noticeable improvement. Error wire layering, on the other hand, would be "nice" if it were implemented, but it's just cosmetic, not a game-changer. Yet they've attracted about the same number of kudos.

    So I can now understand when AQ and other NI reps say that popularity of an idea is a pretty poor indication of its value.

    PS - go vote for Darin's idea if you haven't already.
  2. I tried this on the logs and the smoothing assigned NaN values to some of the Y-Points. Do you know what could be causing this?

    The final result is that the output curve has many broken regions. I have attached various screenshots that show this.

    Hard to tell without seeing the data, but if your screenshot is correct, you have a "Weighting Fraction" equal to zero, so I wonder if that is causing the problem. I'm pretty sure it should be greater than zero - it's the fraction of the dataset to use for fitting at each point.
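    For illustration, a rough Python sketch of a standard tricube-weighted local fit (the usual LOWESS scheme, not the actual VI's code) shows why: with a weighting fraction of zero, the window at each point is empty and the only honest output is NaN.

    import numpy as np

    def local_linear_fit(x, y, x0, frac):
        """Tricube-weighted linear fit around x0 using the nearest frac*N points."""
        n = len(x)
        k = int(np.ceil(frac * n))              # window size in samples
        if k < 2:
            return np.nan                       # frac == 0 -> nothing to fit -> NaN
        d = np.abs(x - x0)
        h = np.sort(d)[k - 1]                   # bandwidth = distance to k-th neighbour
        w = np.sqrt(np.clip(1 - (d / h) ** 3, 0, 1) ** 3)  # zero outside the window
        A = np.column_stack([np.ones(n), x]) * w[:, None]  # weighted design matrix
        beta, *_ = np.linalg.lstsq(A, y * w, rcond=None)
        return beta[0] + beta[1] * x0

    x = np.linspace(0, 10, 50)
    y = np.sin(x) + 0.1 * np.random.randn(50)
    print(local_linear_fit(x, y, 5.0, 0.3))     # a sensible smoothed value
    print(local_linear_fit(x, y, 5.0, 0.0))     # nan - empty window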

  3. Just had a cursory glance, but it looks like you are calculating the coefficients and passing the XY params for the linear fit twice with the same data (it's only the weightings that change from the first "fit" to the second). You could pre-calculate them in a separate loop and just pass them into the other loops.

    I used a separate loop to start with, but the speed improvement was minimal, and the memory use would be increased fairly significantly.

  4. Greg,

    My first thought would have been to truncate the weighting calculation and fitting to only the region around each point where the weights are non-negligible. Currently, the algorithm uses the entire dataset in the calculation of each point even though most of the data has near-zero weighting. For very large datasets this will be very significant.

    Yes, in fact the weighting calculation already truncates the values so that they are set to zero outside a window around each point. However, with variable X-spacing, the size of the window (in both samples and X-distance) can vary, so it would be a little more complicated to work out that size for each point.

    Just had a quick go - with a fixed window size, you get a further 10x speedup, but if you have to threshold to find the appropriate window, it's only another 2-3x. Still, that's around 40x overall, with further increases using multiple cores.

    SmoothCurveFit_subset.zip
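    In the same hypothetical Python terms as above (a sketch of the truncation idea, not the code in the attached VIs), the speedup comes from handing the solver only a fixed-size slice around each point instead of the full, mostly zero-weighted arrays:

    import numpy as np

    def smooth_truncated(x, y, frac):
        """Fit each point from only its k nearest samples (uniform-X shortcut)."""
        n = len(x)
        k = max(int(np.ceil(frac * n)), 3)
        half = k // 2
        out = np.empty(n)
        for i in range(n):
            lo = max(0, min(i - half, n - k))   # fixed-size slice; with variable
            xs = x[lo:lo + k]                   # X-spacing you would have to
            ys = y[lo:lo + k]                   # search for the window edges
            d = np.abs(xs - x[i])
            h = d.max()
            w = np.sqrt(np.clip(1 - (d / h) ** 3, 0, 1) ** 3)
            A = np.column_stack([np.ones(k), xs]) * w[:, None]
            beta, *_ = np.linalg.lstsq(A, ys * w, rcond=None)
            out[i] = beta[0] + beta[1] * x[i]
        return out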

    I'd only ever used it for arrays up to about 5000 points, so it had been fast enough. Interestingly, the greatest speedup still comes from replacing the Power function with a Multiply -- never use x^y for integer powers!
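    That last point carries over to text languages as well. A rough CPython analogue of the Power-to-Multiply swap (indicative timings only, obviously not the LabVIEW diagram itself):

    import timeit

    # x ** 3 goes through a general power routine; x * x * x is two multiplies
    print(timeit.timeit("x ** 3", setup="x = 1.2345", number=5_000_000))
    print(timeit.timeit("x * x * x", setup="x = 1.2345", number=5_000_000))
    # the multiply version typically comes out several times faster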

    Any further improvements?

  5. I think it is intended, by precedent. In fact I'm totally surprised that List Directory also filters directories based on the pattern. So what do you get if you want to list *.txt files? No directories at all?

    Yes, my expectation is that you'd get no directories at all (unless a directory name was Something.txt, which is always possible, if unlikely). asbo's suggestion is the second most logical: that you'd get a list of the directories that contain files matching the pattern (though that can be gleaned from the file paths). But I can't see any logic in returning all directories.

    But changing the default is not really an option since it could, and likely would, break quite a few OpenG Tools such as the OpenG Package Builder, Commander and its descendant, the VIPM.

    I can see that, now the utility is written, it's pretty difficult to change, although it could always be deprecated and a new version written. Failing that, perhaps a boolean, or a 3-way enum - e.g. All (default), Files Match, Directories Match - could be added. Or I guess a polymorphic function is another way to extend/correct it.
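    For what it's worth, here is a rough sketch of that 3-way enum in Python rather than G, with os.walk and fnmatch standing in for the OpenG VIs (the names are hypothetical, not the OpenG API):

    import os
    from enum import Enum
    from fnmatch import fnmatch

    class DirMode(Enum):
        ALL = 0              # current behaviour: return every sub-directory
        FILES_MATCH = 1      # only directories containing a matching file
        DIRS_MATCH = 2       # only directories whose own name matches

    def list_recursive(root, pattern, mode=DirMode.ALL):
        files, dirs = [], []
        for base, subdirs, names in os.walk(root):
            matched = [os.path.join(base, n) for n in names if fnmatch(n, pattern)]
            files.extend(matched)
            if mode is DirMode.FILES_MATCH:
                if matched:
                    dirs.append(base)
            else:
                for d in subdirs:
                    if mode is DirMode.ALL or fnmatch(d, pattern):
                        dirs.append(os.path.join(base, d))
        return files, dirs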

  6. Oooh, I read a little too much between the lines of what you said. Interesting; does the original node return directories and files, or just directories?

    The built-in List Directory function returns all files and directories which match the provided pattern. My point is that List Directory Recursive recursively finds all files in all sub-directories which match the pattern, but it returns all sub-directories whether they match the pattern or not. I don't know whether this was intended, or whether it is a bug in the implementation. If it was intended it would be interesting to know why, and if not, I think it should be changed - though that would break any existing usage.

  7. I have just observed that the array of directory paths returned by List Directory Recursive is always every sub-directory of the root Path, no matter what pattern is used. My expectation (and what I'd wanted to use it for) had been that this output would contain only the sub-directories which matched the pattern, in the same way as the array of file paths is generated. Is this result the intended behavior, or a bug in the implementation?

  8. Hah, I do not have any problem with further improvements (although this one is relatively minor, I would say) - as long as I'm not the one who has to update all those polymorphic instances again ;-)

    Not just updating them all, but choosing what instances are provided - I use arrays of Images, but those are not useful for most people.

    If there's ever a reason to have "generic" terminals, then this sort of polymorphic-heavy function is a great example. Writing them as XNodes (thanks Gavin) means there's no need to code all the polymorphic instances, but there's still a lot of extra coding to do around the function itself. So yes, I know that generic terminals don't officially exist, and won't any time soon, and XNodes are somewhat hidden, but I hope there's a third alternative in the wings. Or that there's a slightly easier way to create XNodes.

  9. Greg: Yes, making the subVI inline does make it possible for the output to avoid calculation entirely. Doing it on any subVI call is something that the compiler optimization team has on its list of possible optimizations, but it is not currently implemented.

    So, in the case described here, does that include not allocating the "Indices of removed elements" array before the For loop? If so, I'm pretty impressed.

  10. In my experience, keeping memory allocation to a minimum is at least as important as execution time. The original array should be reused for the output (where possible), and every array access should be checked to ensure it is performed in-place (with or without the IPES). In addition, output arrays of indices should be optional as to whether they are generated, with the default being false (an example of that pattern is sketched below).

    As a related note - wouldn't it be nice if LabVIEW could choose whether to create an array depending on whether the sub-VI output is wired or not? I think LabVIEW does take wiring into account within a single VI, but not in terms of connections to sub-VIs - please correct me if I'm wrong. I wonder if marking the subroutine as "inline" would make any difference?
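    On the optional-indices point, numpy's unique function is an existing example of that pattern - the index array is only produced when the caller asks for it via return_index (shown purely as an analogue of what the LabVIEW primitives could offer):

    import numpy as np

    a = np.array([3, 1, 3, 2, 1])
    u = np.unique(a)                          # default: no index array generated
    u, idx = np.unique(a, return_index=True)  # opt in to the extra output
    print(u, idx)                             # [1 2 3] [1 3 0]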
