Posts posted by jaegen

  1. My gut says "BAD. Prone to abuse." I'm going with my gut on this one.

    What if the required output type changes, and it's way downstream? You won't get an error until run time. This should be a development-time error, not a run-time one.

    (Also, anything that AQ calls "magical" scares me :) )

  2. I think this is due to the subroutine priority setting - the loops must not be able to run in parallel, so the VI is currently as fast as we can make it (it acts the same as if those loops were run serially).

    Yeah, I figured the loop iterations were already running as fast as possible (with your standard test string, there are only 8 iterations total anyway, right?). I'd forgotten/not noticed that the VI was set to subroutine priority.

    Therefore, if memory is an issue, I challenge anyone out there to optimise it but retain speed :)

    My suggestion above about chopping the boolean array saves a huge 224 bytes! :thumbup1:

  3. Just a quick observation. Does this design trade memory for speed?

    If so, would this function ever be used in a memory constrained environment such as RT or FieldPoint?

    It appears that two copies of the string data (as U8 arrays, one reversed) are created to iterate over. Is the LabVIEW compiler smart enough to use only one buffer for the U8 array data? What does the LabVIEW Profiler tell us about the buffer allocations?

    I don't have 2009 installed, so I can't play with the examples.

    If there are two buffer allocations for the U8 array data, would there be any difference in performance if the 'end trim' loop were to use the same U8 array and simply index from the end of the array (N-i-1) until a hit was found?

    I was curious about buffer allocations for the reversed array too. One of the things I tried was to force sequencing of the two case structures, rather than having them run in parallel, but nothing I did had any effect on the speed, even with a very long test string. However, if you bring up the Show Buffer Allocations tool, it does show a dot on the output of the reverse array node. I never looked into profiling memory usage.

    I also tried iterating backwards from the end of the array, but this was significantly slower than just reversing the array and autoindexing.

    I'd say all of this is probably moot though - RT and FieldPoint applications are very unlikely to be doing a lot of text processing, and if anyone is working with a long enough string to matter they should probably be doing something more customized to keep memory copies down.

    Jaegen
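
    For anyone following along without LabVIEW handy, here's a rough Python analogue of the two strategies discussed above (the names and the whitespace table are illustrative, not taken from the VI): one version reverses the buffer for the tail scan, the other indexes the original buffer from the end (N-i-1).

```python
# 256-entry lookup table: True where the byte counts as whitespace.
# (Assumed definition: anything <= 0x20; the actual VI's table may differ.)
IS_WHITESPACE = [b <= 0x20 for b in range(256)]

def trim_reverse_copy(s: bytes) -> bytes:
    """Scan a reversed copy for the tail trim
    (extra buffer, like the Reverse 1D Array node)."""
    start = 0
    for start, b in enumerate(s):
        if not IS_WHITESPACE[b]:
            break
    else:
        return b""  # empty or all whitespace
    rev = s[::-1]  # second buffer allocation
    for i, b in enumerate(rev):
        if not IS_WHITESPACE[b]:
            return s[start:len(s) - i]
    return b""

def trim_index_backward(s: bytes) -> bytes:
    """Same result, but index the original buffer from the end (N-i-1)."""
    n = len(s)
    start = 0
    while start < n and IS_WHITESPACE[s[start]]:
        start += 1
    end = n
    while end > start and IS_WHITESPACE[s[end - 1]]:
        end -= 1
    return s[start:end]
```

    In Python the in-place version clearly wins on memory; as noted above, the LabVIEW compiler's buffer reuse (and subroutine priority) can change that calculus entirely.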

  4. So the question is - can anyone make it faster? :cool:

    Well, after about an hour of trying, the only thing I could come up with that wasn't slower was to delete all but the first 32 elements of the boolean lookup array constant, since every character greater than 0x20 is not whitespace. But given that this didn't seem to speed things up at all, only saves 224 bytes, and further obfuscates the code, it's probably not worth it.

    Nice code Darin.

    Jaegen
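
    The truncated-lookup trick can be sketched in Python (this is an illustration, not the VI itself; the whitespace set is an assumption, and the post keeps the first 32 elements, so the exact cut-off depends on whether 0x20 itself must stay in-table). The key point is that an out-of-range index behaves like LabVIEW's Index Array past the end of the array: it yields the default value, False.

```python
# Assumed whitespace set; the VI's actual table may differ.
WHITESPACE = set(b" \t\r\n\x0b\x0c")

# Keep only indices 0..0x20 (33 entries instead of 256): every byte
# above 0x20 is guaranteed to be non-whitespace.
SHORT_TABLE = [b in WHITESPACE for b in range(0x21)]

def is_whitespace(b: int) -> bool:
    # Out-of-bounds lookup falls back to False, mirroring LabVIEW's
    # Index Array returning the type's default past the array end.
    return SHORT_TABLE[b] if b < len(SHORT_TABLE) else False
```

    Each dropped boolean is one byte in a LabVIEW boolean array, hence the 224-byte saving mentioned above.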

  5. I can't add anything to ned's reply, but I did want to mention that a common mistake to watch for (I make it all the time, and you even made it near the end of your post) is to type 198 or 162 instead of 192 or 168.

    Does anyone know if there's a psychological basis for this error? :P Seriously, what is it about these numbers that makes them so prone to mix-ups?

    Jaegen

  6. <nitpick>

    Something about the upgrade, and perhaps the theme LAVA is using, is causing the text of a post to be indented slightly around the user info/avatar section. It only shows up if there's enough content in the post to reach below the div, like dannyt's post above:

    [Attached screenshot: post-932-0-47466900-1313600908.png]

    Here's what it looks like if I click and drag to select content in Chrome:

    [Attached screenshot: post-932-0-34717000-1313600913.png]

    You can clearly see that the border around the "author-info" div is forcing the post content to the right.

    Obviously not a major issue, but it's a bit distracting.

    Otherwise, I love the upgrade.

    </nitpick>

    EDIT: My pictures didn't show up first try...

  7. I know NI's answer to #2. My answer would have been a CSV file...

    We use CSV files a ton (too much) here too, but be warned: CSVs don't like Europe (or Europe doesn't like CSVs) - any country that uses commas as a decimal separator can't use commas as a column separator. After being bitten by this years ago all my text logging code allows for a re-configurable separator and file extension.

    Jaegen
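
    The collision is easy to demonstrate outside LabVIEW. Here's a hedged Python sketch of the configurable-separator idea (`write_table` and its parameters are illustrative, not from our actual logging code): in a locale that prints 1.5 as "1,5", a comma delimiter is ambiguous, while a semicolon keeps columns unambiguous.

```python
import csv
import io

def write_table(rows, delimiter=";"):
    """Write rows with a configurable column separator.
    A semicolon delimiter (common in Europe) avoids colliding with
    comma decimal separators."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter)
    for row in rows:
        # Mimic a comma-decimal locale for the float cells.
        writer.writerow(
            [str(v).replace(".", ",") if isinstance(v, float) else v
             for v in row]
        )
    return buf.getvalue()

print(write_table([["voltage", "current"], [1.5, 0.25]]))
# With delimiter="," the same float cells would have to be quoted
# to stay parseable at all.
```

    Making the extension configurable alongside the delimiter (e.g. .csv vs .txt) keeps spreadsheet programs from mis-guessing the format on double-click.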

  8. P.S. I'm also periodically stressing about whether I passed the CLA whenever I think about it. Fingers crossed...

    And I passed! Woo hoo! I suddenly feel like a member of an elite cadre of hard-core developers. :ph34r: That or a loosely defined herd of NI-brainwashed nerds. Same difference... :D

    Now to convince management to send me to the CLA summit ...

    Jaegen

  9. I'll bite:

    I'm going through the rather tedious process of porting our TestStand code over to yet another version of TestStand. Essentially, this means I'm re-creating low-level I/O code to feed into the underside of our hardware abstraction layer, but since the I/O list looks mostly the same as an existing one (but of course never exactly the same), it means a lot of copying, pasting, and typing until things look right. This is particularly painful when I'm keen to start testing out LV 2011 and all the new and interesting things I saw at NI Week. Oh well, maybe next week ...

    Jaegen

    P.S. I'm also periodically stressing about whether I passed the CLA whenever I think about it. Fingers crossed...

  10. Thanks for the insight AQ.

    I wasn't implying type defs should change - just asking whether the "don't add type defs as data members of a class" rule changes if the type def belongs to the class. If I open the class and modify the type def, doesn't a new version get saved to the class mutation history (since its data type has changed)?

  11. Is the class mutation issue the only reason to avoid using type defs as data members? And does the mutation issue exist if the type def belongs to the class? (In this case, the class is guaranteed to be in memory when you edit the type def.) I've never had issues with type defs in class data, but I haven't really used the mutation history feature, and I don't think I've ever used a type def as a data member that didn't belong to the class (other than simple, unchanging things like enums).

    Jaegen

  12. Don't forget - if all you want to do is use "Get" and "Set" instead of "Read" and "Write", you can just modify the default values for the appropriate controls on "...\LabVIEW 2009\resource\Framework\Providers\LVClassLibrary\NewAccessors\CLSUIP_LocalizedStrings.vi".

    That being said - this is a great addition that will make automatic updating of the icon possible.

    Thanks,

    Jaegen

  13. I have to agree with AQ. I would support changing the sequence structure to have a (much) thinner border, but I think the null wire concept just opens up a whole bucket of confusion.

    You could always create a null class, with a thin light-grey wire. (I realize that still means you have to create actual connector pane terminals for it).

    Jaegen
