
Trim Whitespace (String Package)



Just a quick observation. Does this design trade memory for speed?

If so, would this function ever be used in a memory-constrained environment such as RT or FieldPoint?

It appears that two copies of the string data (as U8 arrays, one reversed) are created to iterate over. Is the LabVIEW compiler smart enough to use only one buffer for the U8 array data? What does the LabVIEW Profiler tell us about the buffer allocations?

I don't have 2009 installed, so I can't play with the examples.

If there are two buffer allocations for the U8 array data, would there be any difference in performance if the 'end trim' loop were to use the same U8 array and simply index from the end of the array (N-i-1) until a hit was found?
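
For anyone following along without LabVIEW 2009 installed, here is a rough C sketch of the two strategies under discussion. The function names and the isspace() test are mine, purely for illustration; the actual VI works on the string's U8 array and may differ in detail:

```c
#include <ctype.h>
#include <stddef.h>
#include <stdlib.h>

/* Strategy A: what the diagram appears to do -- reverse the U8 array
   into a second buffer, then scan the copy forward. */
static size_t trailing_ws_via_copy(const unsigned char *s, size_t n)
{
    unsigned char *rev = malloc(n ? n : 1);   /* the extra buffer */
    for (size_t i = 0; i < n; i++)
        rev[i] = s[n - 1 - i];

    size_t count = 0;
    while (count < n && isspace(rev[count]))
        count++;

    free(rev);
    return count;                  /* trailing whitespace bytes */
}

/* Strategy B: the N-i-1 idea -- index the same buffer from the end,
   with no second allocation. */
static size_t trailing_ws_in_place(const unsigned char *s, size_t n)
{
    size_t count = 0;
    while (count < n && isspace(s[n - count - 1]))
        count++;
    return count;
}
```

Both return the same count; Strategy A pays for an extra N-byte allocation and a copy, while Strategy B only reads the original buffer.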

I was curious about buffer allocations for the reversed array too. One of the things I tried was to force sequencing of the two case structures, rather than having them run in parallel, but nothing I did had any effect on the speed, even with a very long test string. However, if you pop up the buffer allocations display tool, it does show a dot on the output of the reverse array node. I never looked into profiling memory usage.

I also tried iterating backwards from the end of the array, but this was significantly slower than just reversing the array and autoindexing.

I'd say all of this is probably moot though - RT and FieldPoint applications are very unlikely to be doing a lot of text processing, and if anyone is working with a long enough string to matter they should probably be doing something more customized to keep memory copies down.
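
To make that last point concrete, here is one shape "more customized" code could take on a memory-constrained target: report a start/length window into the original buffer instead of allocating a trimmed copy at all. This is just an illustrative sketch in C, not OpenG code:

```c
#include <ctype.h>
#include <stddef.h>

/* Zero-copy trim: find the [start, start + len) window of the buffer
   with leading and trailing whitespace excluded. No allocations, no
   copies -- the caller keeps using the original buffer `s`. */
static void trim_window(const unsigned char *s, size_t n,
                        size_t *start, size_t *len)
{
    size_t b = 0, e = n;
    while (b < e && isspace(s[b]))     b++;   /* skip leading  */
    while (e > b && isspace(s[e - 1])) e--;   /* skip trailing */
    *start = b;
    *len   = e - b;
}
```

An all-whitespace input simply yields len == 0, and nothing is ever copied or freed.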

Jaegen


I was curious about buffer allocations for the reversed array too. One of the things I tried was to force sequencing of the two case structures, rather than having them run in parallel, but nothing I did had any effect on the speed, even with a very long test string.

I think this is due to the subroutine priority setting - the loops must not be able to run in parallel, so the VI is currently as fast as we can make it (it acts the same as if those loops were run serially).

Therefore, if memory is an issue, I challenge anyone out there to optimise it but retain speed :)


I think this is due to the subroutine priority setting - the loops must not be able to run in parallel, so the VI is currently as fast as we can make it (it acts the same as if those loops were run serially).

Yeah, I figured the loop iterations were already running as fast as possible (with your standard test string, there are only 8 iterations total anyway, right?). I'd forgotten/not noticed that the VI was set to subroutine priority.

Therefore, if memory is an issue, I challenge anyone out there to optimise it but retain speed :)

My suggestion above about chopping the boolean array saves a huge 224 bytes! :thumbup1:
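
(For context, since the suggestion itself is upthread: 224 is exactly 256 − 32, which is consistent with the boolean array being a 256-entry per-byte whitespace lookup table chopped down to the 32 control-character entries, with space tested separately. That reading is a guess, and the C below is a hypothetical equivalent, not the actual diagram:)

```c
#include <stdbool.h>

/* Hypothetical reconstruction: a full 256-entry lookup table costs
   256 bytes, but every whitespace character except ' ' (0x20) lies
   below 0x20, so 32 entries plus an explicit space test suffice --
   a saving of 256 - 32 = 224 bytes. */
static const bool ws_low[32] = {
    ['\t'] = true,   /* 0x09 horizontal tab  */
    ['\n'] = true,   /* 0x0A line feed       */
    ['\v'] = true,   /* 0x0B vertical tab    */
    ['\f'] = true,   /* 0x0C form feed       */
    ['\r'] = true,   /* 0x0D carriage return */
};

static bool is_whitespace(unsigned char c)
{
    return c == ' ' || (c < 32 && ws_low[c]);
}
```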


...Every bit counts ...lol

What about using the Inline functionality?

Hi Claude

Thanks for the feedback.

Unfortunately, the OpenG codebase is in LabVIEW 2009.

Inlining was not exposed until LabVIEW 2010.

This is something we could discuss when we upgrade in the future, except that in this case inlining would cause the Execution Priority to be ignored, and that setting is responsible for the speed enhancements (along with some brilliant coding from the LAVA guys)!

Cheers

-JG


I am going to go ahead and lock this topic.

I am leaning towards jaegen's view regarding excessive use of strings on RT (and the non-contiguous memory allocations they cause) and whether it is really an issue.

However, if anyone in the future thinks otherwise and wants to post, please do, and we can integrate the code :)

I am really impressed by the outcome of this process.

I think the community has put together a very nice piece of code:

post-26690-0-35795100-1314075511.png

This VI will be in the upcoming String Package release.

Also look out for the next review!

Thank you very much for taking part.

Cheers

-JG
