

Posts posted by jfazekas

  1. QUOTE (Aristos Queue @ Jan 27 2009, 12:30 PM)

    Shouldn't have any effect -- if the array needed to be copied then the class will need to be copied. Basically, if you're doing something that requires a copy, then a copy is going to be made. Whatever it is, stop doing it. :-) Some things that would cause a copy:

    * Using any sort of Global VI to store your data

    * a functional global where you use get and set actions to copy the value out of the global and then back into it later

    * forking the array wire to two write operations (such as Replace Element or Sort 1D Array, etc.). As long as you never fork the wire, or you fork only to readers, or you fork to a single writer while the other branches are all pure-functional readers, you shouldn't have any data copies.

    What are you doing in those analysis functions? Are they "destructive analysis"? In other words, do they do stuff that replaces values in the array, which would cause a copy to be made so that you can call the next analysis function?

    Basically my approach is this. Create a class. The data object has an array of u8. My class 'INIT' function initializes the array - 12 megs in size. All of the functions either write (using Replace Array Subset) or read (using Array Subset) from the class data object. By the way, I limit the Read/Write functions to 30kb as input or output (never read or write more than 30kb at once). I never fork a class wire in any of my use cases. So I think I'm doing the best I can to minimize copies. I have several analysis functions that do iterative reads on different sections of the data (no writes).

    If the class wire goes to a shift register, is a copy made? Do tunnels into any specific structures cause copies?
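    For readers more comfortable with a text language, here is a minimal Python/numpy sketch of the pattern described above (the names are illustrative, not LabVIEW's): one preallocated buffer, with all reads and writes going through small-subset accessors.

        import numpy as np

        BUF_SIZE = 12 * 1024 * 1024      # 12 MB, fixed length
        CHUNK = 30 * 1024                # never move more than 30 KB at once

        buf = np.zeros(BUF_SIZE, dtype=np.uint8)   # 'INIT': allocate once

        def write_subset(offset, data):
            # analogue of Replace Array Subset: mutates the buffer in place
            buf[offset:offset + len(data)] = data

        def read_subset(offset, length):
            # analogue of Array Subset: copies at most 30 KB, never the whole buffer
            return buf[offset:offset + min(length, CHUNK)].copy()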

  2. Sorry. Here is the VI in 8.5 speak.

    QUOTE (Neville D @ Jan 26 2009, 01:59 PM)

    You're right. By ref will be slow. I would stick with direct wires as the fastest way, and look at the data copies. Why are they being formed? Can you do anything about it? LV is smart enough to NOT make copies unless absolutely necessary.

    Just passing a wire into a subVI does not mean a copy of the data is made for that subVI (unless there is some branching that changes the data).

    See if you can use the inplace element structure to speed things up if replacing elements in a complicated array.

    Another approach is to chunk your data into a few manageable sets and work on those (maybe in parallel? Multicore optimization with smaller data! Hey, that's a win-win!)

    Neville.

    Thanks for your reply. I find it extremely tedious to try to detect copies. Yes, I remember that "show buffer allocations" does not tell you where copies are made. In the end, you're probably right and I should just pass the array around. Do you know if it would help to typedef the array into a LV Class control?
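    For what it's worth, a minimal Python sketch of Neville's chunk-and-parallelize suggestion might look like this (everything here is illustrative; numpy views play the role of in-place subarrays):

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def analyze(chunk):
            # stand-in for one read-only analysis pass
            return int(chunk.max())

        data = np.zeros(12 * 1024 * 1024, dtype=np.uint8)
        chunks = np.array_split(data, 8)             # slices are views, not copies
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(analyze, chunks))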

  3. I'm in a bit of a pickle and would like to ask for suggestions. My application requires working on a large set of data that is represented in a single array of u8 integers. The array is about 12 megabytes and is fixed in length.

    Once the data set is acquired I have a library of 8 functions that do all sorts of analysis and anomaly checking on the data.

    I've studied the GigaLabVIEW examples and see a huge benefit to passing a reference between my subVIs instead of passing the 12-meg wire around my application. This eliminates the inevitable data copies (of a large wire), and I do see a benefit to the application's memory footprint.

    My problem is that this is slow. Some of my data analysis functions are iterative and I want them to run 500,000 times. There is a big hit to speed when you have to access the data many times via reference.

    To demonstrate the obvious (to myself) I made the quick example below and see a 200x difference in execution speeds. It looks to me like I have a choice: either suffer multiple data copies with the byval approach or suffer slow access with the byref approach.

    Maybe LV9 will have native byref functionality (wish wish).
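    As a rough text-language analogue of the test (not the actual VI), a lock-protected access can stand in for the per-access cost of going through a reference:

        import threading
        import time
        import numpy as np

        data = np.zeros(12 * 1024 * 1024, dtype=np.uint8)
        lock = threading.Lock()      # stand-in for the overhead of a reference

        def by_wire(n):
            total = 0
            for i in range(n):
                total += int(data[i])        # direct access
            return total

        def by_ref(n):
            total = 0
            for i in range(n):
                with lock:                   # every access pays the reference cost
                    total += int(data[i])
            return total

        for f in (by_wire, by_ref):
            t0 = time.perf_counter()
            f(500_000)
            print(f.__name__, time.perf_counter() - t0)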

  4. Unless I am overlooking the obvious, I think a new LabVIEW category dedicated to Statechart is a worthy addition to the site. I've completed a few applications using Statecharts and think SC has a bright and glorious future, however dark and dank the experience seems to be at present.

  5. QUOTE (David Boyd @ Apr 30 2008, 06:22 PM)

    Another trick I've used is to run a second plot which gets the value 'NaN' until I want to mark an event in the plotting. I then plot a point at -Inf, followed by +Inf, then return to plotting NaN. This guarantees a vertical line draw that does not affect autoscaling behaviors (nor is it affected by scale changes). The visual effect is to have a hairline vertical marker appear over/under the other trace(s) in the chart at the sample of interest.

    Dave

    Dave, while you were plotting -Inf and +Inf on the second plot, what did you do with the first plot? Didn't this mess up the timescale for the first plot data?

  6. Perhaps this is trivial, but it simplified a design for me.

    I needed a graph with two plots to update every 100 msec forever (history of last 1000 points). The user wanted the plot colors to change for sections of the plot to indicate some system information. My previous design used FIFO arrays and a waveform graph to display the plots as separate waveforms. The attached pictures should help clarify.

    Today I stumbled onto a much easier method using a waveform chart. By using a 'NaN' constant as a place-holder for the secondary plot color, the same user interface requirements were satisfied. Wish I had thought of this before.
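    If it helps to see the idea outside LabVIEW, here is a hedged matplotlib sketch of the NaN place-holder trick (all names illustrative):

        import numpy as np
        import matplotlib.pyplot as plt

        y = np.sin(np.linspace(0, 6 * np.pi, 1000))     # primary trace
        overlay = np.full_like(y, np.nan)               # NaN place-holder everywhere...
        overlay[400:600] = y[400:600]                   # ...except the section to recolor

        plt.plot(y, color="green")
        plt.plot(overlay, color="red")    # NaN samples are simply not drawn
        plt.show()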

  7. I think it would save a lot of time to assign keyboard shortcuts to

    1. create constant

    2. create control

    3. create indicator

    You would have to put the wire tool over a connector/terminal so that LabVIEW would know what the data type should be. In other words, the data type of the item created would match whatever the wiring tool is pointing to.

    right click -> context menu -> click on item is okay -- but could be faster.

    Anyone concur?

  8. QUOTE(Aristos Queue @ Feb 13 2008, 05:28 PM)

    Your post left me completely confused because of this line:

    I thought you were talking about comparing "Flatten To String" against "Variant To Flattened String" in order to do real transmission from one application instance to another application instance (say, over a TCP/IP link or somesuch).

    You're just talking about handing data from one VI to another VI on the same machine.

    So OF COURSE handling as variant is substantially faster. Why? VARIANTS AREN'T FLAT. The data is picked up off the wire whole, with all the hair hanging off of it (arrays of clusters of arrays of clusters of arrays of... etc) and put in the variant along with the type of the wire. Then when we convert it back to data, there's just a check that the type descriptors are compatible and we're done. When flattening to a string, there's the whole traversal of the type descriptor to create the flattened string and then unflattening requires not just traversal of the type descriptor but also parsing of the string and memory allocation.

    You got a 50% speed difference in your test. That's with a simple array. The more complex the type the greater the difference between these will be.

    But the original article that you linked to is talking about something entirely different.

    I didn't mean to misconstrue the NI article. It was my starting point for considering the attribute ability of the variant.

    I'm not 'just' talking about handling data from one VI to another on the same machine. Sometimes it is a necessary evil to send more than one data type over a single wire. There are a couple of example VIs I see on LAVA where different data types (flattened to a string) are passed between loops using Queues. (Publish-subscribe topic for example).

    I was interested in discovering any performance differences between passing data types by variants or strings, that is all. You pointed out that Variants are not flat, which is good for people to know. It wasn't obvious to me.
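    A hedged Python analogue of the distinction, with pickle standing in for 'Flatten To String' and a tagged tuple standing in for the variant:

        import pickle
        import queue

        q = queue.Queue()

        # 'variant' style: the data travels whole, with a type tag riding along
        q.put(("I32 array", [1, 2, 3]))
        tag, value = q.get()                 # no parsing, no reallocation

        # 'flattened string' style: serialize, ship bytes, parse and rebuild on receipt
        q.put(pickle.dumps([1, 2, 3]))
        value = pickle.loads(q.get())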

  9. There is an article on the NI site entitled "Differences Between Flatten to String.vi and Variant to Flattened String.vi" http://digital.ni.com/public.nsf/allkb/36C...625729E0007AE75

    Variants have a functional advantage over flattened strings due to the attribute feature. Personally I've never used attributes, but I am very interested to know if the variants flatten and unflatten data faster than the 'flatten to string' counterpart.

    Here are two VIs I made to look at the speed difference between the two data abstraction methods. In one test case I only flatten and unflatten some data. In the other case I flatten, send to a Q, receive, and unflatten.

    It is interesting to me that the unflatten activity for both the Variant and String methods is much faster than the flatten activity.

    On my system the Variant method is always significantly faster (more than 50%).

    Perhaps someone who is interested will test how the use of Variant attributes affects the flatten/unflatten speed.
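    For anyone who wants to reproduce the shape of the test in a text language, here is a hedged pickle-based sketch (the ratios, not the absolute numbers, are the point):

        import pickle
        import time

        data = list(range(100_000))

        t0 = time.perf_counter()
        flat = pickle.dumps(data)            # 'flatten'
        t1 = time.perf_counter()
        back = pickle.loads(flat)            # 'unflatten'
        t2 = time.perf_counter()

        print("flatten:  ", t1 - t0)
        print("unflatten:", t2 - t1)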

  10. The LV Class feature has some nice control and privacy aspects compared to the age-old method of simply passing around a TypeDef'd Cluster. More specifically, I like how the Class data cannot be unbundled outside of the Class VI members. Here is an attempt to measure the extra overhead associated with the LV Class. Basically I just measured how long it took to perform 100,000 write-read cycles on a single member of the data type. This test was repeated for both the 'Cluster' and Class methods.

    Conclusion: Cluster method is always faster. For extremely simple data types, the cluster method (on my PC) is twice as fast. I did an example of a very complicated data type and the cluster method was 1.5 times faster.
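    A hedged Python analogue of the test (direct field access standing in for bundle/unbundle, accessor methods standing in for class member VIs):

        import time

        class Data:
            def __init__(self):
                self._x = 0
            def set_x(self, v):       # accessor, like a class member VI
                self._x = v
            def get_x(self):
                return self._x

        d = Data()
        N = 100_000

        t0 = time.perf_counter()
        for i in range(N):
            d._x = i                  # 'cluster' style: direct access
            _ = d._x
        t1 = time.perf_counter()
        for i in range(N):
            d.set_x(i)                # 'class' style: a call each way
            _ = d.get_x()
        t2 = time.perf_counter()

        print("direct:   ", t1 - t0)
        print("accessors:", t2 - t1)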

  11. This is the conclusion I wanted to verify.

    QUOTE(silmaril @ Jan 27 2008, 12:42 PM)

    So I think it's fine to use UI events and maybe user events for UI purposes, but for general communication purposes I'll stick to queues and notifiers.

    I was hoping to use the event structure for general communication purposes since it would permit 'waiting' for multiple queues simultaneously. LV has a wait on multiple notifiers (if they are of the same data type) but no such function exists for queues -- especially of different data types.

    I'll stick to queues and notifiers until LV offers software interrupts that are not built on the UI thread.
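    One common workaround, sketched here in Python purely as an illustration, is to merge everything into a single queue with a source tag so that one blocking wait covers all sources:

        import queue
        import threading

        merged = queue.Queue()

        def producer(tag, items):
            for item in items:
                merged.put((tag, item))      # every source feeds the same queue

        threading.Thread(target=producer, args=("temps", [20.1, 20.4])).start()
        threading.Thread(target=producer, args=("errors", ["overrange"])).start()

        for _ in range(3):
            tag, payload = merged.get()      # one blocking wait serves all sources
            print(tag, payload)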

  12. I attended a LabVIEW class this week and learned how to use dynamically registered user events to accomplish communications between parallel loops (threads). This was extremely exciting to me since the event structure queues up events and can pass dispatch data of any data type. The sexiest thing about this technology is the ability to set up multiple event structures to receive the same dynamically generated user event. (Great for shutdown, error, and debug entry notifications.)

    Historically I have accomplished most of this functionality using regular Queues that pass variants which are then cast to the appropriate data type.

    For the most part, I think it works like a champ. I made a quick test VI to compare performance of the event structure approach to the Queue approach.

    Observations:

    1. For small data types (in units of bytes) the event structure is at least five times slower than the Queue mechanism.

    2. My observations were not altered by using data types of increased complexity (structures of arrays of structures).

    3. As the data types became more bulky (in units of bytes) the event structure became far slower than the Queue mechanism.

    4. I am suspicious that there is an issue with sending a single event to multiple event structures. The last test case in my example locks up the program when an event is dropped.

    Speed is a relative term, I know. But in the best case I could only pass a simple data type 1000 times a second.

    Here I attach the test VI for your comments / improvements.
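    For reference, the fan-out itself is easy to sketch in a text language by giving each listener its own queue (illustrative Python, not the attached VI):

        import queue
        import threading

        listener_queues = [queue.Queue() for _ in range(3)]

        def fire_event(payload):
            for q in listener_queues:        # each registered listener gets its own copy
                q.put(payload)

        def listener(name, q):
            print(name, "got", q.get())

        for i, q in enumerate(listener_queues):
            threading.Thread(target=listener, args=(f"loop{i}", q)).start()

        fire_event("shutdown")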
