jfazekas

Members
  • Posts

    14
  • Joined

  • Last visited

LabVIEW Information

  • Version
    LabVIEW 2018
  • Since
    2000


  1. QUOTE (Aristos Queue @ Jan 27 2009, 12:30 PM) Basically my approach is this. Create a class. The data object has an array of u8. My class 'INIT' function initializes the array - 12 megs in size. All of the functions either write (using Replace Array Subset) or read (using Array Subset) from the class data object. By the way, I limit the Read/Write functions to 30 kB as input or output (never read or write more than 30 kB at once). I never fork a class wire in any of my use cases, so I think I'm doing the best I can to minimize copies. I have several analysis functions that do iterative reads on different sections of the data (no writes). If the class wire goes to a shift register, is a copy made? Do tunnels into any specific structures cause copies?
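For readers outside LabVIEW, the pattern described above can be sketched in Python. This is an illustrative analogue, not the poster's actual code: a class wraps one 12 MB buffer allocated once, and its read/write methods never touch more than 30 KB at a time, so no full-buffer copy is ever made.

```python
MAX_CHUNK = 30 * 1024  # 30 KB cap on any single read or write

class BigBuffer:
    """Hypothetical analogue of the LabVIEW class described above."""

    def __init__(self, size=12 * 1024 * 1024):
        # One allocation; all later operations modify it in place.
        self._data = bytearray(size)

    def write(self, offset, chunk):
        if len(chunk) > MAX_CHUNK:
            raise ValueError("writes are limited to 30 kB")
        # In-place overwrite, like Replace Array Subset.
        self._data[offset:offset + len(chunk)] = chunk

    def read(self, offset, length):
        if length > MAX_CHUNK:
            raise ValueError("reads are limited to 30 kB")
        # Copies only the requested slice, like Array Subset.
        return bytes(self._data[offset:offset + length])

buf = BigBuffer()
buf.write(100, b"\x07" * 16)
chunk = buf.read(100, 16)
```

The key property is that every operation's memory cost is bounded by the 30 KB chunk size rather than the 12 MB buffer.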
  2. Sorry. Here is the VI in 8.5 speak. QUOTE (Neville D @ Jan 26 2009, 01:59 PM) Thanks for your reply. I find it extremely tedious to try and detect copies. Yes, I remember that "show buffer allocations" does not tell you where copies are made. In the end, you're probably right and I should just pass the array around. Do you know if it would help to typedef the array into a LV Class control?
  3. I'm in a bit of a pickle and would like to ask for suggestions. My application requires working on a large set of data that is represented in a single array of u8 integers. The array is about 12 megabytes and is fixed in length. Once the data set is acquired I have a library of 8 functions that do all sorts of analysis and anomaly checking on the data. I've studied the GigaLabVIEW examples and see a huge benefit to passing a reference between my subVIs instead of passing the 12 MB wire around my application. This eliminates the inevitable data copies (of a large wire) and I do see a benefit to the application's memory footprint. My problem is that this is slow. Some of my data analysis functions are iterative and I want them to run 500,000 times. There is a big hit to speed when you have to access the data many times via reference. To demonstrate the obvious (to myself) I made the quick example below and see a 200x difference in execution speeds. It looks to me like I have a choice: either suffer multiple data copies using the by-value approach or suffer slow access using the by-reference approach. Maybe LV9 will have native by-reference functionality (wish wish).
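The per-access penalty of by-reference access has a rough analogue in any language: each read through a protected reference pays a fixed overhead (lock, call dispatch) that direct access avoids, and that overhead dominates when an analysis loop reads the data hundreds of thousands of times. A minimal Python sketch, assuming a lock-guarded shared buffer stands in for the LabVIEW reference:

```python
import threading
import time

data = bytearray(12 * 1024 * 1024)   # the 12 MB data set
lock = threading.Lock()

def read_direct(i):
    # Direct access: no per-read overhead beyond indexing.
    return data[i]

def read_via_reference(i):
    # Models reference-style access: every read pays for a lock
    # acquire/release serializing access to the shared buffer.
    with lock:
        return data[i]

N = 100_000
t0 = time.perf_counter()
total_direct = sum(read_direct(i % 1000) for i in range(N))
t_direct = time.perf_counter() - t0

t0 = time.perf_counter()
total_ref = sum(read_via_reference(i % 1000) for i in range(N))
t_ref = time.perf_counter() - t0
```

Both loops return identical results; only the fixed cost per access differs, which is why iterative readers feel the by-reference penalty so strongly.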
  4. Unless I am overlooking the obvious, I think a new LabVIEW category dedicated to Statechart is a worthy addition to the site. I've completed a few applications using Statecharts and think SC has a bright and glorious future, however dark and dank the experience seems to be at present.
  5. QUOTE (David Boyd @ Apr 30 2008, 06:22 PM) Dave, while you were plotting -Inf and +Inf on the second plot, what did you do with the first plot? Didn't this mess up the timescale for the first plot data?
  6. Perhaps this is trivial, but it simplified a design for me. I needed a graph with two plots to update every 100 msec forever (history of the last 1000 points). The user wanted the plot colors to change for sections of the plot to indicate some system information. My previous design used FIFO arrays and a waveform graph to display the plots as separate waveforms. The attached pictures should help clarify. Today I stumbled onto a much easier method using a waveform chart. By using a 'NaN' constant as a place-holder in the secondary plot, the same user interface requirements were satisfied. Wish I had thought of this before.
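The NaN place-holder trick generalizes beyond LabVIEW charts. A hedged Python sketch of the idea: one history of values is split into two overlay series, and at each index the value goes to whichever series matches the current system state while NaN holds its place in the other. A plotting library then draws what looks like a single trace whose color switches with the state, since NaN points are not rendered.

```python
import math

values = [float(i) for i in range(10)]       # the shared history
state = [i >= 5 for i in range(10)]          # False = normal, True = alarm

# Each series keeps its own samples and NaN everywhere else.
normal = [v if not s else math.nan for v, s in zip(values, state)]
alarm = [v if s else math.nan for v, s in zip(values, state)]
```

Overlaying `normal` and `alarm` in different colors reproduces the color-changing plot with no FIFO bookkeeping.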
  7. I think it would save a lot of time to assign keyboard shortcuts to 1. create constant, 2. create control, 3. create indicator. You would have to put the wire tool over a connector/terminal so that LabVIEW would know what the data type should be. In other words, the data type of the item created would match whatever the wiring tool is pointing to. Right click -> context menu -> click on item is okay -- but could be faster. Anyone concur?
  8. QUOTE(Aristos Queue @ Feb 13 2008, 05:28 PM) I didn't mean to misconstrue the NI article. It was my starting point for considering the attribute ability of the variant. I'm not 'just' talking about handling data from one VI to another on the same machine. Sometimes it is a necessary evil to send more than one data type over a single wire. There are a couple of example VIs I see on LAVA where different data types (flattened to a string) are passed between loops using Queues (the publish-subscribe topic, for example). I was interested in discovering any performance differences between passing data types by variants or strings, that is all. You pointed out that Variants are not flat, which is good for people to know. It wasn't obvious to me.
  9. There is an article on the NI site entitled "Differences Between Flatten to String.vi and Variant to Flattened String.vi" http://digital.ni.com/public.nsf/allkb/36C...625729E0007AE75 Variants have a functional advantage over flattened strings due to the attribute feature. Personally I've never used attributes, but I am very interested to know whether variants flatten and unflatten data faster than the 'flatten to string' counterpart. Here are two VIs I made to look at the speed difference between the two data abstraction methods. In one test case I only flatten and unflatten some data. In another case I flatten, send to a queue, receive, and unflatten. It is interesting to me that the unflatten activity for both the Variant and String methods is much faster than the flatten activity. On my system the Variant method is always significantly faster (more than 50%). Perhaps someone who is interested will test how the use of Variant attributes affects the flatten/unflatten speed.
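The shape of this benchmark translates directly to other serialization APIs. A rough Python sketch, using `pickle` as a stand-in for LabVIEW's flatten/unflatten primitives, times the two halves of the round trip separately, which is how the post spotted that unflattening is the cheaper direction:

```python
import pickle
import time

# Illustrative payload; any structured data works.
payload = {"name": "test", "samples": list(range(1000))}
N = 2000

t0 = time.perf_counter()
flat = None
for _ in range(N):
    flat = pickle.dumps(payload)      # "flatten"
t_flatten = time.perf_counter() - t0

t0 = time.perf_counter()
out = None
for _ in range(N):
    out = pickle.loads(flat)          # "unflatten"
t_unflatten = time.perf_counter() - t0
```

Timing each direction independently (rather than the round trip) is what makes the flatten/unflatten asymmetry visible.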
  10. Just a note to everyone regarding time measurements: run enough iterations so that the times measured are at least 1 second. If you are running just a few cycles, I find that the state machine overhead makes up a significant portion of the time measurement. I'll post what I find regarding the dispatch terminals. Thanks for the input.
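The advice above is the standard micro-benchmarking discipline in any language: grow the iteration count until the total elapsed time swamps the loop and dispatch overhead, then divide. A minimal sketch (function and parameter names are illustrative):

```python
import time

def measure(fn, min_time=1.0):
    """Run fn in growing batches until at least min_time seconds have
    elapsed, then return the amortized per-call time. Short runs are
    dominated by loop/dispatch overhead, as the post warns."""
    n = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(n):
            fn()
        elapsed = time.perf_counter() - t0
        if elapsed >= min_time:
            return elapsed / n
        n *= 10

per_call = measure(lambda: sum(range(100)), min_time=0.2)
```

Python's standard `timeit` module applies the same batching idea automatically.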
  11. The LV Class feature has some nice control and privacy aspects compared to the age-old method of simply passing around a typedef'd cluster. More specifically, I like how the Class data cannot be unbundled outside of the Class member VIs. Here is an attempt to measure the extra overhead associated with the LV Class. Basically I just measured how long it took to perform 100,000 write-read cycles on a single member of the data type. This test was repeated for both the Cluster and the Class method. Conclusion: the Cluster method is always faster. For extremely simple data types, the cluster method (on my PC) is twice as fast. I did an example of a very complicated data type and the cluster method was 1.5 times faster.
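The tradeoff generalizes: accessor-mediated data hides its internals but pays a call-dispatch cost on every access that raw aggregate access avoids. A hedged Python analogue of the write-read cycle benchmark, with a bare dict standing in for the cluster and accessor methods standing in for the Class member VIs:

```python
import time

class Point:
    """Analogue of an LV Class: the field is private by convention and
    reached only through accessor methods (extra dispatch per access)."""

    def __init__(self):
        self._x = 0

    def set_x(self, v):
        self._x = v

    def get_x(self):
        return self._x

N = 100_000

cluster = {"x": 0}                       # analogue of the typedef'd cluster
t0 = time.perf_counter()
for i in range(N):
    cluster["x"] = i                     # bundle
    _ = cluster["x"]                     # unbundle
t_cluster = time.perf_counter() - t0

p = Point()
t0 = time.perf_counter()
for i in range(N):
    p.set_x(i)
    _ = p.get_x()
t_class = time.perf_counter() - t0
```

The exact ratio varies by runtime, but the direction matches the post's conclusion: the encapsulated form trades some access speed for privacy.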
  12. QUOTE(crelf @ Jan 28 2008, 02:49 PM) yes, I see that now. Thanks.
  13. This is the conclusion I wanted to verify. QUOTE(silmaril @ Jan 27 2008, 12:42 PM) I was hoping to use the event structure for general communication purposes since it would permit 'waiting' for multiple queues simultaneously. LV has a wait on multiple notifiers (if they are of the same data type) but no such function exists for queues -- especially of different data types. I'll stick to queues and notifiers until LV offers software interrupts that are not built on the UI thread.
  14. I attended a LabVIEW class this week and learned how to use Dynamic Dispatch events to accomplish communications between parallel loops (threads). This was extremely exciting to me since the event structure queues up events and can pass dispatch data of any data type. The sexiest thing about this technology is the ability to set up multiple event structures to receive the same dynamically generated user event. (Great for shutdown, error, and debug entry notifications.) Historically I have accomplished most of this functionality using regular Queues that pass variants which are then cast to the appropriate data type. For the most part, I think it works like a champ. I made a quick test VI to compare the performance of the event structure approach to the Queue approach. Observations: 1. For small data types (in units of bytes) the event structure is at least five times slower than the Queue mechanism. 2. My observations were not altered by using data types of increased complexity (structures of arrays of structures). 3. As the data types became more bulky (in units of bytes) the event structure became far slower than the Queue mechanism. 4. I am suspicious that there is an issue with sending a single event to multiple event structures; the last test case in my example locks up the program when an event is dropped. Speed is a relative term, I know, but in the best case I could only pass a simple data type 1000 times a second. I attach the test VI for your comments / improvements.
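The queue-of-variants pattern the post falls back on maps cleanly onto any typed-queue API: each message carries a type tag plus an arbitrary payload, and the receiver dispatches on the tag (the "cast to the appropriate data type" step). A minimal Python sketch with illustrative names, including the shutdown message the post highlights:

```python
import queue
import threading

q = queue.Queue()          # analogue of the LabVIEW Queue refnum
results = []

def receiver():
    # Consumer loop in a parallel thread, like the second loop's
    # Dequeue Element: block, inspect the tag, dispatch or exit.
    while True:
        tag, payload = q.get()
        if tag == "shutdown":
            break
        results.append((tag, payload))

t = threading.Thread(target=receiver)
t.start()
q.put(("data", [1, 2, 3]))     # heterogeneous payloads on one queue
q.put(("status", "ok"))
q.put(("shutdown", None))      # the shutdown notification
t.join()
```

One queue thus carries many data types safely; the tag plays the role the variant's type information plays in the LabVIEW version.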