Cluster vs. Waveform Performance

I have a VI that reads a bunch of channels of data, puts each channel into a cluster with some other information (sample rate, channel number, location, etc.), and puts that cluster into a queue. Somewhere else in my program is a VI that reads the queue, pulls out the data and info, and does stuff with it.

Thanks to all the time I've been spending with sound VIs lately, I've been working with waveforms more. I'm wondering what the impact on my code efficiency would be if I were to use the waveform data type instead of a cluster, and store ancillary channel information as attributes.

My program is currently loping along doing the cluster transfer. But I only have 65 channels. I am about to start a system that will have 1300+ channels. So I've been going over code with a fine-toothed comb to figure out where I can make things more efficient (faster/less memory usage). My first thought is that using a native LV data type like a waveform to transfer data and attributes around is going to be more efficient than a cluster. Is that naive? Does it even matter? Using a cluster seems much more readable and easier to code, IMHO. But if using a waveform will help move things along better, that might be important for the new project.
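LabVIEW is graphical, so the trade-off can only be sketched by analogy, but roughly: a cluster is like a fixed struct whose fields are resolved at compile time, while waveform attributes behave like a name-to-variant lookup table. A hypothetical Python sketch (not LabVIEW code, and the names are made up for illustration):

```python
# Rough Python analogy, NOT LabVIEW: a cluster as a fixed struct vs.
# a waveform carrying ancillary info in an attribute dictionary.
from dataclasses import dataclass, field

@dataclass
class ChannelCluster:            # cluster: fields known up front
    data: list
    sample_rate: float
    channel: int
    location: str

@dataclass
class WaveformLike:              # waveform: extras live in a by-name lookup
    data: list
    attributes: dict = field(default_factory=dict)

clu = ChannelCluster([0.1, 0.2], 1000.0, 7, "deck A")
wf = WaveformLike([0.1, 0.2],
                  {"sample_rate": 1000.0, "channel": 7, "location": "deck A"})

# Same information either way; the attribute route just adds a by-name
# lookup (and, in LabVIEW, a conversion to/from variant) on every access.
assert clu.sample_rate == wf.attributes["sample_rate"]
```

The point of the sketch is only that the attribute route carries per-access lookup overhead that a plain cluster field does not; it says nothing about LabVIEW's actual internals.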

Any thoughts?

Cat


I'm pretty sure a waveform is just a specialized cluster with a unique wire appearance, like an error cluster. Although it's not definitive proof of this, the help for flattened data says "LabVIEW flattens waveforms like clusters." I'd use the waveform data type if it's convenient, but I doubt that performance will be different from using an equivalent cluster.


I think the question is more about moving the additional non-waveform elements of a cluster type into the attributes of a waveform type.

Personally, I'm going to guess that the overhead of type conversion/flattening into the attribute tree will actually slow down your code and probably consume a bit more memory. Not to mention the ergonomics point you already touched on.

If you know your waveforms get passed around a lot, it might be a good idea to look at DVRs so you can ensure copies are only made when absolutely necessary. After that, focus on anywhere you extract data from the waveforms and make sure you're touching as little as necessary for whatever operation is being done.
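The DVR suggestion can be sketched by analogy with NumPy views (again, a Python analogy, not LabVIEW): operate through a reference to the one buffer instead of handing out copies at every hop.

```python
# Rough analogy for the DVR suggestion, NOT LabVIEW: pass a view/reference
# to large data and mutate in place, instead of copying at every hop.
import numpy as np

big = np.ones(1_000_000)       # stand-in for a large waveform (~8 MB)

def double_copy(a):
    return a * 2.0             # allocates a whole second buffer

def double_in_place(a):
    a *= 2.0                   # mutates through the reference; no copy
    return a

view = big[:]                  # a view shares big's buffer, like a DVR handle
double_in_place(view)

assert view.base is big        # no new buffer was created
assert big[0] == 2.0           # the change is visible through the original
```

The analogy is loose (a DVR also serializes access), but the memory argument is the same: one buffer, many references.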


On a project once I needed to take a large sample of data, then break it up into chunks and analyze each chunk. I figured I'd use the waveform data type, because the analysis I was doing used a few NI VIs that took the waveform datatype. So I read the data, used the split waveform VI, then analyzed. I found that my VI was running really slowly, and the slowness came from the split and concatenate waveform VIs. It was much faster to read as an array of doubles, split or concatenate the array, then turn it into a waveform for the analysis.

I wanted to tell this story because in my case it (for some reason) was much more efficient to split and merge arrays and then convert to a waveform than it was to work with waveforms from the start.
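The workaround described above can be sketched in Python (an analogy only; the dict fields `t0`/`dt`/`Y` just mimic a waveform's timing and data components): split the bare sample array first, then attach metadata to each chunk once at the end.

```python
# Sketch of the workaround above, NOT LabVIEW: split the raw samples,
# then wrap each chunk as a "waveform" afterwards, instead of splitting
# a waveform with its metadata in tow.
import numpy as np

samples = np.arange(10.0)      # raw doubles straight from the read
t0, dt = 0.0, 0.001

# Split the bare array (cheap: no metadata to carry along)...
chunks = np.split(samples, 2)

# ...then attach timing info to each chunk once, at the end.
waveforms = [{"t0": t0 + i * len(c) * dt, "dt": dt, "Y": c}
             for i, c in enumerate(chunks)]

assert len(waveforms) == 2
assert abs(waveforms[1]["t0"] - 0.005) < 1e-12
assert np.array_equal(waveforms[1]["Y"], samples[5:])
```

Each chunk's start time is recomputed from its position, so nothing is lost by deferring the wrap.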

Waveforms are clusters.

Short and to the point. :) Thanks!

I'm assuming that since you are using clusters, you are not using RT FIFOs. That said, if you ever do use RT FIFOs, I believe waveform attributes are lost in transit, so be aware of this.

No, I'm not using RT FIFOs. But this is good to know if I ever do.

Personally, I'm going to guess that the overhead of type conversion/flattening into the attribute tree will actually slow down your code and probably consume a bit more memory.

I agree, I was hoping that using a waveform data type was going to magically make all that worthwhile in the long run. But since a waveform is just a cluster...

If you know your waveforms get passed around a lot, it might be a good idea to look at DVRs so you can ensure copies are only made when absolutely necessary.

Thanks for reminding me about DVRs. I looked at them briefly when I upgraded to LV11 (from 8.6.1), but it didn't help the application I was working on at that time. This app might benefit.

I wanted to tell this story because in my case it (for some reason) was much more efficient to split and merge arrays and then convert to a waveform than it was to work with waveforms from the start.

Huh. That is a little counter-intuitive.

I think I am going to stay away from waveforms, unless there is a real driving reason to use them.


I'm assuming that since you are using clusters, you are not using RT FIFOs. That said, if you ever do use RT FIFOs, I believe waveform attributes are lost in transit, so be aware of this.

Citation needed. I'm not saying you're wrong, but there's nothing in the design of the RT FIFOs (they share a lot with the Queues) that would necessitate this, so I'd be surprised. It may be something in the transmission requirements that I'm not aware of. Please test and confirm this.
Citation needed. I'm not saying you're wrong, but there's nothing in the design of the RT FIFOs (they share a lot with the Queues) that would necessitate this, so I'd be surprised. It may be something in the transmission requirements that I'm not aware of. Please test and confirm this.

I don't have a citation, but here's a snippet to show the issue.

[attached snippet: post-11742-0-95393300-1352228252.png]

I have very little experience with RT FIFOs, but I expect this arises from their needing constant-sized elements, and thus not accepting variants.


Weird.

The constant-size thing can't be the issue... after all, the waveform itself contains a variable-sized array for the numbers. The array of variants shouldn't have any impact.

Can't explain it, but useful to know.

...with attributes.

The attributes are just another field of the cluster; they just get exposed on the block diagram differently. Still just a cluster.

I believe the support for waveforms is a bit of a kludge. You need to specify the waveform size when you create the FIFO, so I suspect that under the hood the primitive is mutating the waveform to a constant-sized element, discarding the variant, and not transmitting the native cluster. Pure conjecture, of course...


I should clarify "kludge", though. I don't mean to imply it's bad code. To reduce jitter it makes sense that RT FIFOs would have to be typed to a fixed-size element. However, somewhere along the line there must have been a desire/requirement to support the native waveform type, and since waveforms are variable-sized, some form of compromise had to be made. I suspect waveforms are handled as a special case by the primitives, since LabVIEW does not expose constant-sized arrays/waveforms to us.
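The conjecture above can be illustrated with a toy Python sketch (pure analogy, not actual LabVIEW internals; `enqueue_fixed` and the dict layout are made up for illustration): a preallocated, fixed-size slot has room for the sample array but nowhere to put variable-sized attributes, so they get dropped.

```python
# Toy illustration of the conjecture above, NOT LabVIEW internals:
# a fixed-size FIFO slot copies only the fixed-size parts of a
# waveform; the variable-sized attribute variant has no home.
import numpy as np

SLOT = 8                                 # element size fixed at FIFO creation

def enqueue_fixed(slot, wf):
    """Copy timing and samples into the preallocated slot; drop attributes."""
    n = min(len(wf["Y"]), SLOT)
    slot[:n] = wf["Y"][:n]
    return {"t0": wf["t0"], "dt": wf["dt"], "Y": slot[:n].copy()}

wf_in = {"t0": 0.0, "dt": 0.001, "Y": np.arange(4.0),
         "attributes": {"channel": 7}}   # ancillary info riding along
slot = np.zeros(SLOT)
wf_out = enqueue_fixed(slot, wf_in)

assert "attributes" not in wf_out        # attributes lost in transit
assert np.array_equal(wf_out["Y"], wf_in["Y"])
```

This matches the observed behavior in the snippet earlier in the thread, without claiming anything about how the primitives actually do it.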

