
I am mentoring an FRC team, and we were discussing a design with a couple of channels of information. One channel is very high frequency and losing data is not important, so we are using a notifier. Another channel is much lower frequency, but losing any data would be a big problem, so we are using a queue.
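
For fellow text-language programmers, the distinction can be modeled roughly like this (a minimal Python sketch; the class and method names are invented, not LabVIEW API): a notifier is a lossy latest-value channel where a new send overwrites an unread value, while a queue buffers everything.

```python
import threading

class Notifier:
    """Lossy latest-value channel, roughly analogous to a LabVIEW notifier:
    a new send overwrites any unread value. Sketch only; invented names."""

    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._seq = 0                    # counts sends, so waiters only see new data

    def send(self, value):
        with self._cond:
            self._value = value          # overwrite: an unread old value is lost
            self._seq += 1
            self._cond.notify_all()

    def wait(self, seen_seq=0):
        """Block until a send newer than seen_seq; return (value, seq)."""
        with self._cond:
            self._cond.wait_for(lambda: self._seq > seen_seq)
            return self._value, self._seq

# The lossless counterpart is just a FIFO, e.g. Python's queue.Queue.
```

Two quick sends before a read illustrate the lossy behavior: the reader only ever sees the latest value.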

 

Now the tricky part: we would like to be able to wait on data from either source (preferably without busy-waiting). I have come up with a couple of possibilities:

 

  1. Busy wait on both, looping between checks of both sources with a 0 ms delay in the loop to prevent starving the rest of the code.
     
  2. Tweak the queue-based send to also fire the high-volume notifier; when I get a read on the notifier, also check the queue and process it.
     
  3. User events? Not sure, but this looks right; however, note that the notifications/queue messages are generated in one subVI (called by external framework code) and sent to another subVI containing a number of long-running control loops.

 

I was hoping for something a little more elegant that I have overlooked. Any pointers?

 

Thanks, from an old C programmer still trying to get comfortable in LabVIEW.


My gut reaction is to throw in a third item, probably a notifier: a "something's ready" notifier. Your consumer would sit idle waiting on this notifier; when it got a notification, it would dequeue/get notification (both with timeout 0) and figure out what was sent.

 

When you enqueue/send notification, you also fire the "something's ready" notifier.
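
A rough text-language model of this pattern (a Python sketch; the names and the `threading.Event`-based signal are my stand-ins, not anything LabVIEW-specific):

```python
import queue
import threading

fast_slot = []                 # latest-value "notifier" stand-in (at most one item)
slot_lock = threading.Lock()
slow_q = queue.Queue()         # lossless channel
ready = threading.Event()      # the extra "something's ready" notifier

def send_fast(value):
    with slot_lock:
        fast_slot[:] = [value]         # overwrite: losing old fast data is OK
    ready.set()

def send_slow(value):
    slow_q.put(value)                  # never lossy
    ready.set()

def consume_once():
    """Sit idle until something is ready, then poll both sources with timeout 0."""
    ready.wait()
    ready.clear()                      # clear *before* polling, so a send that
                                       # races with the polling re-signals us
    with slot_lock:
        fast = fast_slot.pop() if fast_slot else None
    slow = []
    while True:
        try:
            slow.append(slow_q.get_nowait())
        except queue.Empty:
            break
    return fast, slow
```

The clear-before-poll ordering matters: clearing after polling could swallow a signal for data that arrived mid-poll, while clearing first only risks a harmless extra (empty) wakeup.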

 

I think the Actor Framework's priority messages handle a similar situation. I've never looked too deeply into the implementation, but it has a bunch of queues under the hood. It'd probably be worth checking out.


Thanks for the ideas. I was leaning toward your solution after further discussion here as well. I am now going to investigate the Actor Framework, as I had not heard of it previously, and it seems like a pretty good fit at first glance. Either way, thanks again.

  • 2 months later...

I know it's an old thread, but I had one idea you might find interesting:

Do a parallel wait, and whichever branch receives data sends "dummy" data (e.g. NaN) to the other one. When inspecting the received data, ignore it if it is the dummy value. This way you can make sure both wait nodes abort when at least one receives data.
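
In text-language terms the trick looks something like this (a Python sketch under my own assumptions: both channels are modeled as queues, and a NaN sentinel unblocks the sibling wait):

```python
import math
import queue
import threading

fast_ch = queue.Queue()    # stands in for the notifier wait (invented setup)
slow_ch = queue.Queue()    # stands in for the queue wait

def wait_on_both():
    """Two parallel blocking waits; whichever fires first pushes a NaN
    sentinel to the other channel so its wait also returns. Sketch only."""
    results = {}

    def waiter(name, own, other):
        value = own.get()                          # blocking wait
        if isinstance(value, float) and math.isnan(value):
            return                                 # dummy data: ignore it
        results[name] = value
        other.put(float('nan'))                    # abort the sibling wait

    a = threading.Thread(target=waiter, args=('fast', fast_ch, slow_ch))
    b = threading.Thread(target=waiter, args=('slow', slow_ch, fast_ch))
    a.start(); b.start()
    a.join(); b.join()
    return results
```

One corner case worth noting: if both channels fire in the same round, each side leaves a stale sentinel that the next call discards, so a real consumer would run this in a loop and tolerate empty results.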

 

[Attached image: post-27809-0-32755400-1412894489.png]

 

But personally I would prefer using events. Of course, this means you get all the data from your high-frequency channel; if that is not a problem, it could be a nice side effect.


As a word of warning, any system of waiting on multiple channels of information in one loop is very tricky to get right. Not impossible, but one is well advised to try to get away with one channel instead, or alternatively with additional receiving loops, one per channel.


Why not just use the same queue for all data? Have N different loops listening to N different channels, all writing to the one queue. You can make your data type a waveform and set t0 on acquisition of the data. You could even go one further and make it a cluster that encapsulates waveform + identifier, so at the other end you know whether slow or fast data has come in.
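
The tagged-message idea sketched above might look like this in Python (the channel names and tuple layout are my assumptions, standing in for the waveform + identifier cluster):

```python
import queue
import time

msg_q = queue.Queue()      # one queue shared by every producer loop

def send(channel, payload):
    """Producer side: tag each message with its channel identifier and a
    t0 timestamp -- the cluster analog. Names here are invented."""
    msg_q.put((channel, time.time(), payload))

def consume_one():
    """Consumer side: a single blocking wait point, then dispatch on the tag."""
    channel, t0, payload = msg_q.get()
    if channel == 'fast':
        pass    # high-rate data: safe to decimate or drop stale items here
    elif channel == 'slow':
        pass    # must-not-lose data: always process
    return channel, payload
```

One caveat: merging the channels makes the fast one lossless too, so if the consumer falls behind, the backlog grows; dropping stale 'fast' messages inside the consumer restores the notifier-like behavior.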

