
Recommended Posts

I am mentoring an FRC team and we were discussing a design where we have a couple of channels of information. One channel is very high frequency and losing data is not too important, so we are using a notifier. We also have a different channel of data which is much lower frequency, but losing any of it would be a big problem, so we are using a queue.

 

Now the tricky part: we would like to be able to wait on data from either source (preferably not a busy wait). I have come up with a couple of possibilities:

 

  1. Busy wait on both, looping between checking both sources, with a 0 ms delay in the loop to prevent starving the rest of the code (see the sketch just after this list).
     
  2. Tweak the queue-based send to also push data to the high-volume notifier; when I get a read on the notifier, also check the queue and process it.
     
  3. User events? Not sure, but this looks right; however, note that the notifications/queue messages are generated in one subVI (which is called by external framework code) and sent to another subVI which has a number of long-running control loops in it.
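
Since LabVIEW diagrams don't paste as text, here is a rough Python analogue of option 1, just to pin down the semantics I mean. A lock-guarded slot stands in for the notifier (latest value only), queue.Queue stands in for the LabVIEW queue, and process_slow/process_fast are hypothetical handlers:

```python
import queue
import threading
import time

lossless_q = queue.Queue()                    # low-rate channel: no loss allowed
fast_slot = {"value": None, "fresh": False}   # notifier-like slot: latest value only
slot_lock = threading.Lock()

def process_slow(msg):                        # hypothetical handlers
    print("slow:", msg)

def process_fast(val):
    print("fast:", val)

def consumer(stop: threading.Event):
    while not stop.is_set():
        busy = False
        try:
            process_slow(lossless_q.get_nowait())   # "dequeue with timeout 0"
            busy = True
        except queue.Empty:
            pass
        value = None
        with slot_lock:                       # "get notification with timeout 0"
            if fast_slot["fresh"]:
                fast_slot["fresh"] = False
                value = fast_slot["value"]
        if value is not None:
            process_fast(value)
            busy = True
        if not busy:
            time.sleep(0)                     # the "0 delay" yield to other code
```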

 

I was hoping for something a little more elegant that I have overlooked. Any pointers?

 

Thanks from an old C programmer still trying to get comfortable in LabVIEW.


My gut reaction is to throw in a third item, probably a notifier. It'd be a "something's ready" notifier. Your consumer would sit idle waiting on this notifier; when it got a notification, it'd dequeue/get notification (both with timeout 0) and figure out what was sent.

 

When you enqueue/send notification, you also hit the "something's ready" notifier.
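
For what it's worth, here is a minimal text sketch of this pattern in Python (since a diagram won't paste here): threading.Event plays the role of the "something's ready" notifier, the producers hit it after every send, and the handlers are hypothetical. The consumer clears the signal before draining, so a signal that arrives mid-drain is not lost:

```python
import queue
import threading

ready = threading.Event()                     # the "something's ready" notifier
lossless_q = queue.Queue()
fast_slot = {"value": None, "fresh": False}
slot_lock = threading.Lock()

def process_slow(msg): print("slow:", msg)    # hypothetical handlers
def process_fast(val): print("fast:", val)

def send_slow(msg):
    lossless_q.put(msg)
    ready.set()                               # hit the ready notifier...

def send_fast(value):
    with slot_lock:
        fast_slot.update(value=value, fresh=True)
    ready.set()                               # ...after every kind of send

def consumer(stop: threading.Event):
    while not stop.is_set():
        if not ready.wait(timeout=0.5):       # sit idle here; no busy wait
            continue
        ready.clear()                         # clear BEFORE draining
        while True:                           # drain the queue, timeout 0
            try:
                process_slow(lossless_q.get_nowait())
            except queue.Empty:
                break
        with slot_lock:                       # check the notifier, timeout 0
            fresh, value = fast_slot["fresh"], fast_slot["value"]
            fast_slot["fresh"] = False
        if fresh:
            process_fast(value)
```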

 

I think the Actor Framework's (AF's) priority messages handle a similar situation. I've never looked too in-depth into the implementation, but it's got a bunch of queues under the hood. It'd probably be worth checking out.


Thanks for the ideas. I was leaning toward your solution after further discussion here as well. I am now going to investigate the Actor Framework, as I had not heard of it previously and it seems like a pretty good fit at first glance. Either way, thanks again.

  • 2 months later...

I know it's an old thread, but I had one idea you might find interesting:

Do a parallel wait, and whichever side receives data sends "dummy" data (e.g. NaN) to the other one. When inspecting the received data, ignore it if it is the dummy value. This way you can make sure both wait nodes will abort when at least one receives data.

 

(Attached image: post-27809-0-32755400-1412894489.png)
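
In text form, a rough Python analogue of the attached diagram might look like the following; both channels are simplified to queues here, and process is a hypothetical handler:

```python
import math
import queue
import threading

q_a = queue.Queue()                # first channel's wait node
q_b = queue.Queue()                # second channel's wait node
DUMMY = float("nan")               # the agreed-on dummy value

def process(name, value):          # hypothetical handler
    print(name, value)

def produce_a(value):
    q_a.put(value)
    q_b.put(DUMMY)                 # unblock the parallel wait on the other channel

def produce_b(value):
    q_b.put(value)
    q_a.put(DUMMY)

def consumer_iteration():
    # One loop iteration with two parallel wait nodes: as in dataflow,
    # BOTH blocking reads must return before the iteration completes.
    results = {}
    def wait_on(name, q):
        results[name] = q.get()    # blocking wait
    t_a = threading.Thread(target=wait_on, args=("a", q_a))
    t_b = threading.Thread(target=wait_on, args=("b", q_b))
    t_a.start(); t_b.start()
    t_a.join(); t_b.join()
    for name, v in results.items():
        if isinstance(v, float) and math.isnan(v):
            continue               # dummy data: ignore it
        process(name, v)
```

One caveat: this relies on the data type having an out-of-band value like NaN; an integer channel would need an agreed-on sentinel value instead.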

 

But personally I would prefer using events. Of course this means you get all the data from your high-frequency channel; if that is not a problem, it could serve as a nice side effect.


As a word of warning, any system of waiting on multiple different channels of information in one loop is very tricky to get right. Not impossible, but one is well advised to try to get away with one channel instead, or alternatively with additional receiving loops, one per channel.


Why not just use the same queue for all data? Have N different loops listening to N different channels, all writing to one queue. You can make your data type a waveform and set t0 on acquisition of the data. You could even go one further and make it a cluster that encapsulates waveform + identifier, so you know at the other end whether it's slow or fast data that's come in.
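
A quick sketch of that cluster idea, transliterated to Python under the same caveats as the earlier sketches (Sample stands in for the waveform + identifier cluster; the handlers are hypothetical):

```python
import queue
import time
from dataclasses import dataclass, field

@dataclass
class Sample:                          # the "waveform + identifier" cluster
    channel: str                       # e.g. "fast" or "slow"
    t0: float = field(default_factory=time.time)   # t0 set at acquisition
    data: list = field(default_factory=list)

shared_q = queue.Queue()               # one queue for all data

def process_slow(s): print("slow:", s) # hypothetical handlers
def process_fast(s): print("fast:", s)

def acquire_fast(values):              # one of N acquisition loops
    shared_q.put(Sample("fast", data=values))

def acquire_slow(values):
    shared_q.put(Sample("slow", data=values))

def consumer(stop):
    while not stop.is_set():
        try:
            s = shared_q.get(timeout=0.5)
        except queue.Empty:
            continue
        if s.channel == "slow":        # identifier tells you what came in
            process_slow(s)
        else:
            process_fast(s)
```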



  • Similar Content

    • By Matt_AM
      Hey fancy folk,
       
      Problem/TL;DR:
      I've been having a problem getting all of my timing to sync up for a 4-station tower I'm running. Part of my code stores a start and stop timestamp to analyze data coming from XNET via XY read, to determine if a motor is assisting or not. When I first start a station, things are aligned; then over time, they drift (which makes sense with clocks based on different crystals). I tried to set the master timebase to the same clock via PXI trig (for some reason my card wouldn't let me connect a clock to PXI star, and I know PXI trig lines can cause double clocking) and to do a soft reset every 24 hours (which resets my CAN cards). After a few days of a station running, the timestamp and XNET Read XY no longer align, and the timestamp can be up to 10 seconds "earlier" than my XNET read. If it would help, I can attach a cycle from my log to show what I mean by things not lining up. I have a workaround that I'll talk about below, but I'd prefer to get to the bottom of why I can't sync my tasks, code, and CAN cards.
       
      Background/hardware setup:
      We have a four-station tower where each station commands a motor in position mode, which is connected to a motor in torque-assist mode against a brake. We are doing lifetime testing of the torque-assist motor. The tower has a PXI 1010 chassis in it with 4 PXI 8512/2 cards (CAN coms), a PXI 6713 (brake set), a PXI 6602 (IG set), a PXI 6052E (coms for the SCXI chassis in the PXI 1010), and 2 SCXI 1121s (allowing 8 torque-sensor readings, 2 per station). There is 1 PXI 8512/2 per station; each motor gets its own port because all 8 motors have the same arbitration ID, but that's a different conversation. I am using LabVIEW 2015 on a PC which communicates with the PXI 1010 in this tower. We were using 8.6, and I "upgraded to our latest and greatest" a few months back when I updated the code.
       
      Software attempt at sync:
      After the first few days of running the code and realizing things weren't staying sync'd, I started trying to give everything a common clock. The issue I ran into: how do I properly sync the PXI 8512/2 to an SCXI 1121? My thought is that if I set the same master timebase for both, then the PXI 8512/2 and SCXI 1121 should each be able to divide the master timebase down to the proper sampling rate, and since they are using the same master timebase, everything will be good. So in my test init section, I use "DAQmx signal connect" to connect the PXI 6713's 20 MHz timebase to PXI Trig 7 and set the analog-in task from the SCXI 1121s to use PXI Trig 7 as the master timebase. Likewise, in the station init, I am using Trig 7 as the master timebase for the PXI 8512/2. I am performing an XNET Read XY for my sessions and a waveform read (later converted to XY data) for the SCXI 1121 analog-in task. I am displaying all of this data on an XY graph.
       
      Problem in my code:
      I created 1 station and made it a preallocated-clone reentrant VI. In the station main, I have an XNET Read XY session loop, a log loop, a state machine, and a time-monitoring loop. The XNET Read XY loop reads all motor feedback data (command current of the position motor) and shoves it into an XY array notifier (used in the station's state machine as well as the top level's display loop). The station's state machine is the part that sends out the command to the XNET write, which commands the position motor to move in a desired movement profile while also setting the torque-assist motor to its proper mode. When I go to write these values, I acquire the timestamp of the state machine and store it in a notifier. Once the desired movement profile is complete, I store the timestamp again. The state machine then checks the XNET Read XY array notifier and grabs the data between the start and end timestamps. It then analyzes the position motor's commanded current to determine if the desired movement was assisted or not.
       
      Note: I was originally using the analog torque-in from the 1121, but noticed that the shift between the station's timestamp and my XY torque-in data (converted from waveform) grew faster than with the XNET Read XY of the commanded current. Ideally, I'd like to move back to my torque sensor.
       
      Failure point:
      The main failure point of my code is when I compare the state machine's start and end timestamps to the XY data from my XNET Read XY timestamps. The state machine's timestamp can get 5-10 seconds "faster" than the XNET read timestamp. I use quotes because I am monitoring my state machine and the front-panel XY graph, and when the state machine tells the position motor to start moving, my XY graph updates as well and doesn't lag 5-10 seconds behind. I believe the reason for the time difference is clock skew between the code and the PXI 8512/2. I've looked in multiple locations for where LV actually gets its timestamp via the "Get Date/Time In Seconds" VI but can't confirm anything. My general assumption is that it queries the PC's clock for that information.
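      One generic mitigation I've considered (sketched in plain Python, not NI API, and the names are made up): if you periodically record paired readings of the PC clock and the device clock, a least-squares fit gives a map from device time into host time that corrects both the fixed offset and the drift:

      ```python
      # Hypothetical sketch: estimate offset + drift between two clock domains
      # from paired samples, then map device timestamps into host time before
      # comparing them with state-machine timestamps.
      def fit_clock_map(host_ts, dev_ts):
          n = len(dev_ts)                     # needs >= 2 spread-out samples
          mx = sum(dev_ts) / n
          my = sum(host_ts) / n
          sxx = sum((x - mx) ** 2 for x in dev_ts)
          sxy = sum((x - mx) * (y - my) for x, y in zip(dev_ts, host_ts))
          a = sxy / sxx                       # drift (should be close to 1.0)
          b = my - a * mx                     # offset
          return lambda t: a * t + b

      # dev_to_host = fit_clock_map(host_samples, device_samples)
      # aligned = dev_to_host(device_timestamp)  # now comparable to PC timestamps
      ```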
       
      Work around:
      I have 2 bits in my XNET write frame for the position motor. I can use those 2 bits as flags for when the code tells the position motor to start moving and when movement finishes, and set the "Echo TX" setting for XNET to true. This would allow me to read when messages were actually sent out from my card, so I can determine when the start and end commands were sent to the position motor. I'd repurpose the get-timestamp notifier to store data in my XNET Read XY loop instead of my state machine.
       
      Where I am at:
      Any tips or insight into synchronizing everything would be greatly appreciated. I've been reading NI documentation about how things should be handled behind the scenes, but couldn't figure out how to get my PXI 8512/2 and SCXI 1121 clocks sync'd without using DAQmx connect terminals. I think the only way I can actually get things synced properly is by somehow getting the clock from my PC to my SCXI, but I have no idea how to do that.
       
      I am thinking about going down my workaround path because that is the path of least resistance at this point, but I am genuinely curious how to properly sync all my stuff. I feel like something like this will plague any sort of long-term life-cycle testing. I'd much rather spend the time to design it right now than suffer from a half-baked attempt when I have to fix the code later.
       
      If you'd like me to add snippets of code or delve into more details, I can. 
       
      Thanks for reading,
      Matt
    • By Aniket Gadekar
      Hello Everyone,
      I have created this feature to create named variables of any data type in memory and access their values, by name, from any part of the code within the same scope.
      These variables store an instantaneous value. The best use case for this toolkit is to acquire data, set variable values, and read them from any loop. Do not use it for read-modify-write.
      Once variables are created in memory, they can be grouped, and you can access their values by name.
      You can create a variable of any data type and access its value by its name. I have tested this toolkit for memory and performance; it is much faster than CVT and the Tag Bus Library.
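      As a rough illustration of the concept only (not the toolkit's actual implementation; all names here are made up), a minimal named-variable store looks like this in Python:

      ```python
      import threading
      from typing import Any, Dict

      class VariableStore:
          """Named variables holding the last-written (instantaneous) value.
          Fine for write-here/read-elsewhere; NOT for read-modify-write."""
          def __init__(self):
              self._lock = threading.Lock()
              self._vars: Dict[str, Any] = {}

          def write(self, name: str, value: Any) -> None:
              with self._lock:
                  self._vars[name] = value

          def read(self, name: str) -> Any:
              with self._lock:
                  return self._vars[name]

      store = VariableStore()
      store.write("motor.speed", 42.0)   # set from an acquisition loop
      print(store.read("motor.speed"))   # read from any other loop in scope
      ```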
      Please check it out and let me know your suggestions. It requires LabVIEW 2015 SP1.
      BR,
      Aniket Gadekar,
      aniket99.gadekar@gmail.com
      DataVariableToolkit.zip
    • By Calorified
      I am trying to read the value of some data in a while loop from a subVI (attached as FTB WPF UDP below) into the top-level VI (attached as Proportional Controller below), but I find that the data times out in my top-level VI, whereas it runs smoothly within the subVI. I have used the while loop, and the timing in the subVI is synchronized to the external code that sends data via UDP to the subVI.
      Is there a way you think this can be fixed?
      Below is my attached code.
      Thank you!
       
      Proportional Controller.vi
      FTB WPF UDP - Original.vi
    • By _Y_
      Hi all,
       
      I ran into some strange queue behavior that is probably related to race conditions. The problem has been eliminated (a way around it was found), so there is no need for a quick solution. However, I would be happy to understand the reasons for such a phenomenon.
       
      BD of the "problematic" VI is attached. The queue size is limited to 1. The program consists of multiple asynchronous processes.
       
      Execution reaches the Enqueue Element node, then stops. The timeout never happens; the program hangs. However, there is more: if I try to execute the VI step-wise, everything works fine until execution enters the internal Case Structure. When it enters, the three Stepping buttons become disabled.
       
      If I try to decompose the program, i.e. to use the same VI in a simple single-process program, both problems disappear.
       
      Have you encountered such strange behavior? Any ideas that could help this poor creature (me) avoid sleepless nights?
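
      For reference, a size-1 bounded queue blocks the same way in any language: a plain blocking enqueue hangs until a consumer makes room. It doesn't explain why the Enqueue Element timeout never fired, but here is a small Python sketch of the blocking behaviour and the usual "lossy update" workaround (which can itself race if there are multiple writers):

      ```python
      import queue

      q = queue.Queue(maxsize=1)       # queue size limited to 1

      def lossy_update(q, value):
          """Replace the stale element instead of blocking on a full queue."""
          try:
              q.get_nowait()           # empty the single slot if occupied
          except queue.Empty:
              pass
          try:
              q.put_nowait(value)
          except queue.Full:
              pass                     # another writer refilled it first; drop ours

      lossy_update(q, 1)
      lossy_update(q, 2)               # succeeds; a plain q.put(2) would hang here
      print(q.get_nowait())            # -> 2
      ```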

    • By Michael Aivaliotis
      I did a presentation at NIWeek 2013 on the new user event features in LabVIEW 2013. I will post a link here when it's ready for online consumption. For now, I'd like to start a discussion on what all the LAVA members think of the changes and additions. If you have questions on how one feature or another work, please post here and we'll get it answered.
       
      To summarize, here is what was changed or added:
      New - Event Inspector Window (you're gonna love this!)
      New - High Priority events
      New - Flush Event Queue function
      New - VI Scripting methods and properties for events
      New - Mouse Scroll Wheel event
      Improvements to the Edit Events dialog:
        - It's now resizable (finally!)
        - You can filter-search the list of event sources and events, for easy navigation.
        - You can limit instances of the event in the static queue (similar to the Flush Event Queue function).
      Finally, there was a behaviour change: non-handled, dynamically registered events no longer reset the Event structure timeout terminal. (In LabVIEW 2012 and older, they did.)


      Let's keep this positive. We all need to learn how to use these new features and how to integrate them within our frameworks. I know a lot of you are using user events as the main communication mechanism for your processes and modules. Let's figure out how to make our code better with all this new cool stuff.