Showing results for tags 'xnet'.
-
Hey fancy folk,

Problem/TL;DR: I've been having a problem getting all of the timing to sync up on a 4-station tower I'm running. Part of my code stores a start and a stop timestamp, which I use to analyze data coming from XNET via an XY read and determine whether a motor is assisting or not. When I first start a station, everything is aligned; over time, things drift (which makes sense for clocks based on different crystals). I tried setting the master timebase to a common clock via a PXI trigger line (for some reason my card wouldn't let me connect a clock to PXI Star, and I know PXI trigger lines can cause double clocking) and doing a soft reset every 24 hours (which resets my CAN cards). After a few days of a station running, the timestamp and the XNET Read XY data no longer align, and the timestamp can be up to 10 seconds "earlier" than my XNET read. If it would help, I can attach a cycle from my log to show what I mean by things not lining up. I have a workaround that I'll describe below, but I'd prefer to get to the bottom of why I can't sync my tasks, code, and CAN cards.

Background/hardware setup: We have a four-station tower where each station commands a motor in position mode, which is coupled to a motor in torque-assist mode against a brake. We are doing lifetime testing of the torque-assist motor. The tower has a PXI-1010 chassis with 4 PXI-8512/2 cards (CAN comms), a PXI-6713 (brake set), a PXI-6602 (IG set), a PXI-6052E (comms for the SCXI half of the PXI-1010), and 2 SCXI-1121s (allowing 8 torque sensor readings, 2 per station). There is 1 PXI-8512/2 per station - each motor gets its own port because all 8 motors have the same arbitration ID, but that's a different conversation. I am using LabVIEW 2015 on a PC that communicates with the PXI-1010 in this tower. We were on LabVIEW 8.6, and I "upgraded to our latest and greatest" a few months back when I updated the code.

Software attempt at sync: After the first few days of running the code and realizing things weren't staying synced, I started trying to give everything a common clock. The issue I ran into: how do I properly sync a PXI-8512/2 to an SCXI-1121? My thought is that if I give both the same master timebase, the PXI-8512/2 and the SCXI-1121 should each divide that timebase down to their proper sampling rates, and since they share the same master timebase, everything will stay aligned. So in my test init section, I use DAQmx Connect Terminals to route the PXI-6713's 20 MHz timebase onto PXI_Trig7 and set the analog input task for the SCXI-1121s to use PXI_Trig7 as its master timebase. Likewise, in the station init, I use PXI_Trig7 as the master timebase for the PXI-8512/2 (a rough sketch of this routing follows below). I perform an XNET Read XY for my sessions and a waveform read (later converted to XY data) for the SCXI-1121 analog input task, and I display all of this data on an XY graph.
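For concreteness, here is roughly what that init routing does, translated to NI's nidaqmx Python API. This is a sketch only: device and channel names ('PXI6713', 'PXI6052E', 'SC1Mod1') are placeholders for whatever MAX assigned, and I'm assuming the master-timebase task properties map as shown.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Test init: route the PXI-6713's 20 MHz onboard timebase onto PXI_Trig7,
# the same hookup "DAQmx Connect Terminals" makes in the LabVIEW init.
system = nidaqmx.system.System.local()
system.connect_terms('/PXI6713/20MHzTimebase', '/PXI6713/PXI_Trig7')

# SCXI-1121 analog input task (read through the PXI-6052E): divide its
# sample clock down from the shared timebase on PXI_Trig7 instead of
# the card's own crystal.
with nidaqmx.Task() as ai_task:
    ai_task.ai_channels.add_ai_voltage_chan('SC1Mod1/ai0:1')  # 2 torque sensors per station
    ai_task.timing.master_timebase_src = '/PXI6052E/PXI_Trig7'
    ai_task.timing.master_timebase_rate = 20_000_000.0
    ai_task.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
    ai_task.start()
    data = ai_task.read(number_of_samples_per_channel=1000)

# Station init, XNET side: there is no DAQmx call for the 8512/2. In
# LabVIEW that hookup is XNET Connect Terminals with source "PXI_Trig7"
# and destination "MasterTimebase" on each CAN session.
```

The XNET half having no DAQmx equivalent is the crux of my question: XNET Connect Terminals is the only hook I've found for pointing the 8512/2 at the shared timebase.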
Problem in my code: I created 1 station and made it a preallocated-clone reentrant VI. In the station main, I have an XNET Read XY session loop, a log loop, a state machine, and a time-monitoring loop. The XNET Read XY loop reads all the motor feedback data (commanded current of the position motor) and shoves it into an XY array notifier (used in the station's state machine as well as the top level's display loop). The state machine is the part that sends the command to the XNET write, which drives the position motor through a desired movement profile while also setting the torque-assist motor to its proper mode. When I go to write these values, I capture the state machine's timestamp and store it in a notifier. Once the desired movement profile is complete, I store the timestamp again. The state machine then checks the XNET Read XY array notifier and grabs the data between the start and end timestamps. It then analyzes the position motor's commanded current to determine whether the desired movement was assisted or not. Note: I was originally using the analog torque input from the 1121, but I noticed that the offset between the station's timestamp and my XY torque data (converted from the waveform) drifted faster than the XNET Read XY of the commanded current did. Ideally, I'd like to move back to my torque sensor.

Failure point: The main failure point of my code is where I compare the state machine's start and end timestamps to the timestamps in the XY data from my XNET Read XY (sketched below the post). The state machine's timestamp can get 5-10 seconds "faster" than the XNET read timestamp. I use quotes because I am monitoring my state machine and the front-panel XY graph, and when the state machine tells the position motor to start moving, my XY graph updates as well and doesn't lag 5-10 seconds behind. I believe the time difference is due to clock skew between the code and the PXI-8512/2. I've looked in multiple places for where LabVIEW actually gets its timestamp via the "Get Date/Time In Seconds" VI but can't confirm anything; my general assumption is that it queries the PC's clock.

Work around: I have 2 spare bits in my XNET write frame for the position motor. I can use those 2 bits as flags for when the code tells the position motor to start moving and when movement has finished, and set XNET's "Echo TX" setting to True. That would let me read when messages were actually sent out from my card, so I can determine when the start and end commands were sent to the position motor. I'd repurpose the timestamp notifier to store data in my XNET Read XY loop instead of my state machine (also sketched below).

Where I am at: Any tips or insight into synchronization between everything would be greatly appreciated. I've been reading NI documentation about how things should be handled behind the scenes, but I couldn't figure out how to get my PXI-8512/2 and SCXI-1121 clocks synced without using DAQmx Connect Terminals. I think the only way I can actually get things synced properly is by somehow getting the clock from my PC to my SCXI, but I have no idea how to do that. I am leaning toward the workaround because it is the path of least resistance at this point, but I am genuinely curious how to properly sync all my stuff. I feel like something like this will plague any sort of long-term life-cycle testing, and I'd much rather spend the time to design it right now than suffer from a half-baked attempt when I have to fix the code later. If you'd like me to add snippets of code or delve into more details, I can.

Thanks for reading,
Matt
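To make the failure point concrete, the compare step amounts to the following (plain Python, hypothetical names). The catch is that this is only valid if t_start/t_stop and the sample timestamps come from the same clock, which is exactly what drifts apart here.

```python
from bisect import bisect_left, bisect_right

def samples_between(xy, t_start, t_stop):
    """Pull the (timestamp, value) pairs that landed between the state
    machine's start and stop marks. xy must be sorted by timestamp,
    as the XNET Read XY output per signal already is."""
    times = [t for t, _ in xy]
    return xy[bisect_left(times, t_start):bisect_right(times, t_stop)]
```

Once the PC clock runs 5-10 seconds ahead of the XNET hardware timestamps, this window slides off the data it was meant to bracket, even though nothing looks late on the front panel.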
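And the workaround, sketched the same way using nixnet (NI's Python XNET API). Assumptions flagged: I'm assuming the echo property is spelled echo_tx and that frames expose identifier/payload/timestamp as the nixnet README shows; the arbitration ID and bit positions are placeholders for my real frame layout.

```python
import nixnet

START_BIT, STOP_BIT = 0x01, 0x02   # the 2 spare bits in the position-motor frame
POSITION_MOTOR_ID = 0x123          # placeholder arbitration ID

with nixnet.FrameInStreamSession('CAN1') as session:
    session.intf.echo_tx = True    # assumed name for "Interface:Echo Transmit?"
    t_start = t_stop = None
    while t_stop is None:
        for frame in session.frames.read(64):
            if frame.identifier != POSITION_MOTOR_ID:
                continue                    # only the echoed command frames matter
            flags = frame.payload[0]
            if (flags & START_BIT) and t_start is None:
                t_start = frame.timestamp   # hardware timestamp of the echoed TX
            elif flags & STOP_BIT:
                t_stop = frame.timestamp
                break
```

Since the start/stop markers and the XY samples would then both carry the 8512/2's own hardware timestamps, the PC clock drops out of the comparison entirely.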
-
Hi, I have contacted NI sales services, but as usual it has been a great frustration, so I will try to get some support here. Basically, for a project I need 2 CAN ports, and I decided to go with XNET and CompactDAQ. I have 2 solutions I am trying to choose between:

Solution 1 --> One 4-slot chassis with 2 NI-9862 modules (one port per module)
Solution 2 --> One 1-slot chassis with 1 NI-9860 module (this module has 2 ports)

I am confident that solution 1 will work well, since I already did a project in the past with one 4-slot chassis (cDAQ-9174) and one NI-9862 module. But going with solution 2 would allow me to cut cost significantly. I just want to make sure it will be absolutely seamless and transparent for the software (see the sketch below for what I mean). Does anybody have experience with the NI-9860? Can it be considered the equivalent of 2x NI-9862 as far as the software is concerned (LabVIEW driver), or does it give up some performance, flexibility, or anything else? Thanks!
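For context, in the code each port is just an XNET interface name, so what I'm hoping is that the NI-9860's two ports enumerate exactly like two NI-9862s do - something like this sketch (using nixnet, NI's Python XNET API; CAN1/CAN2 stand in for whatever names MAX assigns):

```python
import nixnet

# Two independent sessions, one per port. Today this is how two NI-9862
# modules look to the driver; the hope is that a single NI-9860
# enumerates the same way, as two interfaces with their own names.
with nixnet.FrameInStreamSession('CAN1') as port_a, \
        nixnet.FrameInStreamSession('CAN2') as port_b:
    port_a.intf.baud_rate = 500000
    port_b.intf.baud_rate = 500000
    frames_a = port_a.frames.read(10)   # each port reads independently
    frames_b = port_b.frames.read(10)
```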