Hey fancy folk,
I've been having a problem getting all of my timing to sync up for a 4-station tower I'm running. Part of my code stores a start and stop timestamp, which I use to analyze data coming from XNET via XY reads to determine whether a motor is assisting or not. When I first start a station, things are aligned; over time, they drift (which makes sense with clocks based on different crystals). I tried setting the master timebase to the same clock via a PXI trigger line (for some reason my card wouldn't let me connect a clock to PXI Star, and I know PXI trigger lines can cause double clocking) and doing a soft reset every 24 hours (which resets my CAN cards). After a few days of a station running, the timestamp and the XNET Read (XY) data no longer align, and the timestamp can be up to 10 seconds "earlier" than my XNET read. If it would help, I can attach a cycle from my log to show what I mean by things not lining up. I have a workaround that I'll talk about below, but I'd prefer to get to the bottom of why I can't sync my tasks, code, and CAN cards.
We have a four-station tower where each station commands a motor in position mode, which is coupled to a motor in torque-assist mode against a brake. We are doing lifetime testing of the torque-assist motor. The tower has a PXI-1010 chassis with 4 PXI-8512/2 cards (CAN comms), a PXI-6713 (brake set), a PXI-6602 (IG set), a PXI-6052E (comms for the SCXI side of the PXI-1010), and 2 SCXI-1121s (allowing 8 torque sensor readings, 2 per station). There is one PXI-8512/2 per station - each motor gets its own port because all 8 motors have the same arbitration ID, but that's a different conversation. I am using LabVIEW 2015 on a PC that communicates with the PXI-1010 in this tower. We were using 8.6, and I "upgraded to our latest and greatest" a few months back when I updated the code.
Software attempt at sync:
After the first few days of running the code and realizing things weren't staying synced, I started trying to give everything a common clock. The issue I ran into: how do I properly sync the PXI-8512/2 to an SCXI-1121? My thought is that if I give both the same master timebase, then the PXI-8512/2 and SCXI-1121 should each divide that master timebase down to their proper sampling rates, and since they are using the same master timebase, everything will stay aligned. So in my test init section, I use "DAQmx Connect Terminals" to route the PXI-6713's 20 MHz timebase to PXI_Trig7, and I set the analog input task from the SCXI-1121s to use PXI_Trig7 as its master timebase. Likewise, in the station init, I am using PXI_Trig7 as the master timebase for the PXI-8512/2. I am performing an XNET Read (XY) for my sessions and a waveform read (later converted to XY data) for the SCXI-1121s' analog input task. I am displaying all of this data on an XY graph.
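In text form, the DAQmx half of that init looks roughly like this (a sketch using the Python nidaqmx package as a stand-in for my LabVIEW calls; device, slot, and channel names are placeholders for my hardware, and the master-timebase properties are my assumption of the right attributes to touch):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType
from nidaqmx.system import System

# Route the PXI-6713's 20 MHz onboard timebase onto PXI_Trig7
# (the "DAQmx Connect Terminals" step in my test init).
System.local().connect_terms("/PXI1Slot4/20MHzTimebase", "/PXI1Slot4/PXI_Trig7")

# Point the SCXI-1121 analog-input task at that shared timebase so its
# sample clock is divided down from the same 20 MHz the CAN cards use.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("SC1Mod1/ai0:3")        # torque sensors
ai.timing.master_timebase_src = "/PXI1Slot4/PXI_Trig7"     # assumed property names
ai.timing.master_timebase_rate = 20e6
ai.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS)
```

The XNET side (pointing the PXI-8512/2 at PXI_Trig7 as its master timebase) happens through the session's interface properties in my station init, which I haven't tried to sketch here.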
Problem in my code:
I created one station VI and made it a preallocated-clone reentrant VI. In the station main, I have an XNET Read (XY) session loop, a log loop, a state machine, and a time-monitoring loop. The XNET Read (XY) loop reads all motor feedback data (commanded current of the position motor) and shoves it into an XY-array notifier (used in the station's state machine as well as the top level's display loop). The station's state machine sends out the command to the XNET write, which commands the position motor to move in a desired movement profile while also setting the torque-assist motor to its proper mode. When I write these values, I acquire the state machine's timestamp and store it in a notifier. Once the desired movement profile is complete, I store the timestamp again. The state machine then checks the XNET Read (XY) array notifier and grabs the data between the start and end timestamps. It then analyzes the position motor's commanded current to determine whether the desired movement was assisted or not.
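The windowing step itself is straightforward; in hypothetical Python (treating the notifier contents as plain (timestamp, value) pairs, which is just an illustration of the logic, not my actual code):

```python
def window_xy(xy_pairs, t_start, t_stop):
    """Keep only the XNET Read (XY) samples inside the movement window."""
    return [(t, v) for (t, v) in xy_pairs if t_start <= t <= t_stop]

# The assist analysis then runs on just the commanded-current samples
# between the state machine's start and stop timestamps, e.g.:
#   assisted = analyze_current(window_xy(pairs, t_start, t_stop))
```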
Note: I was originally using the analog torque input from the 1121s, but noticed that the shift between the station's timestamp and my XY torque data (converted from waveform) grew faster than the shift against the XNET Read (XY) of the commanded current. Ideally, I'd like to move back to my torque sensor.
The main failure point of my code is when I compare the state machine's start and end timestamps to the XY data from my XNET Read (XY) timestamps. The state machine's timestamp can get 5-10 seconds "faster" than the XNET read timestamps. I use quotes because I am monitoring my state machine and the front-panel XY graph, and when the state machine tells the position motor to start moving, my XY graph updates as well and doesn't lag 5-10 seconds behind. I believe the time difference is due to clock skew between the code and the PXI-8512/2. I've looked in multiple places for where LabVIEW actually gets its timestamp via the "Get Date/Time In Seconds" VI but can't confirm anything. My general assumption is that it queries the PC's clock.
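One thing I can do to make the suspected skew visible is log the difference between the PC clock and the newest XNET hardware timestamp over days (a hypothetical Python sketch, assuming both timestamps are in seconds with a common epoch; the XNET timestamp would come from the last sample in the Read (XY) loop):

```python
import time

def log_skew(latest_xnet_timestamp, logfile="skew.csv"):
    """Append (PC time, PC time - XNET time) so drift over days is visible."""
    pc_now = time.time()   # the PC system clock, which is presumably what
                           # Get Date/Time In Seconds ultimately reads
    with open(logfile, "a") as f:
        f.write(f"{pc_now},{pc_now - latest_xnet_timestamp}\n")
```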
I have 2 spare bits in my XNET write frame for the position motor. I can use those 2 bits as flags for when the code tells the position motor to start moving and when movement has finished, and set XNET's "Echo TX" setting to TRUE. This would let me read back when messages were actually sent out from my card, so I can determine when the start and end commands went to the position motor. I'd repurpose the timestamp notifier to store data from my XNET Read (XY) loop instead of from my state machine.
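Roughly what I have in mind, as a hypothetical Python sketch (the frame fields here are illustrative stand-ins, not the exact XNET API):

```python
START_BIT = 0x01   # bit 0 of byte 0: "start moving" command went out
STOP_BIT = 0x02    # bit 1 of byte 0: "movement finished" went out

def scan_echoed_frames(frames):
    """Pull hardware start/stop timestamps out of our own echoed TX frames."""
    t_start = t_stop = None
    for f in frames:                 # f.timestamp comes from the 8512/2's
        if not f.echo:               # clock, not from the PC clock
            continue                 # only look at frames we transmitted
        if f.payload[0] & START_BIT:
            t_start = f.timestamp
        if f.payload[0] & STOP_BIT:
            t_stop = f.timestamp
    return t_start, t_stop
```

That way both the command-current data and the start/stop markers carry timestamps from the same card, so my comparison no longer spans two clocks.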
Where I am at:
Any tips or insight into synchronization between everything would be greatly appreciated. I've been reading NI documentation about how things should be handled behind the scenes, but I couldn't figure out how to get my PXI-8512/2 and SCXI-1121 clocks synced without using DAQmx Connect Terminals. I think the only way I can actually get things synced properly is by somehow getting the clock from my PC to my SCXI, but I have no idea how to do that.
I am thinking about going with my workaround because it's the path of least resistance at this point, but I am genuinely curious how to properly sync all my stuff. I feel like something like this will plague any sort of long-term life-cycle testing. I'd much rather spend the time to design it right now than suffer from a half-baked attempt when I have to fix the code later.
If you'd like me to add snippets of code or delve into more details, I can.
Thanks for reading,
By Gan Uesli Starling
I have a LabVIEW program which, when operating with simulated input data, is blazing fast during mock-up. As soon as I hooked into a real cDAQ on Windows, however, the update speed fell to 4 seconds. That's awfully slow. I don't really have a whole lot of channels: read tasks for volts on maybe 5 channels, write tasks for on/off on maybe 10 channels (simultaneous), and write tasks for 4-20 mA on only 5 channels. How may I improve on that? Is it maybe a priority issue in Windows, or to do with USB? What? How to improve? Thanks in advance.
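For concreteness, here's roughly the channel layout I described, as a sketch in Python nidaqmx rather than LabVIEW (module and channel names are placeholders); as far as I understand, each task should be created once up front and reused in the loop rather than rebuilt every iteration:

```python
import nidaqmx
from nidaqmx.constants import LineGrouping

# Create each task once, outside the loop; re-creating tasks per
# iteration over USB is one known way to end up seconds-per-update slow.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:4")            # ~5 volt reads

do = nidaqmx.Task()
do.do_channels.add_do_chan("cDAQ1Mod2/port0/line0:9",            # ~10 on/off lines
                           line_grouping=LineGrouping.CHAN_PER_LINE)

ao = nidaqmx.Task()
ao.ao_channels.add_ao_current_chan("cDAQ1Mod3/ao0:4",
                                   min_val=0.004, max_val=0.02)  # 4-20 mA, 5 ch

for _ in range(1000):
    volts = ai.read()           # one on-demand read, all 5 channels
    do.write([True] * 10)       # one write, all 10 lines at once
    ao.write([0.012] * 5)       # one write, all 5 current outputs
```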
Let's pretend I wanted to simulate a solar panel using a cDAQ, a programmable power supply, and a light sensor. I'd have to measure the voltage from the light sensor and the PSU's current, use those values to find the respective operating point on my light/voltage/current curve, and update the PSU's settings. Let's also say I wanted to do this for multiple systems in parallel, all connected to 1 analog input and 1 analog output module in 1 cDAQ.
What would be the best way to achieve the quickest response time? Simply reading and writing single samples seems to be pretty slow (though I can encapsulate everything neatly this way).
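For reference, what I have in mind looks roughly like this (a sketch in Python nidaqmx rather than LabVIEW; channel names and the curve lookup are placeholders), with all systems batched into one read and one write per iteration:

```python
import nidaqmx

N_SYSTEMS = 4   # placeholder: panels simulated in parallel

ai = nidaqmx.Task()                                    # light sensors (PSU
ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:3")  # current on other chans)

ao = nidaqmx.Task()                                    # PSU setpoint outputs
ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0:3")

def operating_point(light_v):
    """Placeholder for the light/voltage/current curve lookup."""
    return 0.5 * light_v

for _ in range(1000):
    light = ai.read()                                  # all systems, one call
    ao.write([operating_point(v) for v in light])      # all systems, one call
```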
Is there a better way?
I've been struggling with this problem for months now... I want to use a cDAQ-9171 + NI-9234 with an Advantech PC (AIMB-580). I have 3 computers like this, all with the same problem. I plug in the USB cDAQ-9171, Windows detects the device and loads the proper driver (cDAQ-9171), and it is displayed in MAX, but self-test or reset always fails with an error. At the same time, the NI Device Loader service hangs. If I try to restart or stop this service, it cannot complete the operation; the only way to finish is to unplug the USB cable from the PC, after which the service restarts correctly. Additionally, Windows' event logger shows the following error message:
On one PC I re-installed Windows, installed DAQmx 14, and the problem was still there, even with a fresh Windows install... After this, I removed DAQmx 14 and installed DAQmx 9.8 (supplied on CD) and it worked!!! I upgraded to DAQmx 9.9 and it was still working, so I stopped there as it was all I needed (9.8 has some bugs with the NI-9234 that 9.9 solves).
One thing to note: when it works, Windows first installs the cDAQ-9171 driver, but then immediately switches to the USB flash firmware updater; after this it returns to the cDAQ-9171 driver again and detects the NI-9234. When it doesn't work, it only installs the cDAQ-9171 and doesn't load anything more...
I tried to repeat the process on a second PC, but I have not been able to reproduce the success, only the failure... I get the same error every time I plug in the USB, and I have no idea what else to try... I have disabled UAC, cleared the MAX data, reinstalled Windows 2 times, tried DAQmx 9.8, 9.9, and 14, forced the USB flash firmware updater as the Windows driver, and changed BIOS settings (disabled hyper-threading, VT extensions, USB legacy, etc.). I am almost sure it is some kind of weird incompatibility or similar; the thing is that NI Device Loader always hangs. In the next few days I'm going to try copying the exact BIOS configuration from the working PC, just to be sure.
I am designing a new project and I need to know if I can get away with purchasing 2 cDAQ-9181 chassis instead of one cDAQ-9188 chassis. I will be using a 9213 for thermocouple readings and a 9208 for 4-20 mA current readings. The signals don't need to be synchronized and will only be read once a minute or so. My question is whether or not I can read the signals from both chassis at the same time through MAX and easily get them into a LabVIEW program.
I am currently using a single cDAQ-9188 to accomplish the same tasks at a different location, but I am trying to cut costs on the next project. The 9184 is not an option due to the software version I am running.
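For what it's worth, reading the two chassis independently looks simple in sketch form (Python nidaqmx as a stand-in for the LabVIEW code; chassis and module names are whatever MAX assigns, so these are placeholders):

```python
import time
import nidaqmx
from nidaqmx.constants import ThermocoupleType

tc = nidaqmx.Task()   # 9213 in the first cDAQ-9181
tc.ai_channels.add_ai_thrmcpl_chan("cDAQ9181A-Mod1/ai0:7",
                                   thermocouple_type=ThermocoupleType.K)

ma = nidaqmx.Task()   # 9208 in the second cDAQ-9181
ma.ai_channels.add_ai_current_chan("cDAQ9181B-Mod1/ai0:7")

while True:
    temps = tc.read()        # independent on-demand reads; no sync needed
    currents = ma.read()
    time.sleep(60)           # once a minute or so
```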