Incorrect values and timing drift when logging REST data to Excel in LabVIEW VI


sts123

Posted (edited)

I have a LabVIEW VI that opens a REST client and, inside a While Loop, sends multiple HTTP POST requests to read data from sensors. The responses are converted to numeric values, displayed on charts and indicators, and logged to an XLSX file using Set Dynamic Data Attributes and Write To Measurement File; the loop rate is controlled by a Wait (ms). For timestamps I use Get Date/Time In Seconds, wired to each Set Dynamic Data Attributes. The charts and indicators update correctly, but the Excel logging has two problems:

1. When the update rate is set higher than 1 sample/s, the saved magnitudes are wrong: only the values at exact 1-second points (1 s, 2 s, 3 s, etc.) are correct, and the values in between are not.
2. When the update rate is set to 1 sample/s, the values are initially correct, but after about 30 minutes the effective time interval starts drifting above 1 s.

This looks like a buffering, timing, or logging issue rather than a sensor problem. I've attached the VI and would appreciate advice on how it should be restructured so that the values and timestamps written to Excel are correct and stable over long runs and at higher update rates.


I am also attaching the VI where I tried to implement the buffer using a Feedback Node and a Case Structure. However, there is a problem: the Case Structure never executes, because the buffer is 0 at the start.
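Since LabVIEW is graphical, here is a rough Python sketch (names and capacity are made up) of the buffer logic a Feedback Node plus Case Structure is meant to implement. The bug described above, where the Case never fires because the buffer starts at 0, corresponds to branching on the buffer's contents instead of on an explicit first-call flag:

```python
def make_buffer(capacity):
    """Closure standing in for a Feedback Node holding buffer state."""
    state = {"data": [], "first_call": True}  # Feedback Node initial value

    def update(sample):
        # Branch on an explicit flag, not on whether the buffer is empty/0,
        # so the "initialize" case runs exactly once on the first iteration.
        if state["first_call"]:
            state["data"] = [sample]           # initialize case
            state["first_call"] = False
        else:
            state["data"].append(sample)       # accumulate case
            if len(state["data"]) > capacity:  # drop oldest when full
                state["data"].pop(0)
        return list(state["data"])

    return update

update = make_buffer(3)
update(1.0)          # -> [1.0]
update(2.0)          # -> [1.0, 2.0]
update(3.0)
print(update(4.0))   # -> [2.0, 3.0, 4.0]
```

In LabVIEW terms: wire a boolean "First Call?" (or a shift register initialized to True) into the Case selector instead of testing the buffered value itself.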

Json_getData_3.vi


Json_getData_5_Buffer.vi

Edited by sts123
Posted

The first thing is to make sure your VI's block diagram and front panel fit a normal screen size and that the front panel opens centered. That makes it easier to collaborate with people using such screens, and it forces you to modularize (all that code copied everywhere: make subVIs of it and reuse them!) and keep the code tidy. Right now, when I open the VI it is huge and off screen, even though my resolution is 2560×1600. That, and the messy non-modularized code, will scare off most people from trying to help, because dealing with such things instead of the actual logical puzzle is an unnecessary hassle from the start (and too boring 😉).

Without an example of the issue you are describing (how does the output file look compared to what you expected, for example?), and given the messy code, this first reply will focus on the style and structure of the code rather than the reported issue:

All the repeated data-fetching code, for example, could be reduced to an array of fetch parameters and a single For Loop that generates each fetch command, fetches the data, and outputs an array of results. If there is a command that avoids having to fetch one parameter at a time, I would use that instead to eliminate the per-request overhead; otherwise it may add up and limit your fetch rate. Even better: use a fetch that returns the history of multiple items instead of fetching one sample of one item at a time (I do not know whether the API offers this, though).
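The "array of fetch parameters plus one loop" idea, sketched in Python (the base URL, sensor names, and JSON shape are placeholders for whatever the real REST API exposes):

```python
import json
from urllib import request

BASE_URL = "http://device.local/api"          # placeholder address
SENSORS = ["temp_1", "temp_2", "pressure_1"]  # the fetch-parameter array

def fetch_value(sensor):
    """One request per sensor; replace with the real endpoint/payload."""
    with request.urlopen(f"{BASE_URL}/{sensor}") as resp:
        return float(json.load(resp)["value"])

def fetch_all(sensors, fetch=fetch_value):
    # The single loop replacing N copy-pasted fetch blocks. In LabVIEW this
    # is a For Loop auto-indexing over the parameter array, with the result
    # array built on an auto-indexed output tunnel.
    return [fetch(s) for s in sensors]
```

Passing `fetch` as a parameter also makes the loop testable without a live device, which is the same benefit a subVI with a well-defined connector pane gives you in LabVIEW.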

Timing-wise, you have everything in one huge loop and nothing ensures it actually runs at the given rate. If you want a loop to run once a second, for example, you have to make sure the code inside it does not take longer than that to execute, and at a minimum you should replace the Wait function with Wait Until Next ms Multiple. Split the code into separate loops and/or VIs that run in parallel instead, making sure each diagram, or at least each loop, is small enough to be seen on a normal display; the user should get an overview at least vertically, though some horizontal scrolling to follow the data flow can sometimes be OK. There are a lot of design patterns that might suit your code (Producer-Consumer, QMH, etc.), but just separating the DAQ (REST client) part from the user-interface handling, for example, is a good start.
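A rough Python equivalent of that advice, assuming made-up sample data: a fixed-cadence acquisition loop (the role Wait Until Next ms Multiple plays, scheduling against an absolute start time so jitter does not accumulate into drift) handing samples to a separate logging loop through a queue, so slow file writes cannot stall the sampling rate:

```python
import queue
import threading
import time

def next_multiple_sleep(period_s, start):
    """Sleep until the next multiple of period_s since `start` (absolute
    schedule, so per-iteration jitter does not accumulate into drift)."""
    now = time.monotonic()
    elapsed = now - start
    target = (int(elapsed / period_s) + 1) * period_s
    time.sleep(start + target - now)

def producer(q, n_samples, period_s=0.05):
    """Acquisition loop: fixed cadence, enqueue (timestamp, sample)."""
    start = time.monotonic()
    for i in range(n_samples):
        q.put((time.time(), i))   # i stands in for a fetched value
        next_multiple_sleep(period_s, start)
    q.put(None)                   # sentinel: acquisition finished

def consumer(q, rows):
    """Logging loop: drains the queue at its own pace."""
    while True:
        item = q.get()
        if item is None:
            break
        rows.append(item)         # real code would write to file here

rows = []
q = queue.Queue()
t = threading.Thread(target=consumer, args=(q, rows))
t.start()
producer(q, 5)
t.join()
print(len(rows))  # -> 5
```

In LabVIEW the queue primitives (Obtain/Enqueue/Dequeue) or channel wires play the role of `queue.Queue` here, and the two `while` bodies become two independent While Loops on the diagram.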

I would just skip the signal functions altogether. Take the array of values you have fetched, convert it to a spreadsheet string (CSV format, for example) with a timestamp added to each row, and write that to a text file. If you really want to sample multiple times per second, you might want to look at circular buffers and only write every now and then; at your current level, relying on the built-in buffers of the charts you already use might be easier, though. Have a look at some of the logging examples included with LabVIEW or available at ni.com to see how they structure the logic.
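The timestamped-CSV-plus-buffer idea, sketched in Python (file name, column layout, and buffer size are made up for the example):

```python
import csv
from collections import deque
from datetime import datetime

BUFFER = deque(maxlen=100)   # circular buffer: oldest rows drop when full

def add_sample(values):
    """Prepend an ISO timestamp to the fetched values and buffer the row."""
    row = [datetime.now().isoformat(timespec="milliseconds")] + list(values)
    BUFFER.append(row)

def flush(path):
    """Append buffered rows to a plain CSV text file and clear the buffer.
    Call this every N samples or on a timer, not on every iteration."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while BUFFER:
            writer.writerow(BUFFER.popleft())
```

In LabVIEW the same shape is Format Into String (or Array To Spreadsheet String) per row, appended with Write to Text File, with the buffer kept in a shift register or a queue; this avoids the dynamic-data/Express-VI layer entirely.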

