Timed loop actual start timestamp gives erroneous value


Our data logging engine is based on a timed loop, and the Actual Start left node is used to log the current absolute time. 99% of the time it works perfectly, but once in a while it returns the value 11/11/1903 10:57:12.702 PM. Every time the value is erroneous, it is this exact value. Have any of you ever seen that? We use LV2011 32-bit.

 

I've never seen the Actual Start node return anything other than a local timer value.

It should start at 0 (or the offset you specified in the loop configuration), then increment by the specified loop period, using the current timing source as the timing unit.
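A rough sketch of those semantics (in Python, not LabVIEW; the 1 kHz timing source, 0 offset, and 10 ms period are assumptions for illustration):

```python
# Hypothetical sketch of normal Actual Start behavior: with a 1 kHz
# timing source, offset 0, and a 10 ms period, the node should report
# offset + i * dt for iteration i — relative timer values, not
# absolute timestamps.
offset_ms, dt_ms = 0, 10
actual_start = [offset_ms + i * dt_ms for i in range(5)]
print(actual_start)  # → [0, 10, 20, 30, 40]
```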

 

How do you do the conversion to an absolute timestamp? By adding the timer value to a timestamp taken at loop start?
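One way that conversion could work, sketched in Python rather than LabVIEW (the function name and units are assumptions, not the original code):

```python
import time

# Capture the absolute time once, when the loop starts; afterwards,
# add the loop-relative timer value (in ms) to it to get an absolute
# timestamp for each iteration.
loop_start_abs = time.time()  # absolute time at loop start, in seconds

def to_absolute(timer_ms):
    """Convert a loop-relative timer value (ms) to an absolute time (s)."""
    return loop_start_abs + timer_ms / 1000.0
```

Note that with this scheme a bad relative timer value would produce a bad absolute timestamp, which is why the source of the erroneous value matters.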

 

/J

  • 4 weeks later...

BUMP. I think this is an issue worth worrying about. Do you think I should ask NI to create a CAR?

If you can reproduce it reliably and can narrow it down to a reasonable set of code, absolutely. I wasn't able to find anything like this in the records when I did a quick search a moment ago. Obviously the simpler the code, the better. Also be sure to note down what type of RT target you're using (I didn't see that in your post above) and any software you might have installed on the target, especially things like time sync, which might mess with the time values on the system.

 

For now, I'd say the easiest workaround would be to just get the current time at the start of each loop iteration using the normal primitive.
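That workaround, sketched in Python rather than LabVIEW (the sleep is just a stand-in for the real loop body):

```python
import time

# Read the absolute clock at the top of each iteration instead of
# relying on the loop's reported start time.
timestamps = []
for _ in range(5):
    timestamps.append(time.time())  # absolute time at iteration start
    time.sleep(0.01)                # stand-in for the real loop body
```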


I just created a very lean exe that will simply pop up a message whenever the timestamp is bad. I'll post here to say whether I could reproduce the issue or not.

 

I've thought about the workaround you suggest, but since the primitive you're talking about doesn't have an error input, it would execute in parallel with the other constructs inside the loop, which doesn't provide the same accuracy. I could use a sequence structure, of course, but I don't want to add more code than necessary to this loop since I need it to run fast.

 

Thanks for your reply :)

In order to better schedule the code, a timed loop is placed into its own higher-priority thread. Since there is a single thread dedicated to the loop, LabVIEW must serialize all of the timed loop's code (as opposed to normal LabVIEW, where code is executed across many different threads). This serialization is one of the reasons the recommendation is to use timed loops only when you absolutely need one of their features, generally synchronizing to a specific external timing source (like the scan engine) or assigning the loop to a specific CPU. So the code will not truly execute in parallel, although of course you still have to express the data dependency to LabVIEW one way or another.

For some reason this is not documented in the help for the timed loop, but rather in the help on how LabVIEW does threading: http://zone.ni.com/reference/en-XX/help/370622K-01/lvrtbestpractices/rt_priorities/

 

I also wanted to note that a sequence structure does not usually add overhead, although it can make code execute more slowly in some specific situations by interfering with the normal flow of data. That shouldn't be the case here, since you're only using a single frame. It's essentially equivalent to a wire with no data.

 

Also, timed loops are generally not recommended on Windows, which is what it sounds like you're using. They may still be used, and should function, but because Windows is not an RTOS you won't actually get a highly deterministic program. This makes it more likely that the costs of a timed loop outweigh its benefits.

Wow, this changes things a lot. We have timed loops all over the place (CAN driver, data logging engine, graphing engines, automation engine, ...). Are you suggesting we replace them with regular while loops?

 

Edit: Can you direct me to a resource explaining the cost of timed loops? I couldn't find anything but advantages online...

Edited by Manudelavega
Ah, so you want documentation. That's tougher. I found this (http://digital.ni.com/public.nsf/allkb/FA363EDD5AB5AD8686256F9C001EB323), which mentions the slower speed of timed loops, but in all honesty they aren't that much slower anymore, and you shouldn't really see a difference on Windows, where scheduling jitter will have far more effect than a few extra cycles of calculation.

 

The main reasons, as I understand them, are:

-It puts the loop into a single thread, which means the code can run slower because LabVIEW can't parallelize much.

-Timed loops are slightly slower due to bookkeeping and features like processor affinity, and most people don't need most of their capabilities.

-They can confuse some users, because timed loops are associated with determinism, but on Windows you don't actually get determinism.

 

Broadly speaking, the timed loop was created to enable highly deterministic applications. It does that by letting you put extremely tight (for LabVIEW) controls on how the code is run: throwing it into a single thread, on a single core, at a very high priority, under a preemptive scheduling system. While these are important for determinism, in general you'll get better overall performance by letting the schedulers and execution systems operate as they'd like, since they are optimized for the general case. Thus there is a cost to timed loops, which makes sense if you need the features. On Windows, none of those features make your code more deterministic, since Windows is not an RT OS, so the benefits (if any) are usually outweighed by the costs.

 

As for whether I'm suggesting you replace them all -- not necessarily. It's not going to kill your code to have them, of course, so making changes while the code does function might not be a good plan. Besides that, it sounds like you're using absolute timing for some of these loops, which is more difficult to get outside of a timed loop (it's still very possible, of course, but if I'm not mistaken you have to do a lot of the math yourself). So for now I'd say stick with the workaround, and maybe slowly change a few of the loops over where it makes sense.

Things are much clearer now. Thanks! I guess my main reason for using a timed loop is that I "feel" the duration of each iteration (I mean the time gap between the start of iteration n and the start of iteration n+1) is easily controlled and measured. If I use a while loop and I want each iteration to last 10 ms, all I can do is throw in a "Wait Until Next ms Multiple" that executes in parallel with the rest of the code inside the loop, but I have no guarantee that the duration I'm talking about will be 10 ms, and it's hard to measure as well...
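A rough Python analogue of that "Wait Until Next ms Multiple" pattern (an illustration of the semantics only — as discussed above, a desktop OS gives no timing guarantees):

```python
import time

def wait_until_next_ms_multiple(period_ms):
    # Sleep until the millisecond clock reaches the next multiple of
    # period_ms, aligning iteration starts to a grid (best effort only).
    now_ms = time.monotonic() * 1000.0
    time.sleep((period_ms - now_ms % period_ms) / 1000.0)

starts = []
for _ in range(3):
    wait_until_next_ms_multiple(10)  # nominal 10 ms grid
    starts.append(time.monotonic())
```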

  • 2 weeks later...

You don't get that guarantee on Windows anyway. Last time I checked, the lowest timing resolution of loops under Windows was 10 ms, whether I used timed loops or the normal Wait Until Next ms Multiple. With a wait of 0 ms I could make it go pretty fast, but the timed loop balked at the 0 ms value. A wait of less than 10 ms definitely showed the discrete nature of the interval, usually near 10 ms, or almost 0 ms if the code inside allowed a near-0 ms delay. And no, there was no guarantee that it would never take more than 10 ms; there were always outliers above 10 ms if I tested for more than a few seconds.
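A quick way to reproduce that kind of measurement (a Python sketch, not the original test; the spread you see will vary by machine and OS):

```python
import time

# Time a nominally fixed-period loop and inspect the interval spread;
# on a desktop OS, expect the maximum to sit well above the nominal
# period due to scheduling jitter.
period_s = 0.010
intervals = []
prev = time.perf_counter()
for _ in range(20):
    time.sleep(period_s)          # nominal 10 ms wait
    now = time.perf_counter()
    intervals.append(now - prev)  # actual iteration-to-iteration gap
    prev = now

print(f"min={min(intervals)*1000:.1f} ms  max={max(intervals)*1000:.1f} ms")
```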

You must have done that on XP.

Windows Vista and up have ~1 ms resolution.

I do not work on RT or timed loops, so this isn't any official NI answer, but I've been told several times, from various sources, that the timed loop is pretty much meaningless outside of a real-time operating system, and that the only reason it compiles on Windows is to facilitate debugging in simulations. I can easily believe that NI devs have tweaked that simulation to be as faithful as possible, but I suspect it is still straightforward to find a Windows configuration where it just can't keep time.
