mwebster

Timestamp Behavior


Greetings,

I've been playing around with the timestamp recently and I'd like to throw out some of my findings to see if they're expected behavior or not.

One issue is that if you call Get Date/Time in Seconds, the minimum interval is ~1 ms. This is much better than around 2009, when the minimum interval was ~14 ms, if I recall correctly. In other words, even if you call Get Date/Time in Seconds a million times in a second, you only get ~1000 distinct updates.

The other oddities involve converting to double or extended, and from double/extended back to timestamp. It seems that when converting to double, at least with a current-day timestamp, your resolution is truncated at ~5e-7 (0.5 us). This is expected, given that you only have so many significant digits in a double and a lot of them are eaten up by the ~3.4 billion seconds that have elapsed since 7:00PM 12-31-1903 (any story behind that start date, btw?). However, when you convert to extended, you get that same double-level truncation; no extra resolution is gained.
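To illustrate the ~0.5 us truncation (in Python, since the VI itself isn't shown here): the spacing between adjacent doubles near 3.4e9 is exactly the step size described above.

```python
import math

# Illustration (not LabVIEW code): the gap between adjacent doubles near
# 3.4e9 shows why a modern-era timestamp converted to double is limited
# to roughly 0.5 microseconds of resolution.
seconds_since_epoch = 3.4e9           # roughly "now", measured from 1904
step = math.ulp(seconds_since_epoch)  # distance to the next representable double
print(step)                           # ~4.77e-07, i.e. about 0.5 us
```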

Now, when you convert from a double to a timestamp, you have that same 0.5 us step size. When you convert an extended to a timestamp, you get an improved resolution of 2e-10 (0.2 ns). However, the internal representation of the timestamp (which, as far as I know, is a 128-bit floating point) allows down to ~5e-20 (this is all modern era, with 3.4 billion to the left of the decimal).
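Those measured step sizes match what the float formats predict. A back-of-envelope check (assuming the extended type here is the x86 80-bit format with a 64-bit mantissa, which is an assumption, not something stated in the thread):

```python
# Near 3.4e9 s, which lies between 2**31 and 2**32, each format's step size is
# 2**(31 - (mantissa_bits - 1)).
double_step = 2.0 ** (31 - 52)    # ~4.8e-7 s:  the ~0.5 us truncation observed
extended_step = 2.0 ** (31 - 63)  # ~2.3e-10 s: the ~0.2 ns step observed
internal_step = 2.0 ** -64        # ~5.4e-20 s: the timestamp fraction's own step
print(double_step, extended_step, internal_step)
```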

One other oddity, if you convert a timestamp to a date-time record instead of an extended number (which gets treated like a double as noted above), you once again get the 2e-10 step size in the fractional second field.

I've tried a few different things to "crack the nut" on a timestamp like flattening to string or changing to variant and back to extended, but I've been unable to figure out how to access the full resolution except via the control face itself.

Attached is one of my experimentation VIs. You can vary the increment size and see where the arrays actually start recording changes.

Ah, one final thing I decided to try: if I take the difference of two timestamps, the resulting double will resolve a difference all the way down to 5e-20, which is about the step size I figured the internal representation would hold.

I would like to be able to uncover the full resolution of the timestamp for various timing functions and circular-buffer lookups; resolving the math down to, say, 100 MHz (1e-8 s) would suffice. I guess I can work around this using the fractional part of the date-time record, but it would be nice to be able to just use the timestamps directly without a bunch of intermediary functions.

Time Tester.vi


Some things you perhaps didn't know about the timestamp that may shed some light:

In 2009 it was also 1 ms. The 14 ms you are talking about is probably because you were using Windows XP, where the timeslice was about 15 ms (Windows 2000 was ~10 ms).

LabVIEW timestamps are 12 byte (96 bit not 128). The upper 4 bytes are not used and are always zero.

Edited by ShaunR


Have you seen this post on the NI forums? High Resolution Relative Seconds

It was reported as having an open-loop resolution of ~1.2 us.

W/R/T resolution of a timestamp, this is documented here: http://www.ni.com/white-paper/7900/en

You might try typecasting the timestamps as arrays of U64 and then performing addition and subtraction on those.

After performing the integer operations, the U64 that represents the fractional seconds can be converted back by multiplying it by 2^-64.
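Sketching that suggestion in Python (the field layout here is assumed from the NI whitepaper, with the fraction interpreted as a binary fraction of a second):

```python
# The fractional-seconds field is an unsigned 64-bit integer, so integer math
# on it is exact; converting back to seconds is a single multiply by 2**-64.
frac_a = 0x8000_0000_0000_0000   # represents 0.5 s
frac_b = 0x4000_0000_0000_0000   # represents 0.25 s
diff = frac_a - frac_b           # exact integer subtraction
print(diff * 2.0 ** -64)         # 0.25
```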

http://forums.ni.com/t5/LabVIEW/Unable-to-replicate-frations-of-seconds-when-reading-a-timestamp/m-p/2116706#M687792

Edited by Phillip Brooks


That High Resolution Relative Seconds looks like it could be just what the doctor ordered, but the whole-number value appears to be linked, at least on my system, to the millisecond timer. I wonder what its behavior is at rollover (I'm trying to move away from a millisecond-timer-based circular buffer right now because of the headache in handling rollovers).


You might try typecasting the timestamps as arrays of U64 and then performing addition and subtraction on those.

Technically you ought to typecast it to a cluster of {I64, U64}. The whole seconds element is a signed value allowing for dates prior to the epoch. The fractional seconds though is indeed an unsigned integer.
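A hypothetical flatten/unflatten of that {I64, U64} cluster in Python (big-endian byte order is assumed here, matching how LabVIEW flattens data):

```python
import struct

# The signed whole-seconds field allows dates before the 1904 epoch;
# the unsigned fraction field is a binary fraction of one second.
def flatten(whole, frac):
    return struct.pack(">qQ", whole, frac)

def unflatten(raw):
    return struct.unpack(">qQ", raw)

raw = flatten(-86_400, 1 << 63)   # one day before the epoch, fraction = 0.5 s
whole, frac = unflatten(raw)
print(whole, frac * 2.0 ** -64)   # -86400 0.5
```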

Indeed, not all of the precision of the U64 is used; however, this is likely a function of the timing source resolution, not the format itself. My virtual machine here only reports significance in the highest three bytes, though I seem to remember my main desktop reported four bytes of resolution...

Phillip is correct in that the format is technically capable of a time resolution of 2^-64 = 5.4 x 10^-20 s = 54 zs. You'll be hard-pressed to find hardware that can pull off precision like that for quite some time.


LabVIEW timestamps are 12 byte (96 bit not 128). The upper 4 bytes are not used and are always zero.

I've never looked at the actual bitwise representation of a timestamp, how sure are you of this? I've read the whitepaper Phillip linked before and that pretty much cemented a 16-byte representation. Their interpretation examples seem to contest what you're saying.


I've never looked at the actual bitwise representation of a timestamp, how sure are you of this? I've read the whitepaper Phillip linked before and that pretty much cemented a 16-byte representation. Their interpretation examples seem to contest what you're saying.

Hmmm. Not sure where I got that from. Certainly the LabVIEW Timestamp Whitepaper I just found shows it is indeed 128 bits, so I'm obviously wrong. But I have recollections of it being 12 bytes, as it was one of the improvements (adding a timestamp) to the Transport.lib (which, after some research, I made 12 bytes). Since then it's just stuck as one of those anomalies to my expectations, since 12 is a bizarre number.


One issue is that if you call Get Date/Time in Seconds, the minimum interval is ~1 ms. This is much better than around 2009, when the minimum interval was ~14 ms, if I recall correctly. In other words, even if you call Get Date/Time in Seconds a million times in a second, you only get ~1000 distinct updates.

The 14 ms is an OS limitation (WinXP has an internal timer with an update rate just above 50 Hz).

If I use Get Current Time, it has a resolution of 1 us. When converting this to double you'll have rounding issues limiting you to 0.5 us.

Technically a timestamp can store much greater resolution. If you are subtracting nearly identical timestamps (like your TS1 and TS2), you'd better use cluster subtraction:

post-2399-0-62667600-1348911923.png
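In Python terms, the idea behind cluster subtraction looks roughly like this (the timestamp values are made up for illustration):

```python
# Treat each timestamp as one 128-bit fixed-point integer (whole << 64 | frac)
# and subtract as integers, so no precision is lost; only the (small)
# difference is then converted to a double.
def to_int128(whole, frac):
    return (whole << 64) | frac

t1 = to_int128(3_400_000_000, 10)   # two timestamps 3 * 2**-64 s apart
t2 = to_int128(3_400_000_000, 7)
diff_seconds = (t1 - t2) * 2.0 ** -64
print(diff_seconds)                 # ~1.6e-19 s, far below double's ulp at 3.4e9
```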

Ton


Hmmm. Not sure where I got that from. Certainly the LabVIEW Timestamp Whitepaper I just found shows it is indeed 128 bits, so I'm obviously wrong. But I have recollections of it being 12 bytes, as it was one of the improvements (adding a timestamp) to the Transport.lib (which, after some research, I made 12 bytes). Since then it's just stuck as one of those anomalies to my expectations, since 12 is a bizarre number.

I'd be interested to know more about that, if not only out of academic interest.

Technically a timestamp can store much greater resolution. If you are subtracting nearly identical timestamps (like your TS1 and TS2), you'd better use cluster subtraction:

post-2399-0-62667600-1348911923.png

I assume if you cast back to a timestamp, you'll lose the benefit of that extra precision because it goes back to a formatted value?


The 14 ms is an OS limitation (WinXP has an internal timer with an update rate just above 50 Hz).

If I use Get Current Time, it has a resolution of 1 us. When converting this to double you'll have rounding issues limiting you to 0.5 us.

Technically a timestamp can store much greater resolution. If you are subtracting nearly identical timestamps (like your TS1 and TS2), you'd better use cluster subtraction:

post-2399-0-62667600-1348911923.png

On Win7, I get ~1ms.

Also, for (very) closely spaced timestamps, you can subtract them directly and get the ~50 zeptoseconds. For further-spaced ones, your method would be preferable if you wanted the maximum resolution, as the raw subtraction casts the result to a double.

I don't know why I didn't think to try a direct typecast. I completely forgot that primitive existed (almost every cast I do is when converting from variant or flattened string).

Asbo:

The full resolution is retained if you cast to a timestamp using the reverse method. The formatting is just a control/indicator property. You can set the number of displayed decimal places out to 20 to get the full range.

Of course, now I'm concerned about the year 292 billion problem...
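A round-trip sketch supporting the "full resolution is retained" point (again assuming the {I64 whole, U64 fraction} layout; this is illustrative Python, not LabVIEW):

```python
import struct

# Flattening the pair to bytes and unflattening is bit-exact, so casting to a
# timestamp and back loses nothing; decimal-place display is purely cosmetic.
whole, frac = 3_400_000_000, 12_345   # frac represents 12345 * 2**-64 s
raw = struct.pack(">qQ", whole, frac)
restored = struct.unpack(">qQ", raw)
print(restored == (whole, frac))      # True: lossless round trip
```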


...seconds that have elapsed since 7:00PM 12-31-1903 (any story behind that start date, btw?)

Yes. LV uses that epoch (which is actually the first midnight of 1904, UTC), because that's the time format the Mac used and LV was first developed on the Mac. My understanding was that the reason the Mac used that as an epoch was because it made leap year calculations very simple - just check for divisibility by four. Years divisible by 100 are excluded in this calculation, which is why it starts from 1904 (no need to handle this for positive dates). 2000 was also divisible by 100, but it's also divisible by 400, so it was a leap year and the next year that would break the code was 2100. I'm assuming whoever designed the code decided to sacrifice past and future years for the sake of practicality.
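The divisibility argument above can be sketched in a few lines of Python: starting the epoch at 1904 lets a naive "divisible by four" test agree with the full Gregorian rule for every year up to 2099.

```python
def is_leap(year):
    # Full Gregorian rule: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def naive_is_leap(year):
    return year % 4 == 0  # the shortcut the 1904 epoch makes safe

# The naive rule matches the real one for every year from 1904 through 2099.
print(all(is_leap(y) == naive_is_leap(y) for y in range(1904, 2100)))  # True
print(is_leap(2000), is_leap(2100))                                    # True False
```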



