TAI64 in the wild is (sometimes) not TAI
The very short version of the clickbait title: if you create TAI64 timestamps and your implementation mimics libtai, then your timestamps are likely not TAI.
The rest of this post is an explanation of what’s going on.
I was tinkering with WireGuard, and its wire format in particular. The protocol’s handshake initiation packet contains a timestamp to prevent replays from disrupting established sessions. The timestamp is a 12-byte TAI64N value, representing an instant in time with nanosecond precision.
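For concreteness, here’s a minimal sketch of that 12-byte layout - an 8-byte big-endian TAI64 seconds label followed by a 4-byte big-endian nanosecond count. The function name is just illustrative, not WireGuard’s actual code:

#include <stdint.h>

/* Sketch of the 12-byte TAI64N wire layout: 8-byte big-endian TAI64
   seconds label, then a 4-byte big-endian nanosecond count. */
void tai64n_encode(uint64_t tai_label, uint32_t nanoseconds, uint8_t out[12]) {
    for (int i = 0; i < 8; i++)
        out[i] = (uint8_t)(tai_label >> (56 - 8 * i));
    for (int i = 0; i < 4; i++)
        out[8 + i] = (uint8_t)(nanoseconds >> (24 - 8 * i));
}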
Both WireGuard’s kernel implementation and wireguard-go produce TAI64N timestamps with the same basic logic, inherited from the libtai reference implementation. Focusing on the seconds portion, the logic is:
#include <stdint.h>
#include <time.h>

time_t seconds = time(NULL);               /* Unix time from the OS */
uint64_t offset = 0x400000000000000AULL;   /* 2^62 + 10 */
uint64_t tai = offset + (uint64_t)seconds; /* TAI64 seconds label */
That offset looks weird. TAI64’s definition is that the integer 2^62 represents the time instant 1970-01-01 00:00:00 TAI, and the counter changes by 1 for every second of difference from that reference. So, for example, 1970-01-01 00:00:45 TAI is represented by 2^62 + 45.
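As a tiny worked illustration of that definition (a throwaway example, not from libtai or WireGuard):

#include <stdint.h>
#include <stdio.h>

/* TAI64 worked example: 1970-01-01 00:00:45 TAI is labeled 2^62 + 45. */
int main(void) {
    uint64_t label = (1ULL << 62) + 45;
    printf("0x%016llx\n", (unsigned long long)label); /* 0x400000000000002d */
    return 0;
}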
2^62 translated to hex is 0x4000000000000000. But the reference implementation offsets by an additional 0xA, or 10 seconds. What’s that about?
The extra offset is correcting for the difference between UTC and TAI time standards, and unfortunately, as far as I can tell, that correction is wrong on most modern systems. To understand why, we need a quick detour into what these time standards are, so that we can then see how computers end up holding them wrong.
(spoiler alert: even though WireGuard includes this error, it does not affect WireGuard’s security or functionality in any way; it’s just a mild oddity.)
A brief history of time standards
(note to timekeeping enthusiasts: I’m skipping over details, nuances and caveats throughout this bit, to try and keep it less than book length. Sorry.)
In the beginning, we measured time by the position of the sun in the sky. Noon is when the shadow cast by the sun points straight north or south, and other times derive from that.
Today we call this “apparent solar time”, where “apparent” means “slightly wrong”. If you measure days with this method, you find that days aren’t a nice consistent duration. Rather, they stretch and contract by tens of seconds depending on the time of year.
This change is due to many aspects of existing in an imperfect universe, but the major contributors are Earth’s slightly eccentric orbit around the sun, and the tilt of its axis of rotation relative to the orbital plane.
To account for this and make timekeeping more consistent, we invented “mean solar time”. The exact definition is a little intricate, but long story short it averages out the seasonal wobble of the real sun, so that the interval between noons is nice and consistent.
The difference between apparent solar time and mean solar time can be graphed as the very metal sounding Equation of Time, which you might find engraved on sundials to help you convert their reading to the correct time.
Fast forward to the mid 19th century. With trains, the UK needed a consistent time that all cities could use, rather than each one deriving mean solar time for its own longitude. So, we got Greenwich Mean Time (GMT). As the name suggests, GMT was defined as mean solar time at the Royal Observatory at Greenwich.
Advance another few decades, and the International Meridian Conference of 1884 made this an international standard. It enshrined the meridian passing through Greenwich as the “prime meridian” (0 degrees longitude), and defined the “universal day” as the mean solar day, with hour zero being midnight GMT.
Later, GMT was renamed to Universal Time (UT), and then to UT0 when it was replaced by UT1. UT1 incorporates an additional correction factor, for a few more milliseconds of accuracy.
The Power of the Atom
Just about when UT1 was introduced, the world got its first atomic clocks. Instead of measuring time through the wibbly-wobbly motion of the planet through space, we could suddenly listen directly to the universe’s metronome! (No, we’re not going to talk about general relativity and the death of absolute time. For this post we get to pretend time doesn’t do that.)
And so, a few international conferences later, the second was redefined by reference to the physical properties of cesium-133 atoms, and International Atomic Time (TAI, for “Temps Atomique International”) was born.
This led to a bit of a problem: TAI’s days don’t agree with UT1, because UT1’s seconds stretch and squeeze slightly depending on how fast the planet feels like rotating, a rate which changes on a timescale of years. Atomic time is obviously superior for precise timekeeping, but UT1 makes sense for celestial navigation, where you’re actively exploiting the link between clock time and the Earth’s motion.
And so after a few false starts, Coordinated Universal Time (UTC) was born. UTC ticks according to the same atomic metronome as TAI, but also remains within one second of the legacy UT1. This is accomplished by scientists measuring the delta between UTC and UT1, and inserting or deleting “leap seconds” when the delta gets too large.
And so, we arrive at the present: the world mostly ticks according to UTC, a messy compromise between UT1 and TAI. Scientists use TAI when they need to strike precise timestamps that aren’t going to grow mysterious extra seconds by political fiat (like UTC does). UT1 still exists, but mainly as the reference value to decide whether UTC needs another tweak. GPS and other GNSS constellations - another triumph of atomic timekeeping - reduced the importance of celestial navigation.
Owing to UTC’s leap second corrections, as I write this UTC timestamps are 37 seconds behind TAI timestamps. As you go back in time, the UTC-TAI offset shrinks, and the only way to accurately translate between the two is to maintain a lookup table of when each leap second was inserted.
It’s the same story for future timestamps: strictly speaking, UTC as currently defined doesn’t let you identify a future instant, because future leap seconds might change what instant that timestamp identifies. Thankfully the world has agreed to phase out leap seconds by 2035, so fingers crossed we’ve experienced our last 61-second minute already.
Computers and timekeeping
Annoyingly, all these recent advances and changes to timekeeping happened concurrently with the development of computers. As a result, computer timekeeping is also a bit of a mess.
Looking at operating systems in the Unix family, time is defined in POSIX as the number of seconds elapsed since the Epoch. The Epoch is described indirectly, but amounts to 1970-01-01 00:00:00 UTC.
It also says that you must calculate seconds-since-Epoch by taking the current wall-clock year/month/day/hour/minute/second value and calculating how many seconds have elapsed since the Epoch timestamp, assuming that all days are 86400 seconds long.
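For reference, that “Seconds Since the Epoch” arithmetic from the POSIX spec can be transcribed into a small helper like this (the function name is mine, not part of any standard API):

#include <time.h>

/* POSIX-style seconds since the Epoch: broken-down UTC time converted
   to a counter assuming every day is exactly 86400 seconds long, so
   leap seconds never enter the calculation. */
long long seconds_since_epoch(const struct tm *t) {
    return t->tm_sec + t->tm_min * 60 + t->tm_hour * 3600
         + t->tm_yday * 86400LL + (t->tm_year - 70) * 31536000LL
         + ((t->tm_year - 69) / 4) * 86400LL
         - ((t->tm_year - 1) / 100) * 86400LL
         + ((t->tm_year + 299) / 400) * 86400LL;
}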
In other words, Unix time is supposed to not include any leap seconds in its accounting. You could think of it as returning the number of TAI seconds that have elapsed since 1970-01-01 00:00:10 TAI. This is the same instant as 1970-01-01 00:00:00 UTC, since the UTC-TAI offset at the Epoch was 10 seconds.
10s is 0xA in hex, and now you might be thinking: ah, I know what libtai is doing! It gets Unix time from the OS, assumes it’s TAI seconds since 1970-01-01 00:00:10 TAI, and adds 10s to make the value referenced to 1970-01-01 00:00:00 TAI. And yup, that’s exactly what it’s doing.
Regrettably, this appears to also be wrong on most computers. On modern systems with network time synchronization, the integer you get from the OS is the number of elapsed UTC seconds since Epoch, including leap seconds.
Here’s abridged output from the test program in Linux’s clock_gettime man page:
$ TZ=UTC date; ./clocks
Sat Feb 1 10:00:04 PM UTC 2025
CLOCK_REALTIME : 1738447204.773 (20120 days + 22h 0m 4s)
CLOCK_TAI : 1738447241.773 (20120 days + 22h 0m 41s)
I cross-checked the output of date with a couple of UTC references such as time.gov, and confirmed that its output is a correct UTC timestamp for the time I ran those commands. The output of CLOCK_REALTIME is the “seconds since Epoch” value from POSIX, and it matches the time of day in the UTC timestamp - in other words, the integer value includes UTC leap seconds.
Linux also handily provides a TAI clock as a cross-check, and indeed CLOCK_TAI is running ahead of CLOCK_REALTIME by 37 seconds - the current UTC-TAI leap second offset.
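If you want to reproduce this yourself, a stripped-down sketch of such a program (not the man page’s exact listing) could look like this:

#include <stdio.h>
#include <time.h>

/* Query CLOCK_REALTIME (the POSIX seconds-since-Epoch clock) and
   Linux-specific CLOCK_TAI side by side. With a correctly configured
   leap-second offset, CLOCK_TAI currently runs 37 seconds ahead. */
int main(void) {
    struct timespec rt, tai;
    clock_gettime(CLOCK_REALTIME, &rt);
    clock_gettime(CLOCK_TAI, &tai);
    printf("CLOCK_REALTIME: %lld.%03ld\n", (long long)rt.tv_sec, rt.tv_nsec / 1000000);
    printf("CLOCK_TAI     : %lld.%03ld\n", (long long)tai.tv_sec, tai.tv_nsec / 1000000);
    return 0;
}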
It’s not just Linux. I don’t have a wide zoo of OSes ready to test, but here’s the same output on FreeBSD (minus the TAI clock, which FreeBSD lacks):
% TZ=UTC date; ./clocks
Sat Feb 1 22:01:55 UTC 2025
CLOCK_REALTIME : 1738447315.786 (20120 days + 22h 1m 55s)
Again, the timestamp provided by date is the correct UTC timestamp (including UTC’s leap seconds) for when the program executed, and the integer value produced by CLOCK_REALTIME agrees with it.
The novel UTC+10s standard
It seems that libtai’s author believes that system clocks should be set to TAI. Therefore, to convert a Unix timestamp value to TAI64, you add 10s to correct for the UTC-TAI offset that existed at the Epoch, and then shift the values such that 1970-01-01 00:00:00 TAI is 2^62. That’s why libtai adds 0x400000000000000A to the OS-provided Unix timestamp to compute the TAI64 timestamp.
(Edit: this part used to speculate on why libtai made the adjustment as it did, but Gustaf Erikson pointed me to tai_now’s manpage, which explicitly states that libtai only works correctly on systems where time() returns the number of elapsed TAI seconds since 1970-01-01 00:00:10 TAI. Evidently many others also missed this one mention of the fact, as the post is about to go over.)
But as shown above, the timestamp you get from modern systems in a standard configuration does include UTC’s leap seconds. In other words, Unix timestamps issued as I write this have a 37s offset to TAI. libtai’s magic constant reduces that offset to 27s, not 0s (for timestamps issued today - as you go back in time the offset shrinks further). And so, the resultant TAI64 timestamps do not identify a TAI instant, but rather an instant in the bespoke time standard “UTC plus 10 seconds”.
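To make that concrete with the Linux numbers from earlier:

unix seconds (CLOCK_REALTIME) = 1738447204
libtai-style TAI64 seconds    = 2^62 + 1738447204 + 10 = 2^62 + 1738447214
actual TAI64 seconds          = 2^62 + 1738447241 (i.e. 2^62 + CLOCK_TAI)

The libtai-style label lands 27 seconds before the real TAI instant.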
Unfortunately, since libtai is the reference implementation, other implementations assumed it was calculating correctly, and also produce incorrect results on most systems. A small sample survey of TAI64 implementations on GitHub found that most libraries follow libtai’s “UTC plus 10 seconds” calculation, while a minority accurately accounts for UTC leap seconds and produces correct TAI64 timestamps. Hopefully those two library populations don’t need to interact in the wild!
What does this mean for WireGuard, one of the victims of this unfortunate history? Fortunately, this has zero impact on WireGuard. It’s producing “UTC+10s” timestamps instead of TAI64N, just like libtai. However, timestamps sent by a peer are only ever compared against other timestamps sent by that same peer, and so the error cancels out.
EOM
To summarize: on at least Linux and FreeBSD, in a normal configuration, time and clock_gettime’s CLOCK_REALTIME return the number of seconds between the current UTC timestamp and the Epoch timestamp, which does include UTC’s leap seconds. On other OSes, test the behavior and see what comes out, but I suspect that these days they’re all outputting values that agree with UTC.
If you need TAI on Linux, and your system is synchronizing its time from the network (you really should), you can use CLOCK_TAI to get that directly. Elsewhere, you need to add the appropriate number of leap seconds to the CLOCK_REALTIME value to translate to TAI.
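As a closing sketch of the Linux path (assuming the kernel’s TAI offset has been set by your NTP daemon; the function name is just illustrative):

#include <stdint.h>
#include <time.h>

/* Sketch: build a TAI64 seconds label from Linux's CLOCK_TAI rather
   than by adding a fixed 10s to Unix time. When the kernel's
   leap-second offset is set, CLOCK_TAI already counts TAI seconds
   since 1970-01-01 00:00:00 TAI, so the only remaining adjustment is
   the 2^62 shift from the TAI64 definition - no magic +10. */
uint64_t tai64_now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_TAI, &ts);
    return 0x4000000000000000ULL + (uint64_t)ts.tv_sec;
}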