
Defining & Using Traceability
By Dr. Demetrios Matsakis


Why do these rules exist? The short answer, from the criminal end, is that crooks can make big money by manipulating clocks. Think of Paul Newman and Robert Redford, who starred in The Sting. The longer answer is that we want to prevent a repeat of 1815, when the Rothschilds legally and brilliantly used carrier pigeons to bring them news of Waterloo before ships could carry the message to the London stock exchange. The unbridled capitalism of yesteryear has since given way to government regulations requiring safeguards to prevent any entity from having a timing advantage in the financial markets. Regulators do not want any computer to be inherently the first to observe a large order being placed, with the power to quickly reach other markets, buy the same stock, and sell it to the original orderer at a slightly higher price. To be sure the safeguards are functioning correctly, you need traceability.

So how do you buy a traceable clock? Well, technically you can't, not even if you buy a top-of-the-line Masterclock GMR. That's because the definition of traceability is that the difference between your clock and Coordinated Universal Time (UTC), as realized at any official timekeeping lab, along with the uncertainty of that difference, must be known via an unbroken chain of documented measurements. So it's not only a matter of how good your clock is; it's how you installed its GNSS antenna, what cabling and amplifiers you have installed, and what records you kept. Two years ago I published some papers, along with NIST superstars Mike Lombardi and Judah Levine [3, 4], which gave traceability advice and warned about a few pitfalls. If you follow that advice and have an expert review your setup, you should be OK.

Yet even in the cold, emotionless forum of international timekeeping, there is no unanimity. While many companies, not just Masterclock, proudly boast of how their products can use GNSS to establish traceability, and of how long they can maintain microsecond-level traceability when GNSS fails, there is now a very heated tempest in a nanosecond-sized teapot. The focus of this dispute is the International Bureau of Weights and Measures (BIPM), which generates UTC and is located in Sèvres, on the outskirts of Paris, France. Every month it publishes not just the time offsets of the national labs with respect to UTC, but also the uncertainties of those offsets. For example, on Bastille Day, NIST's time was off by 0.4 nanoseconds (ns), plus or minus 2 ns.

What’s the problem?

No problem, in my opinion, though I should note that the algorithm currently used by the BIPM was conceived by me after attending a meeting of the Consultative Committee for Time and Frequency in 2004 [5, 6]. It has only been partially implemented [7], but I gave it another look to be sure it could efficiently handle the increasing quantity and diversity of time-transfer systems coming online. So last October I published an article in the BIPM's journal Metrologia on how the algorithm could be cast in matrix form to deal with these complexities.

There was nothing surprising about that. The shocker was to find another paper, by three BIPM staff members, published in the same issue as mine [8], with similar equations but very different conclusions! The difference lay in whether adding a poorly calibrated lab, whose measurements have a very large uncertainty but whose weight in the average does not take this into account, would make every other lab's difference with UTC more uncertain. Since UTC is an average of clocks worldwide, it should be obvious that if any clock in that average is assigned a much higher uncertainty, then the average will have a higher uncertainty, and the uncertainty of the difference between any clock and that average will also be higher. "Not so," say the trio.
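
To put that common-sense argument in symbols (my deliberately simplified notation, not the BIPM's, with all measurement errors assumed independent): if UTC is a weighted average of the lab clocks x_i, then

```latex
\mathrm{UTC} = \sum_i w_i\, x_i, \qquad
u^2(x_k - \mathrm{UTC}) = (1 - w_k)^2\, u_k^2 + \sum_{i \neq k} w_i^2\, u_i^2 .
```

If some lab j is assigned a much larger uncertainty u_j while its weight w_j is left untouched, the term w_j^2 u_j^2 grows, and with it the uncertainty of every other lab's difference from UTC.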

How can two contradictory papers, by people well-known in the field, get published at the same time? Certainly there was a breakdown in the journal’s referee process. But the beauty of science, in fact the story of science, is that Truth is not set by authority. Metrology is one arena where no one should care or ask who got the most votes!

How to resolve this? I went over the other paper extremely carefully, and I think their mistake was double-counting. They combined two equations, but then re-used one of them as if it were providing new information. Kind of like reading a headline in the morning paper at breakfast, then seeing the same headline in the same paper at the newsstand on your way to work, and thinking the story is twice as believable. I've teamed up with my own heavyweight, BIPM retiree Wlodek Lewandowski, and we will present our analysis at the ION-PTTI meeting at the end of January. Part of our paper is a Monte Carlo simulation of the entire process used by the BIPM to generate UTC, and the results confirm our common-sense expectations.
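
For readers who like to experiment, here is a toy version of that kind of simulation. It is a minimal sketch, not the BIPM's actual algorithm, and the five labs, their weights, and their uncertainties are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # Monte Carlo trials

# Invented per-lab measurement uncertainties (ns); lab 0 is poorly calibrated.
sigmas = np.array([10.0, 2.0, 2.0, 2.0, 2.0])
# Equal weights that do NOT account for lab 0's larger uncertainty.
weights = np.full(5, 0.2)

errors = rng.normal(0.0, sigmas, size=(N, 5))  # each row: one realization
utc = errors @ weights                         # the weighted-average "UTC"
print("u(lab1 - UTC), lab 0 poorly calibrated:",
      (errors[:, 1] - utc).std())

# Same experiment with lab 0 well calibrated: every other lab's
# difference from UTC becomes less uncertain, as common sense demands.
sigmas[0] = 2.0
errors = rng.normal(0.0, sigmas, size=(N, 5))
utc = errors @ weights
print("u(lab1 - UTC), lab 0 well calibrated:  ",
      (errors[:, 1] - utc).std())
```

The first printed uncertainty comes out noticeably larger than the second, purely because one badly calibrated clock leaks into the average.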

Will the community be convinced? I can’t tell you that, but I can tell you that we argue about nanoseconds so the real world can be secure at the microsecond. And while I may disagree with these particular scientists, brilliant as they are, I would in a heartbeat recommend your company hire them as expert witnesses should its traceability ever end up in court. The trick of distorting a truth by telling it twice might be just what you need!

But even better than a good lawyer would be following these best practices:

  1. Be aware of the exact meaning of the required standards. Is the traceability requirement expressed as a multiple of the standard deviation (a +/- 1-sigma error), or as an absolute limit?
  2. Adequate record-keeping and retention are important indicators of due diligence. Your accountant, lawyer, or regulatory agency may tell you how long you should retain them.
  3. Careful attention to calibration is needed to show your accuracy. Ease and resilience of verification is one reason why a simple timing chain is preferable to a long, multicomponent one.
  4. Re-calibrate at least twice a year, or whenever a secondary time source threatens to diverge from the principal time source by more than your tolerance.
  5. Your system should have a way to generate alarms and send them to you. An alarm condition should exist not just when a tolerance is exceeded, but when it appears a tolerance will be exceeded or when a secondary system diverges from the primary system. Regular human monitoring is always a good idea.
  6. Traceability at the sub-microsecond level is achievable via GNSS. The best way is an end-to-end comparison with a calibrated system (antenna, receiver, cables) available from a recognized authority, such as NIST. Another way is to sum the measured or manufacturer-provided delays of each component of your system and compute the uncertainty of the total calibration as the square root of the sum of the squares of the component uncertainties (a sketch of this arithmetic follows the list). The uncertainty of time from GNSS, at the point of reception at the antenna, can conservatively be taken as 10 ns, with another 10 ns for the ionosphere correction in single-frequency systems. Common-view comparisons using GNSS data made available by national timing labs can reduce that uncertainty somewhat. Standard techniques for detecting and then ignoring outliers should be followed.
  7. Compare the timing results from different GNSS systems. Even if your regulatory authority specifies a particular GNSS, be sure to save all the results as an integrity check.
  8. Traceability at the millisecond level can be achieved via NTP to a timing lab, where the uncertainty of each measurement should be taken as the round-trip time of the packet. Outliers can be excluded, and the data weighted by the inverse of the round-trip time (see the second sketch after this list). Technically the weight should be the square of the inverse, but I recommend the gentler approach for robustness. Your call. Users should configure their NTP clients to follow the lab's certified service and to point to two or more geographically separate servers. Timing labs do not publish the accuracy of their service, but the error budget is dominated by millisecond-level (at best) internet-related errors. A long-term comparison of NTP values with GNSS-derived values can verify the technique. If this is a supplementary system, the standards can be relaxed. An automated system should be set up to generate and send you an alarm.
  9. A backup system, or at least an alternate means of verification, is always a good idea. It could be a totally independent secondary GNSS reception system. Internet-based NTP is inexpensive and recommended as well.
  10. Your system should be able to continue without external input for a known amount of time. The holdover time typically depends on your local oscillator, with rubidium being the oscillator of choice in GNSS-disciplined oscillators. While the manufacturer’s specs should be reliable, you can easily test the delivered product. The length of time you wish to rely on holdover depends on the availability of an alternate verification means, and on the consequences of a momentary lapse.
  11. The failover mode, whether triggered by one or all GNSS constellations becoming unavailable or by your own system breaking down, should be tested.
  12. Precision Time Protocol (PTP) is preferred for transferring time over your local network. However, NTP can also deliver sub-millisecond RMS performance, provided the network is entirely under your control and the delays are constant and measured.
  13. Your system should be able to handle leap seconds, including negative leap seconds. Depending on your intended uses, it should also be able to remove the smears intentionally applied by companies such as Amazon, Bloomberg, Google, and Microsoft to avoid discontinuities due to leap seconds (a sketch of smear removal follows the list).
  14. Smart people always test. Smarter people do idiot-tests.
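
First, the delay-budget arithmetic from item 6. This is a minimal sketch; every component and number in the budget below is invented, and you would substitute your own measured or manufacturer-provided values:

```python
import math

# Hypothetical delay budget: (component, delay in ns, uncertainty in ns).
budget = [
    ("GNSS time at the antenna",   0.0, 10.0),  # conservative figure from item 6
    ("ionosphere (single-freq.)",  0.0, 10.0),
    ("antenna cable",            120.0,  2.0),
    ("inline amplifier",          15.0,  1.0),
    ("receiver internal delay",   30.0,  5.0),
]

total_delay = sum(delay for _, delay, _ in budget)
# Root-sum-of-squares combination of the component uncertainties.
total_unc = math.sqrt(sum(unc**2 for _, _, unc in budget))

print(f"total delay to correct for: {total_delay:.1f} ns")
print(f"combined uncertainty:       {total_unc:.1f} ns")
```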
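
Next, the NTP weighting from item 8. Again a sketch under stated assumptions: the (offset, round-trip) pairs are invented, the outlier cut is deliberately crude, and the weights use the gentler 1/RTT rather than the textbook 1/RTT squared:

```python
# (offset_ms, round_trip_ms) samples against a timing lab's NTP server.
samples = [(1.2, 20.0), (0.9, 18.0), (1.1, 25.0), (9.5, 180.0), (1.0, 22.0)]

# Crude outlier rejection: drop samples far from the median offset.
offsets = sorted(offset for offset, _ in samples)
median = offsets[len(offsets) // 2]
kept = [(o, rtt) for o, rtt in samples if abs(o - median) < 3.0]

# Weight each surviving sample by the inverse of its round-trip time.
weights = [1.0 / rtt for _, rtt in kept]
estimate = sum(o * w for (o, _), w in zip(kept, weights)) / sum(weights)
print(f"weighted offset estimate: {estimate:.2f} ms")
```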
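
Finally, the leap-second smear from item 13. A minimal sketch, assuming a positive leap second spread linearly over a 24-hour window; Google, for example, has documented a 24-hour linear smear centered on the leap second, but providers differ, so take the window from your source's published policy:

```python
from datetime import datetime, timedelta, timezone

def smear_offset_s(t: datetime, smear_start: datetime,
                   smear_seconds: float = 86400.0) -> float:
    """Seconds by which a linearly smeared clock differs from an
    unsmeared one at time t, ramping from 0 to 1 across the window."""
    frac = (t - smear_start).total_seconds() / smear_seconds
    return min(max(frac, 0.0), 1.0)

# Example window around the 2016-12-31 leap second (noon to noon, UTC).
smear_start = datetime(2016, 12, 31, 12, tzinfo=timezone.utc)
t = smear_start + timedelta(hours=6)
print(f"smear at {t}: {smear_offset_s(t, smear_start):.3f} s")  # 0.250 s

# To de-smear a timestamp from a smearing source, subtract the offset.
```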

[1] https://www.sec.gov/divisions/marketreg/rule613-info.htm

[2] "Commission Delegated Regulation (EU) 2017/574 of 7 June 2016 supplementing Directive 2014/65/EU of the European Parliament and of the Council with regard to regulatory technical standards for the level of accuracy of business clocks," Official Journal of the European Union, March 2017, L 87/148-151.

[3] "Timing Traceability of GPS Signals," D. Matsakis, J. Levine, and M. Lombardi, ION-PTTI, 2018.

[4] "Metrological and Legal Traceability of Time Signals," D. Matsakis, J. Levine, and M. Lombardi, Inside GNSS, March/April 2019.

[5] "The Evaluation of Uncertainties in UTC-UTC(k)," W. Lewandowski, D. Matsakis, G. Panfilo, and P. Tavella, Metrologia 42, 2005, pp. 1-9.

[6] "Analysis of Correlations and Link and Equipment Noise in the Uncertainties of [UTC-UTC(k)]," W. Lewandowski, D. Matsakis, G. Panfilo, and P. Tavella, IEEE Trans. UFFC (4), 2008, pp. 750-760.

[7] "A Generalizable Formalism for Computing the Uncertainties in UTC," D. Matsakis, Metrologia, 2020.

[8] "A First Step Towards the Introduction of Redundant Time Links for the Generation of UTC: The Calculation of the Uncertainties of [UTC-UTC(k)]," G. Panfilo, G. Petit, and A. Harmegnies, Metrologia, 2020.

[9] "On Systematic Uncertainties in Coordinated Universal Time (UTC)," D. Matsakis, ION-PTTI, 2017.


Dr. Demetrios Matsakis

Dr. Demetrios Matsakis attended MIT as an undergraduate and received his PhD in physics from UC Berkeley, where he studied under the inventor of the maser and laser, and built specialized ones to observe the interstellar dust clouds where stars are born. His first job was at the U.S. Naval Observatory, building water vapor radiometers and doing interferometry to observe quasars and galaxies at the edge of the observable universe. After developing an interest in clocks, Dr. Matsakis spent the next 25 years working hands-on with most aspects of timekeeping, from clock construction, to running the USNO's Time Service Department, to international policy. He has published over 150 papers and counting, but gets equal enjoyment out of beta-testing his personal ensemble of Masterclock products.
