* High-speed stock trading, where regulations often require precise timestamping.
* Cell phone networks, where adjacent towers have to hold their frequencies precisely, and synchronise their transmit and receive time windows to avoid interference.
* Certain types of distributed database [1].
At a broader level, guarantees/specifications about NTP accuracy are hard to find. If you google 'NTP accuracy' you'll find a very wide range of claims, from sub-millisecond to >1 second [2]
Indeed, a lot of the NTP userbase has timing requirements so lax they can 'smear' a leap second [3] over the course of 24 hours, tolerating a sustained error of hundreds of milliseconds. They've got good reasons for doing it - but it goes to show that big errors are possible even under normal, correct operation.
Therefore, if you have an application that needs single-millisecond-level accuracy, you won't find an NTP product specification saying it's what you want.
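For concreteness, here's a sketch of a 24-hour linear leap smear (the scheme Google has described publicly; some operators use a cosine ramp instead, and the window length varies):

```python
# Sketch of a linear 24-hour leap smear: the inserted second is spread
# evenly across the day, so every smeared clock reading is offset from
# true UTC by up to one full second.
SMEAR_SECONDS = 24 * 3600  # length of the smear window

def smear_offset(t_into_window):
    """Offset (seconds) applied at t_into_window seconds into the window."""
    return t_into_window / SMEAR_SECONDS  # grows linearly from 0 s to 1 s

# Halfway through the window the sustained error is 0.5 s -- exactly the
# "hundreds of milliseconds" of deliberate error mentioned above.
assert smear_offset(12 * 3600) == 0.5
```

Any client that needed single-millisecond accuracy would see this as a huge, sustained fault - which is the point: the smearing operators know their users don't.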
PTP is a great choice when you need your time-sync to leave the software domain, for instance locking an external PLL for frequency- and phase-aligned clocks.
PTP-enabled MACs and PHYs usually have a dedicated hardware output for a PPS (pulse-per-second) or faster clock, generated directly from the hardware PTP counters.
In the video world, we get great results synchronizing software events with NTP, but whenever you jump to a physical interface (DisplayPort, HDMI, SDI, etc.), NTP-based clocks are far too jittery.
I've seen some work using the PTP hardware in the MAC with the NTP protocol, a sort of hybrid approach, but don't have any first hand experience.
- PTP is for sub-nanosecond local networks, with possibly hundreds of packets per second, local (in network card) clocks, hardware timestamping everywhere
- NTP is for long-distance sync, with sophisticated statistical smoothing from several upstream servers
Basically, PTP assumes that network delays are deterministic. If that's true, it is very precise. If not, PTP is the wrong tool.
NTP assumes that network delays are stochastic, and uses sophisticated algorithms to account for this.
PTP has much simpler algorithms, which can be implemented in electronics, where internal timings can be characterized. NTP is more complex, and tends to run on a general purpose computer. This adds internal OS timings to the uncertainty.
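The stochastic-delay assumption is baked into NTP's basic arithmetic: each exchange yields four timestamps, and the offset formula is only exact when the outbound and return delays happen to be equal, so the daemon samples many exchanges and filters. A minimal sketch of that per-exchange computation:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP on-wire computation for one request/response:
    t1 = client send, t2 = server receive,
    t3 = server send,  t4 = client receive (all in seconds).
    Assumes the path delay is symmetric on average -- the stochastic
    part is what the clock-filter/selection algorithms deal with."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Example: client clock 10 ms behind the server, 40 ms symmetric round trip.
off, d = ntp_offset_delay(0.000, 0.030, 0.031, 0.041)
```

With asymmetric delays the same formula silently absorbs half the asymmetry into the offset estimate - which is exactly why PTP networks go to such lengths to make delays deterministic instead.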
Audio over IP solutions. Having all your workstations synchronized within a few samples makes it possible to greatly reduce I/O buffer size and thus latency.
Any organization that requires nanosecond precision uses PTP. In my experience this usually includes financial exchanges that want to measure latency in trade routing.
Certain regulations in the financial industry lead you towards using it.
For example, Europe's MiFID II requires some systems to time stamp with 1 µs increments and 100 µs accuracy. PTP is the most common way to sync systems that have this requirement.
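As a toy illustration of the granularity half of that requirement only - the hard part, 100 µs traceability to UTC, comes from the sync infrastructure (PTP), not from application code:

```python
import time

def order_timestamp_us():
    """Wall-clock timestamp truncated to 1 microsecond granularity,
    matching MiFID II's 1 µs increment requirement for the fastest
    trading systems. (Hypothetical helper name; the accuracy of the
    value still depends entirely on how the host clock is synced.)"""
    return time.time_ns() // 1_000  # integer microseconds since the epoch
```

The regulation's real teeth are in the traceability requirement, which is why firms deploy PTP with hardware timestamping rather than just recording more digits.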
Maybe a stupid question; if you're using GNSS as a time source, is it difficult to get reception? Do you have to put the receiver near a window or something? Or is it able to get signal even in cases where, for instance, my GPS watch would fail to get a lock?
> is it difficult to get reception? Do you have to put the receiver near a window or something?
The answer to that is "it depends".
Windows are known to work, but not all windows are made equal. Which direction your window is facing and whether you are in an urban canyon can make significant difference. Some windows also have films or coatings.
If you are working in tricky situations, then the quality of your antenna and the quality of your receiver will make a difference (e.g. multi-constellation support, interference mitigation, multipath rejection etc. etc.).
If you do not have access to a workable window, then your alternative options are:
- Accept slightly lower accuracy and use LW radio instead (e.g. MSF, DCF etc.)
- Rent a leased line to your country's national time laboratory. They will deliver you certified, traceable time with no roof access required and microsecond accuracy (backed by an SLA, i.e. accurate or your money back).
- Run a long cable to the roof (if money is no object you can even get super fancy setups that run exclusively on fibre, no coax in sight)
- Install an internal GNSS repeater system (still involves roof install, but gives you *much* more flexibility indoors for obvious reasons).
I guess that's the point of the highly accurate onboard clock. Even if you don't have GPS reception for some period of the day, as long as it does connect at least once every two days you'll still get accurate timekeeping.
For professional setups, typically you'd have a coax going from the roof somewhere to the rack inside the data center where your time server is located.
Same. Mine is in a basement, and didn't get any reception with the included tiny antenna. Bought one of those active antennas shaped like a small puck, and placed it near the ground-level window. Worked like a charm.
One alternative would be to place the whole GNSS receiver outside and only run a serial port and PPS signal from it (cat5 has just the right number of pairs for RS-422 UART, RS-422 PPS, and power - and is significantly cheaper than coax that is good enough for GNSS).
It needs at least the same signal your GPS watch needs. The better signal you can get it, the better time solution it'll get, and the better holdover timing it'll keep. Here's why:
Imprecision in GPS (GNSS in general) solutions comes from a number of sources: Orbital errors in the satellite ephemerides, atmospheric distortions to the signal, multipath at the receiver antenna, and a bunch of minor stuff we're not gonna worry about. Those are the big three.
Orbital errors, you can't do anything about. Post-processing can correct _position_ solutions once the errors are known, a technique called Precise Point Positioning (it takes hours or days after the fact to work back where the satellites must've actually been), but that doesn't apply to [this kind of] timing.
Atmospheric errors you can kind of do something about. Signals from satellites lower to the horizon take a longer path through the atmosphere, so they tend to have wackier pseudoranges. Most timing-grade receivers have a default "elevation mask" of 15 or 20 degrees above the horizon, to simply not even bother with these crap signals, and derive timing only from what's left. (There are of course atmospheric models and corrections to apply, but these are applied to everything. Still, start with a cleaner signal and you'll end up with a cleaner result.) This means you need clear sky view to the middle of the sky, in order to raise the chances of an adequate number of satellites passing the elevation mask.
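The elevation mask itself is trivial to express - a sketch, with a made-up satellite list of (PRN, elevation) pairs:

```python
ELEVATION_MASK_DEG = 15  # typical timing-receiver default (15-20 degrees)

def usable_satellites(sats, mask=ELEVATION_MASK_DEG):
    """Drop low-elevation satellites, whose long slant path through the
    atmosphere makes their pseudoranges noisy; keep the rest for the
    timing solution. `sats` is a list of (prn, elevation_deg) tuples."""
    return [prn for prn, elev in sats if elev >= mask]

# Hypothetical sky view: only the satellites at or above the mask survive.
in_view = [(5, 8.2), (12, 34.0), (17, 15.0), (23, 71.5)]
```

This is also why an urban-canyon or window-ledge antenna struggles: it's not just weaker signal, it's that the few satellites you *can* see may all sit below (or barely above) the mask.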
Multipath is huge. The reason survey-grade antennas have these enormous choke-ring structures around them is to eliminate multipath reflections off the surface of the ground below. Even cheap little antennas try to be more sensitive to signals from above than below, with limited success. (Especially if they're wrist-worn and thus not orientation-controlled. Ugh.) Any situation where the antenna is subject to multipath will have demonstrably worse precision as a result. So, most timing antennas are considerably elevated (gives the reflections an opportunity to fall apart), shielded (choke-ring mounts), or both. Cellular sites usually go with the elevation option: since there's already a giant lightning rod nearby, elevating the GPS antenna doesn't come with additional risk for the receiver.
Worse GPS reception means the receiver has a noisier signal to start with, and can't discipline its local oscillator as tightly. Even if you don't care about nanoseconds in the final application, you probably do care about _holdover performance_. And if your LO can be disciplined with nanosecond-accurate signals in the first place, it'll hold millisecond-accurate timing for days or weeks in a GPS outage. If the disciplining process is crap, the holdover will also be crap.
Doubtful of the last paragraph: What kind of clock oscillator 'learns' to keep better time by being adjusted from a better source? A normal crystal doesn't have any memory effect AFAIK.
In a typical real-life scenario where the GPS / antenna malfunctions, your server clock will just slowly drift, i.e. the crystal runs a bit too fast or a bit too slow.
The last paragraph is what FB's Time Card (and essentially any other rubidium-based frequency standard) does. You use the GNSS frequency output as a reference for a PLL that does fine tuning (on the order of tens of ppm) of the rubidium oscillator (or even a normal crystal oscillator). The reference oscillator modules usually have some kind of input for this fine tuning (in the crystal case this analog input typically controls the biasing of a varactor that introduces parasitic capacitance to the crystal; in the rubidium case it's usually done digitally, by changing the division factor in the feedback path of an internal PLL).
Edit: and for that matter even ntpd/chrony does this tuning in software and can compensate for an imprecise and/or slowly wandering host clock generator (obviously it cannot compensate for abrupt changes, and intentionally does not even try to).
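A toy model of that software frequency correction - heavily simplified, real ntpd/chrony use filtered PLL/FLL loops with sanity clamps, but the core idea is "learn the oscillator's ppm error and keep applying it":

```python
class SoftwareClock:
    """Toy sketch of ntpd/chrony-style frequency compensation: estimate
    the host oscillator's frequency error (in ppm) from reference
    samples, then apply it as a continuous correction so the clock
    stays close to true time even between reference updates."""

    def __init__(self):
        self.freq_ppm = 0.0  # current estimate of the oscillator's error

    def update(self, measured_drift_s, interval_s, gain=0.5):
        # Observed drift over the interval implies a residual frequency
        # error; fold a fraction of it into the running estimate.
        residual_ppm = measured_drift_s / interval_s * 1e6
        self.freq_ppm += gain * residual_ppm

    def corrected_interval(self, raw_interval_s):
        # Scale raw oscillator time by the learned error.
        return raw_interval_s * (1 - self.freq_ppm / 1e6)

# Oscillator running 50 ppm fast drifts +0.05 s over 1000 s:
clk = SoftwareClock()
clk.update(measured_drift_s=0.05, interval_s=1000, gain=1.0)
# clk.corrected_interval(1000) is now ~999.95 "true" seconds
```

This learned correction is also what survives a reference outage: the daemon keeps applying the last good ppm estimate, which is exactly the holdover behaviour discussed above - it handles slow wander, not abrupt jumps.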
For a quartz oscillator, it's typically done by holding the crystal at a constant temperature ("ovenized", "oven controlled"), and then varying its loading capacitance by manipulating the reverse bias voltage of a varactor (a diode whose junction capacitance varies in a well-defined way according to applied voltage). You connect a lots-of-bits DAC and a very stable amplifier to the varactor, and keep all those pieces inside the same oven so they're all isothermal.
Apply a very long time constant (typically on the order of 11h58m out to maybe 2 weeks) to the control loop that drives the DAC, and after a few loop-times you've got a quartz crystal whose period is precisely tuned and whose behavior will remain incredibly stable when the disciplining reference goes away. (As long as you're smart enough to notice the loss of reference and freeze the DAC value, rather than naively assuming there's an enormous difference and slamming the control loop against a limit and destroying your carefully-honed performance. Which is what pretty much everyone implementing this from scratch does at least once.)
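The freeze-on-loss-of-reference logic is worth sketching, since it's the part everyone gets wrong once. All the numbers here are made up for illustration (16-bit DAC, arbitrary loop gain):

```python
class GpsdoLoop:
    """Sketch of the disciplining loop described above: a slow integrator
    drives the DAC that sets the OCXO's varactor tuning voltage. On loss
    of the GPS reference, the DAC word is *frozen* -- integrating a
    garbage phase error would slam the loop against a rail and destroy
    the carefully-honed tuning."""

    def __init__(self, dac_word=32768.0, gain=0.001):
        self.dac_word = dac_word  # start at midscale of a 16-bit DAC
        self.gain = gain          # deliberately tiny: very long time constant

    def step(self, phase_error_ns, reference_ok):
        if not reference_ok:
            return self.dac_word  # holdover: freeze, don't integrate noise
        self.dac_word += self.gain * phase_error_ns
        # Clamp to the DAC's physical range.
        self.dac_word = min(max(self.dac_word, 0.0), 65535.0)
        return self.dac_word

loop = GpsdoLoop()
w_locked = loop.step(phase_error_ns=100.0, reference_ok=True)
# Antenna falls off; the (meaningless) huge phase error is ignored:
w_holdover = loop.step(phase_error_ns=1e9, reference_ok=False)
```

The naive version - treating loss-of-lock as an enormous phase error - is the "slamming the control loop against a limit" failure mode described above.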
For rubidium atomic clocks, the disciplining process is usually done by varying the magnetic field applied to the vapor cell. Since these work by measuring the energy absorbed by a specific hyperfine transition, but that energy is only well-defined at zero static magnetic field, it's necessary for the physics package to cancel out the Earth's magnetic field. This is done with a set of Helmholtz coils, but how do you know how much current to put through them? By comparing the measured frequency with an external reference of higher quality. Since Rubidium oscillators are inherently several orders of magnitude more stable than quartz, this is done with very long time-constants (since any apparent error is probably jitter on the GPS receiver's part).
Ovenized quartz GPSDOs are common at cellular tower sites, whereas rubidium finds more application at central offices. I own clocks that exploit both of these mechanisms. (The rubidium is down right now while I build a new DAC for the tuning voltage that drives the coil current amplifier.)
Our central NTP servers are getting, per "ntpq -c kerninfo", sub-millisecond error.
That's talking to a couple of Stratum 1, and a bunch of S2, servers.