To me that looks like they are reinventing NTP, but not addressing all the issues of PTP.<p>A big problem with the PTP unicast mode is an almost infinite traffic amplification (useful for DDoS attacks). The server is basically a programmable packet generator. Never expose unicast PTP to the internet. In SPTP that seems to be no longer the case (the server is stateless), but there is still the follow-up message causing a 2:1 amplification. I think something like the NTP interleaved mode would be better.<p>It seems they didn't replace the PTP offset calculation, which assumes a constant delay (broadcast model, sketched below). That doesn't work well when the distribution of the delay is not symmetric, e.g. when errors in hardware timestamping on the NIC are sensitive to network load. They would need to measure the actual error of the clock to see that (the graphs in the article seem to show only the offset measured by SPTP itself, a common issue when improvements in time synchronization are demonstrated).<p>I think a better solution taking advantage of existing PTP support in hardware is to encapsulate NTP messages in PTP packets. NICs and switches/routers see PTP packets, so they provide highly accurate timestamps and corrections, but the measurements and their processing can be full-featured NTP, keeping all its advantages like resiliency and security. There is an IETF draft specifying this:<p><a href="https://datatracker.ietf.org/doc/draft-ietf-ntp-over-ptp/" rel="nofollow">https://datatracker.ietf.org/doc/draft-ietf-ntp-over-ptp/</a><p>Experimental support for NTP-over-PTP is included in the latest chrony release. In my tests with switches working as one-step transparent clocks, the accuracy is the same as with PTP (linuxptp).
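<p>To illustrate the symmetric-delay assumption, here is a minimal sketch (my own, not from the article or any implementation) of the two-way exchange that both NTP and the PTP delay-request/response mechanism rely on; the timestamp values are made up:<p><pre><code>// Sketch only: the classic two-way time transfer exchange.
// t1 = client send, t2 = server receive,
// t3 = server send, t4 = client receive (all hypothetical values).
package main

import "fmt"

func main() {
	var t1, t2, t3, t4 int64 = 1_000_000, 1_500_300, 1_500_400, 2_000_500 // ns

	offset := ((t2 - t1) + (t3 - t4)) / 2 // exact only if both directions have equal delay
	delay := (t4 - t1) - (t3 - t2)        // round-trip time minus server processing time

	fmt.Println("offset:", offset, "ns, round-trip delay:", delay, "ns")
}
</code></pre><p>Any asymmetry between the two directions shows up as an offset error of half the asymmetry, and the exchange itself cannot detect it, which is why an independent measurement of the clock error is needed to judge the real accuracy.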
Facebook continues to follow the Yahoo and AOL trajectory of exceptional and generous engineering contributions amidst an increasingly disliked suite of commercial offerings.<p>Reminds me of a project idea: list out all the big companies that have GitHub projects, like Comcast, Walmart, Verizon, Target, and even <a href="https://github.com/mcdcorp">https://github.com/mcdcorp</a>
Does anyone know the differences between Meta's application of the Precision Time Protocol and Google's TrueTime? I was hoping to find some discussion in the article but found none.<p>The 2022 article on the Precision Time Protocol says (<a href="https://engineering.fb.com/2022/11/21/production-engineering/precision-time-protocol-at-meta/" rel="nofollow">https://engineering.fb.com/2022/11/21/production-engineering...</a>):<p>- Adding precise and reliable timestamps on a back end and replicas allows us to simply wait until the replica catches up with the read timestamp...<p>- As you may see, the API doesn’t return the current time (aka time.Now()). Instead, it returns a window of time which contains the actual time with a very high degree of probability...<p>Which sounds similar to TrueTime (<a href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" rel="nofollow">https://static.googleusercontent.com/media/research.google.c...</a>):<p>- A read-only transaction executes in two phases: assign a timestamp sread [8], and then execute the transaction’s reads as snapshot reads at sread. The snapshot reads can execute at any replicas that are sufficiently up-to-date...<p>- TT.now() returns TTinterval: [earliest, latest]<p>I tried Googling "Precision Time Protocol TrueTime" but the only reference I could find is an HN comment by someone else from 2022 :) <a href="https://news.ycombinator.com/item?id=33696752">https://news.ycombinator.com/item?id=33696752</a>
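<p>For what it's worth, both quoted APIs deal in an uncertainty window rather than a single instant. A rough sketch of what such an interface could look like (the names and the error-bound handling are my own assumptions, not Meta's or Google's actual API):<p><pre><code>// Hypothetical sketch of a TrueTime-style clock API; not Spanner's or
// Meta's actual implementation.
package main

import (
	"fmt"
	"time"
)

// Interval bounds the true time: Earliest <= true time <= Latest.
type Interval struct {
	Earliest time.Time
	Latest   time.Time
}

// Now widens the local clock reading by the current error bound, which a
// real system would derive from its PTP/NTP offset and delay measurements.
func Now(errorBound time.Duration) Interval {
	t := time.Now()
	return Interval{Earliest: t.Add(-errorBound), Latest: t.Add(errorBound)}
}

// WaitUntilPast blocks until ts is guaranteed to be in the past on every
// node, the "wait until the replica catches up" / commit-wait idea both
// quotes hint at.
func WaitUntilPast(ts time.Time, errorBound time.Duration) {
	for Now(errorBound).Earliest.Before(ts) {
		time.Sleep(100 * time.Microsecond)
	}
}

func main() {
	iv := Now(500 * time.Microsecond)
	fmt.Println("true time is between", iv.Earliest, "and", iv.Latest)
}
</code></pre>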