Cyclone Time Technology
Deriving a Consistent Time Base Using Local Clock Information
Ashok Agrawala, Moustafa Youssef, Bao Trinh
University of Maryland, College Park, MD 20742
Some Common Characteristics
- Peer-to-peer architecture
- The scheme relies only on local information, or what nodes can obtain from their immediate neighbors
- No central/master clock
- Fast convergence
Clock Model
- Each node has a local clock which ticks at a constant rate
- The clock is stable, in that its drift rate does not change rapidly
- Local time can be recorded at any instant by reading the clock, which is an integer register incremented every tick
- Local time at any node A is represented as c_A(t) = a_A · t + b_A, where c_A(t) is the current local time at node A at time instant t, a_A is the drift rate of the clock at node A, and b_A is the offset
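The clock model above can be sketched directly in code. The tick quantization step and the parameter values below are illustrative assumptions, not from the slides.

```python
# Clock model from the slide: local time c_A(t) = a_A * t + b_A, read from
# an integer register incremented every tick.  The tick size and parameter
# values are illustrative assumptions.

def local_time(t, drift_rate, offset, tick=1.0):
    """Integer clock register: the ideal affine clock, quantized to ticks."""
    return int((drift_rate * t + offset) // tick)

# A clock running 25% fast (exaggerated for illustration) with offset 0.5:
print(local_time(8, 1.25, 0.5))   # -> 10
```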
Two Nodes
Assumptions
- All nodes are connected, in that there is a path from any node to every other node.
- The transit time for a message from node A to node B, ΔAB, is fixed (relaxed later).
- Each node is capable of timestamping an incoming message with its local clock time to within a clock tick.
- Each node is capable of sending a message at a defined local time to within a clock tick.
- If there is variability in the timestamping operation, it gets lumped into the variability in the transit time.
Outline
- Introduction
- Drift correction scheme (Cyclone)
- Results
- Virtual clock scheme
- Conclusions
Scheme 1: Drift Correction (Cyclone)
- Goal: correct drift at all nodes
- Each node sends a beat message at times it determines from its local information
- This message is only a time marker, with no other information bits
- Each node uses a common constant K whose value is fixed at design time
Two Nodes
[Figure: beat timelines of Node A (pA(0), pA(1), pA(2), …) and Node B (pB(0), pB(1), pB(2), …)]
Algorithm
1. Initially, each node sends the 0th beat at some local time p(0).
2. Each node sends the 1st beat at p(1) = p(0) + K, where K is the common design-time constant.
Algorithm
3. For subsequent beats, node B timestamps the nth beats arriving on its k_B incoming links as x_1^B(n), …, x_{k_B}^B(n), and takes its own nth beat time as x_B^B(n) = p_B(n).
4. It sends the (n+1)st beat at
   p_B(n+1) = (1 / (k_B + 1)) · [ x_B^B(n) + Σ_{i=1}^{k_B} x_i^B(n) ] + K
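A toy run of step 4 can show the convergence behavior. Working in real time, averaging the heard timestamps and adding the common constant K is equivalent to P_B(n+1) = avg(own and delayed neighbor beats) + K / a_B, where a_B is B's drift rate. The 3-node chain topology, drift rates, delays, and K below are illustrative assumptions.

```python
# Toy simulation of the beat update (step 4), as I read the slides: in real
# time, node i's next beat lands at the average of the beat times it heard
# (its own plus delayed neighbors') plus K scaled by its drift rate.
# Topology (3-node chain), drifts, delays, and K are illustrative.

K = 1.0
drift = [1.0 + 100e-6, 1.0 - 50e-6, 1.0 + 20e-6]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
delay = {(1, 0): 0.01, (0, 1): 0.01, (2, 1): 0.02, (1, 2): 0.02}

P = [0.0, 0.0, 0.0]            # real-time instants of the 0th beats
prev = P
for n in range(200):
    new = []
    for i in range(3):
        heard = [P[i]] + [P[j] + delay[(j, i)] for j in neighbors[i]]
        new.append(sum(heard) / len(heard) + K / drift[i])
    prev, P = P, new
    if n in (0, 20, 199):
        cycles = [P[i] - prev[i] for i in range(3)]
        print(n, max(cycles) - min(cycles))   # spread of cycle lengths shrinks
```

The printed spread of per-node cycle lengths drops rapidly toward zero, matching the claim that all nodes settle on a common beat period.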
Analysis
Applying step 4 at node A in the two-node case gives
  p_A(n+1) = ½ [ p_A(n) + p_B(n) + Δ_BA ] + K
Similarly, at node B,
  p_B(n+1) = ½ [ p_B(n) + p_A(n) + Δ_AB ] + K
Therefore the successive beat times follow a stochastic averaging recurrence, and the cycle lengths p_A(n+1) − p_A(n) and p_B(n+1) − p_B(n) converge to a common value.
Analysis: Matrix Notation
Thus at each step we carry out a distributed calculation of x(n+1) = A · x(n) + K · 1. As A is a stochastic matrix, this iteration converges, with all entries of the cycle-length vector taking the same value. The convergence rate is determined by the second dominant eigenvalue of A.
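The eigenvalue claim is easy to check numerically. Below, A is an illustrative row-stochastic matrix for a 3-node chain (my choice, not the slides'), whose second dominant eigenvalue is 0.5; the spread of successive differences of x decays at that rate.

```python
# Numeric check of the convergence claim for x(n+1) = A x(n) + K*1 with a
# row-stochastic A.  This 3-node chain matrix (an illustrative choice) has
# second dominant eigenvalue 0.5, so the spread of the per-node increments
# should shrink by about that factor per step.

A = [[1/2, 1/2, 0.0],
     [1/3, 1/3, 1/3],
     [0.0, 1/2, 1/2]]
K = 1.0

def step(x):
    return [sum(A[i][j] * x[j] for j in range(3)) + K for i in range(3)]

x = [0.0, 0.7, 0.3]
spreads = []
for _ in range(25):
    prev, x = x, step(x)
    d = [x[i] - prev[i] for i in range(3)]      # per-node increments
    spreads.append(max(d) - min(d))

print(spreads[-1] / spreads[-2])   # ratio approaches |lambda_2| = 0.5
```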
Practical Considerations
- Transit delay
  - We assume it to be constant. If it has some variability, it will have to be treated as a random variable; in that case the degree of clock synchronization depends on the jitter in the transit delay.
  - When the transit time is much larger than the cycle time, special steps are required at the beginning of operations.
- Finite precision
  - The development above assumed infinite-precision arithmetic and infinite-resolution clocks. The resulting errors are small perturbations to the calculations, but we have to make sure their effects do not accumulate. This requires phase locking.
Outline
- Introduction
- Drift correction scheme (Cyclone)
- Results
- Virtual clock scheme
- Conclusions
Simulation Parameters
- Clock tick time: 100 ps (10 GHz)
- Cycle time: 125 μs (8000/sec)
- Latencies: random, between 0 and 80 cycles
- Topologies: chain, star, random
- Drift rate: ±100 ppm
Simulation Results
- Convergence achieved every time
- On convergence, jitter of 1-2 clock ticks
- Long-term stability: similar jitter and no drift after 12 hours of simulation time
Convergence Time for Different Network Topologies

# Nodes | Star | Chain (bidirectional) | Random
20      | 2    | 5     | 4
50      | 15   | 62.5  | 31.125
100     | 200  | N/A   | 500
1000    |      |       |

Convergence numbers are in terms of cycles. Assuming 8000 cycles/sec (for a 10 GHz clock), the worst case is the 100-node chain at 500K cycles, or 62 sec. The 1000-node random topology converges in half that time, probably due to a smaller diameter.
Effects of Perturbations
- Transit time
  - Varied by 10%
  - Resulting jitter is no more than the transit-time change
  - Stabilizes very fast after that
- Clock drift
  - Also varied by 10%
  - Again, a step change results in immediate jitter determined by the change in clock drift
  - Stabilizes very fast
Transit Delay and Convergence
Latency Perturbations

Cycle-time jitter (CTJ) per node under latency perturbations of 0.01%, 0.10%, and 1%:

Node ID | 0.01%  | 0.10%  | 1%
1       | 0.000% |        |
2       | 0.001% | 0.008% | 0.055%
3       |        | 0.007% | 0.064%
4       |        |        | 0.074%
5       |        |        | 0.072%
6       |        |        | 0.083%
7       |        |        | 0.073%
8       |        |        |
9       |        |        |
Drift Rate Perturbation

Cycle-time jitter (CTJ) per node under drift-rate perturbations of 100 PPM, 10 PPM, and 1 PPM:

Node ID | 100 PPM | 10 PPM | 1 PPM
1       | 0.000%  |        |
2       | 0.014%  | 0.001% |
3       | 0.004%  |        |
4       | 0.008%  |        |
5       | 0.007%  | 0.002% |
6       | 0.011%  |        |
7       | 0.012%  |        |
8       | 0.015%  |        |
9       | 0.010%  |        |
10      | 0.009%  |        |
Implications
- This scheme achieves clock synchrony, in that all nodes achieve a common cycle value p*
- The local value p_A is different at each node
- The beat instants are not synchronized, but they do not drift
- How do we achieve a common clock reading?
Outline
- Introduction
- Drift correction scheme (Cyclone)
- Results
- Virtual clock scheme
- Conclusions
Virtual Global Clock
- A clock with parameters a* and b*
- We define a scheme such that, based only on local information, any node can convert its local time to the time at this virtual global clock
- Key idea: use a distributed consensus technique
Assumptions
For this discussion we add two additional assumptions:
- All connections are bidirectional
- Transit times in the two directions are the same
Approach
- Carry out the first scheme and reach convergence
- At convergence, every node's cycle corresponds to the same real-time period p*, so the time when node A sends its n_A-th beat is determined by n_A, its local cycle length, and its initial beat time
Two-Node Case
- As part of the beat message, node A also sends:
  - its converged cycle length
  - its current cycle number
  - its current A value
- Node B sends similar values
Calculations
- Node A timestamps the arrival of B's beat; with symmetric transit times it can relate B's beat instants to its own clock
- Similar calculations are done by node B
- Node A can convert its local time to the time at node B using B's advertised beat time, the local arrival time of that beat, and the ratio of the converged cycle lengths
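One plausible form of this conversion, under the symmetric-transit assumption: node A maps its current reading onto B's clock through B's advertised beat timestamp, the local arrival time of that beat, and the ratio of converged cycle lengths. The function name, variable names, and exact formula below are my sketch, not the slides'.

```python
def to_neighbor_time(c_A_now, y_A, x_B, p_A, p_B, delta_B):
    """Estimate node B's current clock reading from node A's.

    c_A_now : A's current local reading
    y_A     : A's local arrival timestamp of B's n-th beat
    x_B     : B's advertised local send time of that beat
    p_A,p_B : converged local cycle lengths (their ratio equals the drift ratio)
    delta_B : transit time expressed in B's ticks
    """
    return x_B + delta_B + (c_A_now - y_A) * (p_B / p_A)

# Consistency check with two ideal affine clocks (values illustrative):
a_A, a_B, d = 1.0001, 0.9999, 0.01      # drift rates, real transit time
t0, t_now = 100.0, 105.0                # beat send time and "now", real time
x_B = a_B * t0                          # B's local send timestamp
y_A = a_A * (t0 + d)                    # A's local arrival timestamp
est = to_neighbor_time(a_A * t_now, y_A, x_B, a_A, a_B, a_B * d)
print(abs(est - a_B * t_now))           # recovers B's reading, tiny error
```

Passing the drift rates as p_A and p_B works here because each converged local cycle length equals drift rate times the common real period p*, so only the ratio matters.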
Multinode Operations
- When this phase starts, node A initializes its conversion values
- For each of its incoming links, node A calculates the pairwise conversion, as in the two-node case
- It initializes its A and B values from these
Multinode Operations
For each subsequent cycle, node A calculates the new values of A and B as averages of the incoming values of A and B, adjusted to the local scale.
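A sketch of one such consensus step, under my reading of "adjusted to the local scale": a neighbor's scale/offset pair is first composed with the pairwise conversion from the local clock to that neighbor's clock, and all candidates are then averaged. The names (alpha/beta for the slides' A and B values) and numbers are illustrative.

```python
# One consensus step on the conversion pairs, as I read this slide: each
# incoming (alpha_j, beta_j) pair is composed with the pairwise conversion
# c_j = s * c_local + o ("adjusted to the local scale"), then averaged.

def adjust(neighbor_pair, conv):
    """Compose neighbor j's virtual-clock map with the local->j conversion."""
    (alpha_j, beta_j), (s, o) = neighbor_pair, conv
    return alpha_j * s, alpha_j * o + beta_j

def consensus_step(own_pair, incoming):
    cands = [own_pair] + [adjust(p, c) for p, c in incoming]
    n = len(cands)
    return (sum(a for a, _ in cands) / n, sum(b for _, b in cands) / n)

# If both nodes already hold exact pairs (each maps its local time to the
# same virtual clock), the step is a fixed point:
a0, b0, a1, b1 = 1.0001, 3.0, 0.9998, -1.5     # two local clocks (illustrative)
exact0 = (1 / a0, -b0 / a0)                    # virtual clock = real time here
exact1 = (1 / a1, -b1 / a1)
conv01 = (a1 / a0, b1 - (a1 / a0) * b0)        # c_1 = s * c_0 + o
new0 = consensus_step(exact0, [(exact1, conv01)])
print(abs(new0[0] - exact0[0]), abs(new0[1] - exact0[1]))
```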
On Convergence
Node A has values A_A and B_A. It can convert its local clock reading c_A to the virtual global clock as c* = A_A · c_A + B_A.
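The final conversion is a single affine map. A minimal sketch, calling the pair (alpha, beta) and using illustrative drift/offset values; the converged pairs are chosen here so the virtual clock coincides with real time, which is one valid choice of (a*, b*):

```python
# After convergence, each node applies its pair (the slides' A and B values,
# here alpha/beta) to map any local reading onto the virtual global clock.

def to_virtual(c_local, alpha, beta):
    return alpha * c_local + beta

# Two nodes read their clocks at the same real instant t = 50 and agree
# on the virtual time (converged pairs and clock parameters illustrative):
t = 50.0
virt = []
for a_i, b_i in [(1.0001, 3.0), (0.9998, -1.5)]:
    c_i = a_i * t + b_i                     # local reading
    alpha_i, beta_i = 1 / a_i, -b_i / a_i   # pair mapping back to real time
    virt.append(to_virtual(c_i, alpha_i, beta_i))
print(max(virt) - min(virt))                # both nodes agree on virtual time
```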
Current Status
- Simulation results confirm the claims
- Working on prototype implementations using standard NICs
Comparisons

Feature | CTT | IEEE-1588 | NTP | GPS | TTP | SERCOS
Spatial extent | General | A few subnets | Wide area | | Local bus |
Communications | Network | | Internet | Satellite | Bus or star | Bus
Target accuracy | Sub-microsecond | | Few milliseconds | | |
Style | Distributed | Master/slave | Peer ensemble, client/server | | | Master/slave
Resources | Small network message and computation footprint | Moderate network and computation footprint | Moderate computation footprint | Moderate | |
Latency correction | | Yes | Configured | No | |
Drift correction | | | | | |
Protocol specifies security | | No (V2 may include security) | | | |
Administration | Self-organizing | | | N/A | |
Hardware? | | For highest accuracy | | RF receiver and processor | |
Update interval | | ~2 seconds | Varies, nominally seconds | ~1 second | Every TDMA cycle, ~ms |
Concluding Remarks
- Use of the consensus approach simplifies clock synchronization
- As the scheme depends only on local information, it is highly scalable
- Primary results to date: analytic, simulation
- Open items:
  - Implementations
  - Appropriate estimators/filters
  - Practical considerations: node failure, node joining, …