Slide 1 — Title
REFAG2010, Paris — October 8, 2010, session 2
Distributed processing systems for large geodetic solutions
IAG WG 1.1.1 "Comparison and combination of precise orbits derived from different space geodetic techniques"
Henno Boomkamp
Slide 2 — Introduction
Objectives of IAG WG 1.1.1:
1. Study systematic errors between orbits based on different data
2. Improve models and solution strategies
3. Develop the means to allow the above
Key solution mechanism: simultaneous analysis
- simultaneous (re-)processing of geodetic datasets is either free of biases, or allows estimation and analysis of inter-system biases
Concrete targets:
- analysis of all GPS / GNSS data in a single solution (Dancer)
- analysis of all geodetic data in a single solution (Digger)
Slide 3 — Three WG projects
Digger (… reprocessing) — consistency among techniques
- data types: all
- long arc (years)
- long latency (1 week)
- key technology: BOINC
Dart (Dancer-RTK) — real-time access to an accurate ITRF
- data types: GNSS only
- very short arc (real time)
- zero latency
- key technology: BURST
Dancer — large-scale, direct-access ITRF
- data types: GNSS & VLBI
- short arc (48 hrs)
- short latency (30 min)
- key technology: JXTA
Effort: Dancer 90 %, Digger 9 %, Dart 1 %
Slide 4 — The trouble with GPS
- Limited processing capacity of the ACs: the current IGS approach would require hundreds of Analysis Centres
- Most data are not available at short latency, which reduces the statistical quality of the ITRF and reduces the relevance of the ITRF
- 20,000 permanent GPS receivers; 5,000 public; 400 in the ITRF
If we want to run a conventional batch LSQ solution for all receivers:
- distribution over many computers is inevitable
- geographical separation of the computer cluster is inevitable
Slide 5 — 10 GPS sites = 10 PCs = 10 ACs
Slide 6 — Dancer overview
- Dancer brings the analysis to the data rather than vice versa
- The LSQ solution is implemented as a peer-to-peer process on the internet
- Based on existing JXTA P2P software (Sun Microsystems)
- The natural separation of the analysis is by receiver, not by AC
- Geographical distribution of data is at the level of receivers
- The solution becomes scalable in the number of stations
- 99 % of estimated parameters can be pre-eliminated at receiver level
… and the required computers are readily available:
- every permanent receiver is connected to a local or remote computer
- most of these computers do nothing apart from RINEX ftp
- processing capacity is perfectly collocated with the data owners
Slide 7 — Separation of LSQ into tasks per receiver (1)
Local parameters are pre-eliminated at the receiver. The normal equations partition into global parameters (orbits, satellite clocks, pole) and local parameters per receiver j = 1 … N.
With 10,000 receivers:
- total solution: 30 million parameters, 170 million observations
- after pre-elimination: 90,000 global parameters
- single process: 15,000 non-zero equations
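The pre-elimination described above is a Schur-complement reduction of the partitioned normal equations: the local block is solved out so that only global parameters remain. A minimal NumPy sketch of that reduction (an illustration of the standard technique, not the Dancer implementation; the function name `pre_eliminate` is hypothetical):

```python
import numpy as np

def pre_eliminate(N_gg, N_gl, N_ll, b_g, b_l):
    """Fold the local parameters of one receiver into the global system.

    Partitioned normal equations:
        [N_gg  N_gl] [x_g]   [b_g]
        [N_lg  N_ll] [x_l] = [b_l]
    Eliminating x_l gives the reduced (Schur-complement) system for x_g.
    """
    X = np.linalg.solve(N_ll, N_gl.T)   # N_ll^-1 N_lg
    y = np.linalg.solve(N_ll, b_l)      # N_ll^-1 b_l
    N_red = N_gg - N_gl @ X             # reduced global normal matrix
    b_red = b_g - N_gl @ y              # reduced right-hand side
    return N_red, b_red
```

Solving the reduced system yields exactly the global-parameter part of the full solution, which is why each receiver can contribute only its reduced block to the network-wide exchange.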
Slide 8 — Separation of LSQ into tasks per receiver (2)
- The global normal equation represents the average equation of all receivers
- Dancer averages the diagonal D before the solution, and the right-hand-side vector afterwards
- This yields the same solution, thanks to the distributive property of matrix multiplication
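The equivalence claimed above rests on a simple fact: scaling the summed normal matrix and the summed right-hand side by the same factor (1/N) leaves the LSQ solution unchanged, so "averaged" and "stacked" normal equations give the same answer. A small numerical check of that property (a demonstration under this reading of the slide, not Dancer code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rcv, n_par = 4, 3

# each "receiver" j contributes its own normal-equation block N_j, b_j
Ns = [(lambda A: A.T @ A + np.eye(n_par))(rng.standard_normal((6, n_par)))
      for _ in range(n_rcv)]
bs = [rng.standard_normal(n_par) for _ in range(n_rcv)]

# stacked system vs averaged system: identical solution
x_sum = np.linalg.solve(sum(Ns), sum(bs))
x_avg = np.linalg.solve(sum(Ns) / n_rcv, sum(bs) / n_rcv)
assert np.allclose(x_sum, x_avg)
```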
Slide 9 — Averaging N vectors without a central server: the square dance algorithm
- N/2 pairs can be formed by toggling one bit of each node number 1 … N:
  011010110 ↔ 111010110
- Pair-wise exchange of vectors: both computers form the same sum
  x11010110 — the first bit has now become irrelevant
- New exchange pairs are formed by toggling the second bit:
  011010110 ↔ 101010110 → xx1010110 — the first two bits are now irrelevant; etc.
- After log2 N exchange cycles, all N computers hold the same vector
Some additional operations are necessary:
- folding nodes are introduced to make N an exact power of two
- N splits into 50 % core nodes and 50 % spare nodes for contingencies
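The exchange pattern described above is a hypercube all-reduce (recursive doubling): in round k, nodes whose IDs differ only in bit k swap and sum their vectors. A simulation sketch in Python, assuming N is already a power of two (the folding-node padding and spare-node handling from the slide are omitted):

```python
def square_dance(vectors):
    """Simulate the square-dance exchange: after log2(N) rounds of
    pair-wise sums, every node holds the sum of all N input vectors."""
    n = len(vectors)
    assert n > 0 and n & (n - 1) == 0, "N must be a power of two"
    vecs = [list(v) for v in vectors]
    bit = 1
    while bit < n:
        # toggling `bit` in each node ID forms N/2 disjoint exchange pairs
        for node in range(n):
            partner = node ^ bit
            if node < partner:
                s = [a + b for a, b in zip(vecs[node], vecs[partner])]
                vecs[node] = s            # both partners end up holding
                vecs[partner] = list(s)   # the same pair-wise sum
        bit <<= 1
    return vecs
```

Dividing each component by N afterwards turns the shared sum into the average, which is all that is needed to build the averaged global normal equation without any central server.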
Slide 10 — Core-node data volume
[Figure: one-way data volume per core node (MB) versus N, for different arc lengths / epoch rates]
Slide 11 — Dancer project status
[Figure: project schedule up to the beta release]
Slide 12 — Summary
Rigorous LSQ solutions for all GPS receivers are possible:
- the workload can be distributed over (some) existing hardware
Dancer has no data centres, analysis centres, combination centres, product centres, or central bureau:
- anonymous participation avoids political issues of data access
- differences between regional and global reference frames disappear
GNSS receivers become smart receivers:
- the Dancer process can be embedded on future receiver hardware
- a smart receiver generates products, not (just) observations
Other distributed processes to follow:
- Digger: simultaneous reprocessing of all geodetic techniques
- Dart: RTK layer on top of Dancer for global access
Slide 13 — www.GPSdancer.com
For further information:
- more details on the solution mathematics
- download the latest version of the software
- check project status
- e-mail contacts and web links
[Dancer screenshot]