Guaranteed Approach For Orbit Determination With Limited Error Measurements Zakhary N. Khutorovsky Interstate Joint-Stock Corporation “Vympel”, Moscow, Russia


Guaranteed Approach For Orbit Determination With Limited Error Measurements Zakhary N. Khutorovsky Interstate Joint-Stock Corporation “Vympel”, Moscow, Russia Alexander S. Samotokhin M.V. Keldysh Institute of Applied Mathematics of Russian Academy of Sciences, Moscow, Russia Sergey A. Sukhanov Interstate Joint-Stock Corporation “Vympel”, Moscow, Russia and Kyle T. Alfriend Texas A&M University, College Station, TX 77843, USA

2 PREVENTING POTENTIAL COLLISIONS BETWEEN ARTIFICIAL SPACE OBJECTS (SO) IS THE MAIN PURPOSE Two types of errors are possible: a missed collision (probability α) and a false alarm (probability β). The errors α and β depend on: 1. the accuracy of the method used to determine the predicted SO position; 2. how closely the calculated errors of the predicted object position correspond to the actual errors. IS IT POSSIBLE TO SUBSTANTIALLY REDUCE THE FREQUENCY OF MISSED COLLISIONS AND FALSE ALARMS USING ALREADY AVAILABLE MEASUREMENT INFORMATION?

3 FEATURES OF EXISTING ALGORITHMS 1. The statistical approach is applied. 2. The algorithms are based on the method of least squares (LS) and its recurrent modifications. 3. The algorithms are globally optimal for a Gaussian distribution of measurement errors and optimal within the class of linear algorithms for any error distribution. 4. The errors of real measurements are non-Gaussian, and nonlinear algorithms may exist as well. Therefore the existing algorithms do not, in general, provide the minimal errors of the predicted SO positions. 5. The algorithms do not provide correlation matrices of the predicted-position errors that correspond to the actual values of these errors.

4 IS IT POSSIBLE TO CREATE ALGORITHMS THAT CALCULATE PREDICTED SPACE OBJECT POSITIONS FROM THE MEASUREMENTS AND HAVE THE FEATURES LISTED BELOW? 1. CORRESPONDENCE TO THE PROPERTIES OF REAL ERRORS (the distribution is not known; the non-anomalous values are bounded by known constants) IS BETTER THAN IN THE EXISTING ALGORITHMS. 2. CALCULATED VALUES OF THE ERRORS CORRESPOND TO THEIR REAL VALUES. 3. ACTUAL ERRORS OF THE PREDICTED SPACE OBJECT POSITION ARE NOT GREATER THAN THE ERRORS OF THE LEAST SQUARES METHOD? 1. YES 2. YES 3. sometimes YES

5 STATEMENT OF THE PROBLEM Let y = (y_1, y_2, …, y_n) be measurements of the functions I(x) = (i_1(x), i_2(x), …, i_n(x)), where x is an m-dimensional vector of parameters, and let t_1 ≤ t_2 ≤ … ≤ t_n be the measurement times. Let δ = (δ_1, δ_2, …, δ_n) be the measurement errors, bounded by a known constant ε: y = I(x) + δ, |δ_k| ≤ ε, k = 1,2,…,n. Let the target function be z = S(x). In our problem x are orbit parameters and z is the position vector at a certain time t. It is required to find an estimate z_ga = z_ga(y) of the parameter z with the least errors z_ga − z and to estimate the maximal values of these errors max |z_ga − z|. LINEARIZATION. LS estimate: x_LS = arg min_x (y − I(x))′·W·(y − I(x)), where W is the weight matrix of the measurements. Linearized problem: y = I·x + δ, z = S·x, |δ_k| ≤ ε, k = 1,2,…,n, where y := y − I(x_LS), x := x − x_LS, z := z − z_LS, I = ∂I/∂x(x_LS), S = ∂S/∂x(x_LS).

6 NON-STATISTICAL APPROACH 1. In mathematics, a theory of optimal algorithms in normed spaces has been developed. It was applied to mathematical problems of approximation in function theory, differential and integral equations, and other abstract mathematical disciplines. 2. In the 1980s this theory was adapted to estimation and prediction problems under the sole assumption that the measurement errors are bounded by known values. Since then this approach has found practical application in applied areas: power engineering, electronics, chemistry, biology, medicine, etc. 3. In the space information-analytical systems of the USA and Russia this approach has not been applied. Possible reasons are its complexity, uncertainty about its efficiency, and unwillingness to radically rebuild already existing systems. THE WORK IS DEVOTED TO THE DEVELOPMENT AND STUDY OF NEW ORBIT DETERMINATION METHODS BASED ON THIS APPROACH.

7 STATEMENT OF THE PROBLEM FOR THE NON-STATISTICAL APPROACH R = {r} ─ a linear normed space. A norm n(r) = ║r║ is a function on R satisfying the conditions: ─ nonnegativity: n(r) ≥ 0 (and n(r) = 0 implies r = 0) ─ homogeneity: n(αr) = |α|·n(r) ─ triangle inequality: n(r_1 + r_2) ≤ n(r_1) + n(r_2). Examples of norms: l_p = (∑_k |r_k|^p)^(1/p), where r_k is the k-th component of the vector r: ─ p = 2, the usual vector magnitude l_2 = (∑_k |r_k|²)^(1/2) ─ p = 1, the sum of the absolute values of the components l_1 = ∑_k |r_k| ─ p = ∞, the maximum absolute component l_∞ = max_k |r_k|. X ─ space of unknown elements (m-dimensional) Y ─ space of known measurements (n-dimensional, n ≥ m) Z ─ solution space (p-dimensional). In our applied problem the elements of X are orbit parameters, Y contains measurements of known functions of these parameters, and Z contains position coordinates at a certain time. The condition of bounded measurement errors |δ_k| ≤ ε, k = 1,2,…,n, is equivalent to the condition ║δ║ ≤ ε with the l_∞ norm.
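As a quick illustration (not part of the original slides), the three norms and the equivalence between componentwise error bounds and the l_∞ ball can be checked numerically; the vector values here are arbitrary:

```python
import numpy as np

r = np.array([3.0, -4.0, 1.0])

l2 = np.linalg.norm(r, 2)         # usual vector magnitude
l1 = np.linalg.norm(r, 1)         # sum of absolute components
linf = np.linalg.norm(r, np.inf)  # maximum absolute component

# Componentwise bounds |delta_k| <= eps describe exactly the
# l_inf ball of radius eps around zero.
eps = 0.5
delta = np.array([0.4, -0.5, 0.1])
assert np.all(np.abs(delta) <= eps) == (np.linalg.norm(delta, np.inf) <= eps)
```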

8 S: X → Z ─ solution operator S(x). I: X → Y ─ information operator I(x) (the mapping I(x) is assumed one-to-one). The measurement y ∈ Y is known; it is necessary to find a solution element z ∈ Z. If y = I(x) (the measurement is exact), then x = I^(-1)(y) and z = S(x), and the problem is solved. Usually y ≠ I(x) and y − I(x) = δ (measurement error), ║δ║ ≤ ε (ε ─ a known constant). A: Y → Z ─ an algorithm (it gives an approximation of the solution element z from the measurement y). U_x(y) = {x ∈ X : ║I(x) − y║ ≤ ε} ─ the uncertainty set of x for the measurement y U_y(x) = {y ∈ Y : ║I(x) − y║ ≤ ε} ─ the uncertainty set of y for the unknown element x U_z(y) = S{U_x(y)} ─ the uncertainty set of z for the measurement y

9 X ─ space of unknown elements (m-dimensional) Y ─ space of known measurements (n-dimensional, n ≥ m) Z ─ solution space (p-dimensional).

10 EXAMPLES OF THE ALGORITHMS 1. Correct algorithm A_cr: A_cr(I(x)) = S(x) for each x ∈ U_x(y). It is similar to the concept of an unbiased parameter estimate in the statistical approach. 2. Interpolated algorithm A_in: A_in(y) ∈ U_z(y) for each y ∈ U_y(x). D = max_{z1,z2 ∈ Uz(y)} ║z_1 − z_2║ ─ the diameter of the uncertainty set U_z(y); it is the guaranteed range of any interpolated estimate for any realization of the measurement errors, and it serves as a one-dimensional analog of the Fisher information matrix in the statistical approach. An interpolated algorithm adapts to the real measurement errors: the more often the errors in a particular measurement realization are close to the maximum possible value and have different signs, the smaller the error of such an algorithm. 3. Central algorithm A_cn: A_cn(y) = arg min_z (max_{v ∈ Uz(y)} ║z − v║). A_cn(y) is the Chebyshev center of the uncertainty set U_z(y), i.e. the point of Z for which the maximum distance to the points of U_z(y) is minimal. 4. Projective algorithm A_pr: A_pr(y) = S(x_pr), x_pr = arg min_x ║y − I(x)║. A_pr is the projection of the n-vector y of the space Y onto the m-dimensional subspace I(X); x_pr is the limit of the uncertainty set U_x(y) = U_x(y, ε) as it is contracted to a point by decreasing the error upper bound ε. The projective estimate x_pr is interpolated and robust (it does not depend on ε).

11 ONE-DIMENSIONAL CASE (m = 1) y_k = x + δ_k, |δ_k| ≤ ε (k = 1,2,…,n), z = x U_x(y) = {x ∈ R^1 : y_max − ε ≤ x ≤ y_min + ε} x_ga = A_cn(y) = A_pr(y) = (y_max + y_min)/2 D = 2ε − (y_max − y_min) |x_ga − x| ≤ ε − (y_max − y_min)/2
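The one-dimensional formulas above translate directly into code. Here is a minimal sketch (function and variable names are mine, not the authors'); note that the guaranteed half-width ε − (y_max − y_min)/2 bounds the actual error by construction whenever the errors really stay within ±ε:

```python
import numpy as np

def central_estimate_1d(y, eps):
    """Midrange (central = projective) estimate for y_k = x + delta_k,
    |delta_k| <= eps. Returns the estimate x_ga and the guaranteed
    half-width of the uncertainty interval [y_max - eps, y_min + eps]."""
    y_min, y_max = np.min(y), np.max(y)
    x_ga = 0.5 * (y_max + y_min)
    half_width = eps - 0.5 * (y_max - y_min)  # = D/2, bounds |x_ga - x|
    return x_ga, half_width

rng = np.random.default_rng(0)
x_true, eps, n = 10.0, 1.0, 200
y = x_true + rng.uniform(-eps, eps, size=n)

x_ga, bound = central_estimate_1d(y, eps)
x_ls = y.mean()                        # LS estimate for comparison
assert abs(x_ga - x_true) <= bound     # guaranteed, not just probable
```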

12

13 Comparison of the RMS estimation errors of LS and GA (asymptotic)

estimation formula            | uniform   | triangular  | gaussian
x_LS = (Σ y_i)/n              | σ/√n      | σ/√n        | σ/√n
x_GA = (y_max + y_min)/2      | ≈ 2.5σ/n  | ≈ 1.1σ/√n   | ≈ 1.3σ/log(n)

The ratio of the mean errors of the GA and LS estimates (simulation over 1000 realizations) by error distribution (uniform, triangular, gaussian-3σ) and number of measurements n.
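The qualitative content of the table can be reproduced with a small Monte Carlo experiment of the kind the slide reports; this sketch uses uniform errors and my own parameter choices, not the authors' simulation code:

```python
import numpy as np

rng = np.random.default_rng(1)

def rms_errors(n, trials=2000, eps=1.0):
    """RMS errors of the LS (mean) and GA (midrange) estimates of x = 0
    from n measurements with uniform errors on [-eps, eps]."""
    y = rng.uniform(-eps, eps, size=(trials, n))
    e_ls = y.mean(axis=1)                         # LS estimate of x
    e_ga = 0.5 * (y.max(axis=1) + y.min(axis=1))  # GA (midrange) estimate
    return np.sqrt((e_ls ** 2).mean()), np.sqrt((e_ga ** 2).mean())

ls_100, ga_100 = rms_errors(100)
ls_1000, ga_1000 = rms_errors(1000)
# LS shrinks ~ 1/sqrt(n); for uniform errors GA shrinks ~ 1/n,
# so the GA/LS ratio keeps improving as n grows.
```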

14 THE ALGORITHM ERRORS AND OPTIMALITY e(A, x, y) = ║A(y) − S(x)║ ─ the error of algorithm A e_x(A) = max_{y ∈ Uy(x)} e(A, x, y) ─ the error of algorithm A for the worst measurement errors e_y(A) = max_{x ∈ Ux(y)} e(A, x, y) ─ the error of algorithm A for the worst x ∈ U_x(y) Optimal algorithm A_opt,y: e_x(A_opt,y) = min_A e_x(A) ─ the algorithm minimizing the error e_x(A) for each x. Optimal algorithm A_opt,x: e_y(A_opt,x) = min_A e_y(A) ─ the algorithm minimizing the error e_y(A) for each y. The central algorithm is an A_opt,x algorithm. Optimal algorithm A_opt: e(A_opt) = min_A e_x(A) = min_A e_y(A) ─ an algorithm optimal in both senses.

15 THE ALGORITHMS IN THE NORM l_∞ AND THEIR OPTIMALITY The problem is linear: I(x) = I·x, S(x) = S·x, where I, S are n×m and p×m matrices. 1. A_cn = A_opt ─ the central algorithm is optimal. z_cn = A_cn(y) = 0.5·(z_M + z_m), z ∈ (z_cn − Δz_max, z_cn + Δz_max), Δz_max = 0.5·(z_M − z_m) z_k,m = min_{x ∈ Ux} (s′_k·x), z_k,M = max_{x ∈ Ux} (s′_k·x) subject to −ε ≤ y_q − i′_q·x ≤ ε, k = 1,2,…,p, q = 1,2,…,n where z_k,m, z_k,M are the components of the p-vectors z_m, z_M, and s′_k, i′_q are the k-th and q-th rows of the matrices S, I. This is equivalent to 2p linear programming problems in the m-dimensional space X. 2. The projective algorithm A_pr is interpolated but not optimal. The projective estimate x_pr is the solution of the linear programming problem min α subject to −α·ε ≤ y_q − i′_q·x ≤ α·ε, q = 1,2,…,n, i.e. over U_x,α = {x ∈ X : ║I·x − y║ ≤ α·ε}, in the (m+1)-dimensional space (X, α). x_pr loses to the optimal estimate x_cn in the error e_y by no more than a factor of two. THE CENTRAL AND PROJECTIVE ALGORITHMS ARE NONLINEAR
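The 2p linear programs described above can be sketched with scipy.optimize.linprog. This is an illustrative implementation under my own naming, applied here to the two-parameter trend model y_q = x_1 + (q − n/2)·x_2 + δ_q that appears later in the deck:

```python
import numpy as np
from scipy.optimize import linprog

def central_estimate(I_mat, y, eps, S):
    """Chebyshev-center (central) estimate z_cn for the linear problem
    y = I x + delta, |delta_q| <= eps, z = S x (l_inf norm).
    Each component of z_m, z_M is one LP over the uncertainty polytope
    U_x(y) = {x : |y - I x| <= eps}."""
    n, m = I_mat.shape
    A_ub = np.vstack([I_mat, -I_mat])          # I x <= y + eps, -I x <= eps - y
    b_ub = np.concatenate([y + eps, eps - y])
    z_m, z_M = [], []
    for s_k in S:                              # one min and one max LP per row of S
        lo = linprog(s_k, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * m)
        hi = linprog(-s_k, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * m)
        z_m.append(lo.fun)
        z_M.append(-hi.fun)
    z_m, z_M = np.array(z_m), np.array(z_M)
    return 0.5 * (z_M + z_m), 0.5 * (z_M - z_m)  # z_cn and guaranteed half-widths

rng = np.random.default_rng(2)
n, eps = 50, 0.5
q = np.arange(n)
I_mat = np.column_stack([np.ones(n), q - n / 2])
x_true = np.array([1.0, 0.01])
y = I_mat @ x_true + rng.uniform(-eps, eps, size=n)

z_cn, dz = central_estimate(I_mat, y, eps, S=np.eye(2))
assert np.all(np.abs(z_cn - x_true) <= dz + 1e-6)  # guaranteed enclosure
```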

16 THE ALGORITHMS IN THE NORM l_2 AND THEIR OPTIMALITY The problem is linear: I(x) = I·x, S(x) = S·x, where I, S are n×m and p×m matrices. A_cn = A_pr = A_opt ─ the central, projective and optimal algorithms coincide. The estimate of the solution parameters is the LS estimate z_cn = z_pr = z_LS = S·(I′·I)^(-1)·I′·y. The estimate z_LS is a linear function of the measurements y. The estimate z_LS is neither optimal nor interpolated in the norm l_∞. THE ALGORITHMS IN THE NORM l_1 AND THEIR OPTIMALITY The problem is linear: I(x) = I·x, S(x) = S·x, where I, S are n×m and p×m matrices. The optimal and central algorithms are unknown. The projective estimate x_pr is the solution of the least-modules (LM) problem x_pr = arg min_x ║I·x − y║ = arg min_x ∑_q |y_q − i′_q·x|, q = 1,2,…,n, which can be reduced to a linear programming problem. The estimate x_pr is interpolated but not central.
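The reduction of the least-modules problem to linear programming can be sketched as follows (an illustration with scipy, not the authors' implementation). The LP variables are (x, t) with t_q ≥ |y_q − i′_q·x|, and the objective is ∑_q t_q:

```python
import numpy as np
from scipy.optimize import linprog

def least_modules(I_mat, y):
    """l1 (least-modules) projective estimate via its LP reduction:
    min over (x, t) of sum(t)  s.t.  -t_q <= y_q - i'_q x <= t_q."""
    n, m = I_mat.shape
    c = np.concatenate([np.zeros(m), np.ones(n)])
    E = np.eye(n)
    A_ub = np.block([[I_mat, -E], [-I_mat, -E]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(0, None)] * n)
    return res.x[:m]

# Compare with LS on data containing one large non-Gaussian error
rng = np.random.default_rng(3)
n = 40
I_mat = np.column_stack([np.ones(n), np.linspace(-1, 1, n)])
x_true = np.array([2.0, -1.0])
y = I_mat @ x_true + rng.normal(0, 0.05, size=n)
y[0] += 5.0                                   # a single outlier

x_lm = least_modules(I_mat, y)
x_ls = np.linalg.lstsq(I_mat, y, rcond=None)[0]
# The l1 estimate is far less sensitive to the outlier than LS.
```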

17 TWO-DIMENSIONAL CASE (m = 2) y_q = x_1 + (q − n/2)·x_2 + δ_q, |δ_q| ≤ ε, q = 0,1,…,n, z = x U_x(y) = ∩_q {y_q − ε ≤ x_1 + (q − n/2)·x_2 ≤ y_q + ε} Central estimate: x_k,m = min_{x ∈ Ux} x_k, x_k,M = max_{x ∈ Ux} x_k subject to y_q − ε ≤ x_1 + (q − n/2)·x_2 ≤ y_q + ε, k = 1,2, q = 0,1,…,n. Projective estimate: min α subject to y_q − α·ε ≤ x_1 + (q − n/2)·x_2 ≤ y_q + α·ε in the space (x_1, x_2, α), q = 0,1,…,n, 0 < α ≤ 1. The ratio of the mean errors δx_k,cn/δx_k,LS of the GA and LS estimates (simulation over 1000 realizations) by the number of measurements n, for the parameters x_1 and x_2 under the uniform and gaussian-3σ distributions.

18 Realizations of the LS and central estimates for the model with m = 2 (measurement error distribution: Gaussian truncated at the 3σ level, n = 100)

19 Realizations of the LS and central estimates for the model with m = 2 (measurement error distribution: triangular, n = 100)

20 Realizations of the LS and central estimates for the model with m = 2 (measurement error distribution: uniform, n = 100)

21 Comparison of the algorithms of the two approaches (realistic situation, simulation) SCENARIO:  1-1 ─ one passage of an SO through one radar coverage zone  1-N ─ all passages of an SO through one radar coverage zone during 10 days  3-N ─ all passages of an SO through three radar coverage zones during 10 days. INITIAL DATA:  coordinates of the radar sites: 1: 0.7, 0; 2: 1.0, 0.6; 3: 0.75, −0.6; h = 0.  radar coverage zone: 5° ≤ elevation angle ≤ 60°.  measurement times: 1-1: every 5 s from SO entry into the radar coverage zone up to exit; 1-N, 3-N: every 5 s during 50 s.  measured parameters: the spherical coordinates d, azimuth, elevation angle (range, azimuth, elevation).

22  measurement errors: distribution of the uncorrelated errors: uniform, triangular, Gaussian truncated at the level of three sigmas; root-mean-square values of the errors σ_d = 0.15 km, σ_azimuth = 0.003 rad, σ_elevation = 0.003 rad; the systematic distance error equals half the maximum value of the uncorrelated error, with arbitrary sign.  motion model: Earth gravity field EGM-96 up to degree i_gg = 75, Moon, Sun, atmospheric perturbations, solar radiation pressure.  orbit parameters: 6-vector of Lagrange elements at the moment t_n.  solution parameters: the coordinates of the position vector propagated n_d = 0, 1, 3, 5, 7, 10 days ahead, in the orbital coordinate system (r, t, n).  area-to-mass ratio of the SO: constant, equal to 0.01 m²/kg.  SO trajectories: eccentricity e ≈ 0, inclination i ≈ 57°, altitudes above ground h ≈ 800 km and 1500 km.  number of measurements: n = , 550, 1750 in the scenarios 1-1, 1-N, 3-N.  number of realizations in the simulation: n_r = 100. COMPARISON INDEX OF THE ALGORITHMS: δξ = (∑_i |ξ_i − ξ|)/n_r ─ mean error

23 1 radar, 1 passage, uncorrelated errors δξ(X_LS) [m, m/s] and δξ(X_CN)/δξ(X_LS) for the various measurement error distributions

24 1 radar, all passages during 10 days, uncorrelated errors δξ(Z_LS) [m] and δξ(Z_CN)/δξ(Z_LS) for the various measurement error distributions (table columns: LS [m], uniform, triangular, gaussian-3σ; rows by prediction interval n_d and components r, t, n)

25 3 radars, all passages during 10 days, uncorrelated errors δξ(Z_LS) [m] and δξ(Z_CN)/δξ(Z_LS) for the various measurement error distributions (table columns: LS [m], uniform, triangular, gaussian-3σ; rows by prediction interval n_d and components r, t, n)

26 3 radars, all passages during 10 days, uncorrelated and systematic errors δξ(Z_LS) [m] and δξ(Z_CN)/δξ(Z_LS) for the various measurement error distributions (table columns: LS [m], uniform, triangular, gaussian-3σ; rows by prediction interval n_d and components r, t, n)

27 3 radars, all passages during 10 days, uncorrelated errors, ε̃ = 2ε δξ(Z_LS) [m] and δξ(Z_CN)/δξ(Z_LS) for the various measurement error distributions (table columns: LS [m], uniform, triangular, gaussian-3σ; rows by prediction interval n_d and components r, t, n)

28 1 radar, all passages during 10 days, uncorrelated errors with uniform distribution δξ(Z_LS,igg)/δξ(Z_LS,igg=75) and δξ(Z_CN,igg)/δξ(Z_CN,igg=75) for gravity-model degree i_gg = 8, 16, 32 (table columns: LS estimation and central estimation (CN), coordinates r, t, n; rows by model degree i_gg and prediction interval n_d)

29 THE CONCLUSIONS 1. The non-statistical approach corresponds to the properties of real measurement errors. 2. The non-statistical approach provides computed solution-parameter errors that correspond to their real values. 3. The interpolated algorithms of the non-statistical approach adapt to the real measurement errors. 4. The estimate of the solution parameters by interpolated algorithms is the more accurate, and its advantage over the LS method the greater, the larger the fraction of the errors concentrated near the error bound. 5. The more informative the problem, the better the central algorithm performs relative to LS. 6. Correlations in the measurement errors degrade the quality of the solution-parameter estimate less for the central algorithm than for the LS method. 7. The projective algorithm is not sensitive to the accuracy of the knowledge of the error upper bound; the central algorithm does not have this property. The projective algorithm can lose in accuracy to the central algorithm by no more than a factor of two. 8. The central and projective algorithms are more sensitive to methodical errors of the SO motion model than the LS method. WHEN AND HOW SHOULD ALGORITHMS OF THE NON-STATISTICAL APPROACH BE APPLIED IN PRACTICE? THE ANSWER DEPENDS ON THE ERROR CHARACTERISTICS OF REAL MEASURING INSTRUMENTS

30

31 p_c = k·exp(−0.5·k_rr), k = S·v·(4π²·k_vv·det K_1·det K_2·det(K_1^(-1) + K_2^(-1)))^(-0.5) (1)
S = π(d_1 + d_2)²/4, k_rr = Δr·(K_1 + K_2)^(-1)·Δr′, k_vv = Δv·(K_1 + K_2)^(-1)·Δv′
t_min ─ the time when the two objects approach to the minimum distance; Δr, Δv ─ vectors of the relative position and velocity at t_min; K_1, K_2 ─ covariance matrices of the errors of determination of the positions of the two objects at the time t_min; d_1, d_2 ─ dimensions of the objects.
p_c ≈ k·exp(−A), k = (d_1 + d_2)²/(σ_u·(11σ_u·sin²α + 16σ_w·cos²α)) (2)
A = (δu)²/(4σ_u²) + (δv)²/(2σ_v²) + (δw)²/(2σ_w²·cos²α + 2σ_v²·sin²α)
δu, δv, δw ─ the projections of Δr onto the directions u, v, w of the orbital coordinate system of an object in a near-circular orbit; α ─ the angle between the object velocity vectors v_1 and v_2 at the time t_min; σ_u ∈ (σ_u1, σ_u2), σ_v ∈ (σ_v1, σ_v2), σ_w ∈ (σ_w1, σ_w2).
It follows from (1) and (2) that, if a collision actually occurs in a given dangerous approach:
 if the errors in the determination of the positions of the approaching SO decrease, then p_c increases; for example, if the errors decrease by an order of magnitude, p_c increases by two orders of magnitude;
 overestimation or underestimation of the calculated errors results in a significant decrease of p_c; for example, if the calculated errors are increased by an order of magnitude, p_c decreases by two orders of magnitude, and it tends to zero if the calculated errors are decreased by an order of magnitude.
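The order-of-magnitude claim can be checked numerically against approximate formula (2). In this sketch the object sizes, the angle α and the σ values are arbitrary illustrations, not the authors' data; for a true collision geometry (δu = δv = δw ≈ 0) the exponential equals 1 and p_c scales as 1/σ², so shrinking the position errors tenfold raises p_c a hundredfold:

```python
import math

def p_c(sigma_u, sigma_v, sigma_w, d1=0.005, d2=0.005, alpha=0.5,
        du=0.0, dv=0.0, dw=0.0):
    """Approximate collision probability, formula (2) of the slide.
    All constants here are illustrative placeholders."""
    k = (d1 + d2) ** 2 / (sigma_u * (11 * sigma_u * math.sin(alpha) ** 2
                                     + 16 * sigma_w * math.cos(alpha) ** 2))
    A = (du ** 2 / (4 * sigma_u ** 2)
         + dv ** 2 / (2 * sigma_v ** 2)
         + dw ** 2 / (2 * sigma_w ** 2 * math.cos(alpha) ** 2
                      + 2 * sigma_v ** 2 * math.sin(alpha) ** 2))
    return k * math.exp(-A)

# Collision geometry (miss distance ~ 0): 10x smaller errors -> 100x larger p_c
ratio = p_c(0.01, 0.01, 0.01) / p_c(0.1, 0.1, 0.1)
```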