1
Target Tracking in Sensor Networks 17 th Oct 2005 Presented By: Arpit Sheth
2
Introduction One of the most important applications of sensors is target tracking. Each node can sense in multiple modalities such as acoustic, seismic and infrared. The type of signals to be sensed are determined by the objects to be tracked.
3
Many challenges must be overcome before sensor networks can be used for tracking. Two critical areas are: 1. Efficient networking techniques 2. Collaborative processing: because the data collected by the sensors may be redundant, correlated and/or inconsistent, it is desirable to have sensors collaborate on processing the data and transport a concise digest to subscribers.
4
Objectives to be satisfied: 1. Collaborative Signal Processing (CSP) 2. Distributive processing 3. Goal oriented, on-demand processing 4. Information fusion 5. Multi-resolution processing
5
1. Collaborative Signal Processing (CSP) To facilitate detection, identification and tracking of targets, global information in both time and space must be collected and analyzed over a specified space-time region. However, individual nodes provide only spatially local information. CSP provides data representation and control mechanisms to collaboratively process and store sensor information, respond to external events and report results.
6
2. Distributive processing Raw signals are sampled and processed at individual nodes but are not directly communicated over the wireless channel. Instead, each node extracts relevant summary statistics from the raw signal, which are typically much smaller in size. The summary statistics are stored locally at individual nodes and may be transmitted to other nodes upon request.
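A minimal sketch of this idea (my own illustration, not code from any of the cited papers): a node reduces a raw signal window to a few summary statistics and only this small digest is stored or transmitted, never the raw samples.

def summarize_window(samples):
    """Reduce a raw signal window to a small digest: energy, peak, zero crossings."""
    n = len(samples)
    energy = sum(s * s for s in samples) / n
    peak = max(abs(s) for s in samples)
    zero_crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return {"energy": energy, "peak": peak, "zero_crossings": zero_crossings}

The digest (three numbers) is what is stored locally and sent on request, rather than the full sample buffer.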
7
3. Goal oriented, on-demand processing To conserve energy, each node should perform signal processing tasks that are relevant to the current query. In the absence of a query, each node should retreat into a standby mode to minimize energy consumption. A sensor node should not automatically publish extracted information, but should forward information only when needed.
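The behaviour described above can be pictured as a simple query-driven loop; this is an illustrative sketch only, and the query fields (region, subscriber) and the callbacks are hypothetical names, not part of any cited system.

def node_loop(wait_for_query, sense, process, reply):
    # Stay in standby (blocked) until a query arrives; no query, no processing.
    while True:
        query = wait_for_query()
        raw = sense(query.region)         # sample only what the query needs
        digest = process(raw, query)      # run only the query-relevant CSP task
        reply(query.subscriber, digest)   # forward results only when asked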
8
4. Information fusion To infer global information over a certain space-time region, CSP must facilitate efficient hierarchical information fusion. High bandwidth time series data must be shared between neighboring nodes for classification purposes. Lower bandwidth data may be exchanged between more distant nodes for tracking purposes.
9
5. Multi-resolution processing Depending on the nature of the query, some CSP tasks may require higher spatial resolution, involving a finer sampling of sensor nodes, or higher temporal resolution, involving higher sampling rates. Example: reliable detection is achievable with relatively coarse space-time resolution, whereas classification typically requires higher resolution. Multi-resolution space-time processing can therefore be fruitfully exploited in this context.
10
Efficient sensor placement for tracking [5] Placement of sensors in the surveillance zone is an important issue in the design of these networks. Several types of sensors are available, which differ from each other in their monitoring range, detection capabilities and cost. Sensors that can accurately detect targets at longer distances cost more, but only a few of them are required for effective surveillance. If low-cost, short-range sensors are used, effective surveillance can be achieved only with a large number of them.
11
If the sensor field is represented as a grid, target location refers to the problem of pinpointing a target at a grid point at any point in time. The target location problem can be simplified considerably if the sensors are placed in such a way that every grid point is covered by a unique subset of sensors.
12
Sensor placement problem: Given a surveillance region (grid points) and sensors of different types, determine the placement and type of sensors in the sensor field such that the desired coverage is achieved and the cost is minimized. How do we solve this problem? We formulate the problem in terms of cost minimization under coverage constraints.
13
Minimum Cost Sensor Placement: Let the sensor field consist of n_x × n_y × n_z grid points in the x, y and z dimensions. We assume two types of sensors (Type A and Type B) are available for deployment, with costs C_A and C_B and ranges R_A and R_B. The separation between the grid points in any dimension is at least min{R_A, R_B}. A further assumption is that a sensor always detects a target that lies within its range.
14
A sensor with range R_A (R_B) placed at a grid point (x_1, y_1, z_1) can detect a target at grid point (x_2, y_2, z_2) if the distance between the two points is less than R_A (R_B). Every grid point must be covered by at least m >= 1 sensors; the parameter m measures the amount of fault tolerance inherent in the deployment scheme. The optimization problem: given a parameter m >= 1, a set of grid points, and two types of sensors with their respective costs and ranges, find an assignment of sensors to grid points such that every grid point is covered by at least m sensors and the total cost is minimized.
15
Solution: Let a_ijk be a binary variable defined as: a_ijk = 1 if a Type A sensor is placed at grid point (i, j, k), and 0 otherwise. Likewise, b_ijk = 1 if a Type B sensor is placed at grid point (i, j, k), and 0 otherwise. The total cost C of sensor deployment is then C = sum over all grid points (i, j, k) of [ C_A * a_ijk + C_B * b_ijk ].
16
Let cov_A((i_1, j_1, k_1), (i_2, j_2, k_2)) be a binary variable defined as follows: cov_A((i_1, j_1, k_1), (i_2, j_2, k_2)) = 1 if a Type A sensor placed at grid point (i_1, j_1, k_1) covers grid point (i_2, j_2, k_2), and 0 otherwise. Similarly, cov_B((i_1, j_1, k_1), (i_2, j_2, k_2)) = 1 if a Type B sensor placed at grid point (i_1, j_1, k_1) covers grid point (i_2, j_2, k_2), and 0 otherwise.
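The coverage indicators follow directly from the range definition; this is a small helper sketch (my own function names, Euclidean distance assumed) that precomputes cov_A and cov_B for all pairs of grid points.

import math

def covers(p1, p2, sensor_range):
    """1 if a sensor at grid point p1 covers grid point p2, else 0."""
    return 1 if math.dist(p1, p2) < sensor_range else 0

def coverage_tables(grid_points, range_a, range_b):
    """Tables cov_a[(p1, p2)] and cov_b[(p1, p2)] over all pairs of grid points."""
    cov_a = {(p1, p2): covers(p1, p2, range_a) for p1 in grid_points for p2 in grid_points}
    cov_b = {(p1, p2): covers(p1, p2, range_b) for p1 in grid_points for p2 in grid_points}
    return cov_a, cov_b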
17
Objective: minimize the cost function C, subject to the coverage constraints: for every grid point (i_2, j_2, k_2), the sum over all grid points (i_1, j_1, k_1) of [ cov_A((i_1, j_1, k_1), (i_2, j_2, k_2)) * a_{i_1 j_1 k_1} + cov_B((i_1, j_1, k_1), (i_2, j_2, k_2)) * b_{i_1 j_1 k_1} ] must be at least m. Drawback: the case d = R_A is not considered, and it is assumed that the range is an integer while the distance is not.
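The objective and constraints above form a small 0-1 integer program. The following sketch expresses it with the open-source PuLP library (the solver choice is my assumption; the paper only states the formulation), reusing the coverage_tables() helper sketched earlier.

import pulp

def place_sensors(grid_points, cost_a, cost_b, range_a, range_b, m):
    cov_a, cov_b = coverage_tables(grid_points, range_a, range_b)
    prob = pulp.LpProblem("min_cost_placement", pulp.LpMinimize)
    a, b = {}, {}
    for idx, p in enumerate(grid_points):
        a[p] = pulp.LpVariable(f"a_{idx}", cat="Binary")   # Type A sensor at p
        b[p] = pulp.LpVariable(f"b_{idx}", cat="Binary")   # Type B sensor at p
    # Objective: total deployment cost C = sum(C_A * a_ijk + C_B * b_ijk)
    prob += pulp.lpSum(cost_a * a[p] + cost_b * b[p] for p in grid_points)
    # Coverage constraints: every grid point covered by at least m sensors
    for q in grid_points:
        prob += pulp.lpSum(cov_a[(p, q)] * a[p] + cov_b[(p, q)] * b[p]
                           for p in grid_points) >= m
    prob.solve()
    return ([(p, "A") for p in grid_points if a[p].value() == 1] +
            [(p, "B") for p in grid_points if b[p].value() == 1])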
18
Important conclusions from the case study: 1. As the value of m increases, it becomes more economical to use Type B sensors: although a Type B sensor costs 1.5 times as much as a Type A sensor, its range is twice as large. 2. The model takes an excessive amount of time for larger problem instances. Therefore, a 'divide and conquer' near-optimal approach should be adopted when the number of grid points is very large (more than about 50).
20
Dual Space Approach to Tracking [3] This approach is used to track the edge of a shadow. It is based on the duality principle in computational geometry. Dual space transformation: - A line in the primal space, y = α·x + β, is represented by a single point (-α, β) in another space (called the dual space). - Similarly, a point (a, b) in the primal space uniquely defines a line in the dual space: φ = a·θ + b.
21
Properties: 1. In the primal space, if a point (a, b) lies on a line y = α·x + β, then in the dual space the corresponding line φ = a·θ + b passes through the corresponding point (-α, β), and vice versa. 2. If a point in the primal space is above a line, then in the dual space the corresponding line is above the corresponding point.
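A small numeric check of Property 1 (purely illustrative, with arbitrarily chosen numbers):

def line_to_dual_point(alpha, beta):
    """Primal line y = alpha*x + beta  ->  dual point (-alpha, beta)."""
    return (-alpha, beta)

def point_to_dual_line(a, b):
    """Primal point (a, b)  ->  dual line phi = a*theta + b, as (slope, intercept)."""
    return (a, b)

alpha, beta = 2.0, 1.0
a = 3.0
b = alpha * a + beta                       # (a, b) lies on the primal line
theta, phi = line_to_dual_point(alpha, beta)
slope, intercept = point_to_dual_line(a, b)
assert abs(phi - (slope * theta + intercept)) < 1e-9   # dual line passes through dual point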
22
[Figure: movement of the shadow line in the primal space past sensors S1-S4, and movement of the corresponding point in the dual space.]
23
Performance Evaluation: 1. The expected number of lines bounding a cell is four, independent of the overall number of sensors present. Thus, the number of sensors active at any given time is very small, which leads to energy savings. 2. The more sensors there are, the smaller the cells and the more accurate the estimation of the shadow. 3. During testing it was assumed that no two motes were crossed at the same moment, since simultaneous crossings led to RF collisions. 4. Tracking more complicated shadows is difficult and does not lead to accurate estimations.
24
Detecting convex shadows through sensor node clustering
25
Distributed Prediction Tracking (DPT) [6] Assumptions: 1. The Cluster Head has the following information about all the sensors within the cluster: sensor identity, location and energy level. 2. All sensors have the same characteristics. 3. Sensors are randomly distributed across the entire area with uniform density. 4. Each sensor has two sensing radii: low beam (default) and high beam (turned on only when necessary). 5. In order to provide accurate information, at least 3 sensors should sense the target jointly.
26
DPT distinguishes between border and non-border sensors. Border sensors are required to keep sensing at all times in order to detect all targets entering the sensing region, whereas the sensing channels of non-border sensors go into hibernation. Main components of the algorithm: 1. Target Descriptor Formulation Algorithm 2. Sensor Selection Algorithm 3. Failure Recovery
27
Target Descriptor Formulation Algorithm: In order to identify the target and provide the target’s location information, cluster heads use a Target Descriptor (TD). The following items are incorporated in the TD: 1. Target identity 2. Target’s present location 3. Target’s next predicted location 4. Time stamp
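One possible in-memory layout of the TD (the field names are my own; only the four items listed above come from the protocol):

from dataclasses import dataclass

@dataclass
class TargetDescriptor:
    target_id: int               # 1. target identity
    current_location: tuple      # 2. target's present location (x, y)
    predicted_location: tuple    # 3. target's next predicted location (x, y)
    timestamp: float             # 4. time stamp of the measurement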
28
Sensor Selection Algorithm After cluster head CH_i predicts the location of the target, the downstream cluster head CH_(i+1) towards which the target is headed receives a message from CH_i indicating this predicted location. The search algorithm running at CH_(i+1) locally decides the sensor triplet that will sense the target. There are 3 modes of sensor selection (a sketch of the first mode follows): 1. Search for a sensor triplet with normal beam 2. Search for a sensor triplet with high beam 3. Multi-cluster coordination
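An illustrative sketch of the first mode only (this is my reading of the selection step, not code from the DPT paper): pick the three sensors closest to the predicted location whose normal beam still reaches it, otherwise fall back to the later modes.

import math

def select_triplet(sensors, predicted_loc, normal_beam):
    """sensors: list of (sensor_id, (x, y)); returns ids of a sensing triplet or None."""
    in_range = [(math.dist(pos, predicted_loc), sid)
                for sid, pos in sensors
                if math.dist(pos, predicted_loc) <= normal_beam]
    if len(in_range) < 3:
        return None          # fall back to high beam / multi-cluster coordination
    in_range.sort()
    return [sid for _, sid in in_range[:3]]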
29
Search for Sensor Triplet Using Normal Beam
30
Search for Sensor Triplet Using High Beam
31
Multi-cluster Coordination
32
Failure Recovery Possible failure scenarios: 1. If the upstream cluster head does not get any confirmation from the downstream cluster head after a given period of time, it assumes that the downstream cluster head is no longer available and that the target has been lost. 2. The target changes its direction or speed so abruptly that it moves significantly away from the predicted location and falls out of the detectable region of the sensor triplet selected for the sensing task. The recovery process is broken into three levels:
33
First level of recovery: the currently selected sensor triplet switches to high beam if it was previously using the normal beam. If this succeeds, the normal "sense-predict-communicate-sense" cycle resumes. Second level of recovery: if the first level of recovery fails, a group of sensors located about r meters away is activated. These sensors are able to monitor a circular area of radius 2r. Nth level of recovery: if the second level of recovery does not succeed, another group of sensors located (2N - 3)r meters away is activated to locate the target.
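A small sketch of how the activation distance grows across levels, following the (2N - 3)r rule stated above (the level numbering is my reading of the description):

def recovery_radius(level, r):
    """Distance of the sensors activated at each recovery level."""
    if level <= 1:
        return 0.0               # level 1: same triplet, switched to high beam
    return (2 * level - 3) * r   # level 2 gives r, level 3 gives 3r, and so on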
34
Simulation results
35
Tracking resolution is the time between two consecutive sensing points; intuitively, as the resolution becomes finer, the miss probability decreases.
36
Other tracking algorithms: 1. Dynamic Clustering Algorithm for Acoustic Target Tracking [4] It consists of (a) a static backbone of sparsely placed high-capability sensors, which assume the role of Cluster Heads (CH), and (b) densely populated low-end sensors that provide sensing information to Cluster Heads upon request. A Cluster Head (CH) becomes active when the acoustic signal strength it detects exceeds a certain predetermined threshold. The active CH then broadcasts a packet inviting sensors in its vicinity to join the cluster and provide their sensing information.
37
2. UW-CSP Algorithm [1] Assume that nodes in a cell detect the target; these are termed active nodes and the cell is termed the active cell. Active nodes report their energy detector outputs to manager nodes at N successive time instants. At each time instant, the manager nodes determine the location of the target from the energy detector outputs of the active nodes. The manager node then uses the target locations at the N successive time instants to predict the location of the target at M (< N) future instants (a simple prediction sketch follows). The predicted positions are used to create new cells that the target is likely to enter. Once the target is detected in a new cell, that cell is designated as the active cell.
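Illustrative sketch only: one simple way to predict M future positions from N past ones is a straight-line (constant-velocity) least-squares fit; the actual UW-CSP predictor may differ.

import numpy as np

def predict_positions(times, positions, future_times):
    """times: shape (N,); positions: shape (N, 2); returns (M, 2) predicted positions."""
    times = np.asarray(times, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # Fit x(t) and y(t) as degree-1 polynomials (least squares).
    coeff_x = np.polyfit(times, positions[:, 0], 1)
    coeff_y = np.polyfit(times, positions[:, 1], 1)
    future_times = np.asarray(future_times, dtype=float)
    return np.column_stack([np.polyval(coeff_x, future_times),
                            np.polyval(coeff_y, future_times)])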
38
Conclusion and Future Research We have covered algorithms that deal with sensor placement for effective tracking and with the detection and tracking of objects and line shadows. This is a very active area of research. Many algorithms have been developed, but most of them rest on assumptions that make them usable only in certain scenarios. Some open research areas are: 1. Tracking multiple closely spaced targets effectively 2. Intra-sensor collaboration (modal fusion) 3. Inter-sensor collaboration (centralized processing)
39
References:
1. D. Li, K. D. Wong, Y. H. Hu and A. M. Sayeed, "Detection, classification, and tracking of targets," IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 17-29, March 2002.
2. R. R. Brooks, P. Ramanathan and A. M. Sayeed, "Distributed target classification and tracking in sensor networks," Proceedings of the IEEE, vol. 91, no. 8, pp. 1163-1171, Aug. 2003.
3. J. Liu, P. Cheung, F. Zhao and L. Guibas, "A dual-space approach to tracking and sensor management in wireless sensor networks," Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications, 2002, pp. 131-139.
4. W.-P. Chen, J. C. Hou and L. Sha, "Dynamic clustering for acoustic target tracking in wireless sensor networks," IEEE Transactions on Mobile Computing, vol. 3, no. 3, pp. 258-271, July-Aug. 2004.
5. K. Chakrabarty, S. S. Iyengar, H. Qi and E. Cho, "Grid coverage for surveillance and target location in distributed sensor networks," IEEE Transactions on Computers, vol. 51, no. 12, pp. 1448-1453, Dec. 2002.
6. H. Yang and B. Sikdar, "A protocol for tracking mobile targets using sensor networks," Proceedings of the First IEEE International Workshop on Sensor Network Protocols and Applications, May 2003, pp. 71-81.