1
CONSTRAINT GENERATION INTEGER PROGRAMMING APPROACH TO INFORMATION THEORETIC SENSOR RESOURCE MANAGEMENT
MURI Kickoff Meeting
Randolph L. Moses
November 2008
2
Selection structures
Different problems involve different selection structures. One common selection structure allows you to select any K observations from a larger set ("K-element subset selection").
3
Selection structures
Another common selection structure involves a number of sets, from each of which you may select one or more observations.
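A minimal Python sketch, not from the original slides, that enumerates feasible choices under the two selection structures just described; the function names k_subsets and one_per_set are illustrative:

```python
from itertools import combinations, product

# Structure 1: choose any K observations from a single pool
# ("K-element subset selection").
def k_subsets(observations, k):
    return list(combinations(observations, k))

# Structure 2: choose one observation from each of several sets.
# (The slide allows "one or more" per set; one-per-set keeps the sketch short.)
def one_per_set(observation_sets):
    return list(product(*observation_sets))

if __name__ == "__main__":
    print(k_subsets(["z1", "z2", "z3", "z4"], 2))     # 6 two-element subsets
    print(one_per_set([["a1", "a2"], ["b1", "b2"]]))  # 4 one-per-set selections
```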
4
Submodularity
The constraint generation approach builds on earlier work developing computable bounds for greedy information gathering:
Greedy generation of constraint sets
A computable bound that gives a guarantee on performance relative to an upper bound on the optimal constraint generation
The submodularity argument proceeds in four steps (see the derivation sketch below):
1. Definition of MI
2. Observations z_C are independent of z_B conditioned on X
3. Conditioning reduces entropy
4. Definition of MI
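The slide's equations are not preserved in this transcript; the following LaTeX derivation is a hedged reconstruction of the four-step argument listed above, which is the standard proof that conditional mutual information shrinks as the conditioning set grows:

```latex
% Sketch, assuming A \subseteq B and that observations are conditionally
% independent given the state X.
\begin{align*}
I(X; z_C \mid z_B)
  &= H(z_C \mid z_B) - H(z_C \mid z_B, X)  && \text{1. definition of MI} \\
  &= H(z_C \mid z_B) - H(z_C \mid X)       && \text{2. } z_C \perp z_B \mid X \\
  &\le H(z_C \mid z_A) - H(z_C \mid X)     && \text{3. conditioning reduces entropy} \\
  &= H(z_C \mid z_A) - H(z_C \mid z_A, X)  && \text{2. again, } z_C \perp z_A \mid X \\
  &= I(X; z_C \mid z_A)                    && \text{4. definition of MI}
\end{align*}
```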
5
Directed search Due to submodularity:
A family of tighter upper bounds (z_A denotes the observations corresponding to the elements in set A). We utilize this concept by using a collection of candidate subsets and exploration subsets.

Suppose we want to find an upper bound on the reward of a subset B of observations. We can easily do so by applying the chain rule and then dropping some subset of the conditionings; in the loosest bound, all conditionings are removed. The fewer conditioning observations we drop, the tighter the upper bound we obtain. Here we obtain an upper bound as the reward of a subset A plus the incremental reward of each element in the difference set B\A conditioned only on subset A (it is an upper bound because we have dropped the conditioning on the previous elements in the sum).

Since we can choose any set A ⊆ B, this provides a family of upper bounds, which becomes tighter as A grows closer to B. We exploit this family by constructing a solution that utilizes candidate subsets (the sets A for which we have evaluated the true reward) and exploration subsets (additional elements that can be added to a given candidate subset, and for which we have evaluated the incremental reward conditioned on the candidate subset A). Notice that if there is only one element in B\A, the upper bound is tight; otherwise we need to grow A closer to B to make the bound closer to tight.
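The bound itself did not survive extraction; the following LaTeX sketch is a hedged reconstruction of the family described above, taking the reward to be the mutual information I(X; z_B) and A ⊆ B any candidate subset:

```latex
% Write B \ A = {u_1, ..., u_m}. Apply the chain rule, then drop the
% conditioning on previously added elements (keeping z_A); dropping z_A
% as well gives the loosest member of the family.
\begin{align*}
I(X; z_B)
  &= I(X; z_A) + \sum_{j=1}^{m}
       I\bigl(X; z_{u_j} \mid z_A, z_{u_1}, \ldots, z_{u_{j-1}}\bigr)
     && \text{chain rule} \\
  &\le I(X; z_A) + \sum_{j=1}^{m} I\bigl(X; z_{u_j} \mid z_A\bigr)
     && \text{drop conditioning on } u_1, \ldots, u_{j-1} \\
  &\le \sum_{u \in B} I\bigl(X; z_u\bigr)
     && \text{drop all conditioning (loosest bound)}
\end{align*}
```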
6
Integer programming formulation
This is the integer program that we would ideally like to solve. The variables over which we are optimizing are binary indicator variables: ω_{i,A} is one if we select subset A for object i and zero otherwise. One constraint requires that exactly one subset be selected for each object; the reward is the sum of the rewards of the subsets selected for each object; another constraint requires that each resource be used at most once. The structure of the integer program is similar to an assignment problem, assigning subsets to objects. However, in general each subset utilizes multiple resources, so the structure required for an efficient solution is not present. Furthermore, the number of subsets from which we will be choosing will be prohibitively large for problems of realistic size.
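The integer program itself is not preserved in this transcript; the following LaTeX sketch reconstructs it from the narration, using assumed notation S_i (candidate observation subsets for object i), r_{i,A} (reward of subset A for object i), and R(A) (resources used by A):

```latex
% omega_{i,A} = 1 iff subset A is selected for object i.
\begin{align*}
\max_{\omega}\ \
  & \sum_i \sum_{A \in S_i} r_{i,A}\, \omega_{i,A} \\
\text{s.t.}\ \
  & \sum_{A \in S_i} \omega_{i,A} = 1
    \quad \forall\, i
    && \text{exactly one subset per object} \\
  & \sum_i \sum_{A \in S_i :\, t \in R(A)} \omega_{i,A} \le 1
    \quad \forall\, t
    && \text{each resource used at most once} \\
  & \omega_{i,A} \in \{0, 1\}
\end{align*}
```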
7
Iterative integer programming formulation
To address this difficulty, we solve a series of integer programs which exploit the structure examined a couple of slides ago. The solution of each integer program is an upper bound on the optimal reward, and an algorithm uses the results of each integer program to tighten the upper bounds.

The integer program is very similar to the previous one, except that rather than explicitly enumerating all possible subsets, we use a compact representation consisting of candidate subsets T_i, which may be combined with exploration subset elements. Again, each resource can be used at most once, either by a candidate subset or by an exploration subset element, and exactly one candidate subset must be chosen for each object. An additional constraint specifies that the exploration subset elements corresponding to a given candidate subset can only be selected if that candidate subset is selected.
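The equations for this compact program are also lost, so the LaTeX below is an assumed reconstruction from the narration; T_i, E_{i,A}, R(·), and the incremental rewards r_{i,u|A} are notation introduced here rather than taken from the slide:

```latex
% omega_{i,A} selects candidate subset A for object i;
% rho_{i,A,u} additionally selects exploration element u on top of A;
% r_{i,u|A} is the incremental reward of u conditioned on A.
\begin{align*}
\max_{\omega, \rho}\ \
  & \sum_i \sum_{A \in T_i} \Bigl( r_{i,A}\, \omega_{i,A}
      + \sum_{u \in E_{i,A}} r_{i, u \mid A}\, \rho_{i,A,u} \Bigr) \\
\text{s.t.}\ \
  & \sum_{A \in T_i} \omega_{i,A} = 1
    \quad \forall\, i
    && \text{one candidate subset per object} \\
  & \rho_{i,A,u} \le \omega_{i,A}
    \quad \forall\, i,\ A \in T_i,\ u \in E_{i,A}
    && \text{exploration elements only with their subset} \\
  & \sum_i \sum_{A \in T_i} \Bigl( \mathbf{1}[\, t \in R(A) \,]\, \omega_{i,A}
      + \sum_{u \in E_{i,A}} \mathbf{1}[\, t \in R(u) \,]\, \rho_{i,A,u} \Bigr) \le 1
    \quad \forall\, t
    && \text{each resource used at most once} \\
  & \omega_{i,A},\ \rho_{i,A,u} \in \{0, 1\}
\end{align*}
```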
8
Comments
At every iteration, the solution of the integer program for that iteration is an upper bound on the optimal reward. The update algorithm, which uses the results of each integer program, ensures that the bound tightens with each iteration and converges to the optimal solution; at termination an optimal solution is found.

We can also add a small number of constraints to the integer program to obtain an auxiliary problem that provides a lower bound on the optimal reward, along with a solution attaining that lower bound. This lower bound also tightens with each iteration and converges to the optimum.

By combining the upper and lower bounds, we can terminate when we are within a desired fraction of optimality. In our experiments, we terminate when we are within 5% of optimality.
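A minimal Python sketch, not from the slides, of this outer loop with the 5% termination criterion; solve_upper_ip, solve_lower_ip, and refine_candidates are hypothetical callables standing in for the integer programs and the subset-refinement step described above:

```python
def constraint_generation(solve_upper_ip, solve_lower_ip, refine_candidates,
                          initial_candidates, gap=0.05, max_iters=100):
    """Alternate upper- and lower-bound integer programs, refining candidate
    subsets, until the bounds are within `gap` of each other.

    Each bound solver takes the current candidate subsets and returns
    (bound_value, solution).
    """
    candidates = initial_candidates
    best_solution, lower = None, float("-inf")
    upper = float("inf")
    for _ in range(max_iters):
        upper, upper_sol = solve_upper_ip(candidates)  # upper bound on optimal reward
        lb, lb_sol = solve_lower_ip(candidates)        # lower bound + feasible solution
        if lb > lower:
            lower, best_solution = lb, lb_sol
        if upper - lower <= gap * abs(upper):          # e.g., within 5% of optimality
            break
        # Grow the candidate subsets toward the selections made by the
        # upper-bound IP, which tightens the bound on the next iteration.
        candidates = refine_candidates(candidates, upper_sol)
    return best_solution, lower, upper
```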
9
Experiment Tracking 50 objects
The sensor can provide (for any single object) either:
Azimuth and range
Azimuth and range rate
The sensor moves in a race-track pattern; azimuth noise varies with the actual azimuth (smallest when the object is broadside). Observation noise increases when objects are closely spaced. The sensor can also obtain a more accurate azimuth/range or azimuth/range-rate observation in 3 time steps.
10
Results – performance
11
Results – computation time
Brute force for 20 steps requires >> 10^40 reward evaluations
12
Experiment Tracking 50 objects for 50 time slots
The initial uncertainty of the first 25 objects is slightly lower
Any object can be observed in any time slot
The observation noise for the first 25 objects increases by a factor of 10^6 half-way through the simulation
13
Results – performance